#machine-learning
Gemma 4 MTP Drafters: 3x Faster Inference, No Quality Loss
Pair Gemma 4 with lightweight MTP drafters via speculative decoding: the drafter proposes multi-token sequences, the base model verifies them in a single parallel pass, and the two share a KV cache, yielding up to 3x more tokens per pass without altering outputs.
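A minimal greedy sketch of the draft-and-verify loop behind that speedup (illustrative only, not Gemma 4's engine; `draft_next` and `target_next` are hypothetical next-token callables):

```python
# Greedy speculative decoding: a cheap drafter proposes k tokens, the target
# model checks them, and we keep the longest agreeing prefix. In a real
# engine the verification is one batched forward pass reusing the target's
# KV cache; a loop is used here for clarity.

def speculative_step(prefix, draft_next, target_next, k=4):
    # 1) Draft k tokens autoregressively with the cheap model.
    draft, proposed = list(prefix), []
    for _ in range(k):
        t = draft_next(draft)
        proposed.append(t)
        draft.append(t)

    # 2) Verify the k positions with the target model.
    accepted, ctx = [], list(prefix)
    for t in proposed:
        expected = target_next(ctx)    # target's greedy choice here
        if expected != t:
            accepted.append(expected)  # fix the first mismatch, then stop
            break
        accepted.append(t)
        ctx.append(t)
    return accepted                    # up to k tokens per target pass

# Toy demo: the models agree until the drafter proposes 6, where the
# target prefers 5, so one speculative step yields [2, 4, 5].
draft_next  = lambda ctx: (len(ctx) * 2) % 7
target_next = lambda ctx: 5 if (len(ctx) * 2) % 7 == 6 else (len(ctx) * 2) % 7
print(speculative_step([0], draft_next, target_next))
```

Greedy verification keeps outputs identical to decoding the target alone, which is why quality is unchanged.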
Generative AI: Prediction to Creation via Scale
Generative AI shifts machines from analyzing data (traditional AI's strength) to creating new content such as text or images, powered by techniques from Markov chains to deep learning plus massive datasets and compute, and drew $33.9B in investment in 2024.
GPU Bandwidth Limits LLM Speed, Not FLOPS
Generating one token from a 70B-parameter model in FP16 on an H100 requires reading all 140GB of weights, at roughly one FLOP per byte moved, so memory bandwidth, not compute throughput, is the inference bottleneck.
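A back-of-envelope check of that arithmetic, using assumed H100 SXM specs (~3.35 TB/s HBM3 bandwidth, ~990 dense FP16 TFLOPS):

```python
# Memory-bound vs compute-bound time for one token of batch-1 decoding.
params        = 70e9
bytes_per_tok = params * 2   # FP16: 2 bytes/param -> 140 GB read per token
flops_per_tok = params * 2   # ~2 FLOPs per parameter per token

bw, peak = 3.35e12, 990e12   # bytes/s and FLOP/s (assumed H100 SXM specs)

t_mem     = bytes_per_tok / bw    # ~42 ms -> ~24 tok/s ceiling
t_compute = flops_per_tok / peak  # ~0.14 ms
print(f"memory-bound ceiling: {1/t_mem:.0f} tok/s; "
      f"weight reads take {t_mem/t_compute:.0f}x longer than the math")
```

The ~300x gap between the two times is the whole argument: the GPU idles waiting on HBM unless batching or quantization amortizes the weight reads.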
Synthetic Data Exposes Hidden ML Bias Before Production
Real training data hides bias via underrepresentation (e.g., rural users at 9%), proxy features, and skewed labels; generating synthetic data with controlled segment shares (e.g., rural at 25%) reveals it through disaggregated AUC drops (0.791 to 0.768) and disparate impact below 0.8, after which retraining on the mixed real-plus-synthetic data closes the gap.
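The audit step, sketched with scikit-learn (assumes a pandas DataFrame `df` with hypothetical columns `y_true`, `y_score`, `y_pred`, and `segment`):

```python
# Disaggregated evaluation: compute AUC per segment instead of one global
# number, then a disparate-impact ratio on positive-prediction rates.
import pandas as pd
from sklearn.metrics import roc_auc_score

def audit(df: pd.DataFrame, segment_col: str = "segment") -> None:
    for seg, grp in df.groupby(segment_col):
        auc = roc_auc_score(grp["y_true"], grp["y_score"])
        print(f"{seg}: AUC={auc:.3f}, n={len(grp)}")

    # Four-fifths rule: min/max ratio of positive-prediction rates < 0.8
    # flags disparate impact.
    rates = df.groupby(segment_col)["y_pred"].mean()
    print("disparate impact:", round(rates.min() / rates.max(), 3))
```

Per-segment AUC is what surfaces the 0.791-to-0.768 drop a global metric averages away.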
SIE: Dynamic Inference for Small Models on Shared GPUs
Open-source SIE engine from Superlinked enables hot-swapping small embedding models (e.g., Stella, ColBERT) on one GPU via LRU eviction, cutting costs and solving context rot in agents by preprocessing data.
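The hot-swap mechanism reduces to an LRU cache keyed by model name; a minimal sketch in the spirit of SIE, not its actual API:

```python
# Keep at most `capacity` embedding models resident on the GPU, evicting
# the least recently used one when a new model is requested.
from collections import OrderedDict

class ModelCache:
    def __init__(self, loader, capacity=2):
        self.loader, self.capacity = loader, capacity
        self.models = OrderedDict()          # name -> loaded model

    def get(self, name):
        if name in self.models:
            self.models.move_to_end(name)    # mark as recently used
            return self.models[name]
        if len(self.models) >= self.capacity:
            evicted, model = self.models.popitem(last=False)  # drop LRU
            del model  # in a real engine: free GPU memory / offload to CPU
        self.models[name] = self.loader(name)
        return self.models[name]

cache = ModelCache(loader=lambda n: f"<{n} weights>")  # stand-in loader
cache.get("stella"); cache.get("colbert"); cache.get("stella")
cache.get("bge")  # evicts "colbert", the least recently used
```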
Visual Primitives Solve LMM Reference Gap
DeepSeek's withdrawn paper introduces 'Thinking with Visual Primitives'—embedding bounding boxes and points into every reasoning step—to fix ambiguous referencing in multimodal models, achieving 77.2% on spatial benchmarks with 10x fewer tokens than rivals.
Momentum Dampens GD Zigzags via Gradient Averaging
On anisotropic loss surfaces (condition number 100), vanilla GD zigzags and takes 185 steps to converge (loss <0.001); momentum with β=0.9 converges in 159 steps by canceling steep-direction oscillations while accelerating flat directions—but β=0.99 diverges.
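A reproduction sketch on the quadratic f(x) = 0.5(x₀² + 100x₁²); exact step counts depend on step size and start point (both assumed here), so treat the article's numbers as setup-specific:

```python
# Heavy-ball momentum vs vanilla GD on a condition-number-100 quadratic.
import numpy as np

def run(beta, lr=0.019, steps=2000):
    x = np.array([1.0, 1.0])          # start point (assumed)
    v = np.zeros(2)
    scales = np.array([1.0, 100.0])   # eigenvalues -> condition number 100
    for t in range(1, steps + 1):
        g = scales * x                # gradient of 0.5 * sum(scales * x**2)
        v = beta * v + g              # momentum averages successive gradients
        x = x - lr * v
        if 0.5 * (scales * x**2).sum() < 1e-3:
            return t
    return None                       # did not reach the loss threshold

print("vanilla GD:     ", run(beta=0.0))   # zigzags along the steep axis
print("momentum b=0.9: ", run(beta=0.9))   # oscillations cancel in the buffer
print("momentum b=0.99:", run(beta=0.99))  # heavily underdamped at this lr;
                                           # the article's setup diverges
```

The mechanism is visible in the update: along the steep axis the gradient flips sign each step, so the buffer cancels it; along the flat axis it keeps the same sign, so the buffer accumulates speed.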
Databricks RAG: Low-Dim Qwen3 + Rerank for 89% Recall@10
Shrink embeddings to 256 dimensions with Qwen3 MRL (self-managed path), set num_results=50, and always rerank the ANN top-50 candidates: +15pts recall@10 over the 74% baseline.
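The retrieve-then-rerank recipe as code (the ANN index is a hypothetical stand-in; the cross-encoder shown is a common sentence-transformers model, not necessarily the article's choice):

```python
# Overfetch 50 candidates with cheap low-dim ANN, then let a cross-encoder
# repair the ranking before taking the final top 10.
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def search(query, ann_index, k_final=10, num_results=50):
    # 1) Cheap ANN pass over 256-dim MRL embeddings, overfetching.
    candidates = ann_index.search(query, num_results)  # hypothetical API

    # 2) Rerank all 50 with the cross-encoder, keep the top 10.
    scores = reranker.predict([(query, c.text) for c in candidates])
    ranked = sorted(zip(scores, candidates), key=lambda p: -p[0])
    return [c for _, c in ranked[:k_final]]
```

The design bet: low-dim ANN loses precision in the top ranks but rarely loses the relevant document from the top 50, so the reranker recovers it.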
Track One User-Feature Pair to Catch ML Pipeline Bugs
A rec model with 0.91 offline AUC degraded in production after 4 days because its user_30d_purchases features were 21 hours stale. Tracing one user (U-9842) and that one feature through every pipeline layer exposes and prevents such mismatches.
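One guard this implies: assert freshness for the tracked pair at serving time. A minimal sketch; the feature-store interface and the 2-hour threshold are hypothetical:

```python
# Fail loudly when the tracked feature is stale instead of silently serving
# old values. Run the same check after each pipeline layer that touches it.
from datetime import datetime, timezone, timedelta

MAX_STALENESS = timedelta(hours=2)  # assumed SLO; the incident was 21h stale

def check_freshness(store, user_id="U-9842", feature="user_30d_purchases"):
    row = store.get(user_id, feature)  # hypothetical feature-store read
    age = datetime.now(timezone.utc) - row.updated_at
    if age > MAX_STALENESS:
        raise RuntimeError(
            f"{feature} for {user_id} is {age} stale (limit {MAX_STALENESS})")
    return row.value
```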
Production ML Pipelines with ZenML: Custom Materializers & HPO
ZenML enables end-to-end ML pipelines with custom DatasetBundle materializers for metadata-rich serialization, fan-out over 4 hyperparameter configs for RandomForest/GradientBoosting/LogisticRegression, fan-in best-model selection by ROC AUC, full artifact tracking, and cache-driven reproducibility on breast cancer dataset.
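The fan-out/fan-in shape in minimal ZenML (`@step`/`@pipeline` are ZenML's decorators; the single-model HPO here is a simplification of the article's three-model setup, and running it needs an initialized ZenML stack):

```python
# Fan out over hyperparameter configs, fan in on the best ROC AUC.
from zenml import pipeline, step
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

@step
def train_and_score(n_estimators: int) -> float:
    X, y = load_breast_cancer(return_X_y=True)
    clf = RandomForestClassifier(n_estimators=n_estimators, random_state=0)
    return float(cross_val_score(clf, X, y, scoring="roc_auc", cv=3).mean())

@step
def pick_best(a: float, b: float, c: float, d: float) -> float:
    return max(a, b, c, d)            # fan-in: select by ROC AUC

@pipeline
def hpo_pipeline():
    scores = [train_and_score(n) for n in (50, 100, 200, 400)]  # fan-out
    pick_best(*scores)

hpo_pipeline()  # each step's output is tracked as a cached artifact
```

Because each step's inputs and outputs are versioned artifacts, rerunning with unchanged configs hits the cache, which is what makes the article's reproducibility claim work.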
FinLLM Phases: Monoliths to Multi-Expert Traders
FinLLMs evolved from proprietary 50B-param giants like BloombergGPT, to open-source PEFT-tuned models like FinGPT, to multimodal experts; fuse them with diffusion-generated synthetic data and RL for trading, but prioritize interpretability to avoid herding crashes.
LLM Scaling Works via Strong Superposition
LLMs pack all tokens into limited dimensions via overlapping vectors (strong superposition), causing prediction error to halve when model width doubles—explaining reliable power-law scaling.
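Read as arithmetic, the claim pins the power-law exponent (a reading of the summary, not the paper's derivation):

```latex
L(d) = c\, d^{-\alpha}, \qquad
\frac{L(2d)}{L(d)} = 2^{-\alpha} = \tfrac{1}{2}
\;\Rightarrow\; \alpha = 1,
```

i.e., under strong superposition, prediction error falls inversely with model width.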
KAME: Zero-Latency S2S with Real-Time LLM Oracles
KAME fuses fast direct speech-to-speech (S2S) with LLM smarts via asynchronous oracle injections, hitting 6.4/10 on MT-Bench at Moshi's near-zero latency vs. cascaded 7.7/10 at 2.1s delay.
SageMaker Fine-Tuning: LoRA Beats QLoRA on Cost-Perf Balance
LoRA cuts trainable params by 96% vs full fine-tuning, balancing cost savings and accuracy on Llama2-7B/Mistral7B; QLoRA saves 8x memory but trains slower due to dequantization overhead.
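Rough adapter arithmetic for a Llama2-7B-shaped model; rank 16 on the four attention projections is an assumed minimal config, and the article's 96% figure implies a larger adapter footprint (higher rank or more target modules):

```python
# LoRA adds two low-rank factors A (r x d) and B (d x r) per adapted matrix;
# only those are trained, the 7B base weights stay frozen.
layers, hidden, rank = 32, 4096, 16   # assumed Llama2-7B-like shapes
per_matrix = 2 * rank * hidden        # params in A + B for one projection
trainable  = layers * 4 * per_matrix  # q, k, v, o projections per layer
print(f"{trainable/1e6:.1f}M trainable vs ~7000M full fine-tuning "
      f"({100 * (1 - trainable/7e9):.2f}% reduction)")
```

QLoRA keeps the same adapter math but stores the frozen base in 4-bit, trading the 8x memory saving for dequantization work on every forward pass, which is the slowdown the summary cites.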
DeepSeek's Visual Primitives: 10x KV Cache Efficiency
DeepSeek's 'Thinking with Visual Primitives' embeds bounding boxes and points as inline chain-of-thought tokens to solve visual reference gaps, compressing KV cache 10x (90 entries vs. 870 for Sonnet on 80x80 images) for frontier-grade vision at 1/10th cost.
H2E: Deterministic Safety via Riemannian Multimodal Fusion
H2E framework fuses text/audio/vision inputs from compressed models into a Riemannian manifold, enforcing safety with SROI Gate that rejects intents where exp(-d_M) < 0.9583, guaranteeing deterministic, auditable AI behavior on edge hardware.
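The gate's decision rule in isolation (the manifold distance d_M is taken as given; H2E's actual Riemannian metric is not reproduced here):

```python
# SROI gate: map manifold distance to a confidence exp(-d_M) and reject
# anything below the article's 0.9583 threshold. Deterministic by design:
# the same distance always yields the same accept/reject decision.
import math

THRESHOLD = 0.9583

def sroi_gate(d_m: float) -> bool:
    """Accept the intent only if exp(-d_M) clears the threshold."""
    return math.exp(-d_m) >= THRESHOLD

# exp(-d) >= 0.9583  <=>  d <= -ln(0.9583) ~= 0.0426
print(sroi_gate(0.03), sroi_gate(0.10))  # True, False
```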
Spec Decoding Accelerates RL Rollouts 1.8x at 8B, 2.5x at 235B
Integrate speculative decoding into NeMo RL training loops with a draft-model/verifier setup to cut rollout generation, which accounts for 65-72% of RL step time, by 1.8× at 8B scale while preserving the exact output distribution, projecting a 2.5× end-to-end speedup at 235B.