Gemma 3: Open Multimodal Models from 270M to 27B Params

Gemma 3 provides lightweight, open-weight multimodal LLMs (text/image input, text output) in sizes from 270M to 27B parameters, with 128K-token context (32K for the 270M and 1B sizes), trained on 2-14T tokens spanning 140+ languages, and well suited to resource-constrained deployment.

Core Specifications and Capabilities

Gemma 3 models process text and images (normalized to 896×896 pixels and encoded as 256 tokens each) to generate text outputs such as answers, summaries, or image analyses. The larger variants (4B, 12B, 27B) support a 128K-token total context window shared between input and generated output; the smaller ones (270M, 1B) are limited to 32K tokens. They excel at question answering, summarization, reasoning, multilingual tasks (140+ languages), and image understanding, with pre-trained and instruction-tuned weights available for easy deployment on laptops, desktops, or cloud infrastructure.
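To make the text+image interface concrete, here is a minimal sketch of a multimodal query against an instruction-tuned variant. It assumes a recent Hugging Face `transformers` release with Gemma 3 support and access to the gated `google/gemma-3-4b-it` checkpoint; the image URL is a placeholder, not from the source.

```python
# Minimal sketch: querying an instruction-tuned Gemma 3 variant with text + image.
# Assumes a recent `transformers` release with Gemma 3 support and that the gated
# "google/gemma-3-4b-it" checkpoint is accessible.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",          # multimodal pipeline: image + text in, text out
    model="google/gemma-3-4b-it",  # 4B instruction-tuned variant (128K context)
)

messages = [
    {
        "role": "user",
        "content": [
            # Images are normalized to 896x896 and encoded as 256 tokens each.
            {"type": "image", "url": "https://example.com/photo.jpg"},  # placeholder URL
            {"type": "text", "text": "Describe what is in this image."},
        ],
    }
]

out = pipe(text=messages, max_new_tokens=128)
# For chat-style input, generated_text holds the conversation with the
# assistant's reply appended as the final turn.
print(out[0]["generated_text"][-1]["content"])
```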

Trade-offs: The smaller sizes prioritize efficiency over capacity, enabling on-device use but capping context at 32K tokens; the multimodal design covers text and images, with no native audio or video support.

Training Scale and Data Composition

The models were trained on large datasets with a knowledge cutoff of August 2024: the 27B variant on 14T tokens, 12B on 12T, 4B on 4T, 1B on 2T, and 270M on 6T. The data mix includes web documents (diverse styles and topics in 140+ languages), code (syntax and patterns), mathematics (logical and symbolic reasoning), and images (visual analysis). Preprocessing applies cleaning and filtering (details truncated in source) to ensure quality across formats.

This yields strong generalization: code generation learned from code patterns, mathematical query handling from symbolic exposure, and broad linguistic coverage from multilingual web text.

Evaluation, Safety, and Deployment Context

Benchmark results, ethics and safety evaluations (approach and results), intended uses, limitations, and risks/benefits are detailed in the full model card (source truncated). The open weights suit experimentation and innovation; cite the Gemma 3 Technical Report for deeper metrics. Deployment paths include Keras, Hugging Face, PyTorch, and edge tools such as LiteRT-LM and MediaPipe; a Keras sketch follows below.
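As one illustration of the Keras path, here is a minimal text-only generation sketch assuming the KerasHub Gemma 3 integration; the preset identifier is an assumption for illustration and may differ from the published one.

```python
# Minimal sketch of text-only generation via KerasHub, assuming its Gemma 3
# integration is installed (pip install keras keras-hub) and the model weights
# are accessible. The preset id below is an assumption and may differ.
import keras_hub

lm = keras_hub.models.Gemma3CausalLM.from_preset("gemma3_instruct_4b")  # assumed preset name
print(lm.generate("List three use cases for small open-weight LLMs.", max_length=96))
```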
