Meta: Llama 4 Maverick

Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal model from Meta, built on a Mixture-of-Experts (MoE) architecture with 128 experts and 17B active parameters per forward pass (400B total). It supports multilingual text and image inputs and generates text and code outputs across 12 languages. Instruction-tuned for assistant-like interaction, it excels in vision-language tasks, image reasoning, and general-purpose multimodal applications.
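The compute savings of an MoE design come from routing each token to only a few of the many experts, so the per-token cost tracks the active parameters (17B) rather than the total (400B). The toy sketch below illustrates plain top-k expert routing with NumPy; it is a conceptual illustration only, not Meta's implementation (Maverick's actual layers interleave dense blocks and use a shared expert alongside one routed expert).

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=1):
    """Toy top-k Mixture-of-Experts layer (illustrative sketch,
    not Meta's implementation).
    x:       (d,) token vector
    gate_w:  (n_experts, d) router weight matrix
    experts: list of (W, b) per-expert linear layers
    """
    logits = gate_w @ x                    # router score for every expert
    top = np.argsort(logits)[-k:]          # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the selected experts only
    out = np.zeros_like(experts[0][1])
    for w_i, idx in zip(weights, top):
        W, b = experts[idx]
        out += w_i * (W @ x + b)           # weighted sum of the chosen experts' outputs
    return out, top
```

With 128 experts and k=1, only one expert's weights participate in each token's forward pass, which is why a 400B-parameter model can run with 17B-parameter inference cost.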

Maverick introduces early fusion for native multimodality and supports a 1M-token context window. Trained on ~22T tokens from public, licensed, and Meta-platform data, it has a knowledge cutoff of August 2024. Released on April 5, 2025 under the Llama 4 Community License, Maverick is designed for both research and commercial use cases that demand advanced multimodal reasoning and high throughput.

Creator: Meta
Release Date: April 5, 2025
License: Llama 4 Community License Agreement
Context Window: 1,000,000 tokens
Image Input Support: Yes
Open Source (Weights): Yes
Parameters: 400B total, 17B active at inference time