Meta

llama-3.2-3b-instruct

Estimated pricing · Medium memory

llama-3.2-3b-instruct is Meta's medium-memory model. This page shows current pricing, an interactive cost calculator, and a side-by-side comparison with similar models.

Input

$0.30/1M tokens

Output

$0.50/1M tokens

Cached

—

Batch

—

Calculate your llama-3.2-3b-instruct bill.

Adjust the workload below and watch the monthly cost update in real time.

Workload presets: 1,000 / 10,000 / 50,000 / 250,000 / 1M / 10M conversations per month.
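The calculator's math reduces to a simple linear formula over the two per-token rates listed above. A minimal Python sketch (the function name and parameters are illustrative, not part of any API):

```python
# Monthly-cost estimate for llama-3.2-3b-instruct, using the rates on this
# page: $0.30 per 1M input tokens and $0.50 per 1M output tokens.

INPUT_RATE = 0.30 / 1_000_000   # dollars per input (prompt) token
OUTPUT_RATE = 0.50 / 1_000_000  # dollars per output (reply) token

def monthly_cost(conversations: int, prompt_tokens: int, reply_tokens: int) -> float:
    """Estimated monthly bill in dollars for a steady workload."""
    input_cost = conversations * prompt_tokens * INPUT_RATE
    output_cost = conversations * reply_tokens * OUTPUT_RATE
    return input_cost + output_cost

# The 50,000-conversation preset with 1,500-token prompts and 800-token replies
print(f"${monthly_cost(50_000, 1_500, 800):.2f}/month")  # roughly the $43/month quoted below
```

Because both rates are per-token, cost scales linearly: doubling conversations, prompt length, or reply length doubles the corresponding share of the bill.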

Technical specifications

llama-3.2-3b-instruct at a glance.

Memory

80,000

tokens

Max reply

—

tokens

Memory tier

Medium

a long report or a codebase file

Tokenizer

default

Released

—

Training cutoff

—

Availability

Estimated

Status

active

What it can do

Capabilities & limits.

  • Deep step-by-step thinking: yes
  • Strict JSON output: yes
  • Streams replies: yes
  • Fine-tunable on your data: yes
  • Understands images: no
  • Uses tools / calls functions: no

When to pick llama-3.2-3b-instruct

  • High-volume workloads where unit cost matters.

When to look elsewhere

  • Your workload involves images; pick a vision-capable model instead.
  • You need tool-use / function calling for agent workflows.

FAQ

llama-3.2-3b-instruct: the questions we see most.

Pricing, capabilities, and alternatives, generated from the same data that powers the calculator above.

How much does llama-3.2-3b-instruct cost?
At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, llama-3.2-3b-instruct costs roughly $43 per month. Input is $0.30/1M tokens and output is $0.50/1M tokens.

How much context does it hold?
llama-3.2-3b-instruct has an 80,000-token context window (medium memory: a long report or a codebase file). At the common heuristic of roughly 0.75 English words per token, that is about 60,000 words of input and history in a single call.

What are the closest alternatives?
Models in a similar class include llama-3-8b-instruct, llama-3.2-1b-instruct, and llama-3.1-8b-instruct. The "Similar models" section below this FAQ links to each.

Where does the price come from?
llama-3.2-3b-instruct is an open-weight model, so the price shown is an estimate from a common hosting provider (Together, Fireworks, Replicate). We use the cheapest public hosting price for the model and note the source on the page.
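The tokens-to-words estimate above rests on the rough heuristic of about 1.33 tokens per English word; the real ratio depends on the tokenizer and the text. A small sketch of a context-budget check under that assumption (the function name is illustrative):

```python
# Context-budget check for llama-3.2-3b-instruct's 80,000-token window.
# Assumes the common heuristic of ~1.33 tokens per English word; the exact
# ratio depends on the tokenizer and the text, so treat results as rough.

CONTEXT_WINDOW = 80_000   # tokens, from the spec table above
TOKENS_PER_WORD = 4 / 3   # rough heuristic, not a property of the model

def fits_in_context(words: int) -> bool:
    """True if a prompt of `words` English words likely fits in one call."""
    return words * TOKENS_PER_WORD <= CONTEXT_WINDOW

print(fits_in_context(55_000))  # comfortably within the window
print(fits_in_context(70_000))  # over budget: truncate, summarize, or chunk
```

For anything near the limit, count actual tokens with the model's own tokenizer rather than trusting the heuristic.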

Still unsure?

Compare llama-3.2-3b-instruct against 100+ other models.

Open the full wizard: pick a use case, set your usage, and see side-by-side monthly costs in under a minute.