meta-llama/Meta-Llama-3-70B
A 70B-parameter model in Meta's Llama (Large Language Model Meta AI) family.
📊 Model Parameters
Total Parameters: 70,553,706,496
Context Length: 8,192
Hidden Size: 8,192
Layers: 80
Attention Heads: 64
KV Heads: 8
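The total follows directly from the configuration values further down. As a sanity check, here is a minimal sketch of that arithmetic, assuming the standard Llama decoder layout (untied input/output embeddings, grouped-query attention, gated SwiGLU MLP, per-layer RMSNorm weights):

```python
# Parameter count for Meta-Llama-3-70B, reconstructed from its config values.
vocab, hidden, ffn = 128_256, 8_192, 28_672
layers, heads, kv_heads, head_dim = 80, 64, 8, 128

embed = vocab * hidden                       # input embedding table
lm_head = vocab * hidden                     # output head (embeddings are untied)
attn = (hidden * heads * head_dim            # q_proj
        + 2 * hidden * kv_heads * head_dim   # k_proj and v_proj (GQA: 8 KV heads)
        + heads * head_dim * hidden)         # o_proj
mlp = 3 * hidden * ffn                       # gate_proj, up_proj, down_proj
norms = 2 * hidden                           # input and post-attention RMSNorm

total = embed + lm_head + layers * (attn + mlp + norms) + hidden  # + final norm
print(f"{total:,}")  # 70,553,706,496
```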
💾 Memory Requirements
Weights only; GB here means 2^30 bytes:
FP32 (Full): 262.83 GB
FP16 (Half): 131.42 GB
INT8 (Quantized): 65.71 GB
INT4 (Quantized): 32.85 GB
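These figures are simply the parameter count times bytes per parameter; a quick check, assuming idealized quantization with no overhead (real INT8/INT4 checkpoints also store scale and zero-point tensors):

```python
params = 70_553_706_496
for name, bytes_per_param in [("FP32", 4), ("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    print(f"{name}: {params * bytes_per_param / 2**30:.2f} GB")
# FP32: 262.83 GB, FP16: 131.42 GB, INT8: 65.71 GB, INT4: 32.85 GB
```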
🔑 KV Cache (Inference)
Per Token (FP16): 327.68 KB
Max Context (FP32): 5.00 GB
Max Context (FP16): 2.50 GB
Max Context (INT8): 1.25 GB
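The per-token figure reflects grouped-query attention: only the 8 KV heads are cached, not all 64 query heads, which shrinks the cache eightfold versus standard multi-head attention. A sketch of the arithmetic:

```python
layers, kv_heads, head_dim, context = 80, 8, 128, 8_192

# One K and one V vector per layer; FP16 uses 2 bytes per element.
per_token = 2 * layers * kv_heads * head_dim * 2   # 327,680 bytes = 327.68 KB
full_context = per_token * context / 2**30         # bytes -> GB (2^30)
print(f"{per_token / 1e3:.2f} KB per token, {full_context:.2f} GB at max context")
```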
⚙️ Model Configuration
Core Architecture
Vocabulary Size: 128,256
Hidden Size: 8,192
FFN Intermediate Size: 28,672
Number of Layers: 80
Attention Heads: 64
KV Heads: 8
Head Dimension: 128
Context & Position
Max Context Length: 8,192
RoPE Base Frequency: 500000.0
RoPE Scaling: Not set
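The base frequency of 500000.0 (up from 10000.0 in Llama 2) sets the wavelengths of the rotary position embeddings. A minimal sketch of how the per-dimension frequencies follow from it, using the standard RoPE formulation:

```python
import math

head_dim, theta = 128, 500_000.0

# One inverse frequency per pair of head dimensions: theta^(-2i / head_dim).
inv_freq = [theta ** (-2 * i / head_dim) for i in range(head_dim // 2)]

# A token at position p is rotated by angle p * inv_freq[i] in dimension pair i;
# a larger theta stretches the slowest wavelengths, easing longer contexts.
print(f"fastest wavelength: {2 * math.pi / inv_freq[0]:.1f} tokens")
print(f"slowest wavelength: {2 * math.pi / inv_freq[-1]:,.0f} tokens")
```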
Attention Configuration
Attention Bias: No
Attention Dropout: 0%
MLP Bias: No
Tied Embeddings: No
Activation & Normalization
Activation Function: silu
RMSNorm Epsilon: 1e-05
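Both choices appear in every decoder layer: RMSNorm instead of LayerNorm, and a SiLU-gated (SwiGLU) feed-forward block. A minimal sketch of each, written with plain Python lists for clarity rather than as the actual tensor implementation:

```python
import math

def rmsnorm(x, weight, eps=1e-05):
    # RMSNorm: scale by the root mean square; no mean subtraction, no bias.
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [w * v / rms for w, v in zip(weight, x)]

def silu(v):
    # SiLU (swish): v * sigmoid(v).
    return v / (1.0 + math.exp(-v))

def swiglu_mlp(x, gate_w, up_w, down_w):
    # Llama MLP: down_proj(silu(gate_proj(x)) * up_proj(x)), all bias-free.
    gate = [silu(sum(w * v for w, v in zip(row, x))) for row in gate_w]
    up = [sum(w * v for w, v in zip(row, x)) for row in up_w]
    return [sum(w * h for w, h in zip(row, [g * u for g, u in zip(gate, up)]))
            for row in down_w]
```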
Special Tokens
BOS Token ID: 128000
Pad Token ID: Not set
EOS Token ID: 128001
Data Type
Model Dtype: bfloat16
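Every value in this section can be read straight from the model's Hugging Face config. A sketch using the transformers library (the repository is gated, so this assumes you have accepted the Llama 3 license and authenticated):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("meta-llama/Meta-Llama-3-70B")
print(cfg.vocab_size, cfg.hidden_size, cfg.intermediate_size)    # 128256 8192 28672
print(cfg.num_hidden_layers, cfg.num_attention_heads,
      cfg.num_key_value_heads)                                   # 80 64 8
print(cfg.max_position_embeddings, cfg.rope_theta)               # 8192 500000.0
print(cfg.torch_dtype)                                           # bfloat16
```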
Layer Types: Attention, MLP/FFN, Normalization, Embedding