Llama (Large Language Model Meta AI), Meta's family of large language models:
llama-65b
llama-30b
llama-13b
llama-7b
Llama-2-7b-hf
Llama-2-13b-hf
Llama-2-70b-hf
Meta-Llama-3-8B
Meta-Llama-3-70B
Llama-3.1-405B
Selected model: meta-llama/Llama-3.1-405B
📊 Model Parameters
Total Parameters: 405,853,388,800
Context Length: 131,072
Hidden Size: 16,384
Layers: 126
Attention Heads: 128
KV Heads: 8
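
The total above can be re-derived from the configuration values further down the card: untied input and output embeddings, 126 decoder layers with grouped-query attention, and a final norm. A minimal sketch of the arithmetic (variable names are illustrative, not from any library):

```python
# Re-derive the parameter count from the configuration values on this card.
vocab, hidden, ffn = 128_256, 16_384, 53_248
layers, heads, kv_heads, head_dim = 126, 128, 8, 128

embed = vocab * hidden                        # input embedding table
lm_head = vocab * hidden                      # output head (embeddings are not tied)
attn = 2 * hidden * heads * head_dim          # Q and O projections
attn += 2 * hidden * kv_heads * head_dim      # K and V, narrowed by GQA
mlp = 3 * hidden * ffn                        # gate, up, and down projections
norms = 2 * hidden                            # two RMSNorm weight vectors per layer

total = embed + lm_head + layers * (attn + mlp + norms) + hidden  # + final norm
print(f"{total:,}")  # 405,853,388,800
```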

💾 Memory Requirements
FP32 (full precision): 1,511.92 GiB
FP16 (half precision): 755.96 GiB
INT8 (quantized): 377.98 GiB
INT4 (quantized): 188.99 GiB
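
Each figure is simply parameter count × bytes per weight, divided by 2^30 (the values are binary gibibytes), and covers weights only; serving also needs headroom for activations and the KV cache below. A quick check:

```python
params = 405_853_388_800
for name, bytes_per_param in [("FP32", 4), ("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    print(f"{name}: {params * bytes_per_param / 2**30:,.2f} GiB")
# FP32: 1,511.92  FP16: 755.96  INT8: 377.98  INT4: 188.99
```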

🔑 KV Cache (Inference)
Per Token (FP16): 516.10 KB
Max Context (FP32): 126.00 GiB
Max Context (FP16): 63.00 GiB
Max Context (INT8): 31.50 GiB
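
The per-token figure is two cached tensors (K and V) per layer, each KV heads × head dimension wide; grouped-query attention (8 KV heads instead of 128) makes the cache 16× smaller than full multi-head caching would. A sketch of the arithmetic, per sequence (batching multiplies it):

```python
layers, kv_heads, head_dim, context = 126, 8, 128, 131_072

per_token = 2 * layers * kv_heads * head_dim * 2  # K and V, FP16 = 2 bytes/element
print(f"{per_token:,} bytes")                     # 516,096 bytes ≈ 516.10 KB

full_context = per_token * context / 2**30
print(f"{full_context:.2f} GiB")                  # 63.00 GiB at FP16
```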

⚙️ Model Configuration

Core Architecture
Vocabulary Size: 128,256
Hidden Size: 16,384
FFN Intermediate Size: 53,248
Number of Layers: 126
Attention Heads: 128
KV Heads: 8
Head Dimension: 128

Context & Position
Max Context Length: 131,072
RoPE Base Frequency: 500,000.0
RoPE Scaling: llama3 (factor: 8.0)
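
The llama3 scaling type rescales the RoPE inverse frequencies by wavelength: components whose wavelength exceeds the original context window are divided by the factor, short-wavelength components pass through unchanged, and a smooth ramp interpolates between the two bands. The cutoffs below (low_freq_factor=1.0, high_freq_factor=4.0, original context 8,192) are assumptions taken from the published Llama 3.1 reference code, not from this card; a sketch:

```python
import math

def apply_llama3_scaling(inv_freqs, factor=8.0, low_freq_factor=1.0,
                         high_freq_factor=4.0, old_context_len=8192):
    low_wavelen = old_context_len / low_freq_factor    # fully scaled beyond this
    high_wavelen = old_context_len / high_freq_factor  # untouched below this
    scaled = []
    for f in inv_freqs:
        wavelen = 2 * math.pi / f
        if wavelen < high_wavelen:
            scaled.append(f)                 # high frequency: keep as-is
        elif wavelen > low_wavelen:
            scaled.append(f / factor)        # low frequency: divide by the factor
        else:                                # medium band: smooth interpolation
            smooth = (old_context_len / wavelen - low_freq_factor) / (
                high_freq_factor - low_freq_factor)
            scaled.append((1 - smooth) * f / factor + smooth * f)
    return scaled
```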

Attention Configuration
Attention Bias: No
Attention Dropout: 0%
MLP Bias: No
Tied Embeddings: No

Activation & Normalization
Activation Function: silu
RMSNorm Epsilon: 1e-05

Special Tokens
BOS Token ID: 128,000
Pad Token ID: Not set
EOS Token ID: 128,001

Data Type
Model Dtype: bfloat16
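
Every value above mirrors a field of the model's published Hugging Face configuration, so the card can be cross-checked programmatically (the repository is gated, so this assumes the license has been accepted):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("meta-llama/Llama-3.1-405B")
print(cfg.hidden_size, cfg.num_hidden_layers,
      cfg.num_attention_heads, cfg.num_key_value_heads)  # 16384 126 128 8
print(cfg.rope_scaling)  # {'rope_type': 'llama3', 'factor': 8.0, ...}
```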

Layer Types: Attention, MLP/FFN, Normalization, Embedding