Llama (Large Language Model Meta AI), Meta's family of large language models:
llama-65b
llama-30b
llama-13b
llama-7b
Llama-2-7b-hf
Llama-2-13b-hf
Llama-2-70b-hf
Meta-Llama-3-8B
Meta-Llama-3-70B
Llama-3.1-405B
Selected model: meta-llama/Meta-Llama-3-8B
📊 Model Parameters
Total Parameters: 8,030,261,248
Context Length: 8,192
Hidden Size: 4,096
Layers: 32
Attention Heads: 32
KV Heads: 8
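
The total above can be reproduced from the configuration values listed under ⚙️ Model Configuration below. A minimal sketch, assuming the standard Llama 3 block layout (grouped-query attention projections, a gated SiLU MLP, two RMSNorms per block, and untied input/output embeddings):

```python
# Parameter count from the config values below; shapes assume the
# standard Llama 3 block (GQA attention, gated MLP, untied embeddings).
vocab, hidden, layers = 128_256, 4_096, 32
heads, kv_heads, head_dim = 32, 8, 128
ffn = 14_336

attn = (hidden * heads * head_dim            # Q projection
        + 2 * hidden * kv_heads * head_dim   # K and V projections (GQA)
        + heads * head_dim * hidden)         # output projection
mlp = 3 * hidden * ffn                       # gate, up, and down projections
norms = 2 * hidden                           # pre-attention and pre-MLP RMSNorm

total = (vocab * hidden                      # input embedding table
         + layers * (attn + mlp + norms)     # 32 transformer blocks
         + hidden                            # final RMSNorm
         + vocab * hidden)                   # untied LM head
print(f"{total:,}")  # 8,030,261,248
```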
💾 Memory Requirements (weights only)
FP32 (Full): 29.92 GB
FP16 (Half): 14.96 GB
INT8 (Quantized): 7.48 GB
INT4 (Quantized): 3.74 GB
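
These figures are simply parameter count × bytes per weight, divided by 1024³ (so the "GB" shown are binary gigabytes); they exclude activations and the KV cache. A quick check:

```python
params = 8_030_261_248
for dtype, nbytes in [("FP32", 4), ("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    print(f"{dtype}: {params * nbytes / 1024**3:.2f} GB")
# FP32: 29.92 GB  FP16: 14.96 GB  INT8: 7.48 GB  INT4: 3.74 GB
```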
🔑 KV Cache (Inference)
Per Token (FP16): 131.07 KB
Max Context FP32: 2.00 GB
Max Context FP16: 1.00 GB
Max Context INT8: 512.0 MB
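
Per token, the cache stores one key vector and one value vector per layer per KV head, so grouped-query attention (8 KV heads instead of 32) cuts the cache to a quarter of the full multi-head size. Note the mixed units in the table: the per-token figure uses decimal kilobytes, while the max-context figures are binary. A sketch of the arithmetic:

```python
layers, kv_heads, head_dim, context = 32, 8, 128, 8_192

def kv_cache_bytes(num_tokens: int, dtype_bytes: float) -> float:
    # One key and one value vector per layer per KV head, per token.
    return 2 * layers * kv_heads * head_dim * dtype_bytes * num_tokens

print(kv_cache_bytes(1, 2) / 1000)           # 131.072 -> 131.07 KB/token (FP16)
print(kv_cache_bytes(context, 4) / 1024**3)  # 2.0 GB at max context (FP32)
print(kv_cache_bytes(context, 2) / 1024**3)  # 1.0 GB (FP16)
print(kv_cache_bytes(context, 1) / 1024**2)  # 512.0 MB (INT8)
```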
⚙️ Model Configuration

Core Architecture
Vocabulary Size: 128,256
Hidden Size: 4,096
FFN Intermediate Size: 14,336
Number of Layers: 32
Attention Heads: 32
KV Heads: 8
Head Dimension: 128
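
These counts imply grouped-query attention: the 32 query heads share 8 KV heads in groups of 4, and heads × head dimension recovers the hidden size. A consistency check:

```python
hidden, heads, kv_heads, head_dim = 4_096, 32, 8, 128
assert heads * head_dim == hidden   # query heads span the full hidden size
assert heads % kv_heads == 0        # query heads divide evenly over KV heads
print(heads // kv_heads)            # 4 query heads share each KV head
```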
Context & Position
Max Context Length: 8,192
RoPE Base Frequency: 500,000.0
RoPE Scaling: Not set
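
The base frequency of 500,000 (raised from 10,000 in earlier Llama generations) sets the per-dimension rotation rates of the rotary position embeddings; with RoPE scaling unset, the default frequencies are used across the full 8,192-token window. A sketch of the standard inverse-frequency computation:

```python
base, head_dim = 500_000.0, 128
# Standard RoPE: one rotation frequency per pair of head dimensions.
inv_freq = [base ** (-2 * i / head_dim) for i in range(head_dim // 2)]
print(inv_freq[0])   # 1.0 (fastest-rotating pair)
print(inv_freq[-1])  # ~2.5e-06 (slowest-rotating pair)
```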
Attention Configuration
Attention Bias: No
Attention Dropout: 0%
MLP Bias: No
Tied Embeddings: No
Activation & Normalization
Activation Function: silu
RMSNorm Epsilon: 1e-05
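
Both rows correspond to standard formulas: SiLU is x·σ(x), and RMSNorm scales each vector by the reciprocal of its root mean square (with the epsilon above for numerical stability) times a learned per-dimension gain. A minimal sketch:

```python
import math

EPS = 1e-05  # RMSNorm epsilon from the config

def silu(x: float) -> float:
    # SiLU (a.k.a. swish): x * sigmoid(x)
    return x * (1.0 / (1.0 + math.exp(-x)))

def rmsnorm(xs: list[float], weight: list[float]) -> list[float]:
    # Scale by 1/RMS(x), then by the learned per-dimension gain.
    rms = math.sqrt(sum(x * x for x in xs) / len(xs) + EPS)
    return [w * x / rms for w, x in zip(weight, xs)]

print(silu(1.0))                                   # ~0.731
print(rmsnorm([1.0, -2.0, 3.0], [1.0, 1.0, 1.0]))  # unit-RMS output
```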
Special Tokens
BOS Token ID: 128,000
Pad Token ID: Not set
EOS Token ID: 128,001
Data Type
Model Dtype: bfloat16
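
Every value in this section comes from the model's config.json. Assuming transformers is installed and you have access to the gated meta-llama repository (the checkpoints require accepting Meta's license), the same fields can be read directly:

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("meta-llama/Meta-Llama-3-8B")
print(cfg.vocab_size, cfg.hidden_size, cfg.intermediate_size)    # 128256 4096 14336
print(cfg.num_hidden_layers, cfg.num_attention_heads,
      cfg.num_key_value_heads)                                   # 32 32 8
print(cfg.max_position_embeddings, cfg.rope_theta)               # 8192 500000.0
print(cfg.rms_norm_eps, cfg.hidden_act, cfg.torch_dtype)         # 1e-05 silu torch.bfloat16
```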
Layer Types (legend): Attention, MLP/FFN, Normalization, Embedding