llama-30b (huggyllama/llama-30b)
LLaMA: Meta's Large Language Model Meta AI

Model family:
llama-65b
llama-30b
llama-13b
llama-7b
Llama-2-7b-hf
Llama-2-13b-hf
Llama-2-70b-hf
Meta-Llama-3-8B
Meta-Llama-3-70B
Llama-3.1-405B
📊 Model Parameters
Total Parameters: 32,528,943,616
Context Length: 2,048 tokens
Hidden Size: 6,656
Layers: 60
Attention Heads: 52
KV Heads: 52
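
The total follows from the configuration values further down. A minimal sketch, assuming the standard LLaMA-1 layer layout (untied embeddings, bias-free q/k/v/o projections, a three-matrix SwiGLU MLP, and RMSNorm weight vectors), that reproduces the figure:

```python
# Sketch of the parameter count, assuming the standard LLaMA-1 layout:
# untied embeddings, bias-free q/k/v/o projections, a 3-matrix SwiGLU MLP,
# and RMSNorm (two weight vectors per layer plus one final norm).

vocab_size = 32_000
hidden_size = 6_656
intermediate_size = 17_920
num_layers = 60

embedding = vocab_size * hidden_size          # input token embedding
attention = 4 * hidden_size * hidden_size     # q, k, v, o projections
mlp = 3 * hidden_size * intermediate_size     # gate, up, down projections
norms = 2 * hidden_size                       # input + post-attention RMSNorm

per_layer = attention + mlp + norms
lm_head = vocab_size * hidden_size            # untied output projection
total = embedding + num_layers * per_layer + hidden_size + lm_head
print(f"{total:,}")  # 32,528,943,616
```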
💾 Memory Requirements (weights only)
FP32 (full precision): 121.18 GiB
FP16 (half precision): 60.59 GiB
INT8 (quantized): 30.29 GiB
INT4 (quantized): 15.15 GiB
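
These weight-only figures are just the parameter count times the bytes per parameter; a quick check, with INT4 approximated as half a byte per parameter:

```python
# Weight memory = parameter count × bytes per parameter, reported in GiB.
# INT4 is approximated as 0.5 bytes/param; real quantized checkpoints carry
# some extra overhead for scales and zero-points, ignored here.

total_params = 32_528_943_616
for dtype, nbytes in [("FP32", 4.0), ("FP16", 2.0), ("INT8", 1.0), ("INT4", 0.5)]:
    print(f"{dtype}: {total_params * nbytes / 2**30:.2f} GiB")
# FP32: 121.18 GiB / FP16: 60.59 GiB / INT8: 30.29 GiB / INT4: 15.15 GiB
```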
🔑 KV Cache (Inference)
Per Token (FP16): 1.52 MiB
Max Context (2,048 tokens), FP32: 6.09 GiB
Max Context, FP16: 3.05 GiB
Max Context, INT8: 1.52 GiB
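
The cache sizes follow the usual formula, 2 (keys and values) × layers × KV heads × head dimension × bytes per element; a sketch that reproduces them:

```python
# KV cache per token = 2 (K and V) × layers × KV heads × head dim × bytes/elem.

num_layers, num_kv_heads, head_dim = 60, 52, 128
context_length = 2_048

def kv_bytes_per_token(bytes_per_elem):
    return 2 * num_layers * num_kv_heads * head_dim * bytes_per_elem

print(f"per token, FP16: {kv_bytes_per_token(2) / 2**20:.2f} MiB")  # 1.52 MiB
for dtype, nbytes in [("FP32", 4), ("FP16", 2), ("INT8", 1)]:
    gib = kv_bytes_per_token(nbytes) * context_length / 2**30
    print(f"max context, {dtype}: {gib:.2f} GiB")  # 6.09 / 3.05 / 1.52 GiB
```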
⚙️ Model Configuration

Core Architecture
Vocabulary Size: 32,000
Hidden Size: 6,656
FFN Intermediate Size: 17,920
Number of Layers: 60
Attention Heads: 52
KV Heads: 52
Head Dimension: 128

Context & Position
Max Context Length: 2,048
RoPE Base Frequency: 10,000.0
RoPE Scaling: Not set
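
With scaling unset, the rotary embedding uses plain inverse frequencies derived from the base. A small sketch of how the per-pair frequencies fall out of the base and the head dimension of 128:

```python
# RoPE inverse frequencies for base 10000.0 and head_dim 128: dimension
# pair (2i, 2i+1) rotates by angle position * base**(-2 * i / head_dim).

base, head_dim = 10000.0, 128
inv_freq = [base ** (-2 * i / head_dim) for i in range(head_dim // 2)]
print(inv_freq[0], inv_freq[-1])  # 1.0 (fastest pair) ... ~1.15e-4 (slowest)
```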
Attention Configuration
Attention Bias: No
Attention Dropout: 0%
MLP Bias: No
Tied Embeddings: No

Activation & Normalization
Activation Function: silu
RMSNorm Epsilon: 1e-06

Special Tokens
BOS Token ID: 1
Pad Token ID: 0
EOS Token ID: 2

Data Type
Model Dtype: float16
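
These fields map onto a Hugging Face LlamaConfig; a hedged sketch of the equivalent configuration object (field names per recent versions of the transformers library):

```python
# The configuration above expressed as a transformers LlamaConfig.
# Field names follow recent versions of the transformers library.

from transformers import LlamaConfig

config = LlamaConfig(
    vocab_size=32_000,
    hidden_size=6_656,
    intermediate_size=17_920,
    num_hidden_layers=60,
    num_attention_heads=52,
    num_key_value_heads=52,
    max_position_embeddings=2_048,
    rope_theta=10_000.0,
    rope_scaling=None,
    attention_bias=False,
    attention_dropout=0.0,
    mlp_bias=False,
    tie_word_embeddings=False,
    hidden_act="silu",
    rms_norm_eps=1e-6,
    bos_token_id=1,
    eos_token_id=2,
    pad_token_id=0,
)
```

The same values should come back from AutoConfig.from_pretrained("huggyllama/llama-30b").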
Layer Types: Attention, MLP/FFN, Normalization, Embedding