huggyllama/llama-65b
Llama: Meta's Large Language Model Meta AI

Models in the Llama family:
llama-65b · llama-30b · llama-13b · llama-7b · Llama-2-7b-hf · Llama-2-13b-hf · Llama-2-70b-hf · Meta-Llama-3-8B · Meta-Llama-3-70B · Llama-3.1-405B
📊 Model Parameters
Total Parameters: 65,285,660,672
Context Length: 2,048
Hidden Size: 8,192
Layers: 80
Attention Heads: 64
KV Heads: 64
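The parameter total can be reproduced from the architecture figures above. Below is a minimal sketch, assuming the standard LLaMA layout (untied input embedding and LM head, bias-free attention and MLP, SwiGLU MLP with gate/up/down projections, and one RMSNorm weight vector per sub-layer); the variable names are illustrative, not from any library.

```python
# Reproduce the 65,285,660,672 parameter count from the config values above.
# Assumes the standard LLaMA-65B layout: untied embeddings, no attention/MLP
# bias, SwiGLU MLP (gate + up + down projections), RMSNorm per sub-layer.

vocab_size = 32_000
hidden_size = 8_192
intermediate_size = 22_016
num_layers = 80

embeddings = 2 * vocab_size * hidden_size        # input embedding + LM head (untied)
attention = 4 * hidden_size * hidden_size        # Q, K, V, O projections
mlp = 3 * hidden_size * intermediate_size        # gate, up, down projections
norms = 2 * hidden_size                          # pre-attention + pre-MLP RMSNorm

per_layer = attention + mlp + norms
total = embeddings + num_layers * per_layer + hidden_size  # + final RMSNorm

print(f"{total:,}")  # 65,285,660,672
```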
💾 Memory Requirements (weights only, binary units)
FP32 (Full): 243.21 GiB
FP16 (Half): 121.60 GiB
INT8 (Quantized): 60.80 GiB
INT4 (Quantized): 30.40 GiB
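These figures follow directly from the parameter count: bytes per parameter times total parameters, converted to GiB. A minimal sketch (the byte widths are the standard ones for each dtype; real deployments need extra headroom for activations, the KV cache, and framework overhead):

```python
# Weights-only memory: parameters × bytes-per-parameter, reported in GiB (2**30 bytes).
total_params = 65_285_660_672
bytes_per_param = {"FP32": 4, "FP16": 2, "INT8": 1, "INT4": 0.5}

for dtype, nbytes in bytes_per_param.items():
    gib = total_params * nbytes / 2**30
    print(f"{dtype}: {gib:.2f} GiB")
# FP32: 243.21 GiB, FP16: 121.60 GiB, INT8: 60.80 GiB, INT4: 30.40 GiB
```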
🔑 KV Cache (Inference)
Per Token (FP16): 2.50 MiB
Max Context (2,048 tokens), FP32: 10.00 GiB
Max Context, FP16: 5.00 GiB
Max Context, INT8: 2.50 GiB
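The per-token figure is 2 (keys and values) × layers × KV heads × head dimension × bytes per element; with 64 KV heads equal to the 64 attention heads, this model uses full multi-head attention, so no grouped-query savings apply. A minimal sketch:

```python
# KV cache: 2 tensors (K and V) per layer, each [num_kv_heads, head_dim] per token.
num_layers, num_kv_heads, head_dim = 80, 64, 128
context_length = 2_048

for dtype, nbytes in {"FP32": 4, "FP16": 2, "INT8": 1}.items():
    per_token = 2 * num_layers * num_kv_heads * head_dim * nbytes
    full_context = per_token * context_length
    print(f"{dtype}: {per_token / 2**20:.2f} MiB/token, "
          f"{full_context / 2**30:.2f} GiB at max context")
# FP16: 2.50 MiB/token, 5.00 GiB at max context
```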
⚙️ Model Configuration

Core Architecture
Vocabulary Size: 32,000
Hidden Size: 8,192
FFN Intermediate Size: 22,016
Number of Layers: 80
Attention Heads: 64
KV Heads: 64
Head Dimension: 128

Context & Position
Max Context Length: 2,048
RoPE Base Frequency: 10000.0
RoPE Scaling: Not set

Attention Configuration
Attention Bias: No
Attention Dropout: 0%
MLP Bias: No
Tied Embeddings: No

Activation & Normalization
Activation Function: silu
RMSNorm Epsilon: 1e-05

Special Tokens
BOS Token ID: 1
Pad Token ID: 0
EOS Token ID: 2

Data Type
Model Dtype: float16
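These fields can also be read programmatically from the checkpoint's Hugging Face config. A minimal sketch using the transformers library (assuming it is installed and the huggyllama/llama-65b checkpoint is reachable):

```python
# Read the configuration fields above directly from the Hugging Face Hub.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("huggyllama/llama-65b")

print(config.vocab_size)               # 32000
print(config.hidden_size)              # 8192
print(config.intermediate_size)        # 22016
print(config.num_hidden_layers)        # 80
print(config.num_attention_heads)      # 64
print(config.max_position_embeddings)  # 2048
print(config.rms_norm_eps)             # 1e-05
print(config.hidden_act)               # silu
print(config.torch_dtype)              # torch.float16
```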
Layer Types: Attention · MLP/FFN · Normalization · Embedding