meta-llama/Llama-2-70b-hf

📊 Model Parameters

Total Parameters 68,976,648,192
Context Length 4,096
Hidden Size 8192
Layers 80
Attention Heads 64
KV Heads 8
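
The total above can be reproduced from the architecture values. A minimal sketch, assuming the standard Llama-style per-layer breakdown (untied embeddings, no biases); the names are illustrative, not from any library:

```python
VOCAB = 32_000
HIDDEN = 8_192
INTERMEDIATE = 28_672
LAYERS = 80
HEADS = 64
KV_HEADS = 8
HEAD_DIM = HIDDEN // HEADS  # 128

# Attention: Q and O project hidden -> hidden; K and V project
# hidden -> kv_heads * head_dim (grouped-query attention).
attn = 2 * HIDDEN * HIDDEN + 2 * HIDDEN * (KV_HEADS * HEAD_DIM)

# SwiGLU MLP: gate and up (hidden -> intermediate), down (intermediate -> hidden).
mlp = 3 * HIDDEN * INTERMEDIATE

# Two RMSNorm weight vectors per layer.
norms = 2 * HIDDEN

per_layer = attn + mlp + norms

# Input embedding plus untied LM head, plus the final RMSNorm.
total = LAYERS * per_layer + 2 * VOCAB * HIDDEN + HIDDEN
print(f"{total:,}")  # 68,976,648,192
```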

💾 Memory Requirements

FP32 (Full) 256.96 GiB
FP16 (Half) 128.48 GiB
INT8 (Quantized) 64.24 GiB
INT4 (Quantized) 32.12 GiB
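
Weight memory is simply parameter count times bytes per element; a quick sketch matching the figures above (which divide by 1024³, i.e. GiB):

```python
PARAMS = 68_976_648_192
BYTES_PER_ELEM = {"FP32": 4, "FP16": 2, "INT8": 1, "INT4": 0.5}

for name, b in BYTES_PER_ELEM.items():
    print(f"{name}: {PARAMS * b / 1024**3:.2f} GiB")
# FP32: 256.96 GiB, FP16: 128.48 GiB, INT8: 64.24 GiB, INT4: 32.12 GiB
```

Note these are weights only; inference also needs the KV cache below plus activation and framework overhead.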

🔑 KV Cache (Inference)

Per Token (FP16) 320 KiB (327,680 bytes)
Max Context FP32 2.50 GiB
Max Context FP16 1.25 GiB
Max Context INT8 640.0 MiB
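
These figures follow from layers × KV heads × head dimension; a sketch, where the factor of 2 covers one key and one value tensor per layer:

```python
LAYERS, KV_HEADS, HEAD_DIM, CONTEXT = 80, 8, 128, 4_096

def kv_cache_bytes(tokens: int, bytes_per_elem: int) -> int:
    # 2 = one K and one V entry per layer, per KV head, per token
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * bytes_per_elem * tokens

print(kv_cache_bytes(1, 2))                   # 327,680 bytes per token (FP16)
print(kv_cache_bytes(CONTEXT, 2) / 1024**3)   # ~1.25 GiB at full context
```

Because grouped-query attention caches only 8 KV heads instead of 64, the cache is 8× smaller than it would be with full multi-head attention.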

⚙️ Model Configuration

Core Architecture

Vocabulary Size 32,000
Hidden Size 8,192
FFN Intermediate Size 28,672
Number of Layers 80
Attention Heads 64
KV Heads 8
Head Dimension 128
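
A minimal sketch of how grouped-query attention reconciles 64 query heads with 8 KV heads: each KV head serves heads ÷ kv_heads = 8 query heads, so the cached K/V tensors are repeated along the head axis before the attention product (illustrative PyTorch, mirroring the usual repeat-KV pattern rather than any specific library helper):

```python
import torch

heads, kv_heads, head_dim, seq = 64, 8, 128, 16
group = heads // kv_heads  # 8 query heads per KV head

k = torch.randn(1, kv_heads, seq, head_dim)     # cached keys: (1, 8, 16, 128)
k_expanded = k.repeat_interleave(group, dim=1)  # (1, 64, 16, 128), matches queries
```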

Context & Position

Max Context Length 4,096
RoPE Base Frequency 10000.0
RoPE Scaling Not set
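
With no scaling configured, positions are encoded with plain rotary embeddings. A sketch of the rotation frequencies implied by the base above, assuming the head dimension of 128 from the core architecture:

```python
import torch

base, head_dim = 10000.0, 128
# One inverse frequency per rotated pair of channels.
inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))

positions = torch.arange(4_096).float()
angles = torch.outer(positions, inv_freq)  # (4096, 64) rotation angles
cos, sin = angles.cos(), angles.sin()      # applied pairwise to Q and K
```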

Attention Configuration

Attention Bias No
Attention Dropout 0%
MLP Bias No
Tied Embeddings No

Activation & Normalization

Activation Function silu
RMSNorm Epsilon 1e-05
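
A minimal sketch of the two primitives named above, using the listed epsilon; SiLU gates the SwiGLU MLP, and RMSNorm replaces LayerNorm throughout:

```python
import torch

def silu(x: torch.Tensor) -> torch.Tensor:
    # SiLU (a.k.a. swish): x * sigmoid(x)
    return x * torch.sigmoid(x)

def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Normalize by root-mean-square over the hidden dimension, then
    # scale by a learned per-channel weight (no bias, no mean-centering).
    rms = torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + eps)
    return x * rms * weight
```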

Special Tokens

BOS Token ID 1
EOS Token ID 2
Pad Token ID Not set
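
These IDs can be confirmed from the tokenizer itself; a sketch using the Hugging Face transformers API (access to the gated meta-llama repo assumed):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-70b-hf")
print(tok.bos_token_id, tok.eos_token_id, tok.pad_token_id)  # 1 2 None
```

Since no pad token is set, batched inference code commonly reuses the EOS token for padding.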

Data Type

Model Dtype float16
Layer Types: Attention, MLP/FFN, Normalization, Embedding
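
All of the configuration values on this page come from the model's config.json; a sketch of inspecting them with transformers without downloading the weights (gated-repo access assumed):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("meta-llama/Llama-2-70b-hf")
print(cfg.hidden_size, cfg.num_hidden_layers,
      cfg.num_attention_heads, cfg.num_key_value_heads,
      cfg.torch_dtype)  # 8192 80 64 8 torch.float16
```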