huggyllama/llama-65b

📊 Model Parameters

Total Parameters 65,285,660,672
Context Length 2,048
Hidden Size 8192
Layers 80
Attention Heads 64
KV Heads 64
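
These totals can be reproduced from the architecture values listed under Model Configuration below. A back-of-the-envelope sketch in Python, assuming the standard LLaMA layout (untied embeddings, bias-free projections, SwiGLU MLP, two RMSNorms per layer plus a final norm):

```python
# Parameter count from the architecture values on this page
# (a sketch; ignores implementation-specific buffers).
vocab, hidden, ffn, layers = 32_000, 8_192, 22_016, 80

embed = vocab * hidden            # input embedding
lm_head = vocab * hidden          # output projection (embeddings are not tied)
attn = 4 * hidden * hidden        # Q, K, V, O projections (no bias)
mlp = 3 * hidden * ffn            # gate, up, down projections (SwiGLU)
norms = 2 * hidden                # two RMSNorms per layer
final_norm = hidden

total = embed + lm_head + layers * (attn + mlp + norms) + final_norm
print(f"{total:,}")               # 65,285,660,672 -- matches the figure above
```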

💾 Memory Requirements

FP32 (Full) 243.21 GiB
FP16 (Half) 121.60 GiB
INT8 (Quantized) 60.80 GiB
INT4 (Quantized) 30.40 GiB
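
Weight memory is simply the parameter count times bytes per weight; the figures above are binary gigabytes (GiB). A quick check:

```python
# Weight memory at each precision: parameters x bytes-per-parameter, in GiB.
params = 65_285_660_672
for name, bytes_per_param in [("FP32", 4), ("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    print(f"{name}: {params * bytes_per_param / 2**30:.2f} GiB")
# FP32: 243.21 GiB, FP16: 121.60 GiB, INT8: 60.80 GiB, INT4: 30.40 GiB
```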

🔑 KV Cache (Inference)

Per Token (FP16) 2.62 MB
Max Context FP32 10.00 GiB
Max Context FP16 5.00 GiB
Max Context INT8 2.50 GiB
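
The cache holds one key and one value vector per layer per token, and with KV heads equal to attention heads (no grouped-query attention) all 64 heads are cached. A sketch of the figures above:

```python
# Per-token KV cache: 2 tensors (K and V) x layers x kv_heads x head_dim.
layers, kv_heads, head_dim, ctx = 80, 64, 128, 2_048

per_token_elems = 2 * layers * kv_heads * head_dim   # 1,310,720 values per token
print(f"Per token (FP16): {per_token_elems * 2 / 1e6:.2f} MB")  # 2.62 MB

for name, bytes_per_elem in [("FP32", 4), ("FP16", 2), ("INT8", 1)]:
    gib = per_token_elems * bytes_per_elem * ctx / 2**30
    print(f"Max context {name}: {gib:.2f} GiB")
# FP32: 10.00 GiB, FP16: 5.00 GiB, INT8: 2.50 GiB
```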

⚙️ Model Configuration

Core Architecture

Vocabulary Size 32,000
Hidden Size 8,192
FFN Intermediate Size 22,016
Number of Layers 80
Attention Heads 64
KV Heads 64
Head Dimension 128
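
All of these values can be read directly from the model's config on the Hugging Face Hub. A minimal sketch using transformers (downloads the config on first run):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("huggyllama/llama-65b")
print(config.vocab_size)           # 32000
print(config.hidden_size)          # 8192
print(config.intermediate_size)    # 22016
print(config.num_hidden_layers)    # 80
print(config.num_attention_heads)  # 64
print(config.hidden_size // config.num_attention_heads)  # head dim: 128
```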

Context & Position

Max Context Length 2,048
RoPE Base Frequency 10000.0
RoPE Scaling Not set
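
With no RoPE scaling configured, positions feed the standard rotary embedding directly, up to the 2,048-token limit. A sketch of the usual inverse-frequency computation from the base above (the shapes assume this model's 128-dim heads):

```python
import torch

base, head_dim = 10_000.0, 128
# Standard RoPE: one inverse frequency per pair of head dimensions.
inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
positions = torch.arange(2_048).float()
angles = torch.outer(positions, inv_freq)  # (2048, 64) rotation angles
```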

Attention Configuration

Attention Bias No
Attention Dropout 0%
MLP Bias No
Tied Embeddings No

Activation & Normalization

Activation Function silu
RMSNorm Epsilon 1e-05
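
Minimal reference sketches of both pieces, using the epsilon from this config (illustrative only, not the transformers implementation):

```python
import torch

def silu(x: torch.Tensor) -> torch.Tensor:
    # silu(x) = x * sigmoid(x)
    return x * torch.sigmoid(x)

def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-05) -> torch.Tensor:
    # Scale by the reciprocal root-mean-square of the last dimension.
    rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)
    return x * rms * weight
```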

Special Tokens

BOS Token ID 1
Pad Token ID 0
EOS Token ID 2
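
The BOS and EOS IDs can be confirmed against the tokenizer; a sketch (requires transformers and sentencepiece):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("huggyllama/llama-65b")
print(tok.bos_token_id)  # 1
print(tok.eos_token_id)  # 2
# The pad token ID (0) above comes from the model config; the tokenizer
# itself may leave its pad token unset.
```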

Data Type

Model Dtype float16

Layer Types: Attention, MLP/FFN, Normalization, Embedding
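
To load the checkpoint in its stored dtype, a sketch (device_map="auto" additionally requires accelerate, and the FP16 weights need roughly 122 GiB of combined device memory, per the table above):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-65b",
    torch_dtype=torch.float16,  # matches the stored dtype
    device_map="auto",          # shard across available devices
)
```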