Qwen/Qwen2.5-14B

📊 Model Parameters

Total Parameters 14,770,033,664
Context Length 131,072
Hidden Size 5120
Layers 48
Attention Heads 40
KV Heads 8
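
The parameter total above can be reproduced from the architecture values in this card. A minimal sketch, assuming a Qwen2-style decoder (QKV projection biases, untied embeddings, SwiGLU MLP with gate/up/down projections, RMSNorm weights only):

```python
vocab, hidden, ffn, layers = 152_064, 5_120, 13_824, 48
heads, kv_heads = 40, 8
head_dim = hidden // heads                                      # 128

q = hidden * heads * head_dim + heads * head_dim                # q_proj + bias
kv = 2 * (hidden * kv_heads * head_dim + kv_heads * head_dim)   # k/v_proj + biases
o = heads * head_dim * hidden                                   # o_proj, no bias
mlp = 3 * hidden * ffn                                          # gate/up/down, no bias
norms = 2 * hidden                                              # two RMSNorm weight vectors

per_layer = q + kv + o + mlp + norms
embeddings = 2 * vocab * hidden          # input embedding + untied lm_head
total = layers * per_layer + embeddings + hidden  # + final RMSNorm

print(f"{total:,}")                      # 14,770,033,664
```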

💾 Memory Requirements

FP32 (Full) 55.02 GB
FP16 (Half) 27.51 GB
INT8 (Quantized) 13.76 GB
INT4 (Quantized) 6.88 GB
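
These figures are simply the parameter count times bytes per weight; "GB" here is GiB (1024³ bytes), which matches the values shown. A quick check:

```python
params = 14_770_033_664
GIB = 1024**3

for name, bytes_per_weight in [("FP32", 4), ("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    print(f"{name}: {params * bytes_per_weight / GIB:.2f} GB")
# FP32: 55.02 GB, FP16: 27.51 GB, INT8: 13.76 GB, INT4: 6.88 GB
```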

🔑 KV Cache (Inference)

Per Token (FP16) 196.61 KB
Max Context FP32 48.00 GB
Max Context FP16 24.00 GB
Max Context INT8 12.00 GB
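
The cache stores one K and one V vector per layer for each KV head, per token. Note the table mixes units: the per-token figure is in decimal KB (1000 bytes), while the max-context rows are GiB (1024³ bytes). A sketch of the arithmetic:

```python
layers, kv_heads, head_dim, context = 48, 8, 128, 131_072

# K and V per layer per KV head, FP16 = 2 bytes per value.
per_token_bytes = 2 * layers * kv_heads * head_dim * 2
print(f"Per token (FP16): {per_token_bytes / 1000:.2f} KB")   # 196.61 KB

for name, bytes_per_val in [("FP32", 4), ("FP16", 2), ("INT8", 1)]:
    full = 2 * layers * kv_heads * head_dim * bytes_per_val * context
    print(f"Max context {name}: {full / 1024**3:.2f} GB")     # 48 / 24 / 12 GB
```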

⚙️ Model Configuration

Core Architecture

Vocabulary Size 152,064
Hidden Size 5,120
FFN Intermediate Size 13,824
Number of Layers 48
Attention Heads 40
KV Heads 8
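
These values can be verified directly from the published config via Hugging Face transformers (requires access to the Hub):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("Qwen/Qwen2.5-14B")
print(cfg.vocab_size)            # 152064
print(cfg.hidden_size)           # 5120
print(cfg.intermediate_size)     # 13824
print(cfg.num_hidden_layers)     # 48
print(cfg.num_attention_heads)   # 40
print(cfg.num_key_value_heads)   # 8
```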

Context & Position

Max Context Length 131,072
Uses Sliding Window No
Sliding Window Size Not set
Window Attention Layers 48
RoPE Base Frequency 1000000.0
RoPE Scaling Not set
Layer Attention Types [48 items]
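
The base frequency of 1000000.0 sets the per-dimension rotation rates of the positional encoding. A minimal sketch, assuming the standard RoPE formulation in which each pair of head dimensions i gets an inverse frequency of base^(-2i/head_dim):

```python
base, head_dim = 1000000.0, 128

# One inverse frequency per dimension pair (64 pairs in a 128-wide head).
inv_freq = [base ** (-2 * i / head_dim) for i in range(head_dim // 2)]

# The rotation angle applied to pair i at position p is p * inv_freq[i].
print(inv_freq[0])    # 1.0 — fastest-rotating pair
print(inv_freq[-1])   # ~1.24e-06 — slowest pair
```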

Attention Configuration

Attention Dropout 0%
Tied Embeddings No
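
With the 40 attention heads and 8 KV heads listed under Core Architecture, this is grouped-query attention: each cached K/V head serves 40 / 8 = 5 query heads. A shape-level sketch in PyTorch, using hypothetical tensors rather than the model's actual implementation:

```python
import torch

heads, kv_heads, head_dim = 40, 8, 128
batch, seq = 1, 16

# Hypothetical cached keys: one tensor slice per KV head.
k = torch.randn(batch, kv_heads, seq, head_dim)

# Each KV head is repeated heads // kv_heads = 5 times so that
# every query head has a matching key tensor to attend against.
k_expanded = k.repeat_interleave(heads // kv_heads, dim=1)
print(k_expanded.shape)  # torch.Size([1, 40, 16, 128])
```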

Activation & Normalization

Activation Function silu
RMSNorm Epsilon 1e-05
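
Minimal sketches of the two functions named above, assuming the standard formulations (SiLU as x · sigmoid(x); RMSNorm with no mean-centering and no bias):

```python
import torch

def silu(x: torch.Tensor) -> torch.Tensor:
    # SiLU (a.k.a. swish): x * sigmoid(x)
    return x * torch.sigmoid(x)

def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-05) -> torch.Tensor:
    # Scale by the reciprocal root-mean-square over the hidden dimension,
    # then apply a learned per-channel gain.
    rms = torch.sqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)
    return x / rms * weight
```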

Special Tokens

BOS Token ID 151,643
Pad Token ID Not set
EOS Token ID 151,643
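
Both BOS and EOS map to ID 151,643. One way to inspect that token (hedged: assumes Hub access, and that the Qwen2.5 tokenizer reports the usual end-of-text string for this ID):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-14B")
print(tok.convert_ids_to_tokens(151643))  # expected: '<|endoftext|>'
```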

Data Type

Model Dtype bfloat16
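
A sketch of loading the checkpoint in its stored dtype with transformers (device_map="auto" additionally requires the accelerate package, and the bfloat16 weights alone need roughly 28 GB of memory):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-14B",
    torch_dtype=torch.bfloat16,   # matches the checkpoint's stored dtype
    device_map="auto",
)
```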