Qwen/Qwen1.5-32B

📊 Model Parameters

Total Parameters 32,512,218,112
Context Length 32,768
Hidden Size 5120
Layers 64
Attention Heads 40
KV Heads 8

💾 Memory Requirements

FP32 (Full) 121.12 GB
FP16 (Half) 60.56 GB
INT8 (Quantized) 30.28 GB
INT4 (Quantized) 15.14 GB
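
These figures follow directly from the parameter count times bytes per weight; a minimal sketch reproducing them (note the table's "GB" values are binary gibibytes, 2**30 bytes):

```python
# Weight-memory estimate: parameter count x bytes per parameter.
PARAMS = 32_512_218_112  # total parameters from the table above

for name, bytes_per_param in [("FP32", 4), ("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    gib = PARAMS * bytes_per_param / 2**30
    print(f"{name}: {gib:.2f} GB")
# FP32: 121.12 GB, FP16: 60.56 GB, INT8: 30.28 GB, INT4: 15.14 GB
```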

🔑 KV Cache (Inference)

Per Token (FP16) 262.14 KB
Max Context FP32 16.00 GB
Max Context FP16 8.00 GB
Max Context INT8 4.00 GB
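
The per-token cost is 2 tensors (K and V) × layers × KV heads × head dim × bytes per value; a quick check of the figures above (the table reports the per-token value in decimal KB but the full-context values in binary GB):

```python
# KV-cache size for Qwen1.5-32B's GQA layout (8 KV heads, not 40).
LAYERS, KV_HEADS = 64, 8
HEAD_DIM = 5120 // 40  # hidden_size / attention_heads = 128
CONTEXT = 32_768

def kv_bytes(tokens: int, bytes_per_value: int) -> int:
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * bytes_per_value * tokens

print(kv_bytes(1, 2) / 1e3)          # 262.144 -> "262.14 KB" per token (decimal KB)
print(kv_bytes(CONTEXT, 4) / 2**30)  # 16.0 -> FP32 at full context (binary GB)
print(kv_bytes(CONTEXT, 2) / 2**30)  # 8.0  -> FP16 at full context
print(kv_bytes(CONTEXT, 1) / 2**30)  # 4.0  -> INT8 at full context
```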

⚙️ Model Configuration

Core Architecture

Vocabulary Size 152,064
Hidden Size 5,120
FFN Intermediate Size 27,392
Number of Layers 64
Attention Heads 40
KV Heads 8
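
The core numbers above fully determine the parameter count. A sketch that reproduces the 32,512,218,112 total, assuming the Qwen2-style layer layout that Qwen1.5 uses (QKV projections with bias, o_proj without bias, SwiGLU MLP, RMSNorm, untied embeddings):

```python
# Parameter count reconstructed from the config values above.
V, H, I, L, A, KV = 152_064, 5_120, 27_392, 64, 40, 8
D = H // A  # head_dim = 128

attn = (
    H * (A * D) + A * D            # q_proj weight + bias
    + 2 * (H * (KV * D) + KV * D)  # k_proj and v_proj, each with bias
    + (A * D) * H                  # o_proj (no bias)
)
mlp = 3 * H * I                    # gate, up, and down projections
layer = attn + mlp + 2 * H         # plus two RMSNorm weight vectors

total = L * layer + 2 * V * H + H  # layers + embedding + lm_head + final norm
print(total)  # 32512218112 -- matches the table exactly
```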

Context & Position

Max Context Length 32,768
Uses Sliding Window No
Sliding Window Size Not set
Window Attention Layers 35
RoPE Base Frequency 1,000,000.0
RoPE Scaling Not set
Layer Attention Types [64 items]
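
The RoPE base of 1,000,000 (versus the common 10,000) stretches the rotation wavelengths, keeping positional signals distinguishable across the full 32K context. A minimal sketch of the per-dimension frequencies it implies:

```python
# RoPE frequencies implied by the base listed above.
import math

BASE, HEAD_DIM = 1_000_000.0, 128  # head_dim = hidden_size / attention_heads
inv_freq = [BASE ** (-2 * i / HEAD_DIM) for i in range(HEAD_DIM // 2)]

wavelengths = [2 * math.pi / f for f in inv_freq]
print(f"shortest: {wavelengths[0]:.1f} tokens")   # ~6.3
print(f"longest:  {wavelengths[-1]:,.0f} tokens") # ~5 million, far past 32,768
```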

Attention Configuration

Attention Dropout 0%
Tied Embeddings No

Activation & Normalization

Activation Function silu
RMSNorm Epsilon 1e-06
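
For reference, pure-Python sketches of the two ops named here, SiLU and RMSNorm with the epsilon above (illustrative only, not the model's actual kernels):

```python
import math

def silu(x: float) -> float:
    """SiLU activation: x * sigmoid(x)."""
    return x / (1.0 + math.exp(-x))

def rms_norm(xs: list[float], weight: list[float], eps: float = 1e-6) -> list[float]:
    """Scale inputs by the reciprocal root-mean-square, then a learned weight."""
    rms = math.sqrt(sum(x * x for x in xs) / len(xs) + eps)
    return [w * x / rms for w, x in zip(weight, xs)]

print(silu(1.0))                              # ~0.731
print(rms_norm([1.0, 2.0, 3.0], [1.0] * 3))   # ~[0.463, 0.926, 1.389]
```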

Special Tokens

BOS Token ID 151,643
Pad Token ID Not set
EOS Token ID 151,643

Data Type

Model Dtype bfloat16
Layer Types: Attention, MLP/FFN, Normalization, Embedding
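
All of the values above come straight from the model's config and can be verified against the Hub (requires `transformers`; only the small config file is downloaded):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("Qwen/Qwen1.5-32B")
print(cfg.hidden_size, cfg.num_hidden_layers)             # 5120, 64
print(cfg.num_attention_heads, cfg.num_key_value_heads)   # 40, 8
print(cfg.vocab_size, cfg.intermediate_size)              # 152064, 27392
print(cfg.rope_theta, cfg.rms_norm_eps, cfg.torch_dtype)  # 1000000.0, 1e-06, bfloat16
```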