Qwen/Qwen2-57B-A14B

📊 Model Parameters

Total Parameters 57,408,658,944
Context Length 131,072
Hidden Size 3584
Layers 28
Attention Heads 28
KV Heads 4
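Two quantities the table implies but does not list: the per-head dimension and the grouped-query attention (GQA) ratio. A quick sketch of the arithmetic in Python:

```python
# Derived attention geometry, computed from the figures above.
hidden_size = 3584
num_heads = 28
num_kv_heads = 4

head_dim = hidden_size // num_heads    # 3584 // 28 = 128
gqa_group = num_heads // num_kv_heads  # 7 query heads share each KV head

print(head_dim, gqa_group)  # 128 7
```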

💾 Memory Requirements

FP32 (Full) 213.86 GiB
FP16 (Half) 106.93 GiB
INT8 (Quantized) 53.47 GiB
INT4 (Quantized) 26.73 GiB
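These figures are binary gigabytes (GiB) and follow directly from the parameter count; a minimal sketch of the arithmetic:

```python
# Weight memory = parameter count x bytes per parameter, in GiB.
TOTAL_PARAMS = 57_408_658_944
GIB = 1024 ** 3

for dtype, bytes_per_param in [("FP32", 4), ("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    print(f"{dtype}: {TOTAL_PARAMS * bytes_per_param / GIB:.2f} GiB")
# FP32: 213.86  FP16: 106.93  INT8: 53.47  INT4: 26.73
```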

🔑 KV Cache (Inference)

Per Token (FP16) 57.34 KB
Max Context FP32 14.00 GiB
Max Context FP16 7.00 GiB
Max Context INT8 3.50 GiB
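These cache sizes reflect the GQA layout: only the 4 KV heads are cached per layer, not all 28 query heads. A sketch of the arithmetic:

```python
# KV cache per token: 2 tensors (K and V) per layer, over KV heads only (GQA).
num_layers, num_kv_heads, head_dim = 28, 4, 128
bytes_fp16 = 2

per_token = 2 * num_layers * num_kv_heads * head_dim * bytes_fp16
print(per_token)                      # 57344 bytes ≈ 57.34 KB (decimal)
print(per_token * 131_072 / 1024**3)  # 7.0 GiB for the full 131,072-token context
```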

⚙️ Model Configuration

Core Architecture

Vocabulary Size 151,936
Hidden Size 3,584
FFN Intermediate Size 18,944
Number of Layers 28
Attention Heads 28
KV Heads 4
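These values can be checked against the published config straight from the Hugging Face Hub. A minimal sketch, assuming the transformers library is installed; attribute names follow its Qwen2-MoE config class:

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("Qwen/Qwen2-57B-A14B")
print(cfg.vocab_size, cfg.hidden_size, cfg.num_hidden_layers)  # 151936 3584 28
print(cfg.num_attention_heads, cfg.num_key_value_heads)        # 28 4
```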

Context & Position

Max Context Length 131,072
Uses Sliding Window No
Sliding Window Size Not set
Window Attention Layers 28
RoPE Base Frequency 1,000,000.0
RoPE Scaling Not set
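The 1,000,000 base sets the RoPE inverse-frequency schedule. A sketch of the standard formulation, not Qwen's exact implementation:

```python
import torch

rope_theta = 1_000_000.0  # RoPE base frequency from the table
head_dim = 128            # hidden_size / attention heads

# One rotation frequency per pair of head dimensions (standard RoPE).
inv_freq = 1.0 / (rope_theta ** (torch.arange(0, head_dim, 2).float() / head_dim))
print(inv_freq.shape)  # torch.Size([64])
```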

Attention Configuration

Attention Dropout 0%
Attention Bias Yes
Tied Embeddings No
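"Attention Bias: Yes" refers to the query/key/value projections; in the Qwen2 family the output projection is bias-free. A sketch of the projection shapes this implies under GQA:

```python
import torch.nn as nn

hidden_size, head_dim, n_heads, n_kv_heads = 3584, 128, 28, 4

q_proj = nn.Linear(hidden_size, n_heads * head_dim, bias=True)     # 3584 -> 3584
k_proj = nn.Linear(hidden_size, n_kv_heads * head_dim, bias=True)  # 3584 -> 512
v_proj = nn.Linear(hidden_size, n_kv_heads * head_dim, bias=True)  # 3584 -> 512
o_proj = nn.Linear(n_heads * head_dim, hidden_size, bias=False)    # 3584 -> 3584
```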

Mixture of Experts

MoE Layer Frequency 1
Expert FFN Size 2,560
Shared Expert FFN Size 20,480
Experts per Token 8
Number of Experts 64
Normalize TopK Probabilities No
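A minimal sketch of the routing these settings describe: a softmax over all 64 experts, the top 8 selected per token, and the routing weights left unnormalized after selection (matching "Normalize TopK Probabilities: No"). The shared expert additionally runs on every token.

```python
import torch

num_experts, top_k, hidden = 64, 8, 3584
router = torch.nn.Linear(hidden, num_experts, bias=False)  # router (gate) sketch

tokens = torch.randn(5, hidden)                  # 5 example token states
probs = torch.softmax(router(tokens), dim=-1)    # softmax over all 64 experts
weights, expert_ids = probs.topk(top_k, dim=-1)  # select 8 experts per token

# With "Normalize TopK Probabilities: No", the weights are used as-is,
# so each token's 8 routing weights do not sum to 1.
print(expert_ids.shape, weights.sum(-1))  # torch.Size([5, 8]), sums < 1
```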

Activation & Normalization

Activation Function silu
RMSNorm Epsilon 1e-06
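The epsilon applies to RMSNorm in its standard form, x / sqrt(mean(x²) + eps), scaled by a learned weight; a sketch:

```python
import torch

def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Standard RMSNorm: scale by the reciprocal root-mean-square, then a learned gain.
    variance = x.pow(2).mean(-1, keepdim=True)
    return weight * x * torch.rsqrt(variance + eps)

x = torch.randn(2, 3584)
print(rms_norm(x, torch.ones(3584)).shape)  # torch.Size([2, 3584])
```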

Special Tokens

BOS Token ID 151,643
Pad Token ID Not set
EOS Token ID 151,643

Data Type

Model Dtype bfloat16
[Chart: parameter distribution by layer type: Attention, MLP/FFN, Normalization, Embedding]
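To load the checkpoint in its native precision, a minimal sketch (assumes transformers plus accelerate, and enough memory for roughly 107 GiB of bfloat16 weights):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-57B-A14B",
    torch_dtype=torch.bfloat16,  # matches the checkpoint's "Model Dtype"
    device_map="auto",           # shard across available devices (needs accelerate)
)
```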