Qwen/Qwen1.5-1.8B

📊 Model Parameters

Total Parameters 1,836,828,672
Context Length 32,768
Hidden Size 2048
Layers 24
Attention Heads 16
KV Heads 16
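
The total follows directly from the architecture values listed further down. A minimal sketch that reproduces the exact count (relying on the fact that Qwen1.5 uses untied embeddings and biases on the Q/K/V projections only):

```python
# Reproduce the 1,836,828,672 total from the config values on this card.
vocab, hidden, ffn, layers = 151_936, 2_048, 5_504, 24

embeddings = 2 * vocab * hidden          # input embedding + untied LM head
attn = 4 * hidden * hidden + 3 * hidden  # Q/K/V/O projections; Q/K/V carry biases
mlp = 3 * hidden * ffn                   # gate, up, down projections (SwiGLU MLP)
norms = 2 * hidden                       # two RMSNorms per layer
final_norm = hidden

total = embeddings + layers * (attn + mlp + norms) + final_norm
print(f"{total:,}")  # 1,836,828,672
```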

💾 Memory Requirements

FP32 (Full) 6.84 GB
FP16 (Half) 3.42 GB
INT8 (Quantized) 1.71 GB
INT4 (Quantized) 875.9 MB
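
These figures cover weights only (activations and the KV cache, next section, come on top) and amount to parameter count × bytes per weight, reported in binary units. A quick check:

```python
params = 1_836_828_672
for name, bytes_per_param in [("FP32", 4), ("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    gib = params * bytes_per_param / 2**30
    print(f"{name}: {gib:.2f} GB")
# FP32: 6.84 GB, FP16: 3.42 GB, INT8: 1.71 GB, INT4: 0.86 GB (875.9 MB)
```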

🔑 KV Cache (Inference)

Per Token (FP16) 196.61 KB
Max Context FP32 12.00 GB
Max Context FP16 6.00 GB
Max Context INT8 3.00 GB
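
The per-token figure is one key plus one value vector cached per layer per KV head; with head_dim = hidden_size / attention_heads = 128, a sketch:

```python
layers, kv_heads, head_dim, ctx = 24, 16, 128, 32_768

def kv_cache_bytes(tokens: int, bytes_per_elem: float) -> float:
    # 2 = one key plus one value vector per layer per KV head
    return 2 * layers * kv_heads * head_dim * bytes_per_elem * tokens

print(kv_cache_bytes(1, 2))             # 196,608 bytes ~= 196.61 KB per token (FP16)
print(kv_cache_bytes(ctx, 2) / 2**30)   # 6.00 GB at the full 32K context (FP16)
```

Since KV Heads equals Attention Heads (16/16), this is plain multi-head attention with no grouped-query savings, which is why the full-context cache is comparatively large for a 1.8B model.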

⚙️ Model Configuration

Core Architecture

Vocabulary Size 151,936
Hidden Size 2,048
FFN Intermediate Size 5,504
Number of Layers 24
Attention Heads 16
KV Heads 16
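
These values can be read directly from the model's published config; a minimal check with the transformers library (downloads the config on first run):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("Qwen/Qwen1.5-1.8B")
print(cfg.vocab_size)           # 151936
print(cfg.hidden_size)          # 2048
print(cfg.intermediate_size)    # 5504
print(cfg.num_hidden_layers)    # 24
print(cfg.num_attention_heads)  # 16
print(cfg.num_key_value_heads)  # 16
```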

Context & Position

Max Context Length 32,768
Uses Sliding Window No
Sliding Window Size Not set
Max Window Layers 21
RoPE Base Frequency 1,000,000
RoPE Scaling Not set
Layer Attention Types 24 entries (all full attention; sliding window is disabled)
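
The RoPE base of 1,000,000 sets the per-dimension rotation rates; a sketch of the standard RoPE inverse-frequency derivation with head_dim = 128 (a larger base means slower rotations, which is what supports the 32K context):

```python
base, head_dim = 1_000_000.0, 128
# One rotation frequency per pair of head dimensions.
inv_freq = [base ** (-2 * i / head_dim) for i in range(head_dim // 2)]
print(inv_freq[0], inv_freq[-1])  # 1.0 (fastest) ... ~1.24e-06 (slowest)
```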

Attention Configuration

Attention Dropout 0%
Tied Embeddings No

Activation & Normalization

Activation Function silu
RMSNorm Epsilon 1e-06
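
Both operations are small enough to state inline; a plain-Python reference sketch using the epsilon above:

```python
import math

def silu(x: float) -> float:
    # SiLU (a.k.a. swish): x * sigmoid(x)
    return x / (1.0 + math.exp(-x))

def rms_norm(xs: list[float], weight: list[float], eps: float = 1e-6) -> list[float]:
    # Scale by the reciprocal root-mean-square; no mean subtraction, no bias.
    rms = math.sqrt(sum(x * x for x in xs) / len(xs) + eps)
    return [w * x / rms for w, x in zip(weight, xs)]
```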

Special Tokens

BOS Token ID 151,643
Pad Token ID Not set
EOS Token ID 151,643
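
The config points both BOS and EOS at ID 151,643 (Qwen's <|endoftext|> token) and leaves padding unset; a quick tokenizer check (note the tokenizer itself may report no BOS token even though the config sets one):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-1.8B")
print(tok.eos_token, tok.eos_token_id)  # <|endoftext|> 151643
print(tok.pad_token)                    # None unless set explicitly
```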

Data Type

Model Dtype bfloat16
Layer Types Attention, MLP/FFN, Normalization, Embedding