mistralai/Devstral-Small-2-24B-Instruct-2512

📊 Model Parameters

Total Parameters 23,340,272,640
Context Length 2,048
Hidden Size Not detected
Layers Not detected
Attention Heads Not detected
KV Heads Not detected

💾 Memory Requirements

FP32 (Full) 86.95 GiB
FP16 (Half) 43.47 GiB
INT8 (Quantized) 21.74 GiB
INT4 (Quantized) 10.87 GiB
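These footprints follow directly from the parameter count: bytes per parameter times total parameters, with the listed values corresponding to GiB (2^30 bytes). A minimal sketch that reproduces the table from the parameter count above (weights only, excluding activations and KV cache):

```python
# Reproduce the memory-requirements table from the parameter count above.
# Values are in GiB (2**30 bytes), which the listed figures correspond to.

TOTAL_PARAMS = 23_340_272_640

BYTES_PER_PARAM = {
    "FP32": 4.0,
    "FP16": 2.0,
    "INT8": 1.0,
    "INT4": 0.5,
}

def weight_memory_gib(params: int, bytes_per_param: float) -> float:
    """Raw weight storage only; activations and KV cache are extra."""
    return params * bytes_per_param / 2**30

for dtype, width in BYTES_PER_PARAM.items():
    print(f"{dtype}: {weight_memory_gib(TOTAL_PARAMS, width):.2f} GiB")
```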

🔑 KV Cache (Inference)

Per Token (FP16) 0 B
Max Context FP32 0.0 MB
Max Context FP16 0.0 MB
Max Context INT8 0.0 MB
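The zero KV-cache figures above reflect undetected architecture fields (layers, KV heads, head dimension), not a genuinely free cache. The standard sizing formula for a decoder-only transformer stores one K and one V vector per KV head per layer per token; a sketch, with purely illustrative geometry (not this model's actual config):

```python
# General KV-cache sizing formula for a decoder-only transformer.
# The zeros in the table above come from undetected architecture
# fields, not an actually zero-cost cache.

def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   context_len: int, bytes_per_elem: float = 2.0) -> float:
    """Per token, each layer stores K and V: 2 * kv_heads * head_dim elements."""
    per_token = 2 * num_layers * num_kv_heads * head_dim * bytes_per_elem
    return per_token * context_len

# Hypothetical geometry for illustration only (NOT this model's values):
example = kv_cache_bytes(num_layers=40, num_kv_heads=8, head_dim=128,
                         context_len=2048)
print(f"{example / 2**20:.1f} MiB")  # FP16 cache at full context
```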

⚙️ Model Configuration

Attention Configuration

Tied Embeddings No

Special Tokens

BOS Token ID Not set
Pad Token ID Not set
EOS Token ID Not set

Data Type

Model Dtype bfloat16
Layer Types:
Attention
MLP/FFN
Normalization
Embedding
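The configuration fields above (dtype, tied embeddings, special token IDs) are typically read from the checkpoint's Hugging Face-style `config.json`. A sketch of extracting them from such a dict; the sample values here are illustrative placeholders, not this model's actual configuration:

```python
# Sketch: pull the fields shown above out of a Hugging Face-style
# config.json dict. Sample values are illustrative placeholders,
# not this model's actual configuration.

import json

sample_config = json.loads("""{
    "torch_dtype": "bfloat16",
    "tie_word_embeddings": false
}""")

def describe(cfg: dict) -> dict:
    """Map raw config keys to the labels used in this readout.
    Absent keys fall back to "Not set", matching the table above."""
    return {
        "Model Dtype": cfg.get("torch_dtype", "Not set"),
        "Tied Embeddings": "Yes" if cfg.get("tie_word_embeddings") else "No",
        "BOS Token ID": cfg.get("bos_token_id", "Not set"),
        "Pad Token ID": cfg.get("pad_token_id", "Not set"),
        "EOS Token ID": cfg.get("eos_token_id", "Not set"),
    }

print(describe(sample_config))
```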