mistralai/Mistral-Small-24B-Base-2501

📊 Model Parameters

Total Parameters 23,572,403,200
Context Length 32,768
Hidden Size 5120
Layers 40
Attention Heads 32
KV Heads 8
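
The total parameter count follows directly from the architecture values above. Below is a minimal sketch that reproduces it, assuming a standard Llama-style decoder with untied input/output embeddings, a gated (SiLU) MLP, and two RMSNorms per layer; it is illustrative, not the model's own code.

```python
# Parameter-count sketch for Mistral-Small-24B-Base-2501 (assumed Llama-style decoder).
vocab, hidden, ffn, layers = 131_072, 5_120, 32_768, 40
heads, kv_heads, head_dim = 32, 8, 128

embed = vocab * hidden                        # input embedding
lm_head = vocab * hidden                      # output projection (embeddings are untied)
attn = hidden * heads * head_dim              # q_proj
attn += 2 * hidden * kv_heads * head_dim      # k_proj + v_proj
attn += heads * head_dim * hidden             # o_proj
mlp = 3 * hidden * ffn                        # gate, up, down projections
norms = 2 * hidden                            # two RMSNorms per layer

total = embed + lm_head + layers * (attn + mlp + norms) + hidden  # + final norm
print(f"{total:,}")                           # 23,572,403,200
```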

💾 Memory Requirements

FP32 (Full) 87.81 GiB
FP16 (Half) 43.91 GiB
INT8 (Quantized) 21.95 GiB
INT4 (Quantized) 10.98 GiB
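
These figures are simply the parameter count multiplied by the storage width of each value, reported in GiB. A quick sketch of the arithmetic:

```python
# Weight memory = parameters * bytes per value.
params = 23_572_403_200
GIB = 1024 ** 3

for name, bytes_per_param in [("FP32", 4), ("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    print(f"{name}: {params * bytes_per_param / GIB:.2f} GiB")
# FP32: 87.81, FP16: 43.91, INT8: 21.95, INT4: 10.98
```

Note this covers weights only; activations and the KV cache (next section) come on top.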

🔑 KV Cache (Inference)

Per Token (FP16) 163.84 KB
Max Context FP32 10.00 GiB
Max Context FP16 5.00 GiB
Max Context INT8 2.50 GiB
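
Each generated token stores one key and one value vector per layer and per KV head. With 40 layers, 8 KV heads, and a head dimension of 128, that works out to the figures above:

```python
# KV cache per token = 2 (K and V) * layers * kv_heads * head_dim * bytes per value.
layers, kv_heads, head_dim, max_ctx = 40, 8, 128, 32_768

per_token = 2 * layers * kv_heads * head_dim * 2   # FP16: 163,840 bytes (163.84 KB)
full_context = per_token * max_ctx

print(per_token / 1000)          # 163.84 KB per token
print(full_context / 1024**3)    # 5.00 GiB for a full 32,768-token context in FP16
```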

⚙️ Model Configuration

Core Architecture

Vocabulary Size 131,072
Hidden Size 5,120
FFN Intermediate Size 32,768
Number of Layers 40
Attention Heads 32
Head Dimension 128
KV Heads 8
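
These values can be read straight from the published config. Below is a sketch using the transformers library (assumes Hugging Face Hub access; exact attribute names depend on the transformers version). Note that the head dimension is set explicitly to 128 rather than derived from hidden size / attention heads, which would give 160.

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("mistralai/Mistral-Small-24B-Base-2501")
print(cfg.vocab_size, cfg.hidden_size, cfg.intermediate_size)                    # 131072 5120 32768
print(cfg.num_hidden_layers, cfg.num_attention_heads, cfg.num_key_value_heads)   # 40 32 8
```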

Context & Position

Max Context Length 32,768
Sliding Window Size Not set
RoPE Base Frequency 100,000,000.0
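
The high RoPE base (1e8) is the kind of setting typically used to cover a long context without a sliding window: it slows the rotary frequencies so positions remain distinguishable out to 32,768 tokens. A minimal sketch of the standard inverse-frequency schedule under that base (illustrative only):

```python
head_dim, rope_theta = 128, 100_000_000.0
inv_freq = [rope_theta ** (-2 * i / head_dim) for i in range(head_dim // 2)]
# The rotation angle for position p in dimension pair i is p * inv_freq[i].
print(inv_freq[0], inv_freq[-1])   # 1.0 for the fastest pair, ~1.3e-8 for the slowest
```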

Attention Configuration

Attention Dropout 0%
Tied Embeddings No
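
With 32 attention heads over 8 KV heads, this is grouped-query attention: each K/V head serves 4 query heads. A shape-only sketch of the usual expansion step (hypothetical tensors, illustrative only):

```python
import torch

heads, kv_heads, head_dim, seq = 32, 8, 128, 16
groups = heads // kv_heads                       # 4 query heads share each KV head

k = torch.randn(1, kv_heads, seq, head_dim)      # cached keys: (batch, kv_heads, seq, head_dim)
k_expanded = k.repeat_interleave(groups, dim=1)  # broadcast to (batch, heads, seq, head_dim)
print(k_expanded.shape)                          # torch.Size([1, 32, 16, 128])
```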

Activation & Normalization

Activation Function SiLU
RMSNorm Epsilon 1e-05
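
For reference, a minimal sketch of RMSNorm with this epsilon and of the SiLU activation; this is the generic formulation, not the model's own implementation:

```python
import torch

def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Scale by the reciprocal root-mean-square over the last dimension.
    variance = x.pow(2).mean(-1, keepdim=True)
    return x * torch.rsqrt(variance + eps) * weight

def silu(x: torch.Tensor) -> torch.Tensor:
    return x * torch.sigmoid(x)   # SiLU(x) = x * sigmoid(x)
```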

Special Tokens

BOS Token ID 1
Pad Token ID Not set
EOS Token ID 2
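
The IDs can be confirmed from the tokenizer (assumes transformers and Hub access). Since no pad token is set, one is typically assigned, often the EOS token, before padded batching:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-Small-24B-Base-2501")
print(tok.bos_token_id, tok.eos_token_id, tok.pad_token_id)   # 1 2 None
```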

Data Type

Model Dtype bfloat16

Layer Types:
Attention
MLP/FFN
Normalization
Embedding
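
A loading sketch in the model's native bfloat16 (assumes transformers, torch, and accelerate for device_map; plan for roughly the 44 GiB of weight memory listed above plus KV cache):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Small-24B-Base-2501",
    torch_dtype=torch.bfloat16,   # matches the Model Dtype above
    device_map="auto",
)
```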