mistralai/Mixtral-8x22B-v0.1

📊 Model Parameters

Total Parameters 140,620,634,112
Context Length 65,536
Hidden Size 6144
Layers 56
Attention Heads 48
KV Heads 8
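The total above can be reproduced from the configuration values in this card. Below is a minimal sketch, assuming the standard Mixtral weight layout (separate Q/K/V/O projections with grouped-query attention, 8 experts of 3 FFN matrices plus a router per layer, two RMSNorms per layer, a final norm, and untied input/output embeddings) and a head dimension of hidden_size // heads = 128:

```python
# Reproduce the total parameter count from the config values on this card.
# Assumes the standard Mixtral weight layout (an assumption, not read from
# the checkpoint): Q/K/V/O projections, 8 experts x 3 FFN matrices + router
# per layer, two RMSNorms per layer, a final norm, untied embeddings.
vocab, hidden, ffn = 32_000, 6_144, 16_384
layers, heads, kv_heads, head_dim = 56, 48, 8, 128  # head_dim = hidden // heads
experts = 8

attn = hidden * heads * head_dim          # Q projection
attn += 2 * hidden * kv_heads * head_dim  # K and V (grouped-query)
attn += heads * head_dim * hidden         # output projection

moe = experts * 3 * hidden * ffn          # gate/up/down matrices per expert
moe += hidden * experts                   # router

norms = 2 * hidden                        # pre-attention + pre-FFN RMSNorm

per_layer = attn + moe + norms
total = layers * per_layer + 2 * vocab * hidden + hidden  # + embeddings, lm_head, final norm
print(f"{total:,}")  # 140,620,634,112
```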

💾 Memory Requirements

FP32 (Full) 523.85 GiB
FP16 (Half) 261.93 GiB
INT8 (Quantized) 130.96 GiB
INT4 (Quantized) 65.48 GiB
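These figures are just the parameter count multiplied by the bytes per weight, reported in binary units (GiB); a quick check:

```python
# Weight memory = parameter count x bytes per parameter, in GiB.
params = 140_620_634_112
for name, width in [("FP32", 4), ("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    print(f"{name}: {params * width / 2**30:.2f} GiB")
# FP32: 523.85 GiB, FP16: 261.93 GiB, INT8: 130.96 GiB, INT4: 65.48 GiB
```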

🔑 KV Cache (Inference)

Per Token (FP16) 224 KiB
Max Context FP32 28.0 GiB
Max Context FP16 14.0 GiB
Max Context INT8 7.0 GiB
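The per-token figure follows the usual grouped-query KV-cache accounting: 2 tensors (keys and values) × 56 layers × 8 KV heads × 128 head dimension, times the element width. A sketch:

```python
# KV cache: K and V tensors per layer, each kv_heads x head_dim per token.
layers, kv_heads, head_dim, context = 56, 8, 128, 65_536
elems_per_token = 2 * layers * kv_heads * head_dim  # 114,688 elements

for name, width in [("FP32", 4), ("FP16", 2), ("INT8", 1)]:
    per_token = elems_per_token * width
    print(f"{name}: {per_token / 2**10:.0f} KiB/token, "
          f"{per_token * context / 2**30:.1f} GiB at max context")
# FP32: 448 KiB/token, 28.0 GiB; FP16: 224 KiB/token, 14.0 GiB; INT8: 112 KiB/token, 7.0 GiB
```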

⚙️ Model Configuration

Core Architecture

Vocabulary Size 32,000
Hidden Size 6,144
FFN Intermediate Size 16,384
Number of Layers 56
Attention Heads 48
KV Heads 8
Head Dimension Not set (defaults to hidden_size / heads = 128)

Context & Position

Max Context Length 65,536
Sliding Window Size Not set
RoPE Base Frequency 1,000,000
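The 1,000,000 RoPE base (versus the more common 10,000) stretches the longest rotary wavelengths, which is generally what supports a 65,536-token window. A sketch of the standard RoPE inverse-frequency computation, assuming head_dim = 128:

```python
import math

# Standard RoPE inverse frequencies: theta ** (-2i / head_dim)
# for i in [0, head_dim / 2). head_dim = 128 is an assumption
# (hidden_size // heads); the config does not set it explicitly.
theta, head_dim = 1_000_000.0, 128
inv_freq = [theta ** (-2 * i / head_dim) for i in range(head_dim // 2)]

# Wavelength of the slowest-rotating pair: 2*pi / inv_freq[-1].
print(f"longest wavelength: {2 * math.pi / inv_freq[-1]:,.0f} positions")
```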

Attention Configuration

Attention Dropout 0%
Tied Embeddings No

Mixture of Experts

Experts per Token 2
Number of Experts 8
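Since only 2 of the 8 experts are routed per token, the active parameter count per forward pass is far below the 140.6B total. A rough sketch under the same layout assumptions as the parameter-count example above:

```python
# Active parameters per token: all shared weights plus 2 of 8 experts.
# attn is the per-layer attention total from the earlier sketch.
hidden, ffn, layers, vocab = 6_144, 16_384, 56, 32_000
attn, norms, router = 88_080_384, 2 * hidden, hidden * 8  # per layer
active_experts = 2 * 3 * hidden * ffn                     # 2 experts, 3 matrices each

per_layer = attn + active_experts + router + norms
total_active = layers * per_layer + 2 * vocab * hidden + hidden
print(f"{total_active / 1e9:.1f}B active parameters")  # ~39.2B
```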

Activation & Normalization

Activation Function silu
RMSNorm Epsilon 1e-05
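For reference, silu (SiLU, also called swish) and RMSNorm with the epsilon above, written as scalar/list sketches rather than tensor code:

```python
import math

def silu(x: float) -> float:
    """SiLU / swish activation: x * sigmoid(x)."""
    return x / (1.0 + math.exp(-x))

def rmsnorm(xs: list[float], weight: list[float], eps: float = 1e-5) -> list[float]:
    """RMSNorm: divide by the root-mean-square, then apply a learned gain."""
    rms = math.sqrt(sum(v * v for v in xs) / len(xs) + eps)
    return [w * v / rms for w, v in zip(weight, xs)]
```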

Special Tokens

BOS Token ID 1
Pad Token ID Not set
EOS Token ID 2

Data Type

Model Dtype bfloat16