mistralai/Mixtral-8x7B-v0.1

📊 Model Parameters

Total Parameters 46,702,792,704
Context Length 32,768
Hidden Size 4096
Layers 32
Attention Heads 32
KV Heads 8

💾 Memory Requirements

FP32 (Full) 173.98 GB
FP16 (Half) 86.99 GB
INT8 (Quantized) 43.50 GB
INT4 (Quantized) 21.75 GB
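
These figures follow directly from the parameter count: total parameters times the byte width of each precision, reported with a 1024³ (GiB) divisor. A minimal sketch of the arithmetic, ignoring runtime overhead such as activations and the KV cache:

```python
# Weight memory at different precisions for Mixtral-8x7B.
# Assumes sizes are reported in GiB (1024**3 bytes) and counts
# only the raw weights, not activations or framework buffers.
TOTAL_PARAMS = 46_702_792_704

BYTES_PER_PARAM = {
    "FP32": 4.0,
    "FP16": 2.0,
    "INT8": 1.0,
    "INT4": 0.5,
}

for dtype, width in BYTES_PER_PARAM.items():
    gib = TOTAL_PARAMS * width / 1024**3
    print(f"{dtype}: {gib:.2f} GB")
# FP32: 173.98 GB, FP16: 86.99 GB, INT8: 43.50 GB, INT4: 21.75 GB
```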

🔑 KV Cache (Inference)

Per Token (FP16) 128.0 KB
Max Context FP32 8,192.0 MB
Max Context FP16 4,096.0 MB
Max Context INT8 2,048.0 MB
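
Because the model uses grouped-query attention (8 KV heads rather than 32), the cache stores 2 × layers × kv_heads × head_dim values per token. A sketch of the calculation behind the figures above, assuming a plain dtype-width cache with no further compression:

```python
# KV cache size: 2 tensors (K and V) per layer, each of shape
# (kv_heads, head_dim) per token. Head dim is derived as hidden/heads.
LAYERS, HEADS, KV_HEADS, HIDDEN = 32, 32, 8, 4096
MAX_CONTEXT = 32_768
HEAD_DIM = HIDDEN // HEADS  # 128

def kv_cache_bytes(tokens: int, bytes_per_value: float) -> float:
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * bytes_per_value * tokens

print(kv_cache_bytes(1, 2) / 1024)               # 128.0 KB per token (FP16)
print(kv_cache_bytes(MAX_CONTEXT, 4) / 1024**2)  # 8192.0 MB at max context (FP32)
print(kv_cache_bytes(MAX_CONTEXT, 2) / 1024**2)  # 4096.0 MB at max context (FP16)
print(kv_cache_bytes(MAX_CONTEXT, 1) / 1024**2)  # 2048.0 MB at max context (INT8)
```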

⚙️ Model Configuration

Core Architecture

Vocabulary Size 32,000
Hidden Size 4,096
FFN Intermediate Size 14,336
Number of Layers 32
Attention Heads 32
KV Heads 8
Head Dimension Not set (derived: 4096 / 32 heads = 128)
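
The 46.7B total at the top can be reproduced from these numbers alone. A sketch of the count, assuming the standard Mixtral layout: untied input embedding and LM head, GQA attention projections, an 8-expert SwiGLU MoE block with a linear router in every layer, and RMSNorm weights:

```python
# Reconstruct the 46.7B total from the architecture numbers above.
# Layout assumptions: separate input embedding and LM head (embeddings
# are not tied), GQA attention, 8-expert SwiGLU MoE + linear router.
VOCAB, HIDDEN, FFN = 32_000, 4_096, 14_336
LAYERS, HEADS, KV_HEADS, EXPERTS = 32, 32, 8, 8
HEAD_DIM = HIDDEN // HEADS  # 128

attn = (HIDDEN * HEADS * HEAD_DIM           # Q projection
        + 2 * HIDDEN * KV_HEADS * HEAD_DIM  # K and V projections
        + HEADS * HEAD_DIM * HIDDEN)        # output projection
expert = 3 * HIDDEN * FFN                   # gate, up, down matrices
moe = EXPERTS * expert + HIDDEN * EXPERTS   # experts + router
norms = 2 * HIDDEN                          # pre-attn and pre-MoE RMSNorm

per_layer = attn + moe + norms
total = LAYERS * per_layer + 2 * VOCAB * HIDDEN + HIDDEN  # + final norm
print(total)  # 46702792704
```

The result matches the reported total exactly, which is a useful sanity check that no architectural detail is missing from the table.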

Context & Position

Max Context Length 32,768
Sliding Window Size Not set
RoPE Base Frequency 1,000,000.0
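
The 1,000,000.0 base (rope_theta) sets the rotary embedding's frequency spectrum; a larger base gives longer maximum wavelengths, which supports addressing the full 32,768-token context. A quick sketch of the standard RoPE frequency schedule, using the derived head dimension of 128 (the config itself leaves head_dim unset):

```python
# Standard RoPE inverse-frequency schedule for theta = 1e6.
# HEAD_DIM = 128 is derived (hidden 4096 / 32 heads), not explicit
# in the config ("Not set" above).
THETA, HEAD_DIM = 1_000_000.0, 128

inv_freq = [THETA ** (-2 * i / HEAD_DIM) for i in range(HEAD_DIM // 2)]
# At position p, dimension pair i is rotated by angle p * inv_freq[i].
print(inv_freq[0], inv_freq[-1])  # 1.0 down to ~1.24e-06
```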

Attention Configuration

Attention Dropout 0%
Tied Embeddings No

Mixture of Experts

Experts per Token 2
Number of Experts 8
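
With 2 of 8 experts active per token, only a fraction of the weights participate in any single forward pass. A hedged sketch of the active-parameter count, reusing the layout assumptions from the total-parameter sketch above:

```python
# Active parameters per token: attention + router + 2 of the 8 experts
# per layer, plus embeddings and LM head. Layout assumptions are the
# same as in the total-parameter sketch above.
VOCAB, HIDDEN, FFN, LAYERS = 32_000, 4_096, 14_336, 32
NUM_EXPERTS, EXPERTS_PER_TOKEN = 8, 2

attn = 41_943_040                   # Q/K/V/O projections (see above)
expert = 3 * HIDDEN * FFN           # one SwiGLU expert (gate/up/down)
router = HIDDEN * NUM_EXPERTS
norms = 2 * HIDDEN

active_per_layer = attn + EXPERTS_PER_TOKEN * expert + router + norms
active = LAYERS * active_per_layer + 2 * VOCAB * HIDDEN + HIDDEN
print(f"{active / 1e9:.2f}B active of 46.70B total")  # ~12.88B
```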

Activation & Normalization

Activation Function silu
RMSNorm Epsilon 1e-05
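
Both entries are standard components: silu (SiLU/swish) is the gating activation inside the experts' FFNs, and RMSNorm with ε = 1e-05 normalizes before each sub-layer. A minimal NumPy reference sketch:

```python
import numpy as np

EPS = 1e-05  # RMSNorm epsilon from the config

def rmsnorm(x: np.ndarray, weight: np.ndarray) -> np.ndarray:
    # Scale by the root-mean-square over the hidden dimension, then
    # apply the learned per-channel gain. Unlike LayerNorm, there is
    # no mean-centering and no bias.
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + EPS)
    return x / rms * weight

def silu(x: np.ndarray) -> np.ndarray:
    # SiLU / swish: x * sigmoid(x).
    return x / (1.0 + np.exp(-x))

x = np.random.randn(4, 4096)
print(rmsnorm(x, np.ones(4096)).shape, silu(x).shape)  # (4, 4096) twice
```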

Special Tokens

BOS Token ID 1
Pad Token ID Not set
EOS Token ID 2

Data Type

Model Dtype bfloat16
Layer Types: Attention, MLP/FFN, Normalization, Embedding