mistralai/Mistral-Large-3-675B-Base-2512

📊 Model Parameters

Total Parameters 722,158,138,880
Context Length 131,072
Hidden Size 7,168
Layers 61
Attention Heads 32
KV Heads 8
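
These figures can be read straight from the model's Hugging Face config; a minimal sketch, assuming the repo id above is publicly reachable and the config uses the standard Mistral field names:

```python
from transformers import AutoConfig

# Assumption: the repo id is public and the config exposes the
# standard Mistral/Llama field names shown below.
cfg = AutoConfig.from_pretrained("mistralai/Mistral-Large-3-675B-Base-2512")

print("hidden size:    ", cfg.hidden_size)             # 7,168 per the config section below
print("layers:         ", cfg.num_hidden_layers)       # 61
print("attention heads:", cfg.num_attention_heads)     # 32
print("kv heads:       ", cfg.num_key_value_heads)     # 8
print("context length: ", cfg.max_position_embeddings) # 131,072
```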

💾 Memory Requirements

FP32 (Full) 2690.25 GB
FP16 (Half) 1345.12 GB
INT8 (Quantized) 672.56 GB
INT4 (Quantized) 336.28 GB
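
These figures are simply total parameter count times bytes per element; a minimal sketch reproducing the table above (weights only, no activation or runtime overhead; GB here means 1024³ bytes, and INT4 is treated as half a byte per parameter):

```python
TOTAL_PARAMS = 722_158_138_880  # from the table above

BYTES_PER_PARAM = {
    "FP32 (Full)":      4.0,
    "FP16 (Half)":      2.0,
    "INT8 (Quantized)": 1.0,
    "INT4 (Quantized)": 0.5,
}

GIB = 1024 ** 3

for dtype, nbytes in BYTES_PER_PARAM.items():
    print(f"{dtype:17s} {TOTAL_PARAMS * nbytes / GIB:8.2f} GB")
# FP32 (Full)        2690.25 GB
# FP16 (Half)        1345.12 GB
# INT8 (Quantized)    672.56 GB
# INT4 (Quantized)    336.28 GB
```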

🔑 KV Cache (Inference)

(head dimension unset in the config; figures below assume head_dim = hidden_size / attention heads = 224)

Per Token (FP16) 427.0 KB
Max Context FP32 106.75 GB
Max Context FP16 53.38 GB
Max Context INT8 26.69 GB
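
A minimal sketch of the standard grouped-query-attention cache-size formula that produces the figures above, under the same assumed head dimension (the config leaves it unset, so the usual default hidden_size // num_attention_heads is used):

```python
HIDDEN_SIZE  = 7_168
NUM_HEADS    = 32
NUM_KV_HEADS = 8
NUM_LAYERS   = 61
MAX_CONTEXT  = 131_072
HEAD_DIM     = HIDDEN_SIZE // NUM_HEADS  # 224 -- assumption; "Not set" in the config

def kv_cache_bytes(tokens: int, bytes_per_elem: float) -> float:
    # Two cached tensors (K and V) per layer, each of shape
    # [num_kv_heads, tokens, head_dim].
    return 2 * NUM_LAYERS * NUM_KV_HEADS * HEAD_DIM * tokens * bytes_per_elem

print(f"per token (FP16): {kv_cache_bytes(1, 2) / 1024:.1f} KB")               # 427.0 KB
print(f"max context FP32: {kv_cache_bytes(MAX_CONTEXT, 4) / 1024**3:.2f} GB")  # 106.75 GB
print(f"max context FP16: {kv_cache_bytes(MAX_CONTEXT, 2) / 1024**3:.2f} GB")  # 53.38 GB
print(f"max context INT8: {kv_cache_bytes(MAX_CONTEXT, 1) / 1024**3:.2f} GB")  # 26.69 GB
```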

⚙️ Model Configuration

Core Architecture

Vocabulary Size 131,072
Hidden Size 7,168
FFN Intermediate Size 14,336
Number of Layers 61
Attention Heads 32
Head Dimension Not set
KV Heads 8
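
With 32 attention heads sharing 8 KV heads, this is grouped-query attention with 4 query heads per KV group; a minimal sketch of the derived projection shapes (head dimension again assumed to default to hidden_size // heads, since the config leaves it unset):

```python
HIDDEN_SIZE  = 7_168
NUM_HEADS    = 32
NUM_KV_HEADS = 8

head_dim       = HIDDEN_SIZE // NUM_HEADS   # 224 -- assumed default; "Not set" in config
q_per_kv_group = NUM_HEADS // NUM_KV_HEADS  # 4 query heads share each KV head

q_proj_out  = NUM_HEADS * head_dim     # 7,168
kv_proj_out = NUM_KV_HEADS * head_dim  # 1,792 each for K and V

print(f"head_dim={head_dim}, {q_per_kv_group} query heads per KV head")
print(f"q projection:    {HIDDEN_SIZE} -> {q_proj_out}")
print(f"k/v projections: {HIDDEN_SIZE} -> {kv_proj_out} each")
```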

Context & Position

Max Context Length 131,072
Sliding Window Size 4,096
RoPE Base Frequency 10000.0
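
The RoPE base of 10000.0 sets a geometric progression of rotation frequencies across the head dimension; a minimal sketch of the standard inverse-frequency computation (head_dim = 224 assumed, as above; the example position is the sliding-window width from this table):

```python
ROPE_BASE = 10_000.0
HEAD_DIM  = 224  # assumption: hidden_size // attention_heads; "Not set" in the config

# Standard RoPE: one rotation frequency per pair of dimensions.
inv_freq = [ROPE_BASE ** (-2 * i / HEAD_DIM) for i in range(HEAD_DIM // 2)]

# The angle applied to pair i at position p is p * inv_freq[i]; the lowest
# frequency needs tens of thousands of positions for a full cycle.
position = 4_096  # the sliding-window width from the table above
angles = [position * f for f in inv_freq]
print(f"fastest angle: {angles[0]:.1f} rad, slowest: {angles[-1]:.4f} rad")
```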

Attention Configuration

Attention Dropout 0%
Tied Embeddings No

Activation & Normalization

Activation Function silu
RMSNorm Epsilon 1e-06
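
Both of these are a few lines to state precisely: SiLU is x · sigmoid(x), and RMSNorm rescales by the reciprocal root-mean-square of the last dimension with the epsilon from this table. A minimal reference sketch in PyTorch:

```python
import torch

def silu(x: torch.Tensor) -> torch.Tensor:
    # SiLU / swish: x * sigmoid(x) -- the "silu" activation from the config.
    return x * torch.sigmoid(x)

def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # RMSNorm: divide by the root-mean-square over the last dim; the
    # epsilon (1e-06, per the table) guards against division by zero.
    rms = torch.sqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)
    return x / rms * weight

x = torch.randn(2, 7168)           # hidden size from the config
w = torch.ones(7168)               # learned scale, initialized to 1
print(rms_norm(silu(x), w).shape)  # torch.Size([2, 7168])
```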

Special Tokens

BOS Token ID 1
Pad Token ID Not set
EOS Token ID 2
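
A base model typically expects sequences bracketed with these IDs; a minimal sketch using the table's values (the example content IDs are placeholders, and reusing EOS for padding is a common convention assumed here because no pad token is set):

```python
BOS_ID = 1       # from the table
EOS_ID = 2       # from the table
PAD_ID = EOS_ID  # assumption: pad token is "Not set", so EOS is reused

def frame(token_ids: list[int], target_len: int) -> list[int]:
    # Wrap a tokenized sequence in BOS/EOS, then right-pad to target_len.
    seq = [BOS_ID] + token_ids + [EOS_ID]
    return seq + [PAD_ID] * (target_len - len(seq))

print(frame([17, 2041, 998], 8))  # [1, 17, 2041, 998, 2, 2, 2, 2]
```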

Data Type

Model Dtype Not set
Layer Types: Attention, MLP/FFN, Normalization, Embedding