Mistral: a French AI startup known for efficient open-weight models (founded 2023). Models include:
Mistral-7B-v0.1
Mistral-Nemo-Base-2407
Codestral-22B-v0.1
Mistral-Small-24B-Base-2501
Ministral-8B-Instruct-2410
Mistral-Large-Instruct-2411
Mixtral-8x7B-v0.1
Mixtral-8x22B-v0.1
Mamba-Codestral-7B-v0.1
Pixtral-12B-Base-2409
Voxtral-Mini-3B-2507
Voxtral-Small-24B-2507
Mistral-Small-3.2-24B-Instruct-2506
Ministral-3-3B-Base-2512
Ministral-3-8B-Base-2512
Ministral-3-14B-Base-2512
Mistral-Large-3-675B-Base-2512
Devstral-Small-2-24B-Instruct-2512
Devstral-2-123B-Instruct-2512
Selected model: mistralai/Mamba-Codestral-7B-v0.1
📊 Model Parameters
Total Parameters
7,285,403,648
Context Length
2,048 (nominal; Mamba-2's recurrent design imposes no fixed attention window)
Hidden Size
4,096
Layers
64
Attention Heads
0 (SSM mixers replace attention)
KV Heads
0
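To reproduce the headline parameter count, a minimal sketch with Hugging Face transformers (assumes a version with Mamba-2 support, roughly ≥4.44, and enough RAM for the bfloat16 weights):

```python
from transformers import AutoModelForCausalLM

# Load the checkpoint in its native bfloat16 (~13.6 GiB of weights).
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mamba-Codestral-7B-v0.1",
    torch_dtype="auto",
)

total = sum(p.numel() for p in model.parameters())
print(f"{total:,}")  # expected: 7,285,403,648
```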
💾 Memory Requirements
FP32 (Full)
27.14 GiB
FP16 (Half)
13.57 GiB
INT8 (Quantized)
6.79 GiB
INT4 (Quantized)
3.39 GiB
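These figures are simply parameter count × bytes per weight, converted to GiB (1024³ bytes); activations, the SSM state, and framework overhead come on top. A quick check of the arithmetic:

```python
PARAMS = 7_285_403_648
BYTES_PER_WEIGHT = {"FP32": 4.0, "FP16": 2.0, "INT8": 1.0, "INT4": 0.5}

for name, nbytes in BYTES_PER_WEIGHT.items():
    print(f"{name}: {PARAMS * nbytes / 1024**3:.2f} GiB")
# FP32: 27.14 GiB / FP16: 13.57 GiB / INT8: 6.79 GiB / INT4: 3.39 GiB
```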
🔑 KV Cache (Inference)
Not applicable: Mamba-2 replaces attention with state-space (SSM) mixers, so decoding keeps a small constant-size recurrent state per layer instead of a KV cache that grows with context.
Per Token (FP16)
0 B
Max Context FP32
0.0 MB
Max Context FP16
0.0 MB
Max Context INT8
0.0 MB
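For a transformer, per-token KV-cache size is 2 (K and V) × layers × KV heads × head dim × bytes per element; with zero KV heads the formula correctly yields 0 B. A sketch, where the non-zero example uses made-up GQA numbers for contrast, not values from this model:

```python
def kv_bytes_per_token(layers: int, kv_heads: int, head_dim: int,
                       bytes_per_elem: int = 2) -> int:
    """K and V cached per generated token at the given precision."""
    return 2 * layers * kv_heads * head_dim * bytes_per_elem

# Mamba-Codestral: no attention layers, so no KV heads and no KV cache.
print(kv_bytes_per_token(layers=64, kv_heads=0, head_dim=64))    # 0

# Hypothetical GQA transformer at FP16, for comparison:
print(kv_bytes_per_token(layers=32, kv_heads=8, head_dim=128))   # 131072
```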
⚙️ Model Configuration
Core Architecture
Vocabulary Size
32,768
Hidden Size
4,096
Number of Layers
64
SSM Heads (Mamba-2 num_heads)
128
SSM Head Dimension
64
Intermediate Size (expand × hidden)
8,192
Embeddings
Tied Embeddings
No
Activation & Normalization
RMSNorm Epsilon
1e-05
Activation Function
silu
Special Tokens
BOS Token ID
0
EOS Token ID
0
Pad Token ID
0
Data Type
Model Dtype
bfloat16
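Most fields in this section come straight from the checkpoint's config.json. A sketch for pulling them yourself (attribute names follow Hugging Face's Mamba2Config; treat any that differ in your transformers version as an assumption to verify):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("mistralai/Mamba-Codestral-7B-v0.1")

print(cfg.model_type)                    # "mamba2"
print(cfg.vocab_size)                    # 32768
print(cfg.hidden_size)                   # 4096
print(cfg.num_hidden_layers)             # 64
print(getattr(cfg, "num_heads", "n/a"))  # 128 (SSM heads, not attention)
print(getattr(cfg, "head_dim", "n/a"))   # 64
print(cfg.hidden_act)                    # "silu"
print(cfg.torch_dtype)                   # torch.bfloat16
```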