MiniMaxAI/MiniMax-M2.1
📊 Model Parameters
Total Parameters: 228,689,748,992
Context Length: 196,608
Hidden Size: 3,072
Layers: 62
Attention Heads: 48
KV Heads: 8
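These fields can be read directly from the model's Hugging Face config. A minimal sketch, assuming the `MiniMaxAI/MiniMax-M2.1` repo is accessible and exposes the standard Hugging Face field names used below; a custom architecture may require `trust_remote_code=True`:

```python
# Sketch: read the architecture fields above from the Hugging Face config.
# Assumes the MiniMaxAI/MiniMax-M2.1 repo is accessible and uses the common
# field names below; custom architectures may need trust_remote_code=True.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("MiniMaxAI/MiniMax-M2.1", trust_remote_code=True)

print("Context Length :", cfg.max_position_embeddings)  # expect 196,608
print("Hidden Size    :", cfg.hidden_size)              # expect 3,072
print("Layers         :", cfg.num_hidden_layers)        # expect 62
print("Attention Heads:", cfg.num_attention_heads)      # expect 48
print("KV Heads       :", cfg.num_key_value_heads)      # expect 8
```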
💾 Memory Requirements (weights)
FP32 (full precision): 851.94 GiB
FP16 (half precision): 425.97 GiB
INT8 (quantized): 212.98 GiB
INT4 (quantized): 106.49 GiB
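The weight-memory figures are straightforward arithmetic: parameter count times bytes per parameter, reported in binary gibibytes (1 GiB = 2^30 bytes). A sketch that reproduces the table:

```python
# Sketch: weight memory = parameter count x bytes per parameter.
# Figures are binary gibibytes (1 GiB = 2**30 bytes), matching the table above.
TOTAL_PARAMS = 228_689_748_992

BYTES_PER_PARAM = {"FP32": 4, "FP16": 2, "INT8": 1, "INT4": 0.5}

for dtype, nbytes in BYTES_PER_PARAM.items():
    gib = TOTAL_PARAMS * nbytes / 2**30
    print(f"{dtype}: {gib:.2f} GiB")
# FP32: 851.94, FP16: 425.97, INT8: 212.98, INT4: 106.49
```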
🔑 KV Cache (Inference)
Per Token (FP16): 253.95 KB
Max Context, FP32: 93.00 GiB
Max Context, FP16: 46.50 GiB
Max Context, INT8: 23.25 GiB
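Each row follows from the KV-cache formula: 2 tensors (one K, one V) × layers × KV heads × head dimension × bytes per element. Note the mixed units: the per-token figure is in decimal kilobytes, while the max-context figures are binary gibibytes. A sketch reproducing the rows:

```python
# Sketch: KV cache per token = 2 (K and V) * layers * kv_heads * head_dim * bytes.
LAYERS, KV_HEADS, HEAD_DIM, CONTEXT = 62, 8, 128, 196_608

def kv_bytes_per_token(dtype_bytes: float) -> float:
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * dtype_bytes

print(f"Per token (FP16): {kv_bytes_per_token(2) / 1e3:.2f} KB")  # 253.95 (decimal KB)
for dtype, nbytes in {"FP32": 4, "FP16": 2, "INT8": 1}.items():
    gib = kv_bytes_per_token(nbytes) * CONTEXT / 2**30
    print(f"Max context {dtype}: {gib:.2f} GiB")  # 93.00 / 46.50 / 23.25
```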
⚙️ Model Configuration

Core Architecture
Vocabulary Size: 200,064
Hidden Size: 3,072
FFN Intermediate Size (per expert): 1,536
Number of Layers: 62
Attention Heads: 48
KV Heads: 8
Head Dimension: 128
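Two details are implied by these numbers: with 48 query heads sharing 8 KV heads, this is grouped-query attention (6 query heads per KV head), and since 48 × 128 = 6,144 ≠ 3,072, the attention head dimension is decoupled from the hidden size. A minimal sketch of the projection shapes, assuming a standard GQA layout:

```python
# Sketch: projection shapes implied by the attention config above,
# assuming a standard GQA layout (hidden -> q/k/v -> attention -> hidden).
HIDDEN, HEADS, KV_HEADS, HEAD_DIM = 3072, 48, 8, 128

q_proj = (HIDDEN, HEADS * HEAD_DIM)     # (3072, 6144): head dim decoupled from hidden
k_proj = (HIDDEN, KV_HEADS * HEAD_DIM)  # (3072, 1024)
v_proj = (HIDDEN, KV_HEADS * HEAD_DIM)  # (3072, 1024)
o_proj = (HEADS * HEAD_DIM, HIDDEN)     # (6144, 3072)

print(f"GQA group size: {HEADS // KV_HEADS} query heads per KV head")  # 6
```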
Context & Position
Max Context Length: 196,608

Attention Configuration
Attention Dropout: 0%
Tied Embeddings: No
Mixture of Experts
Experts per Token: 8
Number of Experts: 256
Router Scoring Function: sigmoid (see the routing sketch below)
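A sigmoid scoring function scores each expert independently in (0, 1) rather than normalizing across all 256 experts with a softmax. A minimal PyTorch sketch of sigmoid-scored top-k routing with these sizes; this is a generic illustration, not MiniMax's exact router, which may add routing bias terms, different normalization, and load-balancing losses:

```python
import torch

# Sketch: generic sigmoid-scored top-k MoE routing with the sizes above
# (256 experts, 8 active per token). Illustrative only; real routers add
# details such as routing bias, normalization, and load-balancing losses.
HIDDEN, N_EXPERTS, TOP_K = 3072, 256, 8

router = torch.nn.Linear(HIDDEN, N_EXPERTS, bias=False)

def route(x: torch.Tensor):
    """x: (tokens, hidden) -> expert ids and weights, each (tokens, TOP_K)."""
    scores = torch.sigmoid(router(x))                  # per-expert affinity in (0, 1)
    weights, expert_ids = scores.topk(TOP_K, dim=-1)   # keep the 8 best experts
    weights = weights / weights.sum(-1, keepdim=True)  # renormalize the kept scores
    return expert_ids, weights

ids, w = route(torch.randn(4, HIDDEN))
print(ids.shape, w.shape)  # torch.Size([4, 8]) torch.Size([4, 8])
```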
Activation & Normalization
Activation Function: silu
RMSNorm Epsilon: 1e-06

Special Tokens
Pad Token ID: Not set
BOS Token ID: 200,034
EOS Token ID: 200,020

Data Type
Model Dtype: Not set
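As a sanity check, the listed total parameter count can be reconstructed almost exactly from the configuration above. This is a back-of-the-envelope sketch that assumes SwiGLU-style experts (gate, up, and down projections, consistent with the silu activation), untied embeddings, and no shared expert, ignoring norm weights:

```python
# Sketch: back-of-the-envelope parameter count from the configuration above.
# Assumptions: SwiGLU experts (three matrices each), untied embeddings,
# no shared expert; norm weights are ignored as negligible.
V, H, I, L = 200_064, 3_072, 1_536, 62
HEADS, KV_HEADS, D = 48, 8, 128
N_EXPERTS, TOP_K = 256, 8

attn = L * (H * HEADS * D + 2 * H * KV_HEADS * D + HEADS * D * H)  # q, k, v, o
expert = 3 * H * I                              # gate + up + down per expert
moe = L * (N_EXPERTS * expert + H * N_EXPERTS)  # all experts + router weights
embed = 2 * V * H                               # input + output embeddings

total = attn + moe + embed
active = attn + L * TOP_K * expert + embed      # only 8 of 256 experts fire
print(f"reconstructed total: {total:,}")   # 228,688,920,576 vs listed 228,689,748,992
print(f"active per token   : {active:,}")  # ~11.0B of ~228.7B parameters
```

Under these assumptions the reconstruction lands within 0.001% of the listed total, and only roughly 11B parameters are active per token.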
Layer Types: Attention, MLP/FFN, Normalization, Embedding