XiaomiMiMo/MiMo-7B-Base
📊 Model Parameters
Total Parameters: 7,833,409,536
Context Length: 32,768
Hidden Size: 4,096
Layers: 36
Attention Heads: 32
KV Heads: 8
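
The headline count can be roughly reproduced from the configuration listed further down. A minimal sketch, assuming a Qwen2-style decoder (GQA with Q/K/V bias, no output-projection bias, SwiGLU MLP, untied embeddings); the per-module breakdown is an assumption, not something the card states:

```python
# Hedged sanity check of the headline parameter count; the layer layout
# (Qwen2-style decoder) is assumed, only the sizes come from the card.
vocab, hidden, inter, layers = 151_680, 4_096, 11_008, 36
heads, kv_heads, head_dim = 32, 8, 128

attn = (hidden * heads * head_dim + heads * head_dim                 # q_proj + bias
        + 2 * (hidden * kv_heads * head_dim + kv_heads * head_dim)   # k/v_proj + bias
        + heads * head_dim * hidden)                                 # o_proj
mlp = 3 * hidden * inter                                             # gate/up/down
block = attn + mlp + 2 * hidden                                      # + two RMSNorms

main = layers * block + 2 * vocab * hidden + hidden  # + embed, lm_head, final norm
print(f"decoder + embeddings: {main:,}")                  # 7,622,619,136
print(f"unaccounted:          {7_833_409_536 - main:,}")  # 210,790,400
```

The roughly 210.8M parameters the decoder stack does not account for are plausibly the single Next-N prediction layer listed under Speculative Decoding below.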
💾 Memory Requirements (weights only)
FP32 (Full): 29.18 GiB
FP16 (Half): 14.59 GiB
INT8 (Quantized): 7.30 GiB
INT4 (Quantized): 3.65 GiB
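
These figures are weights-only, i.e. parameter count × bytes per parameter, reported in GiB; activations and KV cache come on top. A sketch of the arithmetic, with nothing model-specific assumed beyond the parameter count above:

```python
# Weights-only footprint: parameter count × bytes per parameter, in GiB.
n_params = 7_833_409_536
for dtype, nbytes in [("FP32", 4), ("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    print(f"{dtype}: {n_params * nbytes / 2**30:.2f} GiB")
# FP32: 29.18  FP16: 14.59  INT8: 7.30  INT4: 3.65  -- matches the table
```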
🔑 KV Cache (Inference)
Per Token (FP16): 147.46 KB
Max Context (FP32): 9.00 GiB
Max Context (FP16): 4.50 GiB
Max Context (INT8): 2.25 GiB
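
The cache figures follow the standard formula, 2 (K and V) × layers × KV heads × head dim × bytes per element; note the per-token figure is in decimal KB while the max-context totals are in GiB. A minimal sketch:

```python
# KV cache per token: 2 (K and V) × layers × kv_heads × head_dim × bytes.
layers, kv_heads, head_dim, context = 36, 8, 128, 32_768
per_token = 2 * layers * kv_heads * head_dim * 2             # FP16 bytes
print(f"per token (FP16): {per_token / 1e3:.2f} KB")         # 147.46 KB
for dtype, nbytes in [("FP32", 4), ("FP16", 2), ("INT8", 1)]:
    full = 2 * layers * kv_heads * head_dim * nbytes * context
    print(f"max context ({dtype}): {full / 2**30:.2f} GiB")  # 9.00 / 4.50 / 2.25
```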
⚙️ Model Configuration

Core Architecture
Vocabulary Size: 151,680
Hidden Size: 4,096
FFN Intermediate Size: 11,008
Number of Layers: 36
Attention Heads: 32
KV Heads: 8
Head Dimension: 128
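
With 32 query heads sharing 8 KV heads, this is grouped-query attention; the GQA reading is an inference from the head counts, not stated on the card. A quick check of the geometry:

```python
hidden, heads, kv_heads = 4_096, 32, 8
assert hidden // heads == 128   # head dimension, matches the table
group = heads // kv_heads       # 4 query heads share each KV head
# => the KV cache is 4x smaller than full multi-head attention would need
```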
Context & Position
Max Context Length: 32,768
Uses Sliding Window: No
Sliding Window Size: Not set
Window Attention Layers: 32
Layer Attention Types: [36 items]

Attention Configuration
Attention Dropout: 0%
Tied Embeddings: No
Attention Bias: Yes

Speculative Decoding
Next-N Prediction Layers: 1

Activation & Normalization
Activation Function: silu
RMSNorm Epsilon: 1e-05

Special Tokens
Pad Token ID: Not set
BOS Token ID: Not set
EOS Token ID: Not set

Data Type
Model Dtype: bfloat16
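
These fields can be reproduced from the Hugging Face config. A sketch using the standard `transformers` attribute names; MiMo ships custom modeling code, so `trust_remote_code=True` is assumed to be required and some attribute names may differ in the custom config class:

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("XiaomiMiMo/MiMo-7B-Base", trust_remote_code=True)
print(cfg.vocab_size, cfg.hidden_size, cfg.intermediate_size)  # 151680 4096 11008
print(cfg.num_hidden_layers, cfg.num_attention_heads,
      cfg.num_key_value_heads)                                 # 36 32 8
print(cfg.max_position_embeddings, cfg.torch_dtype)            # 32768 bfloat16
```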
Layer Types: Attention, MLP/FFN, Normalization, Embedding