meituan-longcat/LongCat-Flash-Chat
📊 Model Parameters
Total Parameters: 18,522,390,528
Context Length: 131,072
Hidden Size: 6,144
Layers: 28
Attention Heads: 64
KV Heads: 64
💾 Memory Requirements
FP32 (Full): 69.00 GB
FP16 (Half): 34.50 GB
INT8 (Quantized): 17.25 GB
INT4 (Quantized): 8.63 GB
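These figures are the listed parameter count multiplied by the bytes per parameter for each precision; a minimal sketch of that arithmetic (quantization overhead such as scales and zero-points, plus activation and KV-cache memory, is not included):

```python
# Weight-only memory estimate: parameter count x bytes per parameter,
# reported in GiB (1 GiB = 1024**3 bytes), which matches the table above.
TOTAL_PARAMS = 18_522_390_528  # from the Model Parameters section

BYTES_PER_PARAM = {"FP32": 4.0, "FP16": 2.0, "INT8": 1.0, "INT4": 0.5}

for dtype, bpp in BYTES_PER_PARAM.items():
    gib = TOTAL_PARAMS * bpp / 1024**3
    print(f"{dtype}: {gib:.2f} GiB")   # 69.00, 34.50, 17.25, 8.63
```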
🔑 KV Cache (Inference)
Per Token (FP16): 458.75 KB
Max Context FP32: 112.00 GB
Max Context FP16: 56.00 GB
Max Context INT8: 28.00 GB
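These figures match the standard full key/value caching formula (2 × layers × KV heads × head dimension × bytes per element, per token); a sketch of the arithmetic using the configuration values listed further down. Since the model uses Multi-Head Latent Attention, engines that cache the compressed latent instead can store substantially less; a comparison sketch follows the MLA parameters below.

```python
# KV-cache footprint under the standard layout: one key and one value vector
# per attention layer and KV head is cached for every token.
NUM_LAYERS = 28
NUM_KV_HEADS = 64
HEAD_DIM = 64
MAX_CONTEXT = 131_072

def kv_bytes_per_token(bytes_per_elem: int) -> int:
    return 2 * NUM_LAYERS * NUM_KV_HEADS * HEAD_DIM * bytes_per_elem  # 2 = K + V

print(f"per token (FP16): {kv_bytes_per_token(2) / 1e3:.2f} KB")   # 458.75 KB
for name, b in (("FP32", 4), ("FP16", 2), ("INT8", 1)):
    gib = kv_bytes_per_token(b) * MAX_CONTEXT / 1024**3
    print(f"max context {name}: {gib:.2f} GiB")                    # 112 / 56 / 28
```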
⚙️ Model Configuration

Core Architecture
Vocabulary Size: 131,072
Hidden Size: 6,144
FFN Intermediate Size: 12,288
Number of Layers: 28
Attention Heads: 64
Head Dimension: 64
KV Heads: 64

Context & Position
Max Context Length: 131,072
RoPE Base Frequency: 10,000,000.0
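The base frequency of 10,000,000 (versus the common default of 10,000) lengthens the RoPE wavelengths, which is consistent with the 131,072-token context. A minimal sketch of the inverse frequencies it implies, using the 64-dimensional rotary slice listed under Multi-Head Latent Attention below:

```python
import math

# Inverse frequencies implied by the listed RoPE base (theta = 1e7) for a
# 64-dimensional rotary slice (the QK RoPE head dimension listed below).
rope_theta = 10_000_000.0
rotary_dim = 64

inv_freq = [rope_theta ** (-2 * i / rotary_dim) for i in range(rotary_dim // 2)]

# Longest wavelength in tokens (2 * pi / smallest frequency); it comfortably
# exceeds the 131,072-token maximum context.
print(f"longest wavelength ≈ {2 * math.pi / inv_freq[-1]:,.0f} tokens")
```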
Attention Configuration
Attention Bias: No
Attention Dropout: 0%
Tied Embeddings: No
Multi-Head Latent Attention
KV LoRA Rank: 512
Query LoRA Rank: 1,536
QK RoPE Head Dimension: 64
Value Head Dimension: 128
QK Non-RoPE Head Dimension: 128
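Because the model uses Multi-Head Latent Attention, an inference engine that caches the compressed KV latent plus the decoupled RoPE key, rather than full per-head keys and values, stores far less per token. A rough comparison under that assumption, using the ranks listed above:

```python
# Per-token cache if only the MLA latent is stored: one compressed KV latent
# (kv_lora_rank) plus one shared RoPE key (qk_rope_head_dim) per layer.
NUM_LAYERS = 28
KV_LORA_RANK = 512
QK_ROPE_HEAD_DIM = 64
FP16_BYTES = 2

latent_cache = NUM_LAYERS * (KV_LORA_RANK + QK_ROPE_HEAD_DIM) * FP16_BYTES
full_cache = 2 * NUM_LAYERS * 64 * 64 * FP16_BYTES  # full K/V: 64 heads x 64 dims

print(f"MLA latent cache: {latent_cache / 1e3:.2f} KB/token")   # ~32.26 KB
print(f"full K/V cache:   {full_cache / 1e3:.2f} KB/token")     # 458.75 KB
```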
Mixture of Experts
Expert FFN Size: 2,048
Number of Experts: 512
Routing Scale Factor: 6.0
Experts per Token: 12
Normalize TopK Probabilities: No
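With 512 routed experts per layer but only 12 selected per token, most expert weights are idle for any given token. A rough sketch of what these numbers imply, assuming a SwiGLU-style expert (gate, up, and down projections) and counting routed expert weights only (attention, embeddings, and any shared or zero-computation experts excluded):

```python
HIDDEN_SIZE = 6_144
EXPERT_FFN_SIZE = 2_048
NUM_EXPERTS = 512
EXPERTS_PER_TOKEN = 12
NUM_LAYERS = 28

# SwiGLU expert: gate + up (hidden -> ffn) and down (ffn -> hidden).
params_per_expert = 3 * HIDDEN_SIZE * EXPERT_FFN_SIZE

total_expert_params = params_per_expert * NUM_EXPERTS * NUM_LAYERS
active_expert_params = params_per_expert * EXPERTS_PER_TOKEN * NUM_LAYERS

print(f"routed expert params, total:  {total_expert_params / 1e9:.1f}B")
print(f"routed expert params, active: {active_expert_params / 1e9:.1f}B per token")
```

Taken at face value, the routed experts alone come to roughly 540B parameters, which suggests the Total Parameters figure in the summary above reflects per-token active weights rather than the full checkpoint.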
Activation & Normalization
Activation Function: silu
RMSNorm Epsilon: 1e-05

Special Tokens
Pad Token ID: Not set
BOS Token ID: 1
EOS Token ID: 2

Data Type
Model Dtype: Not set
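All of these values come from the model's configuration file; a minimal sketch of reading them from the Hugging Face repo (this assumes the `transformers` library, network access, and that the attribute names follow the usual `transformers` conventions, which a custom config class may not):

```python
from transformers import AutoConfig

# trust_remote_code is assumed to be needed because the repo may ship its own
# config/model classes; drop it if the architecture is supported natively.
config = AutoConfig.from_pretrained(
    "meituan-longcat/LongCat-Flash-Chat",
    trust_remote_code=True,
)

for name in ("vocab_size", "hidden_size", "num_hidden_layers",
             "num_attention_heads", "max_position_embeddings", "rope_theta"):
    print(name, getattr(config, name, "not present"))
```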
Layer Types: Attention, MLP/FFN, Normalization, Embedding