zai-org/GLM-4.5
📊 Model Parameters
Total Parameters: 352,797,814,784
Context Length: 131,072
Hidden Size: 5,120
Layers: 92
Attention Heads: 96
KV Heads: 8
💾 Memory Requirements (weights only)
FP32 (full): 1,314.27 GiB
FP16 (half): 657.14 GiB
INT8 (quantized): 328.57 GiB
INT4 (quantized): 164.28 GiB
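These sizes follow directly from the parameter count: bytes per weight times 352,797,814,784, expressed in GiB (2^30 bytes). A quick sanity check in Python (quantization overhead such as scales and zero-points is ignored):

```python
# Weight-memory estimate from the parameter count above.
TOTAL_PARAMS = 352_797_814_784

BYTES_PER_PARAM = {"FP32": 4.0, "FP16": 2.0, "INT8": 1.0, "INT4": 0.5}

for dtype, nbytes in BYTES_PER_PARAM.items():
    gib = TOTAL_PARAMS * nbytes / 2**30
    print(f"{dtype}: {gib:.2f} GiB")
# FP32: 1314.27  FP16: 657.14  INT8: 328.57  INT4: 164.28
```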
🔑 KV Cache (Inference)
Per Token (FP16): 376.83 kB (376,832 bytes)
Max Context (131,072 tokens), FP32: 92.00 GiB
Max Context, FP16: 46.00 GiB
Max Context, INT8: 23.00 GiB
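The per-token figure follows from the attention geometry above: K and V are cached for each of the 92 layers, but only for the 8 KV heads (grouped-query attention), not all 96 query heads. A quick check:

```python
# K and V cached per layer for the 8 KV heads only (grouped-query attention).
LAYERS, KV_HEADS, HEAD_DIM, CONTEXT = 92, 8, 128, 131_072

def kv_bytes_per_token(bytes_per_elem: float) -> float:
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * bytes_per_elem  # 2 = K + V

fp16 = kv_bytes_per_token(2)
print(f"{fp16 / 1e3:.2f} kB per token")                  # 376.83 kB
print(f"{fp16 * CONTEXT / 2**30:.2f} GiB full context")  # 46.00 GiB
```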
⚙️ Model Configuration

Core Architecture
Vocabulary Size: 151,552
Hidden Size: 5,120
FFN Intermediate Size: 12,288
Number of Layers: 92
Attention Heads: 96
KV Heads: 8
Head Dimension: 128

Context & Position
Max Context Length: 131,072
RoPE Base Frequency: 1,000,000
RoPE Scaling: Not set
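With no extra scaling configured, the long context rests on the rotary base alone: a base of 1,000,000 (versus the original RoPE default of 10,000) slows the lowest-frequency rotations so positions stay distinguishable across 131,072 tokens. A minimal sketch of the inverse-frequency schedule, assuming rotary embedding is applied across the full 128-dim head (some GLM variants rotate only part of the head dimension):

```python
# RoPE inverse frequencies: theta_i = base^(-2i / head_dim).
HEAD_DIM, BASE = 128, 1_000_000

inv_freq = [BASE ** (-2 * i / HEAD_DIM) for i in range(HEAD_DIM // 2)]
print(inv_freq[0])   # 1.0       — fastest-rotating dimension pair
print(inv_freq[-1])  # ~1.24e-06 — slowest, sets the longest wavelength
```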
Attention Configuration
Attention Bias: Yes
Attention Dropout: 0%
Tied Embeddings: No

Mixture of Experts
Expert FFN Size: 1,536
Experts per Token: 8
Expert Groups: 1
Groups per Token: 1
Shared Experts: 1
Number of Experts (routed): 160
Routing Scale Factor: 2.5
Dense Initial Layers: 3
Normalize Top-K Probabilities: Yes
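Per token, 8 of the 160 routed experts are selected; their gate weights are renormalized to sum to one and scaled by 2.5, and the single shared expert is always added. With only one expert group, group-limited routing is a no-op. A minimal routing sketch under those assumptions (function and variable names are illustrative, not the released checkpoint's):

```python
import numpy as np

N_EXPERTS, TOP_K, ROUTE_SCALE = 160, 8, 2.5

def route(router_logits: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Pick expert indices and gate weights for one token."""
    probs = np.exp(router_logits - router_logits.max())
    probs /= probs.sum()                   # softmax over the 160 routed experts
    top = np.argsort(probs)[-TOP_K:]       # "Experts per Token: 8"
    gates = probs[top] / probs[top].sum()  # "Normalize Top-K Probabilities: Yes"
    return top, ROUTE_SCALE * gates        # "Routing Scale Factor: 2.5"

ids, gates = route(np.random.randn(N_EXPERTS))
# layer_out = shared_expert(x) + sum of gates[i] * expert[ids[i]](x);
# the first 3 layers use a dense FFN instead ("Dense Initial Layers: 3").
```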
Speculative Decoding
Next-N Prediction Layers: 1
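A single next-token prediction layer enables MTP-style speculative decoding: a cheap extra head drafts one token ahead, and the main model's next forward pass verifies it, yielding up to two tokens per pass. A toy greedy-decoding illustration with stand-in predictors (neither function is GLM-4.5 code):

```python
def draft_head(tokens):   # hypothetical cheap 1-layer draft predictor
    return (tokens[-1] + 1) % 10

def main_next(tokens):    # hypothetical main-model greedy prediction
    return (tokens[-1] + 1) % 10

def speculative_step(tokens):
    draft = draft_head(tokens)
    target = main_next(tokens)  # verified alongside the draft in one main pass
    if target == draft:         # accepted: also keep the token scored on top of it
        return tokens + [draft, main_next(tokens + [draft])]
    return tokens + [target]    # rejected: emit the correction only

print(speculative_step([1, 2, 3]))  # [1, 2, 3, 4, 5] — two tokens this step
```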
Activation & Normalization
Activation Function: SiLU
RMSNorm Epsilon: 1e-05

Special Tokens
BOS Token ID: Not set
Pad Token ID: 151,329
EOS Token IDs: 151,329 / 151,336 / 151,338

Data Type
Model Dtype: bfloat16
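Most of the values above can be cross-checked against the published config on Hugging Face. A quick sketch; the attribute names below follow common HF conventions and may differ slightly in the actual GLM-4.5 config class:

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("zai-org/GLM-4.5")
print(cfg.num_hidden_layers)    # expect 92
print(cfg.num_attention_heads)  # expect 96
print(cfg.num_key_value_heads)  # expect 8 (grouped-query attention)
print(cfg.torch_dtype)          # expect bfloat16
```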
Layer Types: Attention, MLP/FFN, Normalization, Embedding