ByteDance-Seed/Seed-OSS-36B-Base-woSyn
📊 Model Parameters
Total Parameters: 36,151,104,512
Context Length: 524,288
Hidden Size: 5,120
Layers: 64
Attention Heads: 80
KV Heads: 8
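A sanity check on the headline figure: the total above can be reproduced from the architecture numbers alone. The sketch below assumes a Llama-style decoder with a SwiGLU MLP (gate/up/down projections), biases on the Q/K/V projections only, untied embeddings, and two RMSNorms per layer plus a final norm; those assumptions are consistent with the configuration listed further down, but the breakdown is a reconstruction, not an official one.

```python
# Reconstruct the 36,151,104,512 total from the architecture figures.
# Assumptions: SwiGLU MLP, biases on Q/K/V only (none on the output
# projection or MLP), untied embeddings, RMSNorm weights per layer.
vocab, hidden, ffn = 155_136, 5_120, 27_648
layers, heads, kv_heads, head_dim = 64, 80, 8, 128

q = hidden * heads * head_dim + heads * head_dim               # Q proj + bias
kv = 2 * (hidden * kv_heads * head_dim + kv_heads * head_dim)  # K and V + biases
o = heads * head_dim * hidden                                  # output proj, no bias
attn = q + kv + o

mlp = 3 * hidden * ffn          # gate (h x f), up (h x f), down (f x h)
norms = 2 * hidden              # pre-attention + pre-MLP RMSNorm

per_layer = attn + mlp + norms
total = layers * per_layer + 2 * vocab * hidden + hidden  # embeddings, LM head, final norm
print(f"{total:,}")             # 36,151,104,512 -- matches the card exactly
```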
💾 Memory Requirements (weights only)
FP32 (full precision): 134.67 GiB
FP16 (half precision): 67.34 GiB
INT8 (quantized): 33.67 GiB
INT4 (quantized): 16.83 GiB
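These sizes are weights-only and follow directly from the parameter count at the given width per parameter; the arithmetic confirms they are binary (GiB) figures, not decimal GB:

```python
# Weight memory at each precision, in binary gigabytes (GiB).
params = 36_151_104_512
GIB = 1024 ** 3
for name, bytes_per_param in [("FP32", 4), ("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    print(f"{name}: {params * bytes_per_param / GIB:.2f} GiB")
# Prints 134.67 / 67.33 / 33.67 / 16.83 GiB; the card's FP16 value of
# 67.34 comes from rounding 134.67 / 2 rather than the raw byte count.
```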
🔑 KV Cache (Inference)
Per Token (FP16): 256 KiB (262,144 bytes)
Max Context (FP32): 256.00 GiB
Max Context (FP16): 128.00 GiB
Max Context (INT8): 64.00 GiB
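These figures follow from the attention geometry: every token stores one key and one value vector per layer per KV head. A minimal sketch of that arithmetic:

```python
# KV-cache size per token and at the full context window.
layers, kv_heads, head_dim = 64, 8, 128
max_ctx = 524_288

per_token_fp16 = 2 * layers * kv_heads * head_dim * 2  # K + V, 2 bytes/element
print(per_token_fp16)                                  # 262144 bytes = 256 KiB

for name, bytes_per_elem in [("FP32", 4), ("FP16", 2), ("INT8", 1)]:
    total = 2 * layers * kv_heads * head_dim * bytes_per_elem * max_ctx
    print(f"{name}: {total / 1024**3:.2f} GiB")        # 256 / 128 / 64 GiB
```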
⚙️ Model Configuration

Core Architecture
Vocabulary Size: 155,136
Hidden Size: 5,120
FFN Intermediate Size: 27,648
Number of Layers: 64
Attention Heads: 80
KV Heads: 8
Head Dimension: 128

Context & Position
Max Context Length: 524,288
RoPE Base Frequency: 10,000,000 (1e7)
RoPE Scaling: default (factor not specified)
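The unusually large RoPE base (1e7 versus the common 1e4) stretches the rotary wavelengths to cover the 524,288-token window. Assuming the standard RoPE formulation (one rotation frequency per pair of head dimensions), the slowest-rotating pair wraps around far beyond the context length:

```python
import math

head_dim, base = 128, 10_000_000.0

# Standard RoPE inverse frequencies: base^(-2i/d) for each dimension pair.
inv_freq = [base ** (-2 * i / head_dim) for i in range(head_dim // 2)]

# Longest wavelength (slowest-rotating pair), in tokens:
print(2 * math.pi / inv_freq[-1])  # ~4.9e7 tokens, well beyond 524,288
```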
Attention Configuration
Attention Bias: Yes
Attention Dropout: 10.0%
MLP Bias: No
Tied Embeddings: No
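With 80 query heads against 8 KV heads, this is grouped-query attention: 10 query heads share each KV head. A minimal sketch of the usual KV-head broadcast, as in Llama-style implementations (the tensor shapes here are illustrative, not taken from the model's own code):

```python
import torch

batch, seq = 1, 16
heads, kv_heads, head_dim = 80, 8, 128
group = heads // kv_heads  # 10 query heads per KV head

k = torch.randn(batch, kv_heads, seq, head_dim)
# Broadcast each KV head across its group of query heads before attention:
k_expanded = k[:, :, None].expand(batch, kv_heads, group, seq, head_dim)
k_expanded = k_expanded.reshape(batch, heads, seq, head_dim)
print(k_expanded.shape)  # torch.Size([1, 80, 16, 128])
```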
Activation & Normalization
Activation Function: silu
RMSNorm Epsilon: 1e-06

Dropout (Training)
Residual Dropout: 10.0%
Special Tokens
BOS Token ID: 0
Pad Token ID: 1
EOS Token ID: 2

Data Type
Model Dtype: bfloat16
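Tying the dtype and token IDs together, a typical way to load this checkpoint with Hugging Face transformers is sketched below. This is a hedged example: whether extra flags such as trust_remote_code are needed depends on how the repository is currently packaged.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ByteDance-Seed/Seed-OSS-36B-Base-woSyn"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # native dtype per the card above
    device_map="auto",           # requires accelerate; ~68 GiB of weights
)

print(model.config.bos_token_id, model.config.pad_token_id, model.config.eos_token_id)
# Expected per the card: 0 1 2
```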
Layer Types: Attention, MLP/FFN, Normalization, Embedding