openai/gpt-oss-120b
📊 Model Parameters
Total Parameters: 116,829,156,672
Context Length: 131,072
Hidden Size: 2,880
Layers: 36
Attention Heads: 64
KV Heads: 8
💾 Memory Requirements
FP32 (Full): 435.22 GB
FP16 (Half): 217.61 GB
INT8 (Quantized): 108.81 GB
INT4 (Quantized): 54.40 GB
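The weight-memory figures follow directly from the parameter count: total parameters times bytes per parameter. A minimal sketch of that arithmetic, assuming (as the reported values suggest) binary gigabytes, 1 GB = 1024³ bytes:

```python
# Sketch: derive the weight-memory figures from the parameter count.
# Assumes the table reports binary gigabytes (1 GB = 1024**3 bytes),
# which matches the values shown (435.22, 217.61, 108.81, 54.40).
TOTAL_PARAMS = 116_829_156_672

BYTES_PER_PARAM = {
    "FP32": 4.0,
    "FP16": 2.0,
    "INT8": 1.0,
    "INT4": 0.5,
}

for dtype, nbytes in BYTES_PER_PARAM.items():
    gib = TOTAL_PARAMS * nbytes / 1024**3
    print(f"{dtype}: {gib:.2f} GB")  # FP32: 435.22 GB, FP16: 217.61 GB, ...
```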
🔑 KV Cache (Inference)
Per Token (FP16): 73.73 KB
Max Context FP32: 18.00 GB
Max Context FP16: 9.00 GB
Max Context INT8: 4.50 GB
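The KV-cache figures likewise derive from the architecture numbers: per token, each of the 36 layers stores one key and one value vector per KV head. A sketch, assuming decimal KB for the per-token figure and binary GB for the max-context totals (which is how the values above work out):

```python
# Sketch: KV-cache sizing from the architecture numbers above.
# Per token, each layer stores one K and one V vector per KV head.
LAYERS, KV_HEADS, HEAD_DIM = 36, 8, 64
MAX_CONTEXT = 131_072

def kv_bytes_per_token(bytes_per_value: float) -> float:
    # Factor of 2 covers the separate K and V tensors.
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * bytes_per_value

fp16_per_token = kv_bytes_per_token(2.0)   # 73,728 bytes
print(f"{fp16_per_token / 1000:.2f} KB")   # 73.73 KB (decimal KB, as above)

for name, nbytes in [("FP32", 4.0), ("FP16", 2.0), ("INT8", 1.0)]:
    total = kv_bytes_per_token(nbytes) * MAX_CONTEXT
    print(f"Max Context {name}: {total / 1024**3:.2f} GB")  # 18.00 / 9.00 / 4.50
```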
⚙️ Model Configuration
Core Architecture
Vocabulary Size: 201,088
Hidden Size: 2,880
FFN Intermediate Size: 2,880
Number of Layers: 36
Attention Heads: 64
KV Heads: 8
Head Dimension: 64
Context & Position
Sliding Window Size: 128
RoPE Base Frequency: 150,000
RoPE Scaling: YaRN (factor: 32.0)
Layer Attention Types: 36 entries (alternating sliding-window and full attention)
Max Context Length: 131,072
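For context, standard RoPE derives one rotation frequency per head-dimension pair from the base frequency; YaRN then rescales those frequencies band by band using the factor above to stretch the usable context. A sketch of the base frequencies only (the YaRN rescaling itself is omitted):

```python
# Sketch: standard RoPE inverse frequencies for this configuration.
# base = 150,000 and head_dim = 64 come from the table above.
BASE, HEAD_DIM = 150_000, 64

# One frequency per dimension pair: inv_freq[i] = base ** (-2i / head_dim).
inv_freq = [BASE ** (-2 * i / HEAD_DIM) for i in range(HEAD_DIM // 2)]

# Position p rotates dimension pair i by the angle p * inv_freq[i].
print(inv_freq[0], inv_freq[-1])  # fastest band (1.0) ... slowest band
```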
Attention Configuration
Attention Dropout: 0%
Attention Bias: Yes
Tied Embeddings: No
Mixture of Experts
Number of Experts: 128
Experts per Token: 4
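With 128 experts and 4 active per token, the router scores every expert and keeps the top 4 per token. A minimal routing sketch; normalizing the weights over the selected experts only is one common convention, not necessarily what this model does:

```python
# Sketch: top-k expert routing with the numbers above (128 experts, top 4).
import torch

NUM_EXPERTS, TOP_K, HIDDEN = 128, 4, 2880

def route(hidden_states: torch.Tensor, router_w: torch.Tensor):
    """Score all experts and keep the top 4 per token.

    hidden_states: (tokens, HIDDEN); router_w: (HIDDEN, NUM_EXPERTS).
    """
    logits = hidden_states @ router_w                # (tokens, 128)
    top_vals, top_idx = logits.topk(TOP_K, dim=-1)   # (tokens, 4)
    # Normalize over the selected experts only -- an assumption here;
    # implementations differ on whether softmax runs before or after top-k.
    weights = torch.softmax(top_vals, dim=-1)
    return weights, top_idx

x = torch.randn(2, HIDDEN)
w = torch.randn(HIDDEN, NUM_EXPERTS)
weights, idx = route(x, w)
print(idx.shape, weights.sum(dim=-1))  # torch.Size([2, 4]) tensor([1., 1.])
```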
Activation & Normalization
Activation Function: silu
RMSNorm Epsilon: 1e-05
Special Tokens
BOS Token ID: Not set
Pad Token ID: 199,999
EOS Token ID: 200,002
Data Type
Model Dtype: Not set
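Most of the fields above can be read directly from the model's published config. A sketch using Hugging Face transformers; the attribute names follow standard transformers conventions and should be checked against the actual downloaded config:

```python
# Sketch: inspect the configuration fields above via Hugging Face transformers.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("openai/gpt-oss-120b")

print(cfg.vocab_size)               # 201,088
print(cfg.hidden_size)              # 2,880
print(cfg.num_hidden_layers)        # 36
print(cfg.num_attention_heads)      # 64
print(cfg.num_key_value_heads)      # 8
print(cfg.max_position_embeddings)  # 131,072
```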
Layer Types: Attention, MLP/FFN, Normalization, Embedding