OpenAI models, spanning the Whisper and GPT (Generative Pre-trained Transformer) families:
whisper-base
whisper-tiny
whisper-small
whisper-medium
whisper-large-v2
whisper-large-v3
whisper-large-v3-turbo
openai-gpt
gpt2
gpt2-medium
gpt2-large
gpt2-xl
gpt-oss-20b
gpt-oss-120b
openai/whisper-small
📊 Model Parameters
Total Parameters: 193,413,120
Context Length: 2,048
Hidden Size: 768
Layers: 12
Attention Heads: 12
KV Heads: 12
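A count like the one above can be reproduced with Hugging Face transformers; a minimal sketch, assuming the tool counts the parameters of WhisperModel (the exact figure depends on counting conventions, e.g. whether tied embeddings are counted once):

```python
# Minimal sketch: count the parameters of openai/whisper-small with
# transformers. The exact total depends on counting conventions (tied
# embeddings counted once, inclusion of the projection head), so it may
# differ slightly from the figure shown above.
from transformers import WhisperModel

model = WhisperModel.from_pretrained("openai/whisper-small")
total = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {total:,}")
```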
💾 Memory Requirements
FP32 (Full): 737.8 MB
FP16 (Half): 368.9 MB
INT8 (Quantized): 184.5 MB
INT4 (Quantized): 92.2 MB
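These figures follow directly from the parameter count times the bytes per weight, reported in binary MB (1024^2 bytes); a minimal sketch of the arithmetic:

```python
# Weight memory = parameter count x bytes per weight, shown in MiB.
TOTAL_PARAMS = 193_413_120
BYTES_PER_PARAM = {"FP32": 4.0, "FP16": 2.0, "INT8": 1.0, "INT4": 0.5}

for dtype, nbytes in BYTES_PER_PARAM.items():
    print(f"{dtype}: {TOTAL_PARAMS * nbytes / 1024**2:.1f} MB")
    # -> FP32: 737.8 MB, FP16: 368.9 MB, INT8: 184.5 MB, INT4: 92.2 MB
```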
🔑 KV Cache (Inference)
Per Token (FP16): 36.86 KB
Max Context (FP32): 144.0 MB
Max Context (FP16): 72.0 MB
Max Context (INT8): 36.0 MB
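The cache stores one key and one value vector of hidden size per layer per token, so the per-token size is layers x hidden size x 2 tensors x bytes per element. A sketch of the arithmetic behind the table (the per-token figure appears to use decimal KB, the max-context totals binary MB):

```python
# KV cache: per token, each layer stores a key and a value vector of
# hidden_size elements. The per-token size is shown in decimal KB; the
# max-context totals in binary MB (MiB), matching the table above.
LAYERS, HIDDEN, CONTEXT = 12, 768, 2048

per_token_fp16 = LAYERS * HIDDEN * 2 * 2  # 2 tensors (K, V) x 2 bytes (FP16)
print(f"Per token (FP16): {per_token_fp16 / 1000:.2f} KB")  # 36.86 KB

for dtype, nbytes in {"FP32": 4, "FP16": 2, "INT8": 1}.items():
    total = LAYERS * HIDDEN * 2 * nbytes * CONTEXT
    print(f"Max context {dtype}: {total / 1024**2:.1f} MB")  # 144.0 / 72.0 / 36.0
```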
⚙️ Model Configuration
Core Architecture
Vocabulary Size: 51,865
Number of Layers: 12

Attention Configuration
Attention Dropout: 0%
Tied Embeddings: Yes

Activation & Normalization
Activation Function: gelu

Special Tokens
BOS Token ID: 50,257
Pad Token ID: 50,257
EOS Token ID: 50,257

Data Type
Model Dtype: float32
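Most of these fields can be read straight from the model's config on the Hub; a minimal sketch using transformers' AutoConfig (attribute names follow WhisperConfig and may differ for other model families):

```python
# Read configuration fields from the Hub. Attribute names below are the
# WhisperConfig names as I understand them; verify against your installed
# transformers version.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("openai/whisper-small")
print(cfg.vocab_size)           # 51865
print(cfg.decoder_layers)       # 12
print(cfg.attention_dropout)    # 0.0
print(cfg.activation_function)  # gelu
print(cfg.bos_token_id, cfg.pad_token_id, cfg.eos_token_id)  # 50257 50257 50257
print(cfg.torch_dtype)          # float32
```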
Layer Types: Attention, MLP/FFN, Normalization, Embedding