T5 - Google's Text-to-Text Transfer Transformer
Variants: t5-small, t5-base, t5-large
Selected model: google-t5/t5-large
📊 Model Parameters
Total Parameters: 803,466,240
Context Length: 2,048
Hidden Size: 1,024
Layers: 24
Attention Heads: 16
KV Heads: 16
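The total above can be checked locally. Below is a minimal sketch, assuming the transformers and torch packages are installed. Counting tools disagree on shared weights: a local count of t5-large typically lands near 738M because the encoder embedding, decoder embedding, and output projection share one matrix, while the figure above appears to count each use separately.

```python
# Minimal sketch: count the parameters of google-t5/t5-large locally.
# Assumes the `transformers` and `torch` packages are installed.
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-large")

# num_parameters() counts each tensor once, so the shared 32,128 x 1,024
# embedding matrix (encoder input, decoder input, output projection)
# contributes only once; counting it per use yields a larger total.
print(f"Total parameters: {model.num_parameters():,}")
```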
💾 Memory Requirements
FP32 (Full): 2.99 GB
FP16 (Half): 1.50 GB
INT8 (Quantized): 766.2 MB
INT4 (Quantized): 383.1 MB
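These are weight-only footprints: the parameter count times the bytes per parameter, reported in binary units. A sketch of the arithmetic follows; note that real quantized checkpoints add a small overhead for scales and zero-points, which this ignores.

```python
# Weight-only memory footprint: parameters x bytes per parameter.
# Matches the table above when reported in binary units (1 GB = 2**30 bytes).
PARAMS = 803_466_240

bytes_per_param = {"FP32": 4.0, "FP16": 2.0, "INT8": 1.0, "INT4": 0.5}

for dtype, nbytes in bytes_per_param.items():
    size = PARAMS * nbytes
    if size >= 2**30:
        print(f"{dtype}: {size / 2**30:.2f} GB")
    else:
        print(f"{dtype}: {size / 2**20:.1f} MB")
```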
🔑 KV Cache (Inference)
Per Token (FP16): 98.30 KB
Max Context (2,048 tokens):
  FP32: 384.0 MB
  FP16: 192.0 MB
  INT8: 96.0 MB
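The per-token figure is consistent with caching one key and one value vector per layer: 2 x num_layers x hidden_size x bytes per element. A sketch of that calculation follows, assuming the 2,048-token context from the summary above. Two caveats: this covers self-attention only (a T5 decoder also caches cross-attention keys and values), and the page reports the per-token size in decimal KB but the totals in binary MB.

```python
# KV cache size: 2 tensors (K and V) per layer, each hidden_size wide.
# Assumes 24 layers, hidden size 1,024, and a 2,048-token context,
# matching the figures above.
LAYERS, HIDDEN, CONTEXT = 24, 1024, 2048

for dtype, nbytes in {"FP32": 4, "FP16": 2, "INT8": 1}.items():
    per_token = 2 * LAYERS * HIDDEN * nbytes  # bytes per token (decimal KB below)
    full = per_token * CONTEXT                # bytes at max context (binary MB below)
    print(f"{dtype}: {per_token / 1e3:.2f} KB/token, "
          f"{full / 2**20:.1f} MB at {CONTEXT} tokens")
```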
⚙️ Model Configuration

Core Architecture
Vocabulary Size: 32,128
FFN Intermediate Size: 4,096
Number of Layers: 24
Attention Heads: 16
Context & Position
Max Context Length: 512 (T5's training sequence length; its relative position bias allows longer inputs, which is why the summary above uses a 2,048-token context)
Attention Configuration
Tied Embeddings: Yes

Activation & Normalization
RMSNorm Epsilon: 1e-06
Activation Function: relu

Dropout (Training)
Hidden Dropout: 10.0%

Special Tokens
BOS Token ID: Not set
Pad Token ID: 0
EOS Token ID: 1

Data Type
Model Dtype: Not set
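Most of these fields come straight from the model's Hugging Face configuration and can be inspected directly. A minimal sketch, assuming the transformers package is installed (T5's "RMSNorm Epsilon" corresponds to layer_norm_epsilon, since T5's layer norm is an RMSNorm variant):

```python
# Inspect the configuration fields listed above.
# Minimal sketch assuming the `transformers` package is installed.
from transformers import T5Config

config = T5Config.from_pretrained("google-t5/t5-large")

print("Vocabulary size:", config.vocab_size)             # 32128
print("Hidden size (d_model):", config.d_model)          # 1024
print("FFN size (d_ff):", config.d_ff)                   # 4096
print("Layers:", config.num_layers)                      # 24
print("Attention heads:", config.num_heads)              # 16
print("RMSNorm epsilon:", config.layer_norm_epsilon)     # 1e-06
print("Activation:", config.feed_forward_proj)           # relu
print("Dropout:", config.dropout_rate)                   # 0.1
print("Pad / EOS token IDs:", config.pad_token_id, config.eos_token_id)  # 0 1
```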
Layer Types: Attention, MLP/FFN, Normalization, Embedding