EleutherAI (non-profit AI research lab focused on open-source LLMs):
gpt-j-6b
gpt-neo-125m
gpt-neo-1.3B
gpt-neo-2.7B
gpt-neox-20b
comma-v0.1-2t
EleutherAI/gpt-neox-20b
📊 Model Parameters
Total Parameters: 20,554,567,680
Context Length: 2,048
Hidden Size: 6,144
Layers: 44
Attention Heads: 64
KV Heads: 64
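The parameter total can be reproduced from the architecture fields above. Below is a back-of-the-envelope recount in Python; it assumes the standard GPT-NeoX block layout (fused QKV projection with biases, two LayerNorms per block, untied input and output embeddings, one final LayerNorm), which is an assumption about the architecture, not this page's own counting code.

```python
# Rough parameter recount for gpt-neox-20b from the fields above.
# Layout assumptions (not stated on this page): fused QKV with bias,
# biased projections, two LayerNorms per block, untied embeddings,
# one final LayerNorm. Rotary embeddings add no learned parameters.
hidden, layers, ffn, vocab = 6144, 44, 24576, 50432

qkv      = hidden * 3 * hidden + 3 * hidden  # fused Q/K/V projection (+bias)
attn_out = hidden * hidden + hidden          # attention output projection (+bias)
ffn_up   = hidden * ffn + ffn                # FFN up-projection (+bias)
ffn_down = ffn * hidden + hidden             # FFN down-projection (+bias)
norms    = 2 * (2 * hidden)                  # two LayerNorms (weight + bias each)
block    = qkv + attn_out + ffn_up + ffn_down + norms

embeddings = 2 * vocab * hidden              # untied input + output embeddings
final_norm = 2 * hidden

total = layers * block + embeddings + final_norm
print(f"{total:,}")  # -> 20,554,567,680, matching the total above
```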
💾 Memory Requirements
FP32 (Full): 76.57 GiB
FP16 (Half): 38.29 GiB
INT8 (Quantized): 19.14 GiB
INT4 (Quantized): 9.57 GiB
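The weight-memory figures are straightforward arithmetic on the parameter count: parameters times bytes per parameter, converted to GiB. A minimal sketch (weights only; activations, KV cache, and runtime overhead are not included):

```python
# Weight memory = total parameters x bytes per parameter (weights only).
params = 20_554_567_680
GIB = 1024 ** 3

for dtype, bytes_per_param in [("FP32", 4.0), ("FP16", 2.0),
                               ("INT8", 1.0), ("INT4", 0.5)]:
    print(f"{dtype}: {params * bytes_per_param / GIB:.2f} GiB")
# FP32: 76.57 GiB  FP16: 38.29 GiB  INT8: 19.14 GiB  INT4: 9.57 GiB
```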
🔑 KV Cache (Inference)
Per Token (FP16): 1.03 MiB
Max Context (FP32): 4.12 GiB
Max Context (FP16): 2.06 GiB
Max Context (INT8): 1.03 GiB
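These figures follow the usual KV-cache formula for standard multi-head attention (KV heads equal attention heads here, so there is no grouped-query saving): two tensors (K and V) per layer, each of hidden size, stored per token. A minimal sketch:

```python
# KV cache per token = 2 (K and V) x layers x hidden_size x bytes/element.
layers, hidden, context = 44, 6144, 2048
MIB, GIB = 1024 ** 2, 1024 ** 3

for dtype, elem_bytes in [("FP32", 4), ("FP16", 2), ("INT8", 1)]:
    per_token = 2 * layers * hidden * elem_bytes
    print(f"{dtype}: {per_token / MIB:.2f} MiB/token, "
          f"{per_token * context / GIB:.2f} GiB at 2,048 tokens")
# FP16 -> 1.03 MiB/token and 2.06 GiB at max context, as listed above.
```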
⚙️ Model Configuration

Core Architecture
Vocabulary Size: 50,432
Hidden Size: 6,144
Number of Layers: 44
Attention Heads: 64
FFN Intermediate Size: 24,576

Context & Position
Max Context Length: 2,048
RoPE Base Frequency: 10,000
RoPE Scaling: Not set

Attention Configuration
Tied Embeddings: No
Attention Dropout: 0%
Attention Bias: Yes

Activation & Normalization
Activation Function: gelu_fast
LayerNorm Epsilon: 1e-05
Dropout (Training)
Hidden Dropout: 0%

Special Tokens
BOS Token ID: 0
Pad Token ID: Not set
EOS Token ID: 0

Data Type
Model Dtype: float16
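These fields come from the model's config.json on the Hugging Face Hub and can be checked without downloading the 20B weights. A minimal sketch using transformers (attribute names follow the GPTNeoXConfig class and may differ across library versions):

```python
from transformers import AutoConfig

# Fetches only config.json (a few KB), not the model weights.
cfg = AutoConfig.from_pretrained("EleutherAI/gpt-neox-20b")

print(cfg.vocab_size)               # 50432
print(cfg.hidden_size)              # 6144
print(cfg.num_hidden_layers)        # 44
print(cfg.num_attention_heads)      # 64
print(cfg.intermediate_size)        # 24576
print(cfg.max_position_embeddings)  # 2048
print(cfg.hidden_act)               # activation function (gelu_fast above)
print(cfg.layer_norm_eps)           # 1e-05
print(cfg.tie_word_embeddings)      # False (untied embeddings)
print(cfg.bos_token_id, cfg.eos_token_id)  # 0 0
```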
Layer Types: Attention, MLP/FFN, Normalization, Embedding