Gemma - Google's lightweight open models built from Gemini research:
gemma-2b
gemma-7b
gemma-2-2b
gemma-2-9b
gemma-2-27b
gemma-3-270m
gemma-3-1b-pt
gemma-3-4b-pt
gemma-3-12b-pt
gemma-3-27b-pt
google/gemma-2-27b
📊 Model Parameters
Total Parameters: 28,406,776,320
Context Length: 8,192
Hidden Size: 4,608
Layers: 46
Attention Heads: 32
KV Heads: 16
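The total above can be reproduced from the configuration values listed further down. A minimal sketch of that arithmetic, assuming the standard Gemma 2 block layout (grouped-query attention, a gated FFN, four RMSNorms per block) and, to match the figure shown, counting the tied output projection separately from the input embedding:

```python
# Rough parameter-count arithmetic for gemma-2-27b from the config values on this page.
# The block layout and the bookkeeping choice of counting the tied output projection
# separately are assumptions made here to match the total shown above.
vocab, hidden, ffn = 256_000, 4_608, 36_864
layers, heads, kv_heads, head_dim = 46, 32, 16, 128

embedding = vocab * hidden                     # input embedding table
attn = (
    hidden * heads * head_dim                  # Q projection
    + 2 * hidden * kv_heads * head_dim         # K and V projections
    + heads * head_dim * hidden                # O projection
)
mlp = 3 * hidden * ffn                         # gate, up, down projections
norms = 4 * hidden                             # assumed: four RMSNorms per block
per_layer = attn + mlp + norms

total = embedding + layers * per_layer + hidden         # + final RMSNorm
total_with_untied_head = total + vocab * hidden         # output projection counted separately

print(f"{total:,}")                   # 27,227,128,320  (~27.2B with tied embeddings)
print(f"{total_with_untied_head:,}")  # 28,406,776,320  (matches the total listed above)
```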
💾 Memory Requirements
FP32 (Full): 105.82 GB
FP16 (Half): 52.91 GB
INT8 (Quantized): 26.46 GB
INT4 (Quantized): 13.23 GB
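These figures are simply the parameter count multiplied by bytes per weight; the listed values appear to use GiB (1024³ bytes) and cover weights only, not activations, the KV cache, or framework overhead. A quick sketch reproducing them:

```python
# Weight-memory estimates: parameter count x bytes per parameter, reported in GiB.
# Activations, KV cache, and framework overhead are not included.
total_params = 28_406_776_320
bytes_per_param = {"FP32": 4, "FP16": 2, "INT8": 1, "INT4": 0.5}

for dtype, nbytes in bytes_per_param.items():
    gib = total_params * nbytes / 1024**3
    print(f"{dtype}: {gib:.2f} GB")
# FP32: 105.82 GB, FP16: 52.91 GB, INT8: 26.46 GB, INT4: 13.23 GB
```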
🔑 KV Cache (Inference)
Per Token (FP16): 376.83 KB
Max Context FP32: 5.75 GB
Max Context FP16: 2.88 GB
Max Context INT8: 1.44 GB
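The cache size follows from the attention shape: two tensors (K and V) per layer, each kv_heads × head_dim wide. The sketch below assumes a full-length cache for every layer (ignoring that the sliding-window layers could cap their cache at 4,096 tokens) and the mixed units the listed values appear to use: decimal KB for the per-token figure, GiB for the totals.

```python
# KV-cache arithmetic: 2 tensors (K and V) per layer, each kv_heads * head_dim wide.
# Assumes every layer caches the full context length.
layers, kv_heads, head_dim, context = 46, 16, 128, 8_192

def kv_bytes_per_token(bytes_per_value):
    return 2 * layers * kv_heads * head_dim * bytes_per_value

print(kv_bytes_per_token(2) / 1e3, "KB per token (FP16)")     # 376.832 KB
for name, nbytes in [("FP32", 4), ("FP16", 2), ("INT8", 1)]:
    gib = kv_bytes_per_token(nbytes) * context / 1024**3
    print(f"Max context {name}: {gib:.2f} GB")                 # 5.75 / 2.88 / 1.44 GB
```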
⚙️ Model Configuration
Core Architecture
Vocabulary Size: 256,000
Hidden Size: 4,608
FFN Intermediate Size: 36,864
Number of Layers: 46
Attention Heads: 32
Head Dimension: 128
KV Heads: 16
Context & Position
Sliding Window Size: 4,096
Max Context Length: 8,192
RoPE Base Frequency: 10000.0
Layer Attention Types: 46 entries, alternating sliding-window and full attention
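The 46-entry list reflects Gemma 2's alternating attention pattern: half the layers attend within the 4,096-token sliding window, the other half over the full context. A hypothetical sketch of how that list could be constructed; which parity gets the window is an assumption here, and the authoritative source is the model's own configuration:

```python
# Hypothetical reconstruction of the per-layer attention-type list.
# Gemma 2 alternates sliding-window and full attention; whether even or odd
# layer indices get the window is an assumption here - check the model config.
num_layers = 46

layer_types = [
    "sliding_attention" if i % 2 == 0 else "full_attention"
    for i in range(num_layers)
]
print(layer_types[:4])                                          # alternating pattern
print(sum(t == "sliding_attention" for t in layer_types), "sliding-window layers")  # 23
```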
Attention Configuration
Tied Embeddings: Yes
Attention Bias: No
Attention Dropout: 0%
Query Pre-Attention Scalar: 144
Output Softcapping: 30.0
Attention Softcapping: 50.0
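The query pre-attention scalar and the two softcapping values change the usual attention arithmetic: scores are scaled by 1/√144 rather than 1/√head_dim, and both the attention logits and the final output logits are squashed with a tanh-based soft cap. A sketch of those two operations, assuming the commonly described Gemma 2 formulation cap · tanh(x / cap); shapes and values are illustrative only:

```python
import numpy as np

# Sketch of Gemma 2's attention-score scaling and tanh softcapping, assuming the
# formulation cap * tanh(x / cap). Toy shapes; not a full attention implementation.
query_pre_attn_scalar = 144.0   # scores scaled by 1/sqrt(144), not 1/sqrt(head_dim)
attn_softcap = 50.0             # applied to attention logits
final_softcap = 30.0            # applied to final output (vocabulary) logits

def softcap(x, cap):
    """Squash logits smoothly into (-cap, cap)."""
    return cap * np.tanh(x / cap)

q = np.random.randn(8, 128)     # toy queries: (seq, head_dim)
k = np.random.randn(8, 128)     # toy keys:    (seq, head_dim)

scores = q @ k.T / np.sqrt(query_pre_attn_scalar)
scores = softcap(scores, attn_softcap)                    # bounded attention logits
weights = np.exp(scores - scores.max(-1, keepdims=True))  # numerically stable softmax
weights /= weights.sum(-1, keepdims=True)
```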
Activation & Normalization
Activation Function: gelu_pytorch_tanh
RMSNorm Epsilon: 1e-06
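Both entries are easy to write out: gelu_pytorch_tanh is the tanh approximation of GELU, and the normalization is RMSNorm with eps = 1e-6. A small sketch; the (1 + weight) scaling follows the Gemma RMSNorm variant found in common library implementations and is an assumption here, not something stated on this page:

```python
import numpy as np

# Sketch of the activation and normalization named above.
def gelu_tanh(x):
    """'gelu_pytorch_tanh': tanh approximation of GELU."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def rms_norm(x, weight, eps=1e-6):
    """RMSNorm with eps = 1e-6; (1 + weight) scaling assumed from Gemma implementations."""
    rms = np.sqrt(np.mean(x**2, axis=-1, keepdims=True) + eps)
    return (x / rms) * (1.0 + weight)

x = np.random.randn(4_608)                     # one hidden-size vector
print(rms_norm(x, np.zeros(4_608)).std())      # ~1.0 after normalization
print(gelu_tanh(np.array([-1.0, 0.0, 1.0])))   # ~[-0.159, 0.0, 0.841]
```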
Special Tokens
BOS Token ID: 2
Pad Token ID: 0
EOS Token ID: 1
Data Type
Model Dtype: float32
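The checkpoint is stored in float32, which corresponds to the 105.82 GB FP32 figure above; in practice the weights are usually loaded in half precision to roughly halve that. A sketch using the Hugging Face transformers API, assuming transformers, torch, and accelerate are installed and that access to the gated google/gemma-2-27b repository has been granted to your account:

```python
# Sketch: loading the checkpoint in bfloat16 instead of the stored float32.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-27b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~52.9 GB of weights instead of ~105.8 GB in float32
    device_map="auto",           # requires accelerate; spreads layers across available devices
)
```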