Gemma - Google's lightweight open models built from Gemini research:
gemma-2b
gemma-7b
gemma-2-2b
gemma-2-9b
gemma-2-27b
gemma-3-270m
gemma-3-1b-pt
gemma-3-4b-pt
gemma-3-12b-pt
gemma-3-27b-pt
google/gemma-7b
📊 Model Parameters
Total Parameters: 9,324,112,896
Context Length: 8,192
Hidden Size: 3,072
Layers: 28
Attention Heads: 16
KV Heads: 16
💾 Memory Requirements
FP32 (Full): 34.74 GiB
FP16 (Half): 17.37 GiB
INT8 (Quantized): 8.68 GiB
INT4 (Quantized): 4.34 GiB
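These figures follow directly from the parameter count: a minimal sketch below, assuming the 9,324,112,896 total shown above and ideal packing (real quantized checkpoints carry a small overhead for scales and zero-points).

```python
# Sketch: model weight memory at each precision, from the parameter
# count on this page. Sizes are in binary gibibytes (GiB).
PARAMS = 9_324_112_896

BYTES_PER_PARAM = {
    "FP32": 4.0,
    "FP16": 2.0,
    "INT8": 1.0,
    "INT4": 0.5,  # two 4-bit weights per byte
}

for dtype, nbytes in BYTES_PER_PARAM.items():
    gib = PARAMS * nbytes / 2**30
    print(f"{dtype}: {gib:.2f} GiB")
# FP32: 34.74 GiB, FP16: 17.37 GiB, INT8: 8.68 GiB, INT4: 4.34 GiB
```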
🔑 KV Cache (Inference)
Per Token (FP16): 458.75 KB
Max Context (FP32): 7.00 GiB
Max Context (FP16): 3.50 GiB
Max Context (INT8): 1.75 GiB
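The cache stores one key and one value vector per layer per KV head for every token, so the numbers above reduce to a single product over the configuration values listed on this page:

```python
# Sketch: KV cache size = 2 (K and V) * layers * kv_heads * head_dim
# * bytes_per_value * tokens, using this model's configuration.
LAYERS, KV_HEADS, HEAD_DIM, MAX_CTX = 28, 16, 256, 8192

def kv_cache_bytes(bytes_per_value: float, tokens: int = 1) -> float:
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * bytes_per_value * tokens

print(kv_cache_bytes(2) / 1000)            # 458.75 KB per token at FP16
print(kv_cache_bytes(4, MAX_CTX) / 2**30)  # 7.00 GiB at FP32, full context
print(kv_cache_bytes(2, MAX_CTX) / 2**30)  # 3.50 GiB at FP16
print(kv_cache_bytes(1, MAX_CTX) / 2**30)  # 1.75 GiB at INT8
```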
⚙️ Model Configuration

Core Architecture
Vocabulary Size: 256,000
Hidden Size: 3,072
FFN Intermediate Size: 24,576
Number of Layers: 28
Attention Heads: 16
Head Dimension: 256
KV Heads: 16
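These values reproduce the headline parameter count. One caveat: Gemma ties its input embedding and LM head, so the unique weight count is about 8.54B; the 9,324,112,896 shown above additionally counts the output head as a separate matrix. A sketch of the arithmetic, assuming a standard dense decoder layout (four attention projections, gated MLP, two RMSNorms per layer):

```python
# Sketch: dense-transformer parameter count from the config above.
VOCAB, HIDDEN, FFN = 256_000, 3_072, 24_576
LAYERS, HEADS, HEAD_DIM = 28, 16, 256

embed = VOCAB * HIDDEN                       # input embedding table
attn  = 4 * HIDDEN * (HEADS * HEAD_DIM)      # Q, K, V, O projections
mlp   = 3 * HIDDEN * FFN                     # gate, up, down projections
norms = 2 * HIDDEN                           # two RMSNorms per layer

per_layer = attn + mlp + norms
tied      = embed + LAYERS * per_layer + HIDDEN  # + final norm
untied    = tied + embed                         # LM head counted separately

print(f"{tied:,}")    # 8,537,680,896  (tied embeddings)
print(f"{untied:,}")  # 9,324,112,896  (the figure on this page)
```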
Context & Position
Max Context Length: 8,192
RoPE Base Frequency: 10000.0
RoPE Scaling: Not set
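The base frequency and head dimension fully determine the rotary position embedding. A small sketch of the implied rotation frequencies, assuming the standard RoPE formulation (one frequency per dimension pair, no scaling since RoPE Scaling is not set):

```python
# Sketch: RoPE inverse frequencies for base 10000 and head_dim 256.
HEAD_DIM, BASE = 256, 10000.0

# One rotation frequency per pair of head dimensions; position p
# rotates pair i by angle p * inv_freq[i].
inv_freq = [BASE ** (-2 * i / HEAD_DIM) for i in range(HEAD_DIM // 2)]

print(inv_freq[0])   # 1.0      (fastest-rotating pair)
print(inv_freq[-1])  # ~1.07e-4 (slowest-rotating pair)
```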
Attention Configuration
Attention Bias: No
Attention Dropout: 0%
Tied Embeddings: Yes

Activation & Normalization
Activation Function: gelu
RMSNorm Epsilon: 1e-06

Special Tokens
BOS Token ID: 2
Pad Token ID: 0
EOS Token ID: 1

Data Type
Model Dtype: bfloat16
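A minimal loading sketch, assuming the Hugging Face transformers API and a machine with room for the ~17.4 GiB half-precision footprint listed above; the prompt is arbitrary:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load in the model's native bfloat16 dtype; halves memory versus FP32.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16
)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```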
Layer Types: Attention, MLP/FFN, Normalization, Embedding