google/gemma-7b

📊 Model Parameters

Total Parameters 8,537,680,896
Context Length 8,192
Hidden Size 3,072
Layers 28
Attention Heads 16
KV Heads 16
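
The headline count can be reproduced from the configuration listed further down. A minimal sketch, assuming the standard Gemma layout (tied embeddings, separate q/k/v/o projections, gated GeLU MLP, two RMSNorms per layer plus a final one):

```python
# Parameter-count sketch from the configuration on this page.
vocab_size, hidden, intermediate = 256_000, 3_072, 24_576
layers, heads, head_dim = 28, 16, 256

embed = vocab_size * hidden                  # token embeddings (shared with the LM head)
attn = 4 * hidden * heads * head_dim         # q, k, v, o projections
mlp = 3 * hidden * intermediate              # gate, up, down projections
norms = 2 * hidden                           # pre-attention and pre-MLP RMSNorm weights
per_layer = attn + mlp + norms

total = embed + layers * per_layer + hidden  # + final RMSNorm
print(f"{total:,}")                          # 8,537,680,896
```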

💾 Memory Requirements

FP32 (Full) 31.81 GB
FP16 (Half) 15.90 GB
INT8 (Quantized) 7.95 GB
INT4 (Quantized) 3.98 GB
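
These figures are simply the parameter count times the bytes per weight, reported in GiB and covering weights only; activations, KV cache, and framework overhead come on top. A minimal sketch:

```python
total_params = 8_537_680_896
bytes_per_param = {"FP32": 4, "FP16": 2, "INT8": 1, "INT4": 0.5}

for name, nbytes in bytes_per_param.items():
    print(f"{name}: {total_params * nbytes / 1024**3:.2f} GB")
# FP32: 31.81 GB, FP16: 15.90 GB, INT8: 7.95 GB, INT4: 3.98 GB
```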

🔑 KV Cache (Inference)

Per Token (FP16) 458.75 KB
Max Context FP32 7.00 GB
Max Context FP16 3.50 GB
Max Context INT8 1.75 GB
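
The per-token figure follows directly from the attention shape: two cached tensors (K and V) per layer, each of size kv_heads × head_dim. A minimal sketch:

```python
layers, kv_heads, head_dim, ctx = 28, 16, 256, 8_192

per_token_fp16 = 2 * layers * kv_heads * head_dim * 2     # K and V, 2 bytes per FP16 value
print(per_token_fp16 / 1000)                               # 458.752 KB per token
print(f"{ctx * per_token_fp16 / 1024**3:.2f} GB")          # 3.50 GB at the full 8,192-token context
```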

⚙️ Model Configuration

Core Architecture

Vocabulary Size 256,000
Hidden Size 3,072
FFN Intermediate Size 24,576
Number of Layers 28
Attention Heads 16
Head Dimension 256
KV Heads 16
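
One way to verify these values is to read them straight from the Hugging Face config (a sketch; assumes the transformers package and, since the repo is gated, an authenticated login):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("google/gemma-7b")
print(cfg.vocab_size, cfg.hidden_size, cfg.intermediate_size)   # 256000 3072 24576
print(cfg.num_hidden_layers, cfg.num_attention_heads,
      cfg.num_key_value_heads, cfg.head_dim)                    # 28 16 16 256
```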

Context & Position

Max Context Length 8,192
RoPE Base Frequency 10000.0
RoPE Scaling Not set
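
Since no scaling is set, the rotary embedding uses the plain base frequency. A minimal sketch of the standard inverse-frequency computation with the values above:

```python
import torch

base, head_dim = 10_000.0, 256
inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim))
# Position t is encoded by rotating each (even, odd) channel pair of q and k by angles t * inv_freq.
```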

Attention Configuration

Attention Bias No
Attention Dropout 0%
Tied Embeddings Yes

Activation & Normalization

Activation Function gelu
RMSNorm Epsilon 1e-06
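
For reference, a generic RMSNorm sketch using the epsilon above; Gemma's own layer applies the learned scale as (1 + weight), so treat this as an approximation rather than the reference implementation:

```python
import torch

def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    variance = x.float().pow(2).mean(-1, keepdim=True)      # mean of squares over the hidden dim
    normed = x.float() * torch.rsqrt(variance + eps)
    return (normed * weight.float()).to(x.dtype)
```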

Special Tokens

BOS Token ID 2
Pad Token ID 0
EOS Token ID 1

Data Type

Model Dtype bfloat16
Layer Types: Attention, MLP/FFN, Normalization, Embedding
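
To load the checkpoint in its native dtype (a sketch; needs access to the gated repo, the accelerate package for device_map, and GPU memory in line with the FP16/BF16 figure above):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-7b",
    torch_dtype=torch.bfloat16,   # matches the checkpoint dtype listed above
    device_map="auto",
)
```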