google/gemma-2b

📊 Model Parameters

Total Parameters 3,030,460,416
Context Length 8,192
Hidden Size 2048
Layers 18
Attention Heads 8
KV Heads 1
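
The total above appears to count the input embedding (256,000 × 2,048) and the output LM head separately; since the embeddings are tied (see Attention Configuration below), the unique weight count is about 2.51B. A quick back-of-the-envelope check in Python, using the configuration values listed further down:

```python
# Rough parameter count for google/gemma-2b from the listed configuration.
vocab, hidden, ffn = 256_000, 2_048, 16_384
layers, heads, kv_heads, head_dim = 18, 8, 1, 256

embed = vocab * hidden                          # input embedding table
attn = (hidden * heads * head_dim               # Q projection
        + 2 * hidden * kv_heads * head_dim      # K and V (single KV head)
        + heads * head_dim * hidden)            # output projection
mlp = 3 * hidden * ffn                          # gate, up, down (gated FFN)
norms = 2 * hidden                              # two RMSNorms per layer
per_layer = attn + mlp + norms

total = embed + layers * per_layer + hidden     # + final RMSNorm
print(f"{total:,}")          # 2,506,172,416 unique weights (tied embeddings)
print(f"{total + embed:,}")  # 3,030,460,416 with the LM head counted separately
```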

💾 Memory Requirements

FP32 (Full) 11.29 GiB
FP16 (Half) 5.64 GiB
INT8 (Quantized) 2.82 GiB
INT4 (Quantized) 1.41 GiB
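
These figures follow directly from the parameter count: bytes per parameter × total parameters, reported in GiB. A minimal sketch:

```python
params = 3_030_460_416  # total from the section above

bytes_per_param = {"FP32": 4, "FP16": 2, "INT8": 1, "INT4": 0.5}
for dtype, nbytes in bytes_per_param.items():
    print(f"{dtype}: {params * nbytes / 2**30:.2f} GiB")
# FP32: 11.29 GiB, FP16: 5.64 GiB, INT8: 2.82 GiB, INT4: 1.41 GiB
```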

🔑 KV Cache (Inference)

Per Token (FP16) 18.0 KiB (18,432 bytes)
Max Context FP32 288.0 MiB
Max Context FP16 144.0 MiB
Max Context INT8 72.0 MiB
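
The cache holds one key vector and one value vector per layer per token; with a single KV head of dimension 256, that is 2 × 18 × 1 × 256 values per token. A sketch reproducing the figures above:

```python
layers, kv_heads, head_dim, ctx = 18, 1, 256, 8_192

def kv_cache_bytes(n_tokens: int, bytes_per_value: float) -> float:
    # K and V caches: 2 tensors x layers x kv_heads x head_dim per token.
    return 2 * layers * kv_heads * head_dim * bytes_per_value * n_tokens

print(kv_cache_bytes(1, 2))             # 18,432 bytes per token at FP16
print(kv_cache_bytes(ctx, 4) / 2**20)   # 288.0 MiB at FP32, full context
print(kv_cache_bytes(ctx, 2) / 2**20)   # 144.0 MiB at FP16
print(kv_cache_bytes(ctx, 1) / 2**20)   # 72.0 MiB at INT8
```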

⚙️ Model Configuration

Core Architecture

Vocabulary Size 256,000
Hidden Size 2,048
FFN Intermediate Size 16,384
Number of Layers 18
Attention Heads 8
Head Dimension 256
KV Heads 1
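
These values can be read straight from the published config with Hugging Face transformers (the repo is gated, so this assumes you have accepted the license and are logged in). Attribute names below follow GemmaConfig:

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("google/gemma-2b")
print(cfg.vocab_size)           # 256000
print(cfg.hidden_size)          # 2048
print(cfg.intermediate_size)    # 16384
print(cfg.num_hidden_layers)    # 18
print(cfg.num_attention_heads)  # 8
print(cfg.head_dim)             # 256
print(cfg.num_key_value_heads)  # 1
```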

Context & Position

Max Context Length 8,192
RoPE Base Frequency 10000.0
RoPE Scaling Not set
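
The base frequency of 10000.0 sets RoPE's per-dimension rotation rates in the standard way, inv_freq_i = base^(−2i/head_dim); with no scaling configured, positions are used as-is up to the 8,192-token limit. A framework-agnostic sketch:

```python
import numpy as np

base, head_dim, max_ctx = 10000.0, 256, 8_192
# One rotation frequency per pair of head dimensions (standard RoPE).
inv_freq = base ** (-np.arange(0, head_dim, 2) / head_dim)
angles = np.outer(np.arange(max_ctx), inv_freq)  # (8192, 128) rotation angles
```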

Attention Configuration

Attention Bias No
Attention Dropout 0%
Tied Embeddings Yes

Activation & Normalization

Activation Function gelu
RMSNorm Epsilon 1e-06
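
RMSNorm normalizes each hidden vector by its root mean square, with the epsilon above guarding against division by zero. A minimal NumPy sketch of the generic formula (Gemma's implementation scales by (1 + weight) rather than weight, but the structure is the same):

```python
import numpy as np

def rms_norm(x: np.ndarray, weight: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    # Normalize by the root mean square over the feature axis, then scale.
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return x / rms * weight

x = np.random.randn(4, 2048).astype(np.float32)
out = rms_norm(x, np.ones(2048, dtype=np.float32))
print(out.shape)  # (4, 2048)
```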

Special Tokens

BOS Token ID 2
Pad Token ID 0
EOS Token ID 1
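
The same IDs are exposed on the tokenizer (same gated-repo caveat as above):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/gemma-2b")
print(tok.bos_token_id, tok.pad_token_id, tok.eos_token_id)  # 2 0 1
```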

Data Type

Model Dtype bfloat16
Layer Types: Attention, MLP/FFN, Normalization, Embedding
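
Since the checkpoint is stored in bfloat16, loading at the native dtype keeps the weights near the FP16 figure above (~5.6 GiB) rather than the FP32 one. A sketch using transformers, which also confirms the tied embeddings:

```python
import torch
from transformers import AutoModelForCausalLM

# Load at the checkpoint's native dtype instead of upcasting to FP32.
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b",
    torch_dtype=torch.bfloat16,
)
print(model.dtype)  # torch.bfloat16

# Tied embeddings: the LM head reuses the input embedding matrix.
emb = model.get_input_embeddings().weight
head = model.get_output_embeddings().weight
print(emb.data_ptr() == head.data_ptr())  # True when weights are tied
```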