huggyllama/llama-7b

📊 Model Parameters

Total Parameters 6,738,415,616
Context Length 2,048
Hidden Size 4096
Layers 32
Attention Heads 32
KV Heads 32

💾 Memory Requirements

FP32 (Full) 25.10 GiB
FP16 (Half) 12.55 GiB
INT8 (Quantized) 6.28 GiB
INT4 (Quantized) 3.14 GiB
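
These figures follow directly from the parameter count: bytes per weight times total weights, in binary units (1 GiB = 1024³ bytes). A minimal sketch of the arithmetic:

```python
# Weight memory = parameter count x bytes per value, in binary units
# (1 GiB = 1024**3 bytes); this reproduces the table above.
PARAMS = 6_738_415_616

for dtype, bytes_per_param in [("FP32", 4), ("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    print(f"{dtype}: {PARAMS * bytes_per_param / 1024**3:.2f} GiB")
# FP32: 25.10 GiB  FP16: 12.55 GiB  INT8: 6.28 GiB  INT4: 3.14 GiB
```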

🔑 KV Cache (Inference)

Per Token (FP16) 512.00 KiB
Max Context FP32 2.00 GiB
Max Context FP16 1.00 GiB
Max Context INT8 512.0 MiB
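
The cache grows linearly with sequence length, since every layer stores one key and one value vector per token. A short sketch reproducing the numbers above:

```python
# Each layer caches one key and one value vector per token, each
# n_kv_heads * head_dim = 32 * 128 = 4096 values wide.
N_LAYERS, N_KV_HEADS, HEAD_DIM, CTX = 32, 32, 128, 2048

def kv_cache_bytes(n_tokens: int, bytes_per_value: float) -> float:
    return 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * bytes_per_value * n_tokens

print(kv_cache_bytes(1, 2) / 1024)       # 512.0 KiB per token at FP16
print(kv_cache_bytes(CTX, 4) / 1024**3)  # 2.0 GiB at full context, FP32
print(kv_cache_bytes(CTX, 2) / 1024**3)  # 1.0 GiB at full context, FP16
print(kv_cache_bytes(CTX, 1) / 1024**2)  # 512.0 MiB at full context, INT8
```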

⚙️ Model Configuration

Core Architecture

Vocabulary Size 32,000
Hidden Size 4,096
FFN Intermediate Size 11,008
Number of Layers 32
Attention Heads 32
KV Heads 32
Head Dimension 128
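
These core numbers are enough to reconstruct the reported parameter total exactly, assuming the standard LLaMA layout (no attention or MLP biases and untied embeddings, consistent with the flags further down):

```python
# Reconstruct the total parameter count from the core architecture numbers.
VOCAB, HIDDEN, FFN, LAYERS = 32_000, 4_096, 11_008, 32

embed   = VOCAB * HIDDEN        # input token embeddings
lm_head = VOCAB * HIDDEN        # output projection (embeddings not tied)
attn    = 4 * HIDDEN * HIDDEN   # Q, K, V, O projections, no bias
mlp     = 3 * HIDDEN * FFN      # gate, up, down projections, no bias
norms   = 2 * HIDDEN            # two RMSNorm weight vectors per layer
final_norm = HIDDEN

total = embed + lm_head + LAYERS * (attn + mlp + norms) + final_norm
print(total)  # 6738415616 -- matches the reported total exactly
```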

Context & Position

Max Context Length 2,048
RoPE Base Frequency 10,000.0
RoPE Scaling Not set
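
With no scaling set, positions feed the standard RoPE frequency schedule directly. A minimal sketch of how the base frequency and head dimension determine the per-pair rotation rates:

```python
# Standard RoPE frequency schedule: inv_freq[i] = base ** (-2i / head_dim).
# One frequency per pair of head dimensions, so head_dim // 2 = 64 entries.
BASE, HEAD_DIM = 10_000.0, 128

inv_freq = [BASE ** (-2 * i / HEAD_DIM) for i in range(HEAD_DIM // 2)]
print(inv_freq[0], inv_freq[-1])  # 1.0 down to ~1.15e-04
```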

Attention Configuration

Attention Bias No
Attention Dropout 0%
MLP Bias No
Tied Embeddings No

Activation & Normalization

Activation Function silu
RMSNorm Epsilon 1e-06
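
Both pieces are small enough to spell out. A pure-Python sketch of the usual definitions (illustrative helpers, not the library implementations):

```python
import math

EPS = 1e-6  # rms_norm_eps from the configuration above

def rmsnorm(x: list[float], weight: list[float]) -> list[float]:
    # Scale x by the reciprocal of its root mean square, then by the
    # learned per-dimension weight.
    rms = math.sqrt(sum(v * v for v in x) / len(x) + EPS)
    return [w * v / rms for w, v in zip(weight, x)]

def silu(v: float) -> float:
    # SiLU ("swish"): v * sigmoid(v), the gate activation in LLaMA's MLP.
    return v / (1.0 + math.exp(-v))

print(silu(1.0))                            # ~0.7311
print(rmsnorm([1.0, 2.0, 3.0], [1.0] * 3))  # ~[0.463, 0.926, 1.389]
```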

Special Tokens

BOS Token ID 1
Pad Token ID 0
EOS Token ID 2

Data Type

Model Dtype float16

Layer Types

Attention, MLP/FFN, Normalization, Embedding
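
To run the checkpoint in its stored half-precision dtype, a minimal loading sketch, assuming the Hugging Face transformers and PyTorch packages are installed:

```python
# Loading sketch, not part of the model card: assumes `transformers` and
# `torch` are installed and the checkpoint id is reachable.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",
    torch_dtype=torch.float16,  # matches the stored model dtype above
)
```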