huggyllama/llama-13b

📊 Model Parameters

Total Parameters 13,015,864,320
Context Length 2,048
Hidden Size 5120
Layers 40
Attention Heads 40
KV Heads 40
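
The total above can be reproduced from the architecture fields listed further down on this card. A minimal sketch of that count in plain Python, assuming the standard LLaMA-1 layout (untied input embedding and LM head, four bias-free attention projections, a three-matrix gated MLP, and per-layer plus final RMSNorm weights):

```python
# Reconstruct the total parameter count from the architecture values
# (a sketch; assumes the standard LLaMA-1 weight layout).
vocab, hidden, inter, layers = 32_000, 5_120, 13_824, 40

embed      = vocab * hidden          # token embedding
lm_head    = vocab * hidden          # output projection (embeddings are not tied)
attn       = 4 * hidden * hidden     # q, k, v, o projections (no bias)
mlp        = 3 * hidden * inter      # gate, up, down projections
norms      = 2 * hidden              # input + post-attention RMSNorm per layer
final_norm = hidden                  # final RMSNorm before the LM head

total = embed + lm_head + layers * (attn + mlp + norms) + final_norm
print(f"{total:,}")                  # 13,015,864,320
```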

💾 Memory Requirements

FP32 (Full) 48.49 GB
FP16 (Half) 24.24 GB
INT8 (Quantized) 12.12 GB
INT4 (Quantized) 6.06 GB
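
The weight-memory figures are simply the parameter count multiplied by the bytes per value; the GB values on this card are binary (GiB). A quick sketch:

```python
# Weight memory = parameter count x bytes per parameter (shown in GiB).
params = 13_015_864_320
for name, bytes_per_param in [("FP32", 4), ("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    print(f"{name}: {params * bytes_per_param / 1024**3:.2f} GB")
# FP32: 48.49 GB, FP16: 24.24 GB, INT8: 12.12 GB, INT4: 6.06 GB
```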

🔑 KV Cache (Inference)

Per Token (FP16) 819.20 KB
Max Context FP32 3.12 GB
Max Context FP16 1.56 GB
Max Context INT8 800.0 MB
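
The KV cache grows as 2 (keys and values) × layers × KV heads × head dimension × bytes per value × tokens; with 40 KV heads of dimension 128, the per-layer K or V width equals the hidden size of 5,120. A sketch reproducing the figures above (the per-token value is reported in decimal KB, the totals in binary units):

```python
# KV cache size = 2 (K and V) * layers * kv_heads * head_dim * bytes * tokens.
layers, kv_heads, head_dim, context = 40, 40, 128, 2_048

per_token_fp16 = 2 * layers * kv_heads * head_dim * 2   # bytes for one token
print(per_token_fp16 / 1000)                             # 819.2 KB (decimal)

for name, bytes_per_value in [("FP32", 4), ("FP16", 2), ("INT8", 1)]:
    total = per_token_fp16 // 2 * bytes_per_value * context
    print(f"{name}: {total / 1024**3:.2f} GiB")
# FP32: 3.12 GiB, FP16: 1.56 GiB, INT8: 0.78 GiB (= 800 MiB)
```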

⚙️ Model Configuration

Core Architecture

Vocabulary Size 32,000
Hidden Size 5,120
FFN Intermediate Size 13,824
Number of Layers 40
Attention Heads 40
KV Heads 40
Head Dimension 128

Context & Position

Max Context Length 2,048
RoPE Base Frequency 10000.0
RoPE Scaling Not set
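
The base frequency of 10000.0 sets the per-pair rotation rates used by rotary position embeddings, and "Not set" means no context-extension scaling is applied. A rough sketch of how the base enters, using the interleaved-pair convention for illustration (library implementations differ in how they pair dimensions):

```python
import numpy as np

# Rotary position embedding frequencies for a 128-dim head with base 10000.0.
base, head_dim, max_pos = 10_000.0, 128, 2_048

inv_freq = 1.0 / base ** (np.arange(0, head_dim, 2) / head_dim)  # (64,) rotation rates
angles = np.outer(np.arange(max_pos), inv_freq)                  # (2048, 64) angles per position

def rope(x, pos):
    # Rotate consecutive (even, odd) pairs of a query/key vector by its position's angles.
    cos, sin = np.cos(angles[pos]), np.sin(angles[pos])
    out = np.empty_like(x)
    out[0::2] = x[0::2] * cos - x[1::2] * sin
    out[1::2] = x[0::2] * sin + x[1::2] * cos
    return out

q = np.random.randn(head_dim)
print(rope(q, pos=5).shape)  # (128,)
```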

Attention Configuration

Attention Bias No
Attention Dropout 0%
MLP Bias No
Tied Embeddings No

Activation & Normalization

Activation Function silu
RMSNorm Epsilon 1e-06
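
These two values parameterize the normalization and the gated feed-forward blocks. A minimal NumPy sketch of where they appear (illustrative of the standard LLaMA blocks, not the library implementation):

```python
import numpy as np

EPS = 1e-6  # rms_norm_eps from the config

def rms_norm(x, weight):
    # RMSNorm: rescale by the reciprocal root-mean-square; eps guards against
    # division by zero for all-zero activations.
    return x / np.sqrt(np.mean(x**2, axis=-1, keepdims=True) + EPS) * weight

def silu(x):
    # SiLU / swish: x * sigmoid(x), the "silu" activation named above.
    return x * (1.0 / (1.0 + np.exp(-x)))

def gated_mlp(x, w_gate, w_up, w_down):
    # LLaMA's gated feed-forward: down(silu(gate(x)) * up(x)).
    return (silu(x @ w_gate) * (x @ w_up)) @ w_down

hidden = 5_120
x = np.random.randn(hidden).astype(np.float32)
print(rms_norm(x, np.ones(hidden, dtype=np.float32)).shape)  # (5120,)
```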

Special Tokens

BOS Token ID 1
Pad Token ID 0
EOS Token ID 2

Data Type

Model Dtype float16
Layer Types: Attention, MLP/FFN, Normalization, Embedding
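
All of the configuration values on this card come from the repository's config.json and can be read programmatically; a sketch using transformers.AutoConfig (fetches only the config file, so it needs network access or a local cache):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("huggyllama/llama-13b")

print(cfg.vocab_size)                # 32000
print(cfg.hidden_size)               # 5120
print(cfg.intermediate_size)         # 13824
print(cfg.num_hidden_layers)         # 40
print(cfg.num_attention_heads)       # 40
print(cfg.max_position_embeddings)   # 2048
print(cfg.rms_norm_eps)              # 1e-06
print(cfg.hidden_act)                # silu
print(cfg.torch_dtype)               # torch.float16
print(cfg.bos_token_id, cfg.eos_token_id, cfg.pad_token_id)  # 1 2 0
```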