huggyllama/llama-30b

📊 Model Parameters

Total Parameters 32,528,943,616
Context Length 2,048
Hidden Size 6656
Layers 60
Attention Heads 52
KV Heads 52
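
The total above can be reproduced from the architecture numbers. A minimal sketch, assuming the standard LLaMA layout (untied input/output embeddings, no attention or MLP biases, a SwiGLU feed-forward, two RMSNorms per layer plus a final norm):

```python
# Sketch: reproduce the 32,528,943,616 total from the listed architecture,
# assuming the standard LLaMA layer layout.
vocab, hidden, ffn, layers = 32_000, 6_656, 17_920, 60

embed = vocab * hidden        # input embedding table
lm_head = vocab * hidden      # output projection (embeddings are not tied)
attn = 4 * hidden * hidden    # Q, K, V, O projections (52 KV heads = full MHA)
mlp = 3 * hidden * ffn        # gate, up, down projections (SwiGLU)
norms = 2 * hidden            # two RMSNorm weight vectors per layer

total = embed + lm_head + layers * (attn + mlp + norms) + hidden  # + final norm
print(f"{total:,}")  # 32,528,943,616
```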

💾 Memory Requirements

FP32 (Full) 121.18 GB
FP16 (Half) 60.59 GB
INT8 (Quantized) 30.29 GB
INT4 (Quantized) 15.15 GB
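
These footprints follow directly from the parameter count: bytes per parameter times total parameters, reported here in GiB. A quick check:

```python
# Sketch: weight memory = parameter count x bytes per parameter, in GiB.
params = 32_528_943_616
GIB = 1024 ** 3

for name, bytes_per_param in [("FP32", 4), ("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    print(f"{name}: {params * bytes_per_param / GIB:.2f} GB")
# FP32: 121.18 GB, FP16: 60.59 GB, INT8: 30.29 GB, INT4: 15.15 GB
```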

🔑 KV Cache (Inference)

Per Token (FP16) 1.60 MB
Max Context (FP32) 6.09 GB
Max Context (FP16) 3.05 GB
Max Context (INT8) 1.52 GB
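
The KV-cache figures follow the same pattern: each token stores one key and one value vector per layer, and with 52 KV heads of dimension 128 that equals the full hidden size. Note the per-token figure above is in decimal megabytes while the full-context figures are in GiB; a sketch reproducing both:

```python
# Sketch: KV cache size for huggyllama/llama-30b.
layers, kv_heads, head_dim, ctx = 60, 52, 128, 2_048

elems_per_token = 2 * layers * kv_heads * head_dim   # K and V per layer
print(f"per token (FP16): {elems_per_token * 2 / 1e6:.2f} MB")  # 1.60 MB
for name, b in [("FP32", 4), ("FP16", 2), ("INT8", 1)]:
    print(f"max context {name}: {elems_per_token * ctx * b / 1024**3:.2f} GB")
# 6.09 GB, 3.05 GB, 1.52 GB
```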

⚙️ Model Configuration

Core Architecture

Vocabulary Size 32,000
Hidden Size 6,656
FFN Intermediate Size 17,920
Number of Layers 60
Attention Heads 52
KV Heads 52
Head Dimension 128
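
The head dimension is derived rather than independent, and the equal query/KV head counts mean plain multi-head attention (no grouped-query or multi-query attention):

```python
# Sketch: head dim follows from hidden size and head count; equal head
# counts mean every query head has its own KV head (full MHA, no GQA/MQA).
hidden_size, num_heads, num_kv_heads = 6_656, 52, 52

head_dim = hidden_size // num_heads
assert head_dim == 128
assert num_heads == num_kv_heads
```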

Context & Position

Max Context Length 2,048
RoPE Base Frequency 10000.0
RoPE Scaling Not set
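
With no scaling configured, positions are encoded with the stock RoPE schedule up to the 2,048-token limit. A minimal sketch of the inverse-frequency table implied by the base frequency and head dimension, assuming the standard RoPE formulation:

```python
import torch

# Sketch: standard RoPE inverse frequencies for base 10000, head dim 128.
# "RoPE Scaling: Not set" means these are used unmodified.
base, head_dim, max_ctx = 10_000.0, 128, 2_048

inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
positions = torch.arange(max_ctx).float()
angles = torch.outer(positions, inv_freq)   # (2048, 64) rotation angles
cos, sin = angles.cos(), angles.sin()       # applied pairwise to Q and K per head
```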

Attention Configuration

Attention Bias No
Attention Dropout 0%
MLP Bias No
Tied Embeddings No

Activation & Normalization

Activation Function silu
RMSNorm Epsilon 1e-06
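
These correspond to the usual LLaMA building blocks: a SiLU-gated MLP and RMSNorm with the epsilon listed above. A minimal sketch, assuming those standard definitions:

```python
import torch

def silu(x: torch.Tensor) -> torch.Tensor:
    # SiLU (a.k.a. swish): x * sigmoid(x), used to gate the MLP's up-projection.
    return x * torch.sigmoid(x)

def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Normalize by the root-mean-square over the hidden dimension, then
    # apply the learned per-channel scale (no mean-centering, no bias).
    rms = x.pow(2).mean(dim=-1, keepdim=True).add(eps).rsqrt()
    return x * rms * weight
```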

Special Tokens

BOS Token ID 1
Pad Token ID 0
EOS Token ID 2

Data Type

Model Dtype float16
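
Since the checkpoint ships in float16, it can be loaded in its native dtype with transformers. A loading sketch, assuming accelerate is installed for device_map="auto"; the 60.59 GB of FP16 weights generally need multiple GPUs or CPU offload:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: load the checkpoint in its native float16 dtype and let
# accelerate place the 60.59 GB of weights across available devices.
tok = AutoTokenizer.from_pretrained("huggyllama/llama-30b")
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-30b",
    torch_dtype=torch.float16,
    device_map="auto",
)
```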