# Zamba2

PyTorch FlashAttention SDPA

Zamba2 is a large language model (LLM) trained by Zyphra and made available under an Apache 2.0 license. Please see the Zyphra Hugging Face repository for the model weights.

This model was contributed by pglo.

## Model details

Zamba2-1.2B, Zamba2-2.7B and Zamba2-7B are hybrid models that combine state-space model (specifically Mamba) blocks with transformer blocks, and were trained using next-token prediction. Zamba2 inserts shared transformer layers after every 6 Mamba blocks and uses the Mistral v0.1 tokenizer. We arrived at this architecture after a series of ablations at small scales. Zamba2-1.2B, Zamba2-2.7B and Zamba2-7B were pre-trained on 2T and 3T tokens, respectively.
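
To see how this hybrid layout shows up in code, you can load a checkpoint's configuration and inspect it. The sketch below only assumes the `Zyphra/Zamba2-7B` checkpoint used later on this page; the exact attribute names describing the Mamba/transformer layout depend on the `Zamba2Config` version, so print the config to check.

```python
from transformers import AutoConfig

# Load the configuration only (no model weights are downloaded).
config = AutoConfig.from_pretrained("Zyphra/Zamba2-7B")

# The resolved class should be Zamba2Config; printing it lists the fields that
# describe the hybrid Mamba/shared-transformer layout for this checkpoint.
print(type(config).__name__)
print(config)
```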

## Quick start

### Prerequisites

Zamba2 requires `transformers` version 4.48.0 or higher:

```bash
pip install "transformers>=4.48.0"
```
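
To confirm the installed version meets this requirement, a minimal check (this assumes `packaging` is available, which it is as a dependency of `transformers`):

```python
import transformers
from packaging import version

# The Zamba2 classes require transformers 4.48.0 or newer.
assert version.parse(transformers.__version__) >= version.parse("4.48.0"), transformers.__version__
```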

### Inference

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-7B")
model = AutoModelForCausalLM.from_pretrained("Zyphra/Zamba2-7B", device_map="cuda", torch_dtype=torch.bfloat16)

input_text = "What factors contributed to the fall of the Roman Empire?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```
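
Zamba2 supports both SDPA and FlashAttention-2 attention backends (see the tags at the top of this page). As a hedged variant of the load step above (not required; the default backend works out of the box, and `flash_attention_2` additionally needs the `flash-attn` package and compatible hardware), you can request a specific attention implementation:

```python
from transformers import AutoModelForCausalLM
import torch

# Use PyTorch's scaled_dot_product_attention backend.
model = AutoModelForCausalLM.from_pretrained(
    "Zyphra/Zamba2-7B",
    device_map="cuda",
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",
)

# Or, if flash-attn is installed and the GPU supports it:
# model = AutoModelForCausalLM.from_pretrained(
#     "Zyphra/Zamba2-7B",
#     device_map="cuda",
#     torch_dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
# )
```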

## Model card

The model cards can be found at:

- [Zamba2-1.2B](https://huggingface.co/Zyphra/Zamba2-1.2B)
- [Zamba2-2.7B](https://huggingface.co/Zyphra/Zamba2-2.7B)
- [Zamba2-7B](https://huggingface.co/Zyphra/Zamba2-7B)

## Issues

For issues with model output, or for community discussion, please use the Hugging Face community forum.

## License

The model weights are open-sourced under an Apache 2.0 license.

## Zamba2Config

[[autodoc]] Zamba2Config

## Zamba2Model

[[autodoc]] Zamba2Model
    - forward

## Zamba2ForCausalLM

[[autodoc]] Zamba2ForCausalLM
    - forward

## Zamba2ForSequenceClassification

[[autodoc]] transformers.Zamba2ForSequenceClassification
    - forward