Zamba2 is a large language model (LLM) trained by Zyphra and made available under an Apache 2.0 license. Please see the [Zyphra Hugging Face repository](https://huggingface.co/Zyphra) for model weights.
This model was contributed by pglo.
Zamba2-1.2B, Zamba2-2.7B and Zamba2-7B are hybrid models combining state-space model blocks (specifically Mamba2) with transformer blocks, and were trained using next-token prediction. Zamba2 uses shared transformer layers after every 6 Mamba blocks and uses the Mistral v0.1 tokenizer. We came to this architecture after a series of ablations at small scales. Zamba2-1.2B and Zamba2-2.7B were pre-trained on 3T tokens, while Zamba2-7B was pre-trained on 2T tokens.

Zamba2 requires `transformers` version 4.48.0 or higher:

```bash
pip install "transformers>=4.48.0"
```
The following example loads the 7B checkpoint and generates a completion:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the model in bfloat16 on a CUDA device.
tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-7B")
model = AutoModelForCausalLM.from_pretrained(
    "Zyphra/Zamba2-7B", device_map="cuda", torch_dtype=torch.bfloat16
)

input_text = "What factors contributed to the fall of the Roman Empire?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```
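
Because the hybrid Mamba2/transformer layout described above is encoded in the model configuration, you can inspect it without downloading any weights. The snippet below is a minimal sketch: the Zamba2-specific configuration keys (for example, whichever field describes the layer pattern) may vary between `transformers` versions, so it simply prints every field rather than assuming particular names.

```python
from transformers import AutoConfig

# Fetch only the configuration file (a few kilobytes), not the weights.
config = AutoConfig.from_pretrained("Zyphra/Zamba2-7B")

# Dump all configuration fields; look for the entries describing the
# Mamba blocks and the shared transformer layers.
for key, value in sorted(config.to_dict().items()):
    print(f"{key}: {value}")
```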
The model cards can be found at [Zamba2-1.2B](https://huggingface.co/Zyphra/Zamba2-1.2B), [Zamba2-2.7B](https://huggingface.co/Zyphra/Zamba2-2.7B) and [Zamba2-7B](https://huggingface.co/Zyphra/Zamba2-7B).
For issues with model output, or for community discussion, please use the [Hugging Face community forum](https://discuss.huggingface.co).
The model weights are open-sourced via an Apache 2.0 license.
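
Besides causal generation, the library also exposes a sequence-classification head on top of the Zamba2 backbone (see `Zamba2ForCausalLM` and `Zamba2ForSequenceClassification` in the API reference below). The sketch below is illustrative only: the published checkpoints ship without a classification head, so the head weights are freshly initialized and the model must be fine-tuned before its scores are meaningful, and the 1.2B checkpoint and `num_labels=2` are arbitrary choices for the example.

```python
import torch
from transformers import AutoTokenizer, Zamba2ForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-1.2B")
# num_labels sets the size of the (randomly initialized) classification head.
model = Zamba2ForSequenceClassification.from_pretrained("Zyphra/Zamba2-1.2B", num_labels=2)

inputs = tokenizer("What factors contributed to the fall of the Roman Empire?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, num_labels)
print(logits)
```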
[[autodoc]] Zamba2Config
[[autodoc]] Zamba2Model - forward
[[autodoc]] Zamba2ForCausalLM - forward
[[autodoc]] Zamba2ForSequenceClassification - forward