Lighteval
🤗 Lighteval is your all-in-one toolkit for evaluating Large Language Models (LLMs) across multiple backends with ease. Dive deep into your model's performance by saving and exploring detailed, sample-by-sample results to debug and see how your models stack up.
Key Features
Multi-Backend Support
Evaluate your models using the most popular and efficient inference backends:
- eval: Use inspect-ai as the backend to evaluate and inspect your models (preferred way)
- transformers: Evaluate models on CPU or one or more GPUs using 🤗 Accelerate
- nanotron: Evaluate models in distributed settings using ⚡️ Nanotron
- vllm: Evaluate models on one or more GPUs using VLLM
- custom: Evaluate custom models (can be anything)
- sglang: Evaluate models using SGLang as backend
- inference-endpoint: Evaluate models using Hugging Face's Inference Endpoints API
- tgi: Evaluate models using Text Generation Inference running locally
- litellm: Evaluate models on any compatible API using LiteLLM
- inference-providers: Evaluate models using Hugging Face's inference providers as backend
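Each backend maps to its own CLI entry point. As a rough sketch (the model-argument strings and exact task specs below are assumptions and can differ between lighteval versions; check lighteval --help for your install):

# Sketch: the same task run against two different backends.
# Model-argument strings are illustrative and may vary by version.
lighteval accelerate "model_name=openai-community/gpt2" "lighteval|gpqa:diamond|0"
lighteval vllm "model_name=meta-llama/Llama-3.1-8B-Instruct" "lighteval|gpqa:diamond|0"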
Comprehensive Evaluation
- Extensive Task Library: thousands of pre-built evaluation tasks
- Custom Task Creation: Build your own evaluation tasks
- Flexible Metrics: Support for custom metrics and scoring
- Detailed Analysis: Sample-by-sample results for deep insights
Easy Customization
Customization at your fingertips: create new tasks, metrics, or models tailored to your needs, or browse all our existing tasks and metrics.
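For instance, custom task definitions written in a local Python file can be loaded at run time. This is only a sketch: the file path, suite, and task name are placeholders, and the --custom-tasks option name should be verified against your lighteval version.

# Sketch: run a task defined in a local Python file
# (path, suite, and task name are placeholders).
lighteval accelerate "model_name=openai-community/gpt2" "community|my_task|0" \
    --custom-tasks ./custom_task.py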
Seamless Integration
Seamlessly experiment, benchmark, and store your results on the Hugging Face Hub, S3, or locally.
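As an illustration, results can be written to a local directory (or an s3:// URI) and pushed to the Hub. The flags below (--output-dir, --save-details, --push-to-hub, --results-org) are assumptions based on common lighteval options; confirm the exact names with lighteval --help.

# Sketch: keep sample-by-sample details locally (or under an s3:// URI)
# and push results to an organization on the Hub (org name is a placeholder).
lighteval vllm "model_name=meta-llama/Llama-3.1-8B-Instruct" "lighteval|gpqa:diamond|0" \
    --output-dir ./results \
    --save-details \
    --push-to-hub \
    --results-org my-org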
Quick Start
Installation
pip install lighteval
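Backend-specific dependencies are typically pulled in through optional extras. The extra names below are assumptions; check the installation guide or pyproject.toml for the exact list.

# Sketch: install lighteval together with a specific backend's dependencies
# (extra names are assumptions).
pip install "lighteval[vllm]"
pip install "lighteval[sglang]"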
Basic Usage
Find a task
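Task specs follow the suite|task|num_fewshot pattern used in the command below. To browse available tasks, lighteval ships a task-listing command; the invocation below is an assumption, so fall back to lighteval --help if it differs in your version.

# Sketch: list available tasks and filter for a benchmark of interest
# (the `tasks list` subcommand name is an assumption).
lighteval tasks list | grep -i gpqa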
Run your benchmark and push details to the hub
lighteval eval "hf-inference-providers/openai/gpt-oss-20b" \
"lighteval|gpqa:diamond|0" \
--bundle-dir gpt-oss-bundle \
--repo-id OpenEvals/evals

The resulting Space lets you explore the evaluation details on the Hub.