Stars
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models across text, vision, audio, and multimodal tasks, for both inference and training.
Models and examples built with TensorFlow
🌐 Make websites accessible for AI agents. Automate tasks online with ease.
A high-throughput and memory-efficient inference and serving engine for LLMs
The simplest, fastest repository for training/finetuning medium-sized GPTs.
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, TTS 2x faster with 70% less VRAM.
Generate high-definition short videos with one click using AI large language models.
Get your documents ready for gen AI
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
TensorFlow code and pre-trained models for BERT
PyTorch Tutorial for Deep Learning Researchers
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
DSPy: The framework for programming—not prompting—language models
⚡ A Fast, Extensible Progress Bar for Python and CLI
Convert PDF to markdown + JSON quickly with high accuracy
Code and documentation to train Stanford's Alpaca models, and generate the data.
An LLM-powered knowledge curation system that researches a topic and generates a full-length report with citations.
[EMNLP 2025] "LightRAG: Simple and Fast Retrieval-Augmented Generation"
Official inference repo for FLUX.1 models
An open-source RAG-based tool for chatting with your documents.
Code for the paper "Language Models are Unsupervised Multitask Learners"
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Intelligent automation and multi-agent orchestration for Claude Code
Graph Neural Network Library for PyTorch