Stars
FEDML - The unified and scalable ML library for large-scale distributed training, model serving, and federated learning. FEDML Launch, a cross-cloud scheduler, further enables running any AI jobs o…
Exploring finetuning public checkpoints on filtered 8K-token sequences from the Pile
Chinese NLP solutions (large models, data, models, training, inference)
AGiXT is a dynamic AI Agent Automation Platform that seamlessly orchestrates instruction management and complex task execution across diverse AI providers. Combining adaptive memory, smart features…
Survey of scaling laws for critic models under limited data.
UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J, and more. One-click run on Google Colab, plus a Gradio ChatGPT-like chat UI to demonstrate your language models.
Ready-to-use Colab tutorial for finetuning the ruDialoGpt3 model on a Telegram chat using HuggingFace and PyTorch
Residual Prompt Tuning: a method for faster and better prompt tuning.
[NeurIPS 2023 Main Track] This is the repository for the paper titled "Don’t Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner"
Exploring the work from Lester et al. (The Power of Scale for Parameter-Efficient Prompt Tuning) and recreating the experiments run by Schucher et al. (The Power of Prompt Tuning for Low-Resource S…
Iterative prompt tuning for story generation
Implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning"
[Project] Tune LLaMA with Prefix/LoRA on English/Chinese instruction datasets
deep learning sandbox
Ask Me Anything language model prompting
Prefix-Tuning for Keyphrase-based recommendation and explanation generation
Parameter-efficient automation of data wrangling tasks with prefix-tuning and the T5 language model.
Repository for Prompt Tuning for Natural Language Generation tasks
Code for named entity extraction based on the GMB dataset
Training materials for CES Data Science, Telecom ParisTech
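Several entries above concern soft prompt tuning (Lester et al., "The Power of Scale for Parameter-Efficient Prompt Tuning"). The core mechanism is simple: a small set of learnable prompt vectors is prepended to the input embeddings while the backbone model stays frozen. A minimal schematic sketch in plain Python (function names and dimensions here are illustrative, not from any of the listed repositories):

```python
import random

def make_soft_prompt(num_tokens, dim, seed=0):
    """Initialize learnable prompt vectors with small Gaussian noise.
    In practice these would be nn.Parameter tensors updated by the optimizer
    while the language model's own weights remain frozen."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 0.02) for _ in range(dim)] for _ in range(num_tokens)]

def prepend_prompt(prompt, input_embeds):
    """Prepend the soft prompt to a sequence of token embeddings.
    The model then attends over the prompt positions like ordinary tokens."""
    return prompt + input_embeds

# 20 prompt vectors prepended to a 10-token input sequence (dim kept tiny).
prompt = make_soft_prompt(num_tokens=20, dim=4)
sequence = [[0.0] * 4 for _ in range(10)]
combined = prepend_prompt(prompt, sequence)
print(len(combined))  # 30
```

Only the `num_tokens * dim` prompt parameters are trained, which is what makes the method parameter-efficient compared to full fine-tuning.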