FedBiOT: LLM Local Fine-tuning in Federated Learning without Full Model
Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and …, 2024 (dl.acm.org)
Large language models (LLMs) show impressive performance on many domain-specific tasks after fine-tuning with appropriate data. However, such domain-specific data are often privately distributed across multiple owners, which raises interest in performing LLM fine-tuning in federated learning (FL). Yet, confronted with limited computation and communication capacities, FL clients struggle to fine-tune an LLM effectively. To this end, we introduce FedBiOT, a resource-efficient LLM fine-tuning approach for FL. Specifically, the server generates a compressed LLM and aligns its performance with the full model. Subsequently, the clients fine-tune a lightweight yet important part of the compressed model, referred to as an adapter. Note that because the server has no access to the private data owned by the clients, the data used for alignment by the server follows a different distribution from the data used for fine-tuning by the clients. We formulate the problem as a bi-level optimization problem to minimize the negative effect of this data discrepancy and derive the update rules for the server and the clients. We conduct extensive experiments on LLaMA-2, empirically showing that the adapter achieves exceptional performance when reintegrated into the global LLM. The results also indicate that FedBiOT significantly reduces resource consumption compared to existing benchmarks, while achieving comparable performance.
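
The abstract does not spell out the bi-level formulation, so the following is only a minimal sketch of how such a problem might be written; all notation is my own assumption rather than the paper's: θ_a denotes the adapter, θ_e the compressed (emulator) part, D_client and D_server the client and server data distributions, ℓ the fine-tuning loss, d an alignment discrepancy between model outputs, and f_full the full LLM.

\[
\min_{\theta_a}\ \mathbb{E}_{(x,y)\sim\mathcal{D}_{\text{client}}}\Big[\ell\big(f(\theta_a,\theta_e^{*};\,x),\,y\big)\Big]
\quad\text{s.t.}\quad
\theta_e^{*} \in \arg\min_{\theta_e}\ \mathbb{E}_{x\sim\mathcal{D}_{\text{server}}}\Big[d\big(f(\theta_a,\theta_e;\,x),\,f_{\text{full}}(x)\big)\Big]
\]

Under this reading, the inner problem corresponds to the server aligning the compressed model with the full model on its own (public) data, while the outer problem corresponds to the clients fine-tuning the adapter on their private data; the mismatch between D_server and D_client is exactly the data discrepancy the abstract says the bi-level formulation is designed to mitigate.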
