Towards sustainable learning: Coresets for data-efficient deep learning
International Conference on Machine Learning, 2023. proceedings.mlr.press
Abstract
To improve the efficiency and sustainability of learning deep models, we propose CREST, the first scalable framework with rigorous theoretical guarantees to identify the most valuable examples for training non-convex models, particularly deep networks. To guarantee convergence to a stationary point of a non-convex function, CREST models the non-convex loss as a series of quadratic functions and extracts a coreset for each quadratic sub-region. In addition, to ensure faster convergence of stochastic gradient methods such as (mini-batch) SGD, CREST iteratively extracts multiple mini-batch coresets from larger random subsets of training data, to ensure nearly-unbiased gradients with small variances. Finally, to further improve scalability and efficiency, CREST identifies the examples that have already been learned and excludes them from the coreset selection pipeline. Our extensive experiments on several deep networks trained on vision and NLP datasets, including CIFAR-10, CIFAR-100, TinyImageNet, and SNLI, confirm that CREST speeds up training deep networks on very large datasets by 1.7x to 2.5x with minimal loss in performance. By analyzing the learning difficulty of the subsets selected by CREST, we show that deep models benefit the most by learning from subsets of increasing difficulty levels.
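The abstract's central mechanism, extracting a small mini-batch coreset from a larger random subset so that the coreset's gradient closely matches the subset's gradient, can be illustrated with a minimal sketch. The code below is a hypothetical, simplified illustration under assumed conditions: the function `select_coreset`, the use of per-example proxy gradients, and the greedy mean-gradient-matching criterion are illustrative choices, not CREST's actual algorithm or implementation.

```python
# Illustrative sketch (not CREST's code): greedily pick a mini-batch coreset
# from a larger random pool so that the coreset's mean gradient approximates
# the pool's mean gradient, yielding a nearly-unbiased, low-variance estimate.
import numpy as np

def select_coreset(grads, k):
    """Greedily select k indices whose running mean gradient best matches
    the mean gradient of the full pool. grads has shape (n, d)."""
    n, d = grads.shape
    target = grads.mean(axis=0)          # mean gradient of the whole pool
    selected, current = [], np.zeros(d)  # chosen indices, their running mean
    for step in range(k):
        best_i, best_err = None, np.inf
        for i in range(n):
            if i in selected:
                continue
            # error of the running mean if example i were added next
            cand = (current * step + grads[i]) / (step + 1)
            err = np.linalg.norm(cand - target)
            if err < best_err:
                best_i, best_err = i, err
        selected.append(best_i)
        current = (current * step + grads[best_i]) / (step + 1)
    return selected

# Toy usage: a random pool of 256 examples with 32-dimensional proxy gradients;
# select a mini-batch coreset of 32 examples from it.
rng = np.random.default_rng(0)
pool_grads = rng.normal(size=(256, 32))
batch = select_coreset(pool_grads, k=32)
print(len(batch), "examples selected")
```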