To get into few-shot learning, you should first look at meta learning.
MAML provides a model-agnostic meta-learning framework. How is it model-agnostic? The key difference is only in how the loss is computed; the framework itself is a wrapper, similar in spirit to AdaBoost, into which various base models can be plugged.
The original paper is a must-read: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks.
The loss computation can be seen in the forward method of meta.py:
for k in range(1, self.update_step):
    # 1. run the i-th task and compute loss for k=1~K-1
    logits = self.net(x_spt[i], fast_weights, bn_training=True)
    loss = F.cross_entropy(logits, y_spt[i])
    # 2. compute grad on theta_pi
    grad = torch.autograd.grad(loss, fast_weights)
    # 3. theta_pi = theta_pi - train_lr * grad
    fast_weights = list(map(lambda p: p[1] - self.update_lr * p[0], zip(grad, fast_weights)))
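To make the two-level structure concrete without the full repo, here is a minimal self-contained sketch of the MAML idea on a toy 1-D regression problem. It uses the first-order approximation (FOMAML), manual gradients, and a hypothetical task setup (`make_task`, a linear model y = w * x) chosen purely for illustration; the real implementation above differentiates through the inner loop with autograd.

```python
import numpy as np

def loss_grad(w, x, y):
    """MSE loss and its gradient for a 1-D linear model y_hat = w * x."""
    err = w * x - y
    return np.mean(err ** 2), 2.0 * np.mean(x * err)

def fomaml_step(w, tasks, inner_lr=0.05, meta_lr=0.1, inner_steps=3):
    """One first-order MAML meta-update over a batch of tasks.

    Each task is (x_spt, y_spt, x_qry, y_qry): support set for the
    inner-loop adaptation, query set for the outer-loop meta-loss.
    """
    meta_grad = 0.0
    for x_spt, y_spt, x_qry, y_qry in tasks:
        fast_w = w
        for _ in range(inner_steps):
            # inner loop: theta_i' = theta_i' - inner_lr * grad (on support)
            _, g = loss_grad(fast_w, x_spt, y_spt)
            fast_w = fast_w - inner_lr * g
        # outer loop: gradient of the query loss at the adapted weights
        # (first-order approximation: no second derivatives)
        _, gq = loss_grad(fast_w, x_qry, y_qry)
        meta_grad += gq
    return w - meta_lr * meta_grad / len(tasks)

# Toy task distribution: each task is y = a * x with a different slope a.
rng = np.random.default_rng(0)

def make_task(a, n=10):
    x = rng.uniform(-1.0, 1.0, n)
    return x, a * x, x, a * x

w = 0.0
for _ in range(100):
    tasks = [make_task(rng.uniform(1.0, 3.0)) for _ in range(4)]
    w = fomaml_step(w, tasks)
```

After meta-training, `w` sits near the middle of the task distribution, so a few inner-loop steps on a new task's support set adapt it quickly; that "good initialization" is exactly what the outer loop optimizes for.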