Multi-task Learning
Sargur N. Srihari
[email protected]
Deep Learning Srihari
Regularization Strategies
1. Parameter Norm Penalties
2. Norm Penalties as Constrained Optimization
3. Regularization and Under-constrained Problems
4. Data Set Augmentation
5. Noise Robustness
6. Semi-supervised learning
7. Multi-task learning
8. Early Stopping
9. Parameter tying and parameter sharing
10. Sparse representations
11. Bagging and other ensemble methods
12. Dropout
13. Adversarial training
14. Tangent methods
Benefits of multi-task learning
• Improved generalization and improved generalization error bounds
– Achieved because of the shared parameters
• The statistical strength of the shared parameters can be greatly improved
– In proportion to the increased number of examples available for the shared
parameters, compared to the scenario of single-task models
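The parameter-sharing idea above can be sketched in code. The following is a hypothetical toy example (not from the slides): a network with one shared hidden layer and two task-specific linear heads, where the shared weights receive gradient signal from both tasks, so they are effectively fit on the pooled examples of both tasks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy regression tasks that depend on the same input features
# (assumption: illustrative synthetic data, chosen for this sketch).
X = rng.normal(size=(200, 4))
y1 = X @ np.array([1.0, -1.0, 0.5, 0.0])   # task-1 target
y2 = X @ np.array([0.5, 1.0, 0.0, -0.5])   # task-2 target

W_shared = rng.normal(scale=0.1, size=(4, 8))  # shared hidden layer
w1 = rng.normal(scale=0.1, size=8)             # task-1 head
w2 = rng.normal(scale=0.1, size=8)             # task-2 head

def losses():
    """Mean squared error of each task's head on the shared features."""
    H = np.tanh(X @ W_shared)
    return np.mean((H @ w1 - y1) ** 2), np.mean((H @ w2 - y2) ** 2)

mse1_init, mse2_init = losses()

lr, n = 0.05, len(X)
for _ in range(500):
    H = np.tanh(X @ W_shared)   # shared representation
    e1 = H @ w1 - y1            # task-1 error
    e2 = H @ w2 - y2            # task-2 error
    # The shared layer's gradient sums contributions from BOTH tasks:
    # this pooling of examples is the source of the improved
    # statistical strength of the shared parameters.
    dH = (np.outer(e1, w1) + np.outer(e2, w2)) / n
    W_shared -= lr * X.T @ (dH * (1 - H ** 2))
    w1 -= lr * H.T @ e1 / n
    w2 -= lr * H.T @ e2 / n

mse1, mse2 = losses()   # both tasks improve using the shared features
```

Here the sharing is "hard": the hidden-layer weights are literally the same array for both tasks, so every training example, whichever task it belongs to, contributes to estimating them.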