One Person, One Model, One World: Learning Continual User Representation Without Forgetting
Outline
Motivation
Related Work
Conure
Experiments
Our Motivation
A person has different roles to play in life! But all these roles may share some commonalities, such as personal preferences and habits.
Our Focus: can we build a user representation model that keeps learning throughout all sequential tasks without forgetting? One Person, One Model, One World.
[Figure: one person interacts with many services, e.g., news recommendation, video recommendation, a search engine, music recommendation, a browser, and social apps.]
[Figure: transferring user knowledge from a platform with clicking logs (e.g., TikTok, warm users) to platforms without interactions (e.g., Amazon, cold users; Ads, new users).]
Using lifelong learning techniques to solve recommendation tasks
Keypoints
• Necessity and possibility: why lifelong learning for user representation (UR) learning?
• A lifelong learning paradigm maintained throughout all tasks (see the sketch below).
• Performance gains for tasks that have certain correlations.
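To make the paradigm concrete, here is a minimal, runnable sketch (our own illustration, not code from the paper; `train` and `evaluate` are hypothetical callables and the task names are placeholders): one model is trained on a sequence of tasks, and after each new task every earlier task is re-evaluated, so any forgetting would be visible.

from typing import Callable, List

def lifelong_training(model, tasks: List[str],
                      train: Callable, evaluate: Callable) -> None:
    """Train ONE shared model on tasks in order; after each task,
    re-evaluate all tasks seen so far to check for forgetting."""
    for i, task in enumerate(tasks):
        train(model, task)                        # learn the new task
        for seen in tasks[:i + 1]:                # forgetting check
            print(f"after {task}: {seen} = {evaluate(model, seen):.3f}")

# Toy stubs so the sketch runs end to end (a real setup would plug in
# actual recommendation/search tasks and ranking metrics):
learned = set()
lifelong_training(
    model=None,
    tasks=["T1:click", "T2:like", "T3:ads"],
    train=lambda m, t: learned.add(t),
    evaluate=lambda m, t: 1.0 if t in learned else 0.0,
)

The point of the interface is that the same model object persists across all tasks; training one model per task would make the forgetting check meaningless.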
Related Work
• Classical UR models (work well but are specific to a single task)
[Figure: changes of the last hidden vector.]
Conure
• Over-parameterization: deep UR models are heavily over-parameterized, so the parameters important to an already-learned task can be identified and frozen, while the remaining redundant parameters are pruned and re-used to learn future tasks (a minimal sketch follows below).
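Below is a minimal, runnable PyTorch sketch of this prune-and-freeze idea (our own illustration, not the official Conure code; the toy objective, the 64x64 layer, and the 30% keep ratio are arbitrary placeholders): each task trains only the free weights, keeps its largest-magnitude ones, freezes them, and releases the rest for future tasks.

import torch

torch.manual_seed(0)
W = torch.randn(64, 64, requires_grad=True)      # stand-in for one model layer
frozen = torch.zeros_like(W, dtype=torch.bool)   # weights owned by past tasks
task_masks = []

def toy_loss(W, t):
    # Placeholder objective standing in for recommendation task t.
    return ((W - float(t)) ** 2).mean()

for t in range(1, 4):                            # tasks T1, T2, T3 in order
    for _ in range(100):                         # train only the free weights
        loss = toy_loss(W, t)
        loss.backward()
        with torch.no_grad():
            W.grad[frozen] = 0.0                 # never touch frozen weights
            W -= 0.1 * W.grad
            W.grad.zero_()
    free = ~frozen
    k = max(1, int(0.3 * int(free.sum())))       # keep top 30% of free weights
    thresh = W.detach()[free].abs().topk(k).values.min()
    mask = free & (W.detach().abs() >= thresh)   # this task's parameters
    with torch.no_grad():
        W[free & ~mask] = 0.0                    # prune: freed for later tasks
    task_masks.append(mask)
    frozen |= mask                               # freeze for all later tasks

def weights_for_task(t):
    """Inference for task t uses only the weights kept by tasks 1..t."""
    m = torch.zeros_like(frozen)
    for mask in task_masks[:t]:
        m |= mask
    return W.detach() * m

The actual method also re-trains the kept weights after pruning to recover accuracy, and at serving time retrieves each task's sub-network via its stored mask; the sketch omits the re-training step for brevity.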
Experiments
(1) Conure largely outperforms the other models on T3 because of the positive transfer from T1 and T2.
(2) Conure, PeterRec, and FineAll largely outperform SinMo because of the positive transfer from T1.
(3) SinMoAll performs much worse on most tasks (except the last one) because of catastrophic forgetting.
• Ablation study: the effect of T2 on T3.