Misha Bilenko, Principal Researcher, Microsoft at MLconf SEA - 5/01/15
The talk contrasts big data with big learning, highlighting the need for scalable training and prediction methods in machine learning. It emphasizes learning with counts for handling high-cardinality attributes, using efficient algorithms and count-based features to improve model accuracy. It also presents Azure ML as a framework for scalable machine learning, advocating asynchronous parallelism to optimize resource use.
Introduction to Big Learning, Big Data, and the scaling issues involved in machine learning. Differentiates Big Learning from traditional data approaches.
Introduces DRACuLa for distributed count-based learning and the significance of scaling features in machine learning.
Focuses on relational data learning with examples from information retrieval and the challenges of representing high-cardinality attributes.
Reviews standard methods such as binary feature mapping and feature hashing, highlighting their scalability and efficiency issues.
Detailed examination of learning using counts, discussing features and their aggregation for non-linear learners.
Introduces advanced techniques like Count-Min Sketches for feature extraction and the method of predicting with continuous counts.
Discussion on the need to separate counting and training to avoid leakage and the use of differential privacy to enhance security.
Highlights the advantages of using counts in machine learning, including accuracy, ease of implementation, and modularity.
Examines collaboration and resource management for scaling machine learning across teams using Azure ML and cloud resources.
Demonstrates practical scalability of learning with counts using a large dataset in Azure ML and highlights pipeline effectiveness.
Explores state-of-the-art techniques in asynchronous optimization for machine learning models to improve efficiency.
Summarizes the resource demands of big features, learners, and pipelines, emphasizing collaborative cloud computing solutions.
1.
Many Shades of Scale:
Big Learning
Beyond Big Data
Misha Bilenko
Principal Researcher
Microsoft Azure Machine Learning
2.
ML ♥ More Data
What we see in production
[Banko and Brill, 2001]
What we [used to] learn in school
[Mooney, 1996]
3.
ML ♥ More Data
What we see in production
[Banko and Brill, 2001]
Is training on more examples all there is to it?
4.
Big Learning ≠ Learning(Big Data)
• Big data: size → distributing storage and processing
• Big learning: scale bottlenecks in training and prediction
• Classic bottlenecks: bytes and cycles
Large datasets → distribute training on larger hardware (FPGAs, GPUs, cores, clusters)
• Other scaling dimensions: Features, Components/People
Learning with relational data
p(click | user, context, ad)
adid: 1010054353
adText: Fall ski sale!
adURL: www.k2.com/sale
userid: 0xb49129827048dd9b
IP: 131.107.65.14
query: powder skis
qCategories: {skiing, outdoor gear}
• Problem: representing high-cardinality attributes as features
• Scalable: to billions of attribute values
• Efficient: ~10^5+ predictions/sec/node
• Flexible: for a variety of downstream learners
• Adaptive: to distribution change
• Standard approaches: binary features, hashing
• What everyone should use in industry: learning with counts
• Formalization and generalization
8.
Standard approach 1: binary (one-hot, indicator)
Attributes are mapped to indices based on lookup tables
- Not scalable: cannot support high-cardinality attributes
- Not efficient: large value-index dictionary must be retained
- Not flexible: only linear learners are practical
- Not adaptive: doesn't support drift in attribute values
[Illustration: concatenation of per-attribute one-hot blocks]
0010000..00   0..01000000   00000..001   0..00001000
#userIPs      #ads          #queries     #queries × #ads
with bits set at idx_u(131.107.65.14), idx_a(k2.com), idx_q(powder skis), idx_qa(powder skis, k2.com)
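A minimal Python sketch, not from the talk, of dictionary-based one-hot encoding; it makes the failure modes above concrete: the value-to-index dictionary grows with attribute cardinality, and unseen values get no representation.

```python
# Illustrative sketch of dictionary-based one-hot encoding (names are hypothetical).
class OneHotEncoder:
    def __init__(self):
        self.index = {}  # value -> column index; must be kept in memory at serve time

    def fit(self, values):
        for v in values:
            self.index.setdefault(v, len(self.index))

    def transform(self, value):
        vec = [0] * len(self.index)   # dimension = attribute cardinality: not scalable
        i = self.index.get(value)     # unseen value -> no index: not adaptive
        if i is not None:
            vec[i] = 1
        return vec

enc = OneHotEncoder()
enc.fit(["131.107.65.14", "173.194.33.9"])
print(enc.transform("131.107.65.14"))  # [1, 0]
```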
9.
Standard approach 1+: feature hashing
Attributes are mapped to indices via hashing: h(x_i) = hash(x_i) mod m
• Collisions are rare; dot products unbiased
+ Scalable: no mapping tables
+ Efficient: low cost, preserves sparsity
- Not flexible: only linear learners are practical
± Adaptive: new values OK, no temporal effects
[Illustration: a single m-dimensional hashed feature vector φ(x), m ∼ 10^7]
0000010..0000010000..0000010...000001000
with bits set at h(powder skis + k2.com), h(powder skis), h(k2.com), h(131.107.65.14)
[Moody ‘89, Tarjan-Skadron ‘05, Weinberger+ ’08]
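A minimal sketch of the hashing trick, assuming md5 as the hash function; the signed-hash variant (Weinberger et al.) that keeps dot products unbiased is omitted for brevity.

```python
import hashlib

def hashed_features(attributes, m=10**7):
    # Hashing trick: h(x) = hash(x) mod m, no lookup tables to store or ship.
    phi = {}  # sparse {index: value} representation
    for x in attributes:
        h = int(hashlib.md5(x.encode()).hexdigest(), 16) % m
        phi[h] = phi.get(h, 0) + 1  # rare collisions simply add up
    return phi

print(hashed_features(
    ["powder skis", "k2.com", "powder skis + k2.com", "131.107.65.14"]))
```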
10.
Learning with counts
• Features are per-label counts [+ odds] [+ backoff]
φ = [N+, N-, log(N+) − log(N-), IsRest]
• log(N+) − log(N-) = log(p(+)/p(−)): the log-odds / Naïve Bayes estimate
• N+, N-: indicators of confidence of the naïve estimate
• IsRest: indicator of back-off vs. "real count"
131.107.65.14 → Counts(131.107.65.14)
k2.com → Counts(k2.com)
powder skis → Counts(powder skis)
powder skis, k2.com → Counts(powder skis, k2.com)
IP              N+      N-
173.194.33.9    46964   993424
87.250.251.11   31      843
131.107.65.14   12      430
…               …       …
REST            745623  13964931

φ(Counts(IP))  φ(Counts(ad))  φ(Counts(query))  φ(Counts(query, ad))
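A sketch of the featurization step in Python; the additive smoothing prior is an assumption of mine, not something the slides specify.

```python
import math

def phi(counts, key, prior=1.0):
    # Count-based features [N+, N-, log-odds, IsRest]; unseen keys back off to REST.
    is_rest = key not in counts
    n_pos, n_neg = counts["REST"] if is_rest else counts[key]
    log_odds = math.log(n_pos + prior) - math.log(n_neg + prior)  # smoothed log-odds
    return [n_pos, n_neg, log_odds, int(is_rest)]

ip_counts = {
    "173.194.33.9":  (46964, 993424),
    "87.250.251.11": (31, 843),
    "131.107.65.14": (12, 430),
    "REST":          (745623, 13964931),
}
print(phi(ip_counts, "131.107.65.14"))   # table hit
print(phi(ip_counts, "10.0.0.1"))        # backs off to REST, IsRest = 1
```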
Backoff is a pain. Count-Min Sketches to the Rescue!
[Cormode-Muthukrishnan ‘04]
Intuition: correct for collisions by using multiple hashes
• Sketch: a d × w matrix M, one row per hash function
• Count: for each hash function j, increment M[j][h_j(i)]; update time O(d)
• Featurize: estimate the count as min_j M[j][h_j(i)]; estimation time O(d)
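A compact Count-Min Sketch in Python; parameters d and w are illustrative, and a row-salted md5 stands in for the pairwise-independent hash functions of the paper.

```python
import hashlib

class CountMinSketch:
    def __init__(self, d=4, w=2**16):
        self.d, self.w = d, w                 # d hash functions, rows of width w
        self.M = [[0] * w for _ in range(d)]

    def _h(self, j, key):
        return int(hashlib.md5(f"{j}:{key}".encode()).hexdigest(), 16) % self.w

    def add(self, key, count=1):              # update time O(d)
        for j in range(self.d):
            self.M[j][self._h(j, key)] += count

    def estimate(self, key):                  # estimation time O(d)
        # Collisions only inflate cells, so the row-wise minimum corrects for them.
        return min(self.M[j][self._h(j, key)] for j in range(self.d))

cms = CountMinSketch()
cms.add("powder skis", 12)
print(cms.estimate("powder skis"))  # >= 12, equal with high probability
```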
Learning from counts: combiner training
IP              N+      N-
173.194.33.9    46964   993424
87.250.251.11   31      843
131.253.13.32   12      430
…               …       …
REST            745623  13964931

query           N+       N-
facebook        281912   7957321
dozen roses     32791    640964
…               …        …
REST            6321789  43477252
[Diagram: timeline up to Tnow; earlier data is used for Counting, later data to Train predictor on the aggregated features (N+, N-, ln(N+) − ln(N-), IsBackoff, …) together with the original numeric features]
Train non-linear model on count-based features
• Counts, transforms, lookup properties
• Additional features can be injected
Query × AdId      N+       N-
facebook, ad1     54546    978964
facebook, ad2     232343   8431467
dozen roses, ad3  12973    430982
…                 …        …
REST              4419312  52754683
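A toy end-to-end sketch of combiner training; scikit-learn's gradient-boosted trees stand in for whatever non-linear learner the production pipeline uses, and all counts, labels, and numeric features are invented for illustration.

```python
import math
from sklearn.ensemble import GradientBoostingClassifier

ip_counts = {"1.2.3.4": (120, 880), "REST": (1000, 99000)}  # toy count table

def featurize(ip, numeric):
    is_rest = ip not in ip_counts
    n_pos, n_neg = ip_counts["REST"] if is_rest else ip_counts[ip]
    log_odds = math.log(n_pos + 1) - math.log(n_neg + 1)
    return [n_pos, n_neg, log_odds, int(is_rest)] + numeric  # counts + original numerics

X = [featurize("1.2.3.4", [0.7]), featurize("5.6.7.8", [0.1]),
     featurize("1.2.3.4", [0.9]), featurize("5.6.7.8", [0.2])]
y = [1, 0, 1, 0]
combiner = GradientBoostingClassifier(n_estimators=10).fit(X, y)  # non-linear combiner
print(combiner.predict_proba([featurize("1.2.3.4", [0.5])]))
```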
15.
Prediction with counts
IP              N+      N-
173.194.33.9    46964   993424
87.250.251.11   31      843
131.253.13.32   12      430
…               …       …
REST            745623  13964931

query           N+       N-
facebook        281912   7957321
dozen roses     32791    640964
…               …        …
REST            6321789  43477252

URL × Country   N+       N-
url1, US        54546    978964
url2, CA        232343   8431467
url3, FR        12973    430982
…               …        …
REST            4419312  52754683
[Diagram: timeline marking Ttrain and Tnow; Counting continues past Ttrain, and the combiner consumes the aggregated features (N+, N-, ln(N+) − ln(N-), IsBackoff, …) together with the original numeric features]
• Counts are updated continuously
• Combiner re-training is infrequent
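A small sketch of that serving-time split, with hypothetical helper names: the count tables absorb fresh labeled traffic on every event, while the combiner trained at Ttrain is reused unchanged until the next re-training.

```python
counts = {}  # value -> (N+, N-), shared with the featurizer

def update(counts, key, clicked):
    # Continuous counting: one increment per labeled impression.
    n_pos, n_neg = counts.get(key, (0, 0))
    counts[key] = (n_pos + int(clicked), n_neg + int(not clicked))

for ip, clicked in [("1.2.3.4", True), ("5.6.7.8", False), ("1.2.3.4", False)]:
    update(counts, ip, clicked)

print(counts)  # fresh counts feed featurization; the combiner model stays fixed
```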
16.
Where did it come from?
Li et al. 2010
Pavlov et al. 2009
Lee et al. 1998
Yeh and Patt, 1991
Hillard et al. 2011
• De-facto standard in online advertising industry
• Rediscovered by everyone who really cares about accuracy
17.
Do we need to separate counting and training?
• Can we use the same data for both counting and featurization?
• Bad idea: leakage = count features contain labels → overfitting
• Combiner dedicates capacity to decoding example’s label from features
• Can we hold out each example’s label during train-set featurization?
• Bad idea: leakage and bias
• Illustration: two examples, same feature values, different labels (click and non-click)
• Different representations are inconsistent and allow decoding the label
[Diagram: Counting → Train predictor over the same data]
Example ID   Label   N+[a]      N-[a]
1            +       Na+ − 1    Na-
2            -       Na+        Na- − 1
18.
Solution via Differential privacy
• What is leakage? Revealing information about any individual label
• Formally: a count table c_T is ε-leakage-proof if it yields (near-)indistinguishable features for any x and any pair of neighboring datasets T, T′ = T ∖ {(x_i, y_i)}
• Theorem: adding noise sampled from Laplace(k/𝜖) makes counts 𝜖-leakage-proof
• Typically 1 ≤ 𝑘 ≤ 100
• Concretely: N+ ← N+ + LaplaceRand(0, 10k); N- ← N- + LaplaceRand(0, 10k)
• In practice: LaplaceRand(0,1) sufficient
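A one-line sketch of the noising step, assuming NumPy's Laplace sampler; the k and ε values are illustrative, following the Laplace(k/ε) scale in the theorem above.

```python
import numpy as np

def privatize(n, eps=1.0, k=10):
    # Add Laplace(0, k/eps) noise to a raw count to make the table eps-leakage-proof.
    return n + np.random.laplace(loc=0.0, scale=k / eps)

n_pos, n_neg = 46964, 993424
print(privatize(n_pos), privatize(n_neg))  # noisy counts published to the featurizer
```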
19.
Learning from counts: why it works
• State-of-the-art accuracy
• Easy to implement on standard clusters
• Monitorable and debuggable
• Temporal changes easy to monitor
• Easy emergency recovery (bot attacks, etc.)
• Error debugging (which feature to blame)
• Modular (vs. monolithic)
• Components: learners and count features
• People: multiple feature/learner authors
20.
Big Learning: Pipelines and Teams
Ravi: text features in R
Jim: matrix projections
Vera: sweeping boosted trees
Steph: count features on Hadoop
How to scale up Machine Learning to Parallel and Distributed Data Scientists?
21.
Azure ML
• Cloud-hosted, graphical environment for creating, training, evaluating, sharing, and deploying machine learning models
• Supports versioning and collaboration
• Dozens of ML algorithms, extensible via R and Python
Learning with Counts in Azure ML
Criteo 1TB dataset
• Counting: an hour on an HDInsight Hadoop cluster
• Training: minutes in Azure ML Studio
• Deployment: one click to an RRS service
In closing: Big Learning = Street fighting
• Big features are resource-hungry: learning with counts, projections…
• Make them distributed and easy to compute/monitor
• Big learners are resource-hungry
• Parallelize them (preferably asynchronously)
• Big pipelines are resource-hungry: authored by many humans
• Run them in a collaborative cloud environment