BAYESIAN GLOBAL OPTIMIZATION
Using Optimal Learning to Tune Deep Learning / AI Models
Scott Clark
scott@sigopt.com
OUTLINE
1. Why is Tuning Models Hard?
2. Comparison of Tuning Methods
3. Bayesian Global Optimization
4. Deep Learning Examples
Deep Learning / AI is extremely powerful
Tuning these systems is extremely non-intuitive
https://www.quora.com/What-is-the-most-important-unresolved-problem-in-machine-learning-3
What is the most important unresolved problem in machine learning?
“...we still don't really know why some configurations of deep neural networks work in some cases and not others, let alone having a more or less automatic approach to determining the architectures and the hyperparameters.”
Xavier Amatriain, VP Engineering at Quora (former Director of Research at Netflix)
TUNABLE PARAMETERS IN DEEP LEARNING
STANDARD METHODS FOR HYPERPARAMETER SEARCH
STANDARD TUNING METHODS
[Diagram: parameter configurations (weights, thresholds, window sizes, transformations) chosen by manual, grid, or random search feed the ML / AI model, which is evaluated via cross validation on training and testing data.]
OPTIMIZATION FEEDBACK LOOP
[Diagram: the ML / AI model is trained and cross-validated on training and testing data; the resulting objective metric is reported over the REST API, which returns new configurations, closing the loop for better results.]
BAYESIAN GLOBAL OPTIMIZATION
… the challenge of how to collect information as efficiently as possible, primarily for settings where collecting information is time consuming and expensive.
Prof. Warren Powell - Princeton
What is the most efficient way to collect information?
Prof. Peter Frazier - Cornell
How do we make the most money, as fast as possible?
Scott Clark - CEO, SigOpt
OPTIMAL LEARNING
● Optimize objective function
○ Loss, Accuracy, Likelihood
● Given parameters
○ Hyperparameters, feature/architecture params
● Find the best hyperparameters
○ Sample function as few times as possible
○ Training on big data is expensive
BAYESIAN GLOBAL OPTIMIZATION
SMBO
Sequential Model-Based Optimization
HOW DOES IT WORK?
1. Build a Gaussian Process (GP) from the points sampled so far
2. Optimize the fit of the GP (covariance hyperparameters)
3. Find the point(s) of highest Expected Improvement within the parameter domain
4. Return the best point(s) to sample next (a minimal sketch of this loop follows)
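A minimal sketch of one iteration of this loop, using scikit-learn's GaussianProcessRegressor and SciPy purely for illustration (this is not SigOpt's implementation, and the search over the domain is simplified to scoring random candidates):

```python
# Illustrative GP/EI step: fit a GP to observations, then pick the point
# with highest Expected Improvement from a random set of candidates.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def expected_improvement(candidates, gp, best_y):
    # EI for maximization: E[max(f(x) - best_y, 0)] under the GP posterior.
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best_y) / sigma
    return (mu - best_y) * norm.cdf(z) + sigma * norm.pdf(z)


def suggest_next(X, y, bounds, n_candidates=10000):
    # Steps 1-2: build the GP; kernel hyperparameters are fit inside fit().
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    # Step 3: score random candidate points in the parameter domain.
    candidates = np.random.uniform(bounds[:, 0], bounds[:, 1],
                                   size=(n_candidates, bounds.shape[0]))
    ei = expected_improvement(candidates, gp, best_y=y.max())
    # Step 4: return the most promising point to evaluate next.
    return candidates[np.argmax(ei)]
```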
GP/EI SMBO
GAUSSIAN PROCESSES
[Figures: a GP fit to the sampled points, with covariance hyperparameters that overfit, fit well, and underfit]
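For reference, the GP posterior underlying these fits has the standard closed form (a textbook result, not specific to this talk); here K is the kernel matrix of the sampled points, k_* the covariances between a query point x_* and those points, and sigma_n^2 the observation noise:

```latex
\mu(x_*) = k_*^{\top} (K + \sigma_n^2 I)^{-1} y,
\qquad
\sigma^2(x_*) = k(x_*, x_*) - k_*^{\top} (K + \sigma_n^2 I)^{-1} k_*
```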
EXPECTED IMPROVEMENT
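The acquisition function behind this slide is the standard Expected Improvement; with posterior mean mu(x), standard deviation sigma(x), current best value f*, and Phi / phi the standard normal CDF / PDF (maximization form):

```latex
\mathrm{EI}(x) = \mathbb{E}\big[\max(f(x) - f^{*},\, 0)\big]
= \big(\mu(x) - f^{*}\big)\,\Phi\!\left(\tfrac{\mu(x) - f^{*}}{\sigma(x)}\right)
+ \sigma(x)\,\phi\!\left(\tfrac{\mu(x) - f^{*}}{\sigma(x)}\right)
```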
DEEP LEARNING EXAMPLES
● Classify movie reviews using a CNN in MXNet
SIGOPT + MXNET
TEXT CLASSIFICATION PIPELINE
[Diagram: training and testing text feed the ML / AI model (MXNet); validation accuracy is reported over the REST API, which returns new hyperparameter configurations and feature transformations, yielding better results.]
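A minimal sketch of how such a pipeline talks to the REST API via the SigOpt Python client; the parameter names, bounds, and the train_and_validate routine are illustrative placeholders rather than the exact setup from this experiment:

```python
# Suggest -> train -> report loop against the SigOpt API (illustrative).
from sigopt import Connection

conn = Connection(client_token="YOUR_API_TOKEN")
experiment = conn.experiments().create(
    name="MXNet text CNN",
    parameters=[
        dict(name="learning_rate", type="double", bounds=dict(min=1e-4, max=1e-1)),
        dict(name="num_filters", type="int", bounds=dict(min=50, max=300)),
        dict(name="dropout", type="double", bounds=dict(min=0.1, max=0.7)),
    ],
)

for _ in range(60):
    suggestion = conn.experiments(experiment.id).suggestions().create()
    # train_and_validate is a placeholder for building the CNN in MXNet with
    # the suggested hyperparameters and returning validation accuracy.
    accuracy = train_and_validate(**suggestion.assignments)
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id,
        value=accuracy,
    )
```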
TUNABLE PARAMETERS IN DEEP LEARNING
● Comparison of several RMSProp SGD parametrizations
STOCHASTIC GRADIENT DESCENT
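For context, the RMSProp update being parametrized; the learning rate alpha, decay gamma, and epsilon below are the kinds of knobs such a comparison varies (standard formulation, not taken from the slides):

```latex
E[g^2]_t = \gamma\, E[g^2]_{t-1} + (1 - \gamma)\, g_t^2,
\qquad
\theta_{t+1} = \theta_t - \frac{\alpha}{\sqrt{E[g^2]_t + \epsilon}}\, g_t
```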
ARCHITECTURE PARAMETERS
[Figure: grid search and random search as baseline tuning methods]
TUNING METHODS
SPEED UP #2: RANDOM/GRID -> SIGOPT
CONSISTENTLY BETTER AND FASTER
● Classify house numbers in an image dataset (SVHN)
SIGOPT + TENSORFLOW
COMPUTER VISION PIPELINE
[Diagram: training and testing images feed the ML / AI model (TensorFlow); cross-validation accuracy is reported over the REST API, which returns new hyperparameter configurations and feature transformations, yielding better results.]
METRIC OPTIMIZATION
● All convolutional neural network
● Multiple convolutional and dropout layers
● Hyperparameter optimization: a mixture of domain expertise and grid search (brute force)
SIGOPT + NEON
http://arxiv.org/pdf/1412.6806.pdf
COMPARATIVE PERFORMANCE
● Expert baseline: 0.8995
○ (using neon)
● SigOpt best: 0.9011
○ 1.6% relative reduction in error rate
○ No expert time wasted in tuning
SIGOPT + NEON
http://arxiv.org/pdf/1512.03385v1.pdf
● Explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions
● Variable depth
● Hyperparameter optimization: a mixture of domain expertise and grid search (brute force)
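Concretely, a residual block from the cited paper learns a residual function F and adds the block input back to its output:

```latex
y = \mathcal{F}(x, \{W_i\}) + x
```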
COMPARATIVE PERFORMANCE
● Expert baseline: 0.9339
○ (from paper)
● SigOpt best: 0.9436
○ 15% relative error rate reduction
○ No expert time wasted in tuning
SIGOPT SERVICE
OPTIMIZATION FEEDBACK LOOP
[Diagram: the same optimization feedback loop as before, with the SigOpt REST API returning new configurations for the ML / AI model trained and cross-validated on training and testing data.]
SIMPLIFIED OPTIMIZATION
Client Libraries
● Python
● Java
● R
● Matlab
● And more...
Framework Integrations
● TensorFlow
● scikit-learn
● xgboost
● Keras
● Neon
● And more...
Live Demo
DISTRIBUTED TRAINING
● SigOpt serves as a distributed scheduler for training models across workers
● Workers access the SigOpt API for the latest parameters to try for each model
● Enables easy distributed training of non-distributed algorithms across any number of models
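A minimal sketch of the per-worker loop under this scheme; train_model and the experiment ID are placeholders, and every worker runs the same code against a shared experiment:

```python
# Each worker independently asks SigOpt for the next configuration,
# trains one model with it, and reports the metric back (illustrative).
from sigopt import Connection

def worker_loop(experiment_id, budget):
    conn = Connection(client_token="YOUR_API_TOKEN")
    for _ in range(budget):
        suggestion = conn.experiments(experiment_id).suggestions().create()
        metric = train_model(suggestion.assignments)  # placeholder training call
        conn.experiments(experiment_id).observations().create(
            suggestion=suggestion.id,
            value=metric,
        )
```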
https://sigopt.com/getstarted
Try it yourself!
Questions?
contact@sigopt.com
https://sigopt.com
@SigOpt

MLconf 2017 Seattle Lunch Talk - Using Optimal Learning to tune Deep Learning / AI Models