This document discusses using SigOpt to tune deep learning models. It notes that tuning deep learning systems with traditional grid search or random search is non-intuitive and expert-intensive. SigOpt offers a more efficient approach: it uses Bayesian optimization to suggest promising hyperparameter configurations based on the results of previous trials, reducing wasted expert time and computation. The document provides examples of applying SigOpt to tune convolutional neural networks on CIFAR10, demonstrating a 1.6% reduction in error rate over expert tuning with no wasted trials.
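
In practice, the suggest/observe workflow described above might look like the following minimal sketch. It assumes the classic SigOpt Python client (the `sigopt` package with its `Connection` API); the API token, parameter names and ranges, budget, and the `train_and_evaluate` helper are illustrative placeholders, not values from the document.

```python
# Minimal sketch of SigOpt's suggest/observe loop for CNN hyperparameter tuning.
# Assumes the classic SigOpt Python client (`pip install sigopt`); the token,
# parameter ranges, and train_and_evaluate() helper are hypothetical placeholders.
from sigopt import Connection

conn = Connection(client_token="YOUR_SIGOPT_API_TOKEN")

# Define the hyperparameters to tune and an evaluation budget.
experiment = conn.experiments().create(
    name="CIFAR10 CNN tuning",
    parameters=[
        dict(name="log_learning_rate", type="double", bounds=dict(min=-6.0, max=-1.0)),
        dict(name="momentum", type="double", bounds=dict(min=0.5, max=0.99)),
        dict(name="batch_size", type="int", bounds=dict(min=32, max=256)),
    ],
    observation_budget=60,
)

def train_and_evaluate(assignments):
    """Hypothetical helper: train the CNN on CIFAR10 with the suggested
    hyperparameters and return validation accuracy."""
    raise NotImplementedError

# Suggest/observe loop: SigOpt proposes the next configuration to try,
# and each reported result informs the next suggestion.
for _ in range(experiment.observation_budget):
    suggestion = conn.experiments(experiment.id).suggestions().create()
    accuracy = train_and_evaluate(suggestion.assignments)
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id,
        value=accuracy,
    )

# Retrieve the best observed configuration once the budget is exhausted.
best = conn.experiments(experiment.id).best_assignments().fetch()
```

Because each new suggestion conditions on all previous observations, the loop focuses its budget on promising regions of the hyperparameter space rather than evaluating a fixed grid or random sample.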