IoT - Lecture 11
Internet of Things
The slides are made by Andreas Karagounis and John Ramey,
adapted by Tan Le
source: https://www.zmescience.com/science/news-science/self-driving-car-how-it-sees/
source: http://www.businessinsider.com/how-googles-self-driving-cars-see-the-world-2015-10
source: https://gfycat.com/gifs/detail/AntiqueFavoriteJackal
source: https://research.googleblog.com/2016/10/supercharging-style-transfer.html
source: https://deepdreamgenerator.com/
source: http://barabeke.it/portfolio/deep-dream-generator
source: https://deepdreamgenerator.com/
source: https://nicolovaligi.com/deep-learning-models-semantic-segmentation.html
source: https://www.suasnews.com/2017/08/yolo-open-source-real-time-image-recognition/
source: http://68.media.tumblr.com/b813b4afff4b1bc74dd553158fe4aca5/tumblr_oiuv3zLdu51qav3uso4_r1_540.gif
What we will accomplish in this hour:
source: https://cdn-images-1.medium.com/max/2000/1*bvXWvdDMk8FTS1WWd9_JEg.jpeg
Our 506 samples
● Data we are using to build our model (zero indexed)
● 13 features: crime, age, number of rooms, etc.
● 1 label: the continuous value we are trying to predict, the price of the home (median value in $1000’s)
Loading our data and splitting into training, validation and test
(x_train, y_train), (x_val, y_val), (x_test, y_test) = np.load('housing.npy', allow_pickle=True)
Final stage
● Out of our 506 samples we will allocate 350 to training, 50 to validation, and 106 to testing
● It is very important in machine learning to keep separate data that we don’t learn from, so that we can make sure what we’ve learned actually generalizes!
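The 350/50/106 split above can be sketched with plain array slicing; the arrays here are placeholders standing in for the real housing data (only the shapes are assumed):

```python
import numpy as np

# Placeholder data with the same shapes as the housing set:
# 506 samples, 13 features, 1 continuous label per sample.
x = np.zeros((506, 13))
y = np.zeros(506)

# Slice into the three disjoint subsets described above.
x_train, y_train = x[:350], y[:350]
x_val, y_val = x[350:400], y[350:400]
x_test, y_test = x[400:], y[400:]
```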
Preprocessing stage: Always normalize your data!
source: http://cs231n.github.io/neural-networks-2/
● Normalization of a dataset is a common requirement for many machine learning estimators: they might behave badly if the individual features do not look more or less like standard normally distributed data
● Z-score normalization is the most popular method
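A minimal sketch of z-score normalization: the mean and standard deviation are computed on the training set only, then applied to every split (the function name is our own, not from any library):

```python
import numpy as np

def z_score_normalize(x_train, *others):
    """Normalize x_train and any other splits using x_train's statistics."""
    mean = x_train.mean(axis=0)
    std = x_train.std(axis=0)
    # Apply the *training* statistics to every split, so validation and
    # test data are transformed consistently without leaking information.
    return [(x - mean) / std for x in (x_train, *others)]
```

After normalization, each training-set feature has mean 0 and standard deviation 1.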
Linear Regression
What is simple linear regression? What is multiple linear regression?
source: http://sphweb.bumc.bu.edu/otlt/MPH-Modules/BS/R/R5_Correlation-Regression/R5_Correlation-Regression_print.html
source: https://sebastianraschka.com/faq/docs/closed-form-vs-gd.html
We want to minimize these vertical offsets
● We want to minimize the following cost function: the mean of the squared vertical offsets, as a function of the learned weights
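The cost being minimized can be written as a short function: the mean of the squared vertical offsets (residuals) between the line's predictions and the targets. This is a sketch under the usual linear-regression setup, with weights `w` and intercept `b`:

```python
import numpy as np

def mse_cost(w, b, x, y):
    """Mean squared error of the linear model x @ w + b against targets y."""
    residuals = x @ w + b - y   # the vertical offsets in the figure
    return np.mean(residuals ** 2)
```

When the line fits the data exactly, every offset is zero and the cost is zero; any deviation increases the cost quadratically.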
What are neurons in a neural network
source: http://cs231n.github.io/neural-networks-1/
Example of an activation function
source: https://ayearofai.com/rohan-4-the-vanishing-gradient-problem-ec68f76ffb9b
Ok, let’s define our model in Keras
first_model = Sequential()
first_model.add(Dense(13, input_dim=13, kernel_initializer='random_uniform', activation='relu'))
first_model.add(Dense(1, kernel_initializer='random_uniform'))
In the first Dense layer: 13 is the number of neurons, input_dim=13 is the number of features, 'relu' is the activation function, and 'random_uniform' randomly initializes the weights.
What does our model even look like?
Input layer
Output layer
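To make the architecture concrete, here is an illustrative numpy forward pass for the same 13 → 13 (ReLU) → 1 network; the weight shapes mirror the two Dense layers above (the initialization range is an assumption for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.uniform(-0.05, 0.05, (13, 13))  # hidden layer: 13 features -> 13 neurons
b1 = np.zeros(13)
W2 = rng.uniform(-0.05, 0.05, (13, 1))   # output layer: 13 neurons -> 1 value
b2 = np.zeros(1)

def forward(x):
    hidden = np.maximum(0.0, x @ W1 + b1)  # ReLU activation
    return hidden @ W2 + b2                # linear output for regression
```

Note the output layer has no activation: for regression we want an unbounded real-valued prediction.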
Defining an optimizer and compiling our model
Learning rate
sgd = SGD(lr=0.03)
first_model.compile(loss='mean_squared_error', optimizer=sgd)
What is SGD?
● SGD (Stochastic Gradient Descent) is a variant of Gradient Descent
Learning rate
source: https://sebastianraschka.com/faq/docs/closed-form-vs-gd.html
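One SGD update can be sketched for the linear-regression case: take a single sample, compute the gradient of its squared error, and step the weights by the learning rate (the function name is our own illustration, not a library call):

```python
import numpy as np

def sgd_step(w, b, xi, yi, lr=0.03):
    """One stochastic gradient descent step on a single sample (xi, yi)."""
    error = xi @ w + b - yi       # prediction error for this one sample
    w = w - lr * 2.0 * error * xi  # gradient of error**2 w.r.t. w
    b = b - lr * 2.0 * error       # gradient of error**2 w.r.t. b
    return w, b
```

Unlike batch gradient descent, each step uses only one sample (or a small batch), which makes updates cheap and noisy; the noise is often helpful for escaping shallow local minima.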
Training and testing our model
first_model.fit(x_train, y_train, batch_size=5, validation_data=(x_val, y_val), epochs=30)
test_score = first_model.evaluate(x_test, y_test)
source: https://www.codeproject.com/KB/dotnet/predictor/learn.gif
source: http://knowyourmeme.com/memes/we-need-to-go-deeper
Jokes aside this is what we do!
We tune the hyperparameters:
Test MSE
Training MSE
source: http://www-bcf.usc.edu/~gareth/ISL/
Add regularization: dropout!
regularized_model.add(Dropout(0.5))
source: https://medium.com/@amarbudhiraja/https-medium-com-amarbudhiraja-learning-less-to-learn-better-dropout-in-deep-machine-learning-74334da4bfc5
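The mechanism behind `Dropout(0.5)` can be sketched in a few lines of numpy ("inverted" dropout, which is what Keras implements; the function itself is our own illustration): at training time each unit is zeroed with probability `rate`, and the survivors are scaled by 1/(1-rate) so the expected activation is unchanged.

```python
import numpy as np

def dropout(h, rate=0.5, training=True, rng=None):
    """Inverted dropout on activations h."""
    if not training:
        return h  # at test time dropout is a no-op
    rng = rng if rng is not None else np.random.default_rng(0)
    mask = rng.random(h.shape) >= rate        # keep each unit with prob 1-rate
    return h * mask / (1.0 - rate)            # rescale survivors
```

Because each forward pass sees a different random subset of units, no single neuron can be relied on exclusively, which discourages co-adaptation and reduces overfitting.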
Overfitting in our model
Why is deep learning all the hype?