Workshop on TensorFlow usage for AI Ukraine 2016. A brief tutorial with source code examples. It describes TensorFlow's main ideas, terms, and parameters. The example covers a linear neuron model trained with the Adam optimization algorithm.
This document provides an overview and introduction to TensorFlow. It describes that TensorFlow is an open source software library for numerical computation using data flow graphs. The graphs are composed of nodes, which are operations on data, and edges, which are multidimensional data arrays (tensors) passing between operations. It also provides pros and cons of TensorFlow and describes higher level APIs, requirements and installation, program structure, tensors, variables, operations, and other key concepts.
Introduction to Machine Learning with TensorFlow, by Paolo Tomeo
This document introduces TensorFlow, an open source machine learning library for deep learning. It discusses how TensorFlow uses data flow graphs to optimize objective functions and allows computation across CPU and GPU devices. It provides an example of classifying the Iris dataset using TensorFlow's high-level tf.contrib.learn API. It concludes with pointers to additional TensorFlow tutorials and guides.
Introduction to Deep Learning, Keras, and TensorFlow, by Sri Ambati
This meetup was recorded in San Francisco on Jan 9, 2019.
Video recording of the session can be viewed here: https://youtu.be/yG1UJEzpJ64
Description:
This fast-paced session starts with a simple yet complete neural network (no frameworks), followed by an overview of activation functions, cost functions, backpropagation, and then a quick dive into CNNs. Next, we'll create a neural network using Keras, followed by an introduction to TensorFlow and TensorBoard. For best results, familiarity with basic vectors and matrices, inner (aka "dot") products of vectors, and rudimentary Python is definitely helpful. If time permits, we'll look at the UAT, CLT, and the Fixed Point Theorem. (Bonus points if you know Zorn's Lemma, the Well-Ordering Theorem, and the Axiom of Choice.)
Oswald's Bio:
Oswald Campesato is an education junkie: a former Ph.D. Candidate in Mathematics (ABD), with multiple Master's and 2 Bachelor's degrees. In a previous career, he worked in South America, Italy, and the French Riviera, which enabled him to travel to 70 countries throughout the world.
He has worked in American and Japanese corporations and start-ups, as C/C++ and Java developer to CTO. He works in the web and mobile space, conducts training sessions in Android, Java, Angular 2, and ReactJS, and he writes graphics code for fun. He's comfortable in four languages and aspires to become proficient in Japanese, ideally sometime in the next two decades. He enjoys collaborating with people who share his passion for learning the latest cool stuff, and he's currently working on his 15th book, which is about Angular 2.
The document describes how to use TensorBoard, TensorFlow's visualization tool. It outlines 5 steps: 1) annotate nodes in the TensorFlow graph to visualize, 2) merge summaries, 3) create a writer, 4) run the merged summary and write it, 5) launch TensorBoard pointing to the log directory. TensorBoard can visualize the TensorFlow graph, plot metrics over time, and show additional data like histograms and scalars.
TensorFlow is an open source neural network library for Python and C++. It defines data flows as graphs with nodes representing operations and edges representing multidimensional data arrays called tensors. It supports supervised learning algorithms like gradient descent to minimize cost functions. TensorFlow automatically computes gradients so the user only needs to define the network structure, cost function, and optimization algorithm. An example shows training various neural network models on the MNIST handwritten digit dataset, achieving up to 99.2% accuracy. TensorFlow can implement other models like recurrent neural networks and is a simple yet powerful framework for neural networks.
Introduction To TensorFlow | Deep Learning Using TensorFlow, by CloudxLab
This document provides instructions for getting started with TensorFlow using a free CloudxLab. It outlines the following steps:
1. Open CloudxLab and enroll if not already enrolled. Otherwise go to "My Lab".
2. In "My Lab", open Jupyter and run commands to clone an ML repository containing TensorFlow examples.
3. Go to the deep learning folder in Jupyter and open the TensorFlow notebook to get started with examples.
An introduction to Google's AI engine, with a deeper look into artificial neural networks and machine learning. It shows how even the simplest neural network can be codified and used for data analytics.
These slides explain how convolutional neural networks can be coded using Google TensorFlow.
Video available at : https://www.youtube.com/watch?v=EoysuTMmmMc
The release of TensorFlow 2.0 comes with a significant number of improvements over its 1.x version, all with a focus on ease of use and a better user experience. We will give an overview of what TensorFlow 2.0 is and discuss how to get started building models from scratch using TensorFlow 2.0’s high-level API, Keras. We will walk through an example step-by-step in Python of how to build an image classifier. We will then showcase how to leverage transfer learning to make building a model even easier! With transfer learning, we can leverage models pretrained on datasets such as ImageNet to drastically speed up the training time of our model. TensorFlow 2.0 makes this incredibly simple to do.
Rajat Monga at AI Frontiers: Deep Learning with TensorFlow, by AI Frontiers
In this talk at the AI Frontiers Conference, Rajat Monga shares how TensorFlow has enabled cutting-edge machine learning research at the top AI labs in the world, while at the same time making the technology accessible to a large audience, leading to some amazing uses. TensorFlow is used for classification, recommendation, text parsing, sentiment analysis and more. This talk goes over the design that makes it fast, flexible, and easy to use, and describes how the team continues to make it better.
An introductory presentation covered key concepts in deep learning including neural networks, activation functions, cost functions, and optimization methods. Popular deep learning frameworks TensorFlow and tensorflow.js were discussed. Common deep learning architectures like convolutional neural networks and generative adversarial networks were explained. Examples and code snippets in Python demonstrated fundamental deep learning concepts.
Introduction to Deep Learning, Keras, and Tensorflow, by Oswald Campesato
A fast-paced introduction to Deep Learning concepts, such as activation functions, cost functions, back propagation, and then a quick dive into CNNs. Basic knowledge of vectors, matrices, and derivatives is helpful in order to derive the maximum benefit from this session. Then we'll see how to create a Convolutional Neural Network in Keras, followed by a quick introduction to TensorFlow and TensorBoard.
Nick McClure gave an introduction to neural networks using Tensorflow. He explained the basic unit of neural networks as operational gates and how multiple gates can be combined. He discussed loss functions, learning rates, and activation functions. McClure also covered convolutional neural networks, recurrent neural networks, and applications such as image captioning and style transfer. He concluded by discussing resources for staying up to date with advances in machine learning.
Abstract: This PDSG workshop introduces basic concepts of TensorFlow. The course covers fundamentals. Concepts covered are Vectors/Matrices, Design & Run, Constants, Operations, Placeholders, Bindings, Operators, Loss Function, and Training.
Level: Fundamental
Requirements: Some basic programming knowledge is preferred. No prior statistics background is required.
Introduction to TensorFlow, by Machine Learning at Berkeley (Ted Xiao)
A workshop introducing the TensorFlow Machine Learning framework. Presented by Brenton Chu, Vice President of Machine Learning at Berkeley.
This presentation covers how to construct, train, evaluate, and visualize neural networks in TensorFlow 1.0.
http://ml.berkeley.edu
The document outlines a session agenda covering TensorFlow sessions and merging summary data.
Try to imagine the amount of time and effort it would take you to write a bug-free script or application that will accept a URL, port scan it, and for each HTTP service that it finds, create a new thread and perform black box penetration testing while impersonating a BlackBerry 9900 smartphone. While you’re thinking, here’s how you would have done it in Hackersh:
“http://localhost” \
-> url \
-> nmap \
-> browse(ua=”Mozilla/5.0 (BlackBerry; U; BlackBerry 9900; en) AppleWebKit/534.11+ (KHTML, like Gecko) Version/7.1.0.346 Mobile Safari/534.11+”) \
-> w3af
Meet Hackersh (“Hacker Shell”) – A new, free and open source cross-platform shell (command interpreter) with built-in security commands and Pythonect-like syntax.
Aside from being interactive, Hackersh is also scriptable with Pythonect. Pythonect is a new, free, and open source general-purpose dataflow programming language based on Python, written in Python. Hackersh is inspired by Unix pipeline, but takes it a step forward by including built-in features like remote invocation and threads. This 120 minute lab session will introduce Hackersh, the automation gap it fills, and its features. Lots of demonstrations and scripts are included to showcase concepts and ideas.
This session for beginners introduces tf.data APIs for creating data pipelines by combining various "lazy operators" in tf.data, such as filter(), map(), batch(), zip(), flatmap(), take(), and so forth.
Familiarity with method chaining and TF2 is helpful (but not required). If you are comfortable with FRP, the code samples in this session will be very familiar to you.
- TensorFlow is Google's open source machine learning library for developing and training neural networks and deep learning models. It operates using data flow graphs to represent computation.
- TensorFlow can be used across many platforms including data centers, CPUs, GPUs, mobile phones, and IoT devices. It is widely used at Google across many products and research areas involving machine learning.
- The TensorFlow library is used along with higher level tools in Google's machine learning platform including TensorFlow Cloud, Machine Learning APIs, and Cloud Machine Learning Platform to make machine learning more accessible and scalable.
A fast-paced introduction to Deep Learning concepts, such as activation functions, cost functions, back propagation, and then a quick dive into CNNs, followed by a Keras code sample for defining a CNN. Basic knowledge of vectors, matrices, and derivatives is helpful in order to derive the maximum benefit from this session. Then we'll see a short introduction to TensorFlow 1.x and some insights into TF 2 that will be released some time this year.
A fast-paced introduction to Deep Learning that starts with a simple yet complete neural network (no frameworks), followed by an overview of activation functions, cost functions, backpropagation, and then a quick dive into CNNs. Next we'll create a neural network using Keras, followed by an introduction to TensorFlow and TensorBoard. For best results, familiarity with basic vectors and matrices, inner (aka "dot") products of vectors, and rudimentary Python is definitely helpful.
This document outlines an agenda for a DEV SUMMIT on TensorFlow for Windows. It discusses the required software tools, how to install Python and TensorFlow via pip, and demonstrates a simple "Hello World" TensorFlow program. It then describes an image recognition demo using TensorFlow's Inception model to classify images into 1000 categories and provides instructions for running the demo. Contact information is given at the end for following up.
Natural language processing open seminar for TensorFlow usage, by Hyunyoung Lee
This is a presentation for the Natural Language Processing open seminar at Kookmin University.
The open seminar reference : https://cafe.naver.com/nlpk
My presentation on how to use TensorFlow, given at the NLP open seminar and aimed at TensorFlow newcomers.
This document provides an overview and introduction to TensorFlow 2. It discusses major changes from TensorFlow 1.x like eager execution and tf.function decorator. It covers working with tensors, arrays, datasets, and loops in TensorFlow 2. It also demonstrates common operations like arithmetic, reshaping and normalization. Finally, it briefly introduces working with Keras and neural networks in TensorFlow 2.
TensorFlow Tutorial | Deep Learning Using TensorFlow | TensorFlow Tutorial Py..., by Edureka!
This Edureka TensorFlow Tutorial (Blog: https://goo.gl/HTE7uB) will help you in understanding various important basics of TensorFlow. It also includes a use-case in which we will create a model that will differentiate between a rock and a mine using TensorFlow. Below are the topics covered in this tutorial:
1. What are Tensors?
2. What is TensorFlow?
3. TensorFlow Code-basics
4. Graph Visualization
5. TensorFlow Data structures
6. Use-Case Naval Mine Identifier (NMI)
Introduction to Convolutional Neural Network with TensorFlow, by Etsuji Nakai
Explains the basic mechanism of the Convolutional Neural Network with sample TensorFlow code.
Sample codes: https://github.com/enakai00/cnn_introduction
TensorFlow Tutorial given by Dr. Chung-Cheng Chiu at Google Brain on Dec. 29, 2015
http://datasci.tw/event/google_deep_learning
TensorFlow & TensorFrames w/ Apache Spark, presented by Marco Saviano. It discusses numerical computing with Apache Spark and Google TensorFlow. TensorFrames allows manipulating Spark DataFrames with TensorFlow programs. It provides most operations in row-based and block-based versions. The row-based version processes rows individually, while the block-based version processes blocks of rows together for better efficiency. Reduction operations coalesce rows until one row remains. Future work may improve communication between Spark and TensorFlow through direct memory copying and using columnar storage formats.
TensorFlow is an open source software library for machine learning developed by Google. It provides primitives for defining functions on tensors and automatically computing their derivatives. TensorFlow represents computations as data flow graphs with nodes representing operations and edges representing tensors. It is widely used for neural networks and deep learning tasks like image classification, language processing, and speech recognition. TensorFlow is portable, scalable, and has a large community and support for deployment compared to other frameworks. It works by constructing a computational graph during modeling, and then executing operations by pushing data through the graph.
Large Scale Deep Learning with TensorFlow, by Jen Aman
Large-scale deep learning with TensorFlow allows storing and performing computation on large datasets to develop computer systems that can understand data. Deep learning models like neural networks are loosely based on what is known about the brain and become more powerful with more data, larger models, and more computation. At Google, deep learning is being applied across many products and areas, from speech recognition to image understanding to machine translation. TensorFlow provides an open-source software library for machine learning that has been widely adopted both internally at Google and externally.
Introduction To Using TensorFlow & Deep Learning, by Ali Alemi
This document provides an introduction to using TensorFlow. It begins with an overview of TensorFlow and what it is. It then discusses TensorFlow code basics, including building computational graphs and running sessions. It provides examples of using placeholders, constants, and variables. It also gives an example of linear regression using TensorFlow. Finally, it discusses deep learning techniques like convolutional neural networks (CNNs) and recurrent neural networks (RNNs), providing examples of CNNs for image classification. It concludes with an example of using a multi-layer perceptron for MNIST digit classification in TensorFlow.
Title
Hands-on Learning with KubeFlow + Keras/TensorFlow 2.0 + TF Extended (TFX) + Kubernetes + PyTorch + XGBoost + Airflow + MLflow + Spark + Jupyter + TPU
Video
https://youtu.be/vaB4IM6ySD0
Description
In this workshop, we build real-world machine learning pipelines using TensorFlow Extended (TFX), KubeFlow, and Airflow.
Described in the 2017 paper, TFX is used internally by thousands of Google data scientists and engineers across every major product line within Google.
KubeFlow is a modern, end-to-end pipeline orchestration framework that embraces the latest AI best practices including hyper-parameter tuning, distributed model training, and model tracking.
Airflow is the most-widely used pipeline orchestration framework in machine learning.
Pre-requisites
Modern browser - and that's it!
Every attendee will receive a cloud instance
Nothing will be installed on your local laptop
Everything can be downloaded at the end of the workshop
Location
Online Workshop
Agenda
1. Create a Kubernetes cluster
2. Install KubeFlow, Airflow, TFX, and Jupyter
3. Setup ML Training Pipelines with KubeFlow and Airflow
4. Transform Data with TFX Transform
5. Validate Training Data with TFX Data Validation
6. Train Models with Jupyter, Keras/TensorFlow 2.0, PyTorch, XGBoost, and KubeFlow
7. Run a Notebook Directly on Kubernetes Cluster with KubeFlow
8. Analyze Models using TFX Model Analysis and Jupyter
9. Perform Hyper-Parameter Tuning with KubeFlow
10. Select the Best Model using KubeFlow Experiment Tracking
11. Reproduce Model Training with TFX Metadata Store and Pachyderm
12. Deploy the Model to Production with TensorFlow Serving and Istio
13. Save and Download your Workspace
Key Takeaways
Attendees will gain experience training, analyzing, and serving real-world Keras/TensorFlow 2.0 models in production using model frameworks and open-source tools.
Related Links
1. PipelineAI Home: https://pipeline.ai
2. PipelineAI Community Edition: http://community.pipeline.ai
3. PipelineAI GitHub: https://github.com/PipelineAI/pipeline
4. Advanced Spark and TensorFlow Meetup (SF-based, Global Reach): https://www.meetup.com/Advanced-Spark-and-TensorFlow-Meetup
5. YouTube Videos: https://youtube.pipeline.ai
6. SlideShare Presentations: https://slideshare.pipeline.ai
7. Slack Support: https://joinslack.pipeline.ai
8. Web Support and Knowledge Base: https://support.pipeline.ai
9. Email Support: [email protected]
This fast-paced session starts with an introduction to neural networks and linear regression models, along with a quick view of TensorFlow, followed by some Scala APIs for TensorFlow. You'll also see a simple dockerized image of Scala and TensorFlow code and how to execute the code in that image from the command line. No prior knowledge of NNs, Keras, or TensorFlow is required (but you must be comfortable with Scala).
What is TensorFlow? | Introduction to TensorFlow | TensorFlow Tutorial For Be..., by Simplilearn
This presentation on TensorFlow will help you in understanding what exactly is TensorFlow and how it is used in Deep Learning. TensorFlow is a software library developed by Google for the purposes of conducting machine learning and deep neural network research. In this tutorial, you will learn the fundamentals of TensorFlow concepts, functions, and operations required to implement deep learning algorithms and leverage data like never before. This TensorFlow tutorial is ideal for beginners who want to pursue a career in Deep Learning. Now, let us deep dive into this TensorFlow tutorial and understand what TensorFlow actually is and how to use it.
Below topics are explained in this TensorFlow presentation:
1. What is Deep Learning?
2. Top Deep Learning Libraries
3. Why TensorFlow?
4. What is TensorFlow?
5. What are Tensors?
6. What is a Data Flow Graph?
7. Program Elements in TensorFlow
8. Use case implementation using TensorFlow
Simplilearn’s Deep Learning course will transform you into an expert in deep learning techniques using TensorFlow, the open-source software library designed to conduct machine learning & deep neural network research. With our deep learning course, you’ll master deep learning and TensorFlow concepts, learn to implement algorithms, build artificial neural networks and traverse layers of data abstraction to understand the power of data and prepare you for your new role as deep learning scientist.
Why Deep Learning?
It is one of the most popular software platforms used for deep learning and contains powerful tools to help you build and implement artificial neural networks.
You can gain in-depth knowledge of Deep Learning by taking our Deep Learning certification training course. With Simplilearn’s Deep Learning course, you will prepare for a career as a Deep Learning engineer as you master concepts and techniques including supervised and unsupervised learning, mathematical and heuristic aspects, and hands-on modeling to develop algorithms. Those who complete the course will be able to:
1. Understand the concepts of TensorFlow, its main functions, operations and the execution pipeline
2. Implement deep learning algorithms, understand neural networks and traverse the layers of data abstraction which will empower you to understand data like never before
3. Master and comprehend advanced topics such as convolutional neural networks, recurrent neural networks, training deep networks and high-level interfaces
4. Build deep learning models in TensorFlow and interpret the results
5. Understand the language and fundamental concepts of artificial neural networks
6. Troubleshoot and improve deep learning models
7. Build your own deep learning project
8. Differentiate between machine learning, deep learning and artificial intelligence
Learn more at: https://www.simplilearn.com
LIST OF EXPERIMENTS:
1. Implement simple vector addition in TensorFlow.
2. Implement a regression model in Keras.
3. Implement a perceptron in a TensorFlow/Keras environment.
4. Implement a Feed-Forward Network in TensorFlow/Keras.
5. Implement an image classifier using CNN in TensorFlow/Keras.
6. Improve the deep learning model by fine-tuning hyperparameters.
7. Implement the transfer learning concept in image classification.
8. Use a pre-trained model in Keras for transfer learning.
9. Perform sentiment analysis using an RNN.
10. Implement an LSTM-based autoencoder in TensorFlow/Keras.
11. Image generation using a GAN.
ADDITIONAL EXPERIMENTS
12. Train a deep learning model to classify a given image using a pre-trained model.
13. Recommendation system from sales data using Deep Learning.
14. Implement Object detection using CNN.
15. Implement any simple Reinforcement Algorithm for an NLP problem.
The document discusses setting up and using Keras and TensorFlow libraries for machine learning. It provides instructions on installing the libraries, preparing data, defining a model with sequential layers, compiling the model to configure the learning process, training the model on data, and evaluating the trained model on test data. A sample program is included that uses a fashion MNIST dataset to classify images into 10 categories using a simple sequential model.
This document summarizes TensorFlow's APIs, beginning with an overview of the low-level API using computational graphs and sessions. It then discusses higher-level APIs like Keras, TensorFlow Datasets for input pipelines, and Estimators which hide graph and session details. Datasets improve training speed by up to 300% by enabling parallelism. Estimators resemble scikit-learn and separate model definition from training, making code more modular and reusable. The document provides examples of using Datasets and Estimators with TensorFlow.
Slides used at the Tensorflow Belgium meetup titled running Tensorflow in Production https://www.meetup.com/TensorFlow-Belgium/events/252679670/
A fast-paced introduction to Deep Learning concepts, such as activation functions, cost functions, back propagation, and then a quick dive into CNNs. Basic knowledge of vectors, matrices, and derivatives is helpful in order to derive the maximum benefit from this session. Then we'll see a short introduction to TensorFlow and TensorBoard.
This document provides an introduction and overview of TensorFlow, a popular deep learning library developed by Google. It begins with administrative announcements for the class and then discusses key TensorFlow concepts like tensors, variables, placeholders, sessions, and computation graphs. It provides examples comparing TensorFlow and NumPy for common deep learning tasks like linear regression. It also covers best practices for debugging TensorFlow and introduces TensorBoard for visualization. Overall, the document serves as a high-level tutorial for getting started with TensorFlow.
This document provides an overview of running an image classification workload using IBM PowerAI and the MNIST dataset. It discusses deep learning concepts like neural networks and training flows. It then demonstrates how to set up TensorFlow on an IBM PowerAI trial server, load the MNIST dataset, build and train a basic neural network model for image classification, and evaluate the trained model's accuracy on test data.
The document discusses TensorFlow and machine learning algorithms. It covers defining and running TensorFlow operations like addition and multiplication. TensorFlow can efficiently evaluate operations during execution time. It also discusses linear regression, data generation, model construction, and learning stages when training regression and neural network models on automatically generated data. Hidden layers are mentioned when discussing predicting sin(x) functions with neural networks.
"A fast-paced introduction to Deep Learning (DL) concepts, such as neural networks, back propagation, activation functions, and CNNs. We'll also look at JavaScript-based toolkits (such as TensorFire and deeplearning.js) that leverage the power of WebGL. Basic knowledge of elementary calculus (e.g., derivatives) is recommended in order to derive the maximum benefit from this session.
Tensorflow in practice by Engineer - Donghwi Cha
- This is an introduction to the machine learning framework Tensorflow, covering key concepts like computation graphs, operations, sessions, training, replication, and clustering.
- Key aspects discussed include how Tensorflow executes operations as a static computation graph, uses sessions to run graphs and tensors to hold values, and supports data parallelism through replication across devices/workers.
- The document provides examples of building neural network models in Tensorflow and discusses techniques for training models like backpropagation and distributing training using data parallelism.
Overview of TensorFlow For Natural Language Processing, by Ananth
TensorFlow open sourced recently by Google is one of the key frameworks that support development of deep learning architectures. In this slideset, part 1, we get started with a few basic primitives of TensorFlow. We will also discuss when and when not to use TensorFlow.
A fast-paced introduction to TensorFlow 2 about some important new features (such as generators and the @tf.function decorator) and TF 1.x functionality that's been removed from TF 2 (yes, tf.Session() has retired).
Some concise code samples are presented to illustrate how to use new features of TensorFlow 2.
TensorFlow is an open-source machine learning framework developed by Google. It provides tools for performing numerical computation and defining, training, and evaluating machine learning models. TensorFlow's flexible architecture allows models to be deployed on CPUs, GPUs, and TPUs. It has a large ecosystem of tools and an active community. The TensorFlow architecture consists of a backend for efficient computation and a frontend Python API for building models. Key program elements include operations, graphs, sessions, tensors, variables, and placeholders.
A Tale of Three Deep Learning Frameworks: TensorFlow, Keras, & PyTorch with B..., by Databricks
We all know what they say – the bigger the data, the better. But when the data gets really big, how do you mine it and what deep learning framework to use? This talk will survey, with a developer’s perspective, three of the most popular deep learning frameworks—TensorFlow, Keras, and PyTorch—as well as when to use their distributed implementations.
We’ll compare code samples from each framework and discuss their integration with distributed computing engines such as Apache Spark (which can handle massive amounts of data) as well as help you answer questions such as:
As a developer how do I pick the right deep learning framework?
Do I want to develop my own model or should I employ an existing one?
How do I strike a trade-off between productivity and control through low-level APIs?
What language should I choose?
In this session, we will explore how to build a deep learning application with Tensorflow, Keras, or PyTorch in under 30 minutes. After this session, you will walk away with the confidence to evaluate which framework is best for you.
2. Motivation
1. Data and model parallelism
2. TensorBoard for visualization
3. Computational graph abstraction
4. Python + Numpy
5. Great documentation and examples
6. More than a deep learning framework
+ The concept of a ‘Python front-end’ over a high-performance backend is now trending
TinyFlow
http://dmlc.ml/2016/09/30/build-your-own-tensorflow-with-nnvm-and-torch.html
The syntax (‘frontend’) looks like TensorFlow, but the execution backend is … Torch!
NNVM is inspired by LLVM…
It provides ways to construct, represent and transform computation graphs independently of how they are executed.
3. TensorFlow basic concepts
A TensorFlow computation is described by a directed graph, which is composed of a set of nodes.
Library users construct a computational graph using one of the supported frontend languages (C++ or Python).
In a TensorFlow graph, each node has zero or more inputs and zero or more outputs, and represents the instantiation of an operation.
Values that flow along normal edges in the graph (from outputs to inputs) are tensors - arbitrary dimensionality arrays whose underlying element type is specified or inferred at graph-construction time.
Special edges, called control dependencies, can also exist in the graph.
http://download.tensorflow.org/paper/whitepaper2015.pdf
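To make these terms concrete, here is a minimal sketch (using the TF 0.x/1.x-style API that these slides are based on; tf.mul was later renamed tf.multiply) in which each node is an operation and each edge carries a tensor:

import tensorflow as tf

# Nodes with zero inputs and one output: constant ops
a = tf.constant(2.0, name="a")
b = tf.constant(3.0, name="b")

# Nodes with inputs and outputs; the edges a->c and b->c carry tensors
c = tf.add(a, b, name="c")
d = tf.mul(c, b, name="d")   # d depends on c, so c is executed first

# Graph construction only describes the computation; it runs inside a session
with tf.Session() as sess:
    print(sess.run(d))       # (2 + 3) * 3 = 15.0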
4. TensorFlow architecture
(Architecture diagram) Frontends: Python, C++, … → TensorFlow core execution language → Devices: CPU, GPU, Android, …
The client uses the session interface to communicate with the master and one or more worker processes.
Each worker process is responsible for arbitrating access to one or more computational devices (such as CPU cores and GPU cards).
6. TensorFlow basic concepts. Tensor
A Tensor is a typed multi-dimensional array. For example, a 4-D array of floating point numbers representing a mini-batch of images with dimensions [batch, height, width, channel].
In a launched graph: the type of the data that flows between nodes.
In the Python API: the class tf.Tensor is used to represent the outputs and inputs of ops added to the graph. Instances of this class do not hold data.
In the C++ API: the class tensorflow::Tensor is used to represent tensors returned from a Session::Run() call. Instances of this class hold data.
https://www.tensorflow.org/versions/r0.9/resources/glossary.html#glossary
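A short sketch of this distinction (same API assumptions as above): the tf.Tensor built at graph-construction time carries only a dtype and a shape, while the object returned by Session.run() is a plain NumPy array holding the data.

import tensorflow as tf

t = tf.zeros((2, 3))              # a tf.Tensor: no data yet, only dtype and shape
print(t.get_shape())              # (2, 3)

with tf.Session() as sess:
    value = sess.run(t)           # now the data exists, as a numpy.ndarray
    print(type(value), value)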
7. TensorFlow basic concepts. Operations
An operation has a name and represents an abstract computation (e.g., “matrix multiply” or “add”). An operation can have attributes.
•One common use of attributes is to make operations polymorphic over different tensor element types.
A kernel is a particular implementation of an operation that can be run on a particular type of device (e.g., CPU or GPU).
A TensorFlow binary defines the set of operations and kernels available via a registration mechanism, and this set can be extended by linking in additional operation and/or kernel definitions/registrations.
http://download.tensorflow.org/paper/whitepaper2015.pdf
Variable is a special kind of operation that returns a handle to a persistent mutable tensor that survives across executions of a graph.
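The persistence of a Variable across executions can be shown with a small sketch (an assumed example, using tf.initialize_all_variables as elsewhere in these slides):

import tensorflow as tf

counter = tf.Variable(0, name="counter")        # returns a handle to a mutable tensor
increment = tf.assign(counter, counter + 1)     # assign is itself an operation in the graph

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    for _ in range(3):
        print(sess.run(increment))              # 1, 2, 3 - the value survives between runs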
9. TensorFlow. Execution of computation graph
Single-device execution (for example, only one CPU core is available for computation):
The nodes of the graph are executed in an order that respects the dependencies between nodes.
Multi-device execution:
•Select a device to place the computation for each node in the graph
•Manage the required communication of data across device boundaries implied by these placement decisions
http://download.tensorflow.org/paper/whitepaper2015.pdf
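Node placement can also be constrained explicitly from the client. A sketch (it assumes a GPU is visible; allow_soft_placement falls back to the CPU otherwise):

import tensorflow as tf

with tf.device("/cpu:0"):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]], name="a")   # pinned to the CPU

with tf.device("/gpu:0"):
    b = tf.matmul(a, a, name="b")                          # pinned to the first GPU

# log_device_placement prints the device chosen for every node
config = tf.ConfigProto(log_device_placement=True, allow_soft_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(b))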
12. TensorFlow. Extensions
•Automatic Differentiation – Automatically computes gradients for data flow graphs.
•Partial Execution – Allows TensorFlow clients to execute a subgraph of the entire execution graph.
•Device Constraints – Allows TensorFlow clients to control the placement of nodes on a device.
•Control Flow – Enables support for conditionals and loops in data flow graphs.
•Input Operations – Facilitate efficient loading of data into large scale models from the storage system.
•Queues – Allow different portions of the graph to execute asynchronously and to hand off data through Enqueue and Dequeue operations. Enqueue and Dequeue operations are blocking.
•Containers – The mechanism within TensorFlow for managing longer-lived mutable state.
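Partial execution in particular is easy to demonstrate: a run call may fetch any node and may feed a value for an intermediate tensor, so only the required subgraph is executed. A minimal sketch:

import tensorflow as tf

a = tf.constant(2.0)
b = tf.constant(3.0)
c = a * b          # intermediate node
d = c + 10.0
e = d * 2.0

with tf.Session() as sess:
    print(sess.run(d))                      # fetch only d: node e is never executed
    print(sess.run(e, feed_dict={c: 5.0}))  # feed c directly: a and b are not executed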
13. TensorFlow. Session
A Session object encapsulates the environment in which Tensor objects are evaluated - TensorFlow Docs
import tensorflow as tf
a = tf.constant(5.0)
b = tf.constant(3.0)
c = a + b
with tf.Session() as sess:
    print(sess.run(c))  # print(c.eval()) will do the same (for the currently opened session)
tf.InteractiveSession() is just a convenient synonym for keeping a default session open in IPython.
sess.run(c) is an example of a TensorFlow Fetch.
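For comparison, the same computation with an interactive session and with several tensors fetched in one run call (a small sketch under the same API assumptions):

import tensorflow as tf

sess = tf.InteractiveSession()   # installs itself as the default session
a = tf.constant(5.0)
b = tf.constant(3.0)
c = a + b

print(c.eval())                  # evaluated in the default interactive session
print(sess.run([a, b, c]))       # one Fetch of several tensors: [5.0, 3.0, 8.0]
sess.close()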
14. TensorFlow. Variable
Variables are in-memory buffers containing tensors. They must be explicitly initialized and can be saved to disk during and after training. - TensorFlow Docs
import tensorflow as tf
weights = tf.Variable(tf.random_normal([100, 150], stddev=0.5), name="weights")
biases = tf.Variable(tf.zeros([150]), name="biases")
#https://www.tensorflow.org/versions/r0.11/api_docs/python/constant_op.html#random_normal
# Pin a variable to GPU.
with tf.device("/gpu:0"):
v = tf.Variable(...)
# Pin a variable to a particular parameter server task.
with tf.device("/job:ps/task:7"):
v = tf.Variable(...)
(Diagram: a Variable node holds its state; an assign op connects it to an initializer op such as zeros or random_normal…)
15. TensorFlow. Variable
Variable initializers must be run explicitly before other ops in your model can be run. The easiest way to do that is to add an op that runs all the variable initializers, and run that op before using the model. - TensorFlow Docs
init_op = tf.initialize_all_variables()
saver = tf.train.Saver()
# Later, when launching the model
with tf.Session() as sess:
# Run the init operation.
sess.run(init_op)
...
# Use the model
…
# Save the variables to disk.
save_path = saver.save(sess, "/tmp/model.ckpt")
print("Model saved in file: %s" % save_path)
Or we can initialize a variable from the value of another variable (which must itself be initialized first):
w2 = tf.Variable(weights.initialized_value(), name="w2")
The tf.train.Saver object has a restore method: saver.restore(sess, "/tmp/model.ckpt")
https://www.tensorflow.org/versions/r0.11/how_tos/variables/index.html
16. TensorFlow. Common syntax examples
Fill arrays with zeros and ones: a = tf.zeros((3,3)), b = tf.ones((3,3))
Sum of an array along axis 1: tf.reduce_sum(a, reduction_indices=[1])
Shape of an array: a.get_shape()
Re-shape an array: tf.reshape(a, (1,9))
Basic arithmetic: a*3 + 2
Matrix multiplication: tf.matmul(c, d)
Element access: a[0,0], a[:,0], a[0,:]
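These fragments can be combined into one small runnable script (a sketch; the shapes are chosen arbitrarily):

import tensorflow as tf

a = tf.zeros((3, 3))
b = tf.ones((3, 3))
row_sums = tf.reduce_sum(b, reduction_indices=[1])   # sum along axis 1 -> shape (3,)
reshaped = tf.reshape(a, (1, 9))                     # 3x3 -> 1x9
arithmetic = a * 3 + 2                               # element-wise arithmetic with broadcasting
product = tf.matmul(a, b)                            # matrix multiplication
element = b[0, 0]                                    # element access

print(a.get_shape())                                 # static shape known at construction: (3, 3)
with tf.Session() as sess:
    print(sess.run([row_sums, reshaped, arithmetic, product, element]))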
17. TensorFlow data input
How can we input external data into TensorFlow?
Simple solution: Import from Numpy:
a = np.zeros((3,3))
ta = tf.convert_to_tensor(a)
Simple, but does not scale
18. TensorFlow data input
Use tf.placeholder variables (dummy nodes that provide entry points for data into the computational graph).
A feed_dict is a Python dictionary mapping from tf.placeholder vars (or their names) to data (numpy arrays, lists, etc.)
Example:
input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
output = tf.mul(input1, input2)
with tf.Session() as sess:
    print(sess.run([output], feed_dict={input1:[6.], input2:[3.]}))
20. TensorFlow namespaces & get_variable
The Variable Scope mechanism in TensorFlow consists of 2 main functions:
tf.get_variable(<name>, <shape>, <initializer>): Creates or returns a variable with a given name.
tf.variable_scope(<scope_name>): Manages namespaces for names passed to tf.get_variable().
tf.get_variable has two cases:
Case 1: the scope is set for creating new variables, as evidenced by tf.get_variable_scope().reuse == False.
Case 2: the scope is set for reusing variables, as evidenced by tf.get_variable_scope().reuse == True.
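A compact sketch of the two cases (the scope and variable names here are illustrative only):

import tensorflow as tf

def linear(x):
    # get_variable creates or reuses "weights" depending on the scope's reuse flag
    w = tf.get_variable("weights", (1, 1),
                        initializer=tf.random_normal_initializer())
    return tf.matmul(x, w)

x = tf.placeholder(tf.float32, shape=(None, 1))

with tf.variable_scope("model"):               # Case 1: reuse == False, variables are created
    y1 = linear(x)

with tf.variable_scope("model", reuse=True):   # Case 2: reuse == True, the same variable is returned
    y2 = linear(x)

# y1 and y2 share the single variable "model/weights"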
24. Example
# TensorFlow is sensitive to shapes, so reshape without changing the data:
# it was (n_samples,), now it should be (n_samples, 1)
X_gen = np.reshape(X_gen, (n_samples,1))
Y_gen = np.reshape(Y_gen, (n_samples,1))
# Preparing placeholders
X = tf.placeholder(tf.float32, shape=(batch_size, 1))
Y = tf.placeholder(tf.float32, shape=(batch_size, 1))
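The snippet above assumes that n_samples, batch_size, X_gen and Y_gen already exist; the slides do not show how the data is generated. One plausible way to create such synthetic linear data (an assumption, not the original workshop code):

import numpy as np

n_samples, batch_size = 1000, 100
steps_number = 20000

# Hypothetical ground truth: y = 2x + 1 plus Gaussian noise
X_gen = np.random.uniform(-1, 1, n_samples)
Y_gen = 2 * X_gen + 1 + np.random.normal(0, 0.2, n_samples)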
25. Example
# Define variables to be learned
with tf.variable_scope("linear-regression"):
k = tf.get_variable("weights", (1, 1),
initializer=tf.random_normal_initializer())
b = tf.get_variable("bias", (1,),
initializer=tf.constant_initializer(0.0))
y_predicted = tf.matmul(X, k) + b
loss = tf.reduce_sum((Y - y_predicted)**2)
26. Example
# Sample code to solve this problem
# Define optimizer properties: the optimization type (minimization) and the target loss
opt_operation = tf.train.AdamOptimizer().minimize(loss)
with tf.Session() as sess:
# Initialize Variables in graph
sess.run(tf.initialize_all_variables())
# Optimization loop for steps_number steps
for i in range(steps_number):
# Select random minibatch
indices = np.random.choice(n_samples, batch_size)
X_batch, y_batch = X_gen[indices], Y_gen[indices]
# Do optimization step
sess.run([opt_operation, loss],
feed_dict={X: X_batch, Y: y_batch})
27. Example
# Gradient descent loop for steps_number steps
for i in range(steps_number):
# Select random minibatch
batch_indices = np.random.choice(n_samples, batch_size)
X_batch, y_batch = X_gen[batch_indices], Y_gen[batch_indices]
# Do optimization step
sess.run([opt_operation, loss],
feed_dict={X: X_batch, Y: y_batch})
(Slide annotations: preparing mini-batches; inside sess.run, data is fed to TensorFlow.)
28. Example
(Recap: the values supplied in feed_dict flow into the placeholders X and Y, then through y_predicted and loss to the variables k and b.)
feed_dict={X: X_batch, Y: y_batch})
y_predicted = tf.matmul(X, k) + b
loss = tf.reduce_sum((Y - y_predicted)**2)
k = tf.get_variable("weights", (1, 1),
initializer=tf.random_normal_initializer())
b = tf.get_variable("bias", (1,),
initializer=tf.constant_initializer(0.0))
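Putting the fragments from slides 24-28 together, an end-to-end sketch looks roughly as follows (the data generation and the constants n_samples, batch_size and steps_number are assumed as above; this is a reconstruction, not the exact workshop source, which is linked in the references):

import numpy as np
import tensorflow as tf

n_samples, batch_size, steps_number = 1000, 100, 20000

# Assumed synthetic data: y = 2x + 1 with noise, already shaped (n_samples, 1)
X_gen = np.random.uniform(-1, 1, (n_samples, 1))
Y_gen = 2 * X_gen + 1 + np.random.normal(0, 0.2, (n_samples, 1))

X = tf.placeholder(tf.float32, shape=(batch_size, 1))
Y = tf.placeholder(tf.float32, shape=(batch_size, 1))

with tf.variable_scope("linear-regression"):
    k = tf.get_variable("weights", (1, 1),
                        initializer=tf.random_normal_initializer())
    b = tf.get_variable("bias", (1,),
                        initializer=tf.constant_initializer(0.0))
    y_predicted = tf.matmul(X, k) + b
    loss = tf.reduce_sum((Y - y_predicted) ** 2)

opt_operation = tf.train.AdamOptimizer().minimize(loss)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    for i in range(steps_number):
        # Select a random minibatch and feed it through the placeholders
        indices = np.random.choice(n_samples, batch_size)
        X_batch, y_batch = X_gen[indices], Y_gen[indices]
        _, loss_value = sess.run([opt_operation, loss],
                                 feed_dict={X: X_batch, Y: y_batch})
    print(sess.run([k, b]))   # should approach the assumed slope 2 and intercept 1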
30. TensorFlow auto-differentiation and gradients
Automatic differentiation computes gradients without user input.
TensorFlow nodes in the computation graph have attached gradient operations.
Backpropagation (using the node-specific gradient ops) is used to compute the required gradients for all variables in the graph.
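A self-contained sketch of the mechanism: tf.gradients attaches the node-specific gradient ops and returns the symbolic derivatives, which is what optimizers such as AdamOptimizer use internally.

import tensorflow as tf

x = tf.Variable(2.0)
y = x ** 2 + 3.0 * x                 # a simple scalar function of one variable

grad = tf.gradients(y, [x])[0]       # symbolic dy/dx built by backpropagation

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    print(sess.run(grad))            # 2*x + 3 = 7.0 at x = 2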
31. TensorFlow
1. TensorFlow has good computational graph visualization.
2. Support from such a huge company as Google is a plus for TensorFlow.
3. TensorFlow has C++ and Python interfaces.
4. TensorFlow has benefits on large computation problems and in distributed, heterogeneous computation environments.
5. TensorFlow is not as good on single-GPU / single-host hardware as Theano/Torch.
6. The TensorFlow base can be extended to a wide range of new hardware.
32. References
1. http://download.tensorflow.org/paper/whitepaper2015.pdf
2. https://www.tensorflow.org/
3. Getting Started with TensorFlow by Giancarlo Zaccone
4. TensorFlow Machine Learning Cookbook by Nick McClure
5. https://github.com/aymericdamien/TensorFlow-Examples
6. https://www.tensorflow.org/versions/r0.10/tutorials/index.html
7. http://bcomposes.com/2015/11/26/simple-end-to-end-tensorflow-examples/
8. https://github.com/anrew-git/tf_linear
9. The Essence of Neural Networks (Основные концепции нейронных сетей) by Robert Callan