Introduction to Tensors
-Jayesh Sukdeo Patil
Introduction
• TensorFlow is a software library/framework designed by the Google team to make machine learning and deep learning concepts straightforward to implement. It combines computational algebra with optimization techniques so that many mathematical expressions can be evaluated easily.
• Features of TensorFlow:
 It defines, optimizes and evaluates mathematical expressions easily with the help of multi-dimensional arrays called tensors (see the minimal example after this list).
 It provides programming support for deep neural networks and other machine learning techniques.
 It offers highly scalable computation over various data sets.
 It uses GPU computing with automated device management, and it optimizes how memory and data are reused.
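A minimal sketch of the first feature, assuming TensorFlow 2.x with eager execution (the values and names below are illustrative, not from the slides):

```python
import tensorflow as tf

# Define a small mathematical expression over two 2x2 tensors.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

# The expression is built from tensor operations and evaluated eagerly.
c = tf.matmul(a, b) + tf.square(a)
print(c.numpy())
```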
Why is TensorFlow So
Popular?
• TensorFlow is well documented and ships with plenty of machine learning libraries, offering ready-made functionality and methods for common tasks. Being a Google product, it includes a wide variety of machine learning and deep learning algorithms.
• TensorFlow can train and run deep neural networks for handwritten digit classification, image recognition, word embedding and the creation of various sequence models.
Tensor Data
Structure
• Tensors are the basic data structures in TensorFlow. They represent the connecting edges in the flow diagram called the Data Flow Graph and are defined as multidimensional arrays or lists. A tensor is identified by the following three parameters:
 Rank : The unit of dimensionality of a tensor. It gives the number of dimensions, i.e. the order (n-dimensions) of the tensor.
 Shape : The number of elements along each dimension, e.g. the number of rows and columns of a rank-2 tensor.
 Type : The data type assigned to the tensor's elements.
• To build a tensor, a user typically performs two activities (sketched below):
 Build an n-dimensional array.
 Convert the n-dimensional array into a tensor.
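A minimal sketch of these two activities and of the three parameters, assuming TensorFlow 2.x with NumPy available:

```python
import numpy as np
import tensorflow as tf

array = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])   # step 1: build an n-dimensional array
tensor = tf.convert_to_tensor(array)                   # step 2: convert it into a tensor

print(tf.rank(tensor).numpy())   # 2 -> rank: number of dimensions
print(tensor.shape)              # (2, 3) -> shape: rows and columns
print(tensor.dtype)              # <dtype: 'float64'> -> type of the elements
```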
Mathematical
Foundations
• Vector: An array of numbers, whether continuous or discrete, is defined as a vector. Machine learning algorithms deal with fixed-length vectors for better output generation, and because these algorithms work with multidimensional data, vectors play a crucial role.
• Scalar: A scalar is a single number, i.e. a vector of dimension one. Scalars have only magnitude and no direction; with scalars we are concerned only with the magnitude. Examples of scalars include the weight and height of children.
• Matrix: A matrix is a two-dimensional array arranged in rows and columns. The size of a matrix is defined by its number of rows and columns.
(Figure: representation of an m×n matrix.)
• A matrix with “m” rows and “n” columns is written as an “m×n matrix”, which also specifies its size.
Mathematical Computations
• Addition of matrices: Two or more matrices can be added only if they have the same dimensions. The addition is element-wise, adding the entries at each corresponding position (see the worked example after this list).
• Subtraction of matrices: Subtraction works in the same fashion as addition; two matrices can be subtracted provided their dimensions are equal.
• Multiplication of matrices: For two matrices A (m×n) and B (p×q) to be multipliable, n must equal p. The resulting matrix C is m×q.
• Transpose of a matrix: The transpose of an m×n matrix A, written Aᵀ, is the n×m matrix obtained by turning its column vectors into row vectors.
• Dot product of vectors: Any vector of dimension n can be represented as a matrix v ∈ R^(n×1); the dot product of two such vectors u and v is the scalar uᵀv.
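A minimal sketch of these operations in TensorFlow 2.x (the matrices are illustrative):

```python
import tensorflow as tf

A = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # 2x2 matrix
B = tf.constant([[5.0, 6.0], [7.0, 8.0]])   # 2x2 matrix
v = tf.constant([[1.0], [2.0]])             # vector represented as an n x 1 matrix

print(tf.add(A, B).numpy())        # element-wise addition (same dimensions required)
print(tf.subtract(A, B).numpy())   # element-wise subtraction
print(tf.matmul(A, B).numpy())     # multiplication: (m x n)(n x q) -> (m x q)
print(tf.transpose(A).numpy())     # transpose: m x n -> n x m
print(tf.matmul(tf.transpose(v), v).numpy())   # dot product v.v as a 1x1 matrix
```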
Convolutional Neural Networks
• Convolutional neural networks are designed to process data through multiple layers of arrays. This type of neural network is used in applications such as image recognition or face recognition. The primary difference between a CNN and an ordinary neural network is that a CNN takes its input as a two-dimensional array and operates directly on the images, rather than relying on the separate feature-extraction step that other neural networks need.
• A convolutional neural network uses three basic ideas:
• Local receptive fields
• Convolution
• Pooling
• A CNN exploits the spatial correlations that exist within the input data. Each layer of the network connects only to a small region of input neurons; this specific region is called the local receptive field. The hidden neurons process the input data inside this field and ignore changes outside its boundary.
• Each connection learns a weight for its hidden neuron as the receptive field slides across the input from one position to the next. This sliding process is called “convolution”.
• The mapping of connections from the input layer to the hidden feature map uses “shared weights”, and the bias included is called a “shared bias”. CNNs also use pooling layers, positioned immediately after the convolutional layers. A pooling layer takes the feature map produced by the convolutional layer and prepares a condensed feature map, summarizing the neurons of the previous layer. A Keras sketch of these ideas follows.
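A rough Keras sketch of the three ideas, assuming TensorFlow 2.x; the input size and layer sizes are illustrative, not from the slides:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),               # two-dimensional image input
    tf.keras.layers.Conv2D(32, kernel_size=(5, 5),   # 5x5 local receptive field,
                           activation="relu"),       # weights shared across positions
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),  # pooling condenses the feature map
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.summary()
```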
Recurrent Neural
Networks
• A recurrent neural network is a type of deep-learning algorithm that follows a sequential approach. In ordinary neural networks we assume that each input and output is independent of all others; recurrent networks are called recurrent because they perform their mathematical computations in a sequential manner, carrying state from step to step. Training a recurrent neural network involves the following steps (a compact sketch follows this list):
• Step 1: Input a specific example from the dataset.
• Step 2: The network takes the example and performs some computations using randomly initialized variables.
• Step 3: A predicted result is then computed.
• Step 4: Comparing the generated result with the expected value produces an error.
• Step 5: The error is propagated back along the same path, and the variables are adjusted.
• Step 6: Steps 1 to 5 are repeated until we are confident that the variables are well tuned.
• Step 7: A prediction is made by applying these variables to new, unseen input.
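A compact sketch of steps 1–7 with a small Keras RNN on dummy sequence data; the shapes and hyperparameters are assumptions, not from the slides:

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(100, 10, 1).astype("float32")   # 100 sequences of length 10 (step 1)
y = np.random.rand(100, 1).astype("float32")       # one target value per sequence

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10, 1)),
    tf.keras.layers.SimpleRNN(16),                  # randomly initialized variables (step 2)
    tf.keras.layers.Dense(1),                       # predicted result (step 3)
])
model.compile(optimizer="adam", loss="mse")         # error = loss (step 4)
model.fit(x, y, epochs=5, verbose=0)                # backpropagate and repeat (steps 5-6)
prediction = model.predict(x[:1])                   # apply to new input (step 7)
```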
TensorBoard
Visualization
• TensorFlow includes a visualization tool called TensorBoard. It is used for analyzing the Data Flow Graph and for understanding machine-learning models.
• TensorBoard can display many kinds of statistics about the parameters and details of a graph. A deep neural network can include up to 36,000 nodes.
• TensorBoard helps by collapsing these nodes into high-level blocks and highlighting identical structures. This allows better analysis of the graph, focusing on the primary sections of the computation graph. The visualization is highly interactive: a user can pan, zoom and expand nodes to display details. A minimal way to hook TensorBoard into training is sketched below.
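A minimal sketch of wiring TensorBoard into Keras training; the log directory name, model and data are assumptions. Launch the UI with `tensorboard --logdir logs`.

```python
import numpy as np
import tensorflow as tf

tb_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")   # writes graph and statistics

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse")

x, y = np.random.rand(32, 4), np.random.rand(32, 1)
model.fit(x, y, epochs=2, callbacks=[tb_callback], verbose=0)
```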
Keras
• Keras is a compact, easy-to-learn, high-level Python library that runs on top of the TensorFlow framework. It is designed to make deep learning techniques easy to express, such as creating layers for neural networks while taking care of shapes and mathematical details.
• A model can be created in either of two ways:
 Sequential API
 Functional API
• The following eight steps create a deep learning model in Keras (a condensed sketch follows this list):
 Load the data
 Preprocess the loaded data
 Define the model
 Compile the model
 Fit the model
 Evaluate the model
 Make the required predictions
 Save the model
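A condensed sketch of the eight steps using the Sequential API and a small synthetic dataset; the dataset, layer sizes and file name are assumptions, not from the slides:

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(200, 8)                                   # 1. load the data
y = np.random.randint(0, 2, size=(200, 1))
x = (x - x.mean(axis=0)) / x.std(axis=0)                     # 2. preprocess

model = tf.keras.Sequential([                                # 3. define (Sequential API)
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",                              # 4. compile
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=10, batch_size=32, verbose=0)         # 5. fit
loss, acc = model.evaluate(x, y, verbose=0)                  # 6. evaluate
preds = model.predict(x[:5])                                 # 7. predict
model.save("model.keras")                                    # 8. save (use "model.h5" on older TF versions)
```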
• Optimizers are extended classes that include added information used to train a specific model.
• The optimizer class is initialized with the given parameters, but it is important to remember that no tensor is needed at construction time.
• Optimizers are used to improve the speed and performance of training a specific model.
• The following are some of the optimizers available in TensorFlow (see the sketch after this list):
 Stochastic gradient descent
 Momentum
 Nesterov momentum
 Adagrad
 Adadelta
 RMSProp
 Adam
 Adamax
 SMORMS3
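A minimal sketch of constructing some of the listed optimizers in tf.keras (learning-rate values are illustrative; SMORMS3 is not part of tf.keras and is omitted here):

```python
import tensorflow as tf

sgd = tf.keras.optimizers.SGD(learning_rate=0.01)                      # stochastic gradient descent
momentum = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)   # momentum
nesterov = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9,
                                   nesterov=True)                      # Nesterov momentum
adagrad = tf.keras.optimizers.Adagrad(learning_rate=0.01)
rmsprop = tf.keras.optimizers.RMSprop(learning_rate=0.001)
adam = tf.keras.optimizers.Adam(learning_rate=0.001)

# An optimizer is passed to compile(); no tensors are needed at construction time.
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), tf.keras.layers.Dense(1)])
model.compile(optimizer=adam, loss="mse")
```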
Recommendations
for Neural Network
Training
• Back Propagation: Back propagation is a simple method for computing partial derivatives. It repeatedly applies the chain rule to a composition of functions, a form that is best suited to neural nets.
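A minimal sketch of computing a partial derivative through a composition of functions with tf.GradientTape, TensorFlow's automatic differentiation (which implements back propagation); the values are illustrative:

```python
import tensorflow as tf

w = tf.Variable(3.0)
x = tf.constant(2.0)

with tf.GradientTape() as tape:
    y = tf.square(w * x)          # y = (w*x)^2, a composition of functions

dy_dw = tape.gradient(y, w)       # chain rule: dy/dw = 2*(w*x)*x = 24
print(dy_dw.numpy())              # 24.0
```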
Stochastic
Gradient Descent
• In stochastic gradient descent, a batch is the number of examples a user uses to calculate the gradient in a single iteration. So far it has been assumed that the batch is the entire data set; at Google scale, however, data sets often contain billions or even hundreds of billions of examples, so much smaller batches are used in practice.
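A minimal sketch of controlling the batch (the number of examples used per gradient step) through `batch_size` in Keras; the sizes and model are assumptions:

```python
import numpy as np
import tensorflow as tf

x, y = np.random.rand(1000, 4), np.random.rand(1000, 1)

model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), tf.keras.layers.Dense(1)])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01), loss="mse")

# batch_size=1000 would use the entire data set per iteration (batch gradient descent);
# batch_size=32 computes each gradient from only 32 examples (mini-batch SGD).
model.fit(x, y, epochs=3, batch_size=32, verbose=0)
```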
Learning Rate
Decay
• Adapting the learning rate is one of the most important features of gradient descent optimization, and decaying it as training progresses is crucial in TensorFlow implementations.
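A minimal sketch of learning rate decay using a built-in Keras schedule; the decay parameters are illustrative assumptions:

```python
import tensorflow as tf

schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1,   # starting learning rate
    decay_steps=1000,            # apply the decay every 1000 training steps
    decay_rate=0.96,             # multiply the learning rate by 0.96 each time
)
optimizer = tf.keras.optimizers.SGD(learning_rate=schedule)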
Dropout
• Deep neural nets with a large number of parameters form powerful machine learning systems. However, overfitting is a serious problem in such networks; dropout addresses it by randomly dropping units during training.
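A minimal sketch of a Dropout layer in Keras, which randomly zeroes a fraction of activations during training to reduce overfitting; the rate of 0.5 and the layer sizes are illustrative:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),                    # drop 50% of the units at training time
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
```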
Max Pooling
• Max pooling is a sample-based discretization process. The objective is to down-sample an input representation, reducing its dimensionality while allowing assumptions to be made about the features contained in the pooled sub-regions.
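A minimal sketch of 2×2 max pooling, which keeps the largest value in each 2×2 window and therefore halves the spatial dimensions; the input values are illustrative:

```python
import tensorflow as tf

x = tf.reshape(tf.range(16, dtype=tf.float32), (1, 4, 4, 1))  # one 4x4 feature map
pooled = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(x)

print(pooled.shape)                # (1, 2, 2, 1): down-sampled representation
print(tf.squeeze(pooled).numpy())  # [[ 5.  7.] [13. 15.]]
```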
Long Short Term
Memory (LSTM)
• An LSTM controls the decision about which inputs should be taken into a given neuron. It also controls what should be computed and what output should be generated.
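A minimal sketch of an LSTM layer in Keras, whose internal gates decide what to keep, what to compute and what to output at each step; the sequence length and sizes are assumptions:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10, 8)),      # 10 time steps, 8 features each
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.summary()
```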
References:
• https://www.tutorialspoint.com/tensorflow/tensorflow_tutorial.pdf
• https://www.youtube.com/watch?v=tXVNS-V39A0 - Explain
• https://www.youtube.com/watch?v=yjprpOoH5c8&t=78s - Intro
• https://www.youtube.com/watch?v=OmSLter9lyE - Architecture
• https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/basics.ipynb - Colab basics
• https://medium.com/nybles/create-your-first-image-recognition-classifier-using-cnn-keras-and-tensorflow-backend-6eaab98d14dd - Identify