DL Practical 02 Binary Class Classifier Using ANN
Aim:- To implement Python code for a Binary Class Classifier using an ANN
ANN stands for Artificial Neural Network, which is a type of machine learning model inspired by the
structure and function of the human brain. ANN consists of interconnected processing nodes, called
neurons, which can receive, process, and transmit information.
In an ANN, data is fed into the input layer, which is then processed through a series of hidden layers
before producing an output at the final layer. Each neuron in the hidden layer applies a mathematical
function to the data it receives and passes the result on to the next layer. The weights and biases
associated with each neuron are adjusted through a process called backpropagation, which enables
the network to learn and improve its accuracy over time.
ANNs are widely used in various fields such as image and speech recognition, natural language
processing, and predictive analytics. They have proven to be effective in solving complex problems
that are difficult for traditional algorithms to handle.
ANN Structure
An artificial neuron, also known as a perceptron, is a basic unit of an artificial neural network (ANN).
It is inspired by the structure and function of biological neurons in the human brain.
An artificial neuron takes in input signals, performs a weighted sum of the inputs, and applies an
activation function to produce an output signal. The weighted sum is calculated by multiplying each
input signal by a corresponding weight value, and then summing up the products. The activation
function applies a non-linear transformation to the weighted sum to produce the output signal.
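The weighted sum and activation described above can be sketched in plain Python; the input values, weights, and bias below are made-up illustrative numbers, not taken from the practical:

```python
import numpy as np

def perceptron(inputs, weights, bias):
    # Weighted sum: multiply each input by its weight, then add the bias
    z = np.dot(inputs, weights) + bias
    # Sigmoid activation: non-linear transformation of the weighted sum
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0, 2.0])   # illustrative input signals
w = np.array([0.4, 0.6, -0.2])   # illustrative weights
b = 0.1                          # illustrative bias
output = perceptron(x, w, b)     # a value between 0 and 1
```

Here z = 0.5(0.4) + (-1.0)(0.6) + 2.0(-0.2) + 0.1 = -0.7, and the sigmoid of -0.7 is roughly 0.33.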
Types of Optimizers:
• Gradient Descent: A basic optimization algorithm that adjusts the weights of a model in the direction of steepest descent of the loss function. Simple and easy to understand, but can be slow and prone to getting stuck in local minima.
• Mini-batch Gradient Descent: A compromise between gradient descent and SGD, where the weights are updated after processing a small batch of training examples. Can achieve a good balance between speed and stability, and is often used as a default optimizer in deep learning.
• Adam: A popular optimizer that uses adaptive learning rates for each weight based on their historical gradients and second moments. Can achieve fast convergence with minimal hyperparameter tuning, and is well-suited for large and complex models.
• RMSProp: An optimizer that uses a moving average of squared gradients to adjust the learning rate for each weight. Can help prevent oscillations and divergence, and is often used as a default optimizer in deep learning.
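The plain gradient-descent update rule from the first entry above can be illustrated in NumPy on a toy one-parameter loss; the loss function and learning rate here are illustrative assumptions, not part of the practical:

```python
import numpy as np

# Toy loss f(w) = (w - 3)^2, minimized at w = 3
def grad(w):
    # Derivative of the loss with respect to the weight
    return 2.0 * (w - 3.0)

w = 0.0     # initial weight
lr = 0.1    # learning rate (step size)
for _ in range(100):
    # Move the weight in the direction of steepest descent
    w -= lr * grad(w)
```

After 100 steps the weight has converged to the minimum at 3; optimizers like Adam and RMSProp refine this same rule by adapting the step size per weight.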
from sklearn.datasets import make_circles

samples = 1000
X, y = make_circles(samples,
                    noise=0.03,
                    random_state=42)
The arguments to make_circles are: samples (the number of points to generate), noise (the standard deviation of Gaussian noise added to the data), and random_state (a seed that makes the generated data reproducible).
import tensorflow as tf
tf.random.set_seed(42)
model_1 = tf.keras.Sequential([tf.keras.layers.Dense(1)])
This line defines the neural network model as a sequential stack of layers using the
tf.keras.Sequential API. In this case, the model has a single layer with a single neuron defined using
the tf.keras.layers.Dense layer. The argument 1 specifies the number of neurons in the layer. Since
there is only one neuron, the output of the layer will be a single scalar value.
metrics = ['accuracy'] specifies that the accuracy metric should be used to evaluate the performance
of the model during training and testing.
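Putting these pieces together, model_1 could be compiled as below; the practical only states the metrics argument explicitly, so the optimizer and loss chosen here are assumptions for illustration:

```python
import tensorflow as tf

tf.random.set_seed(42)
model_1 = tf.keras.Sequential([tf.keras.layers.Dense(1)])

# Optimizer and loss are assumed for illustration; the practical
# only shows metrics=['accuracy'] explicitly.
model_1.compile(optimizer=tf.keras.optimizers.SGD(),
                loss=tf.keras.losses.BinaryCrossentropy(),
                metrics=['accuracy'])
```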
model3 = tf.keras.Sequential([tf.keras.layers.Dense(100),
tf.keras.layers.Dense(10),
tf.keras.layers.Dense(1)])
The model has three dense layers, each defined using the tf.keras.layers.Dense layer.
• The first layer has 100 neurons, which means it will output a vector of length 100.
• The second layer has 10 neurons, which means it will output a vector of length 10.
• The third layer has a single neuron, which means it will output a single scalar value.
In the next model (model4), the first and second layers each have 4 neurons and use the ReLU activation function (activation='relu'), while the third layer has a single neuron and uses the sigmoid activation function (activation='sigmoid').
model4 = tf.keras.Sequential([tf.keras.layers.Dense(4,activation='relu'),
tf.keras.layers.Dense(4,activation='relu'),
tf.keras.layers.Dense(1,activation='sigmoid')])
model4.compile(optimizer = tf.keras.optimizers.Adam(learning_rate=0.01),
loss = tf.keras.losses.BinaryCrossentropy(),
metrics = ['accuracy'])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01) sets the optimizer to Adam, which is a
widely used optimization algorithm for neural networks. The learning rate is set to 0.01.
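A complete end-to-end run of model4 on the make_circles data might look like the sketch below; the epoch count and the decision to evaluate on the training data are assumptions, since the original does not show the fit call:

```python
import tensorflow as tf
from sklearn.datasets import make_circles

# Generate the circles dataset as in the practical
X, y = make_circles(1000, noise=0.03, random_state=42)

tf.random.set_seed(42)
model4 = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation='relu'),
    tf.keras.layers.Dense(4, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model4.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
               loss=tf.keras.losses.BinaryCrossentropy(),
               metrics=['accuracy'])

# Epoch count is an assumption; tune as needed
model4.fit(X, y, epochs=25, verbose=0)
loss, accuracy = model4.evaluate(X, y, verbose=0)
```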
Conclusion:
In this practical, we understood the Binary Class Classifier and performed its practical implementation using an ANN.