
Practical 02: Binary Class Classifier using ANN

Aim:- To implement Python code for a binary class classifier using an ANN

Theory:- Artificial Neural Networks (ANN)

ANN stands for Artificial Neural Network, a type of machine learning model inspired by the structure and function of the human brain. An ANN consists of interconnected processing nodes, called neurons, which can receive, process, and transmit information.

In an ANN, data is fed into the input layer and then processed through a series of hidden layers before producing an output at the final layer. Each neuron in a hidden layer applies a mathematical function to the data it receives and passes the result on to the next layer. The weights and biases associated with each neuron are adjusted through a process called backpropagation, which enables the network to learn and improve its accuracy over time.
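For example, with plain gradient descent and learning rate η, backpropagation drives an update of each weight as w ← w − η · ∂L/∂w, where L is the loss function; repeating this update over many training examples gradually reduces the loss.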

ANNs are widely used in various fields such as image and speech recognition, natural language
processing, and predictive analytics. They have proven to be effective in solving complex problems
that are difficult for traditional algorithms to handle.

ANN Structure

An artificial neuron, also known as a perceptron, is a basic unit of an artificial neural network (ANN).
It is inspired by the structure and function of biological neurons in the human brain.

An artificial neuron takes in input signals, performs a weighted sum of the inputs, and applies an
activation function to produce an output signal. The weighted sum is calculated by multiplying each
input signal by a corresponding weight value, and then summing up the products. The activation
function applies a non-linear transformation to the weighted sum to produce the output signal.
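To make this concrete, here is a minimal NumPy sketch of a single artificial neuron; the inputs, weights, and bias are arbitrary illustrative values, not taken from the practical:

import numpy as np

def sigmoid(z):
    # Non-linear activation that squashes the weighted sum into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Arbitrary example inputs, weights, and bias (illustrative only)
x = np.array([0.5, -1.2, 3.0])   # input signals
w = np.array([0.4, 0.7, -0.2])   # one weight per input
b = 0.1                          # bias term

z = np.dot(w, x) + b   # weighted sum: each input times its weight, plus bias
output = sigmoid(z)    # activation function applied to the weighted sum
print(output)          # a single output signal in (0, 1)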

Types of Optimizers:

• Gradient Descent: A basic optimization algorithm that adjusts the weights of a model in the direction of steepest descent of the loss function. Key features: simple and easy to understand, but can be slow and prone to getting stuck in local minima.

• Stochastic Gradient Descent (SGD): A variant of gradient descent that updates the weights after each individual training example. Key features: faster than gradient descent, but can be noisy and may require careful tuning of the learning rate.

• Mini-batch Gradient Descent: A compromise between gradient descent and SGD, where the weights are updated after processing a small batch of training examples. Key features: can achieve a good balance between speed and stability, and is often used as a default optimizer in deep learning.

• Adam: A popular optimizer that uses adaptive learning rates for each weight based on their historical gradients and second moments. Key features: can achieve fast convergence with minimal hyperparameter tuning, and is well-suited for large and complex models.

• Adagrad: An optimizer that adapts the learning rate for each weight based on the sum of squared gradients. Key features: well-suited for sparse data, and can be more robust to noisy gradients than other optimizers.

• RMSProp: An optimizer that uses a moving average of squared gradients to adjust the learning rate for each weight. Key features: can help prevent oscillations and divergence, and is often used as a default optimizer in deep learning.
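For reference, each optimizer above corresponds to a class in tf.keras.optimizers; the learning rates below are illustrative assumptions rather than tuned values:

import tensorflow as tf

# Illustrative instantiations; the learning_rate values are assumptions
sgd = tf.keras.optimizers.SGD(learning_rate=0.01)
adam = tf.keras.optimizers.Adam(learning_rate=0.001)
adagrad = tf.keras.optimizers.Adagrad(learning_rate=0.01)
rmsprop = tf.keras.optimizers.RMSprop(learning_rate=0.001)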

Implementation of Python code:-

# Imports used throughout this practical
from sklearn.datasets import make_circles
import numpy as np, pandas as pd, matplotlib.pyplot as plt

samples = 1000
X, y = make_circles(samples, noise=0.03, random_state=42)
The arguments to make_circles are:

• samples: the number of samples to generate (passed positionally as the n_samples parameter)
• noise: the standard deviation of the Gaussian noise added to the data points (default is 0.0)
• random_state: the random seed to use for reproducibility (default is None)
circle = pd.DataFrame({ 'X0' : X[:, 0], 'X1' : X[:, 1], 'label' : y})
circle.head()
pd.DataFrame is a Pandas function that creates a DataFrame from a dictionary or a NumPy array. In
this case, a dictionary with three key-value pairs is passed as an argument to the pd.DataFrame
function, where the keys correspond to the column names, and the values correspond to the data to be
stored in each column. X[:, 0] and X[:, 1] select the first and second columns of the 2D array X,
respectively. Finally, the head() method is used to display the first five rows of the circle DataFrame.
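As a quick visual check (this plot is not part of the original listing, so treat it as an illustrative sketch), the generated points can be scattered by label:

# Sketch: visualize the two concentric circles, colored by class label
plt.scatter(circle['X0'], circle['X1'], c=circle['label'], cmap=plt.cm.RdYlBu)
plt.show()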

import tensorflow as tf
tf.random.set_seed(42)
model_1 = tf.keras.Sequential([tf.keras.layers.Dense(1)])
This line defines the neural network model as a sequential stack of layers using the
tf.keras.Sequential API. In this case, the model has a single layer with a single neuron defined using
the tf.keras.layers.Dense layer. The argument 1 specifies the number of neurons in the layer. Since
there is only one neuron, the output of the layer will be a single scalar value.

model_1.compile(loss=tf.keras.losses.BinaryCrossentropy(),
                optimizer=tf.keras.optimizers.SGD(),
                metrics=['accuracy'])
loss = tf.keras.losses.BinaryCrossentropy() sets the loss function to binary cross-entropy, which is
commonly used for binary classification problems.

optimizer = tf.keras.optimizers.SGD() sets the optimizer to stochastic gradient descent (SGD), which is a widely used optimization algorithm for neural networks.

metrics = ['accuracy'] specifies that the accuracy metric should be used to evaluate the performance
of the model during training and testing.

model3 = tf.keras.Sequential([tf.keras.layers.Dense(100),
tf.keras.layers.Dense(10),
tf.keras.layers.Dense(1)])
model3 has three dense layers, each defined using the tf.keras.layers.Dense layer:

• The first layer has 100 neurons, which means it will output a vector of length 100.
• The second layer has 10 neurons, which means it will output a vector of length 10.
• The third layer has a single neuron, which means it will output a single scalar value.

model4 adds explicit activation functions:

model4 = tf.keras.Sequential([tf.keras.layers.Dense(4, activation='relu'),
                              tf.keras.layers.Dense(4, activation='relu'),
                              tf.keras.layers.Dense(1, activation='sigmoid')])

• The first and second layers each have 4 neurons and use the ReLU activation function (activation='relu').
• The third layer has a single neuron and uses the sigmoid activation function (activation='sigmoid').

model4.compile(optimizer = tf.keras.optimizers.Adam(learning_rate=0.01),
loss = tf.keras.losses.BinaryCrossentropy(),
metrics = ['accuracy'])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01) sets the optimizer to Adam, which is a
widely used optimization algorithm for neural networks. The learning rate is set to 0.01.

loss = tf.keras.losses.BinaryCrossentropy() sets the loss function to binary cross-entropy, which is commonly used for binary classification problems.
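The source does not show the training step, so the following is a minimal sketch; the epoch count and fitting on the full dataset (no train/test split) are assumptions for illustration:

# Sketch: train model4 on the circles data (the epochs value is an assumption)
history = model4.fit(X, y, epochs=25, verbose=0)

# Evaluate on the same data, since the source shows no train/test split
loss, accuracy = model4.evaluate(X, y, verbose=0)
print(f"loss: {loss:.4f}, accuracy: {accuracy:.4f}")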

def plot_decision_boundary(model, X, y):
    # Define the axis boundaries of the plot and create a meshgrid
    x_min, x_max = X[:, 0].min() - 0.1, X[:, 0].max() + 0.1
    y_min, y_max = X[:, 1].min() - 0.1, X[:, 1].max() + 0.1
    xx, yy = np.meshgrid(np.linspace(x_min, x_max, 100),
                         np.linspace(y_min, y_max, 100))
    # Create X values (we're going to predict on all of these)
    x_in = np.c_[xx.ravel(), yy.ravel()]
    # Make predictions using the trained model
    y_pred = model.predict(x_in)
    # Check for multi-class
    if len(y_pred[0]) > 1:
        print("doing multiclass classification...")
        # We have to reshape our predictions to get them ready for plotting
        y_pred = np.argmax(y_pred, axis=1).reshape(xx.shape)
    else:
        print("doing binary classification...")
        y_pred = np.round(y_pred).reshape(xx.shape)
    # Plot decision boundary
    plt.contourf(xx, yy, y_pred, cmap=plt.cm.RdYlBu, alpha=0.7)
    plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.RdYlBu)
    plt.xlim(xx.min(), xx.max())
    plt.ylim(yy.min(), yy.max())
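A usage sketch (the call below is not shown in the source and assumes model4 has already been trained):

# Hypothetical usage: plot the decision boundary learned by model4
plot_decision_boundary(model4, X, y)
plt.show()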
Output:

Conclusion:
In this practical, we studied the binary class classifier and implemented it using an artificial neural network in Python with TensorFlow/Keras.

Experiment Number | Date of Performance | Grade | Teacher's Sign
