NAMAN GUPTA 00524302022
Q1. Write a program for creating a perceptron.
Code
!pip install tensorflow
import tensorflow as tf
import numpy as np
X_train = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y_train = np.array([[0], [0], [0], [1]], dtype=np.float32)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(2,))
])
model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=100, verbose=1)
print("\nTesting the model:")
predictions = model.predict(X_train)
for i, x in enumerate(X_train):
    print(f"Input: {x}, Predicted Output: {round(predictions[i][0])}")
Output
Q2. Write a program to implement a multi-layer perceptron using TensorFlow. Apply the multi-layer perceptron (MLP) on the Iris dataset.
Code
!pip install tensorflow
import tensorflow as tf
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder
import numpy as np
X, y = load_iris(return_X_y=True)
y = OneHotEncoder(sparse_output=False).fit_transform(y.reshape(-1, 1))
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=50, batch_size=5, verbose=1)
test_loss, test_acc = model.evaluate(X_test, y_test)
print(f"\nTest Accuracy: {test_acc:.4f}")
predictions = model.predict(X_test)
print("\nSample Predictions:")
for i in range(5):
    print(f"Actual: {np.argmax(y_test[i])}, Predicted: {np.argmax(predictions[i])}")
Output
Q3. (a) Write a program to implement a Convolutional Neural Network (CNN) in Keras. Perform predictions using the trained Convolutional Neural Network (CNN).
Code
import tensorflow as tf
from tensorflow.keras import layers, models
import numpy as np
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0
y_train = tf.keras.utils.to_categorical(y_train, 10)
y_test = tf.keras.utils.to_categorical(y_test, 10)
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, batch_size=64, validation_split=0.1)
test_loss, test_acc = model.evaluate(x_test, y_test)
print(f"\nTest Accuracy: {test_acc:.4f}")
predictions = model.predict(x_test[:5])
print("\nSample Predictions:")
for i in range(5):
    print(f"Predicted class: {np.argmax(predictions[i])}, True class: {np.argmax(y_test[i])}")
Output
Q3. (b) Write a program to build an Image Classifier with CIFAR-10 Data.
Code
import tensorflow as tf
from tensorflow.keras import layers, models
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
y_train = tf.keras.utils.to_categorical(y_train, 10)
y_test = tf.keras.utils.to_categorical(y_test, 10)
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test), batch_size=64)
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print(f"Test accuracy: {test_acc:.2f}")
Output
Q4. (a) Write a program to perform face detection using CNN.
Code
!pip install opencv-python
import cv2
from google.colab.patches import cv2_imshow
from google.colab import files
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
uploaded = files.upload()
video_path = list(uploaded.keys())[0]
video_cap = cv2.VideoCapture(video_path)
while video_cap.isOpened():
    ret, frame = video_cap.read()
    if not ret:
        print("Video ended or cannot read frame.")
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2_imshow(frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
video_cap.release()
cv2.destroyAllWindows()
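Note that the Haar cascade used above is a classical detector, not a CNN. To match the question's CNN requirement, a CNN-based detector such as MTCNN can be dropped in; this is a minimal sketch assuming the mtcnn package is installed and frame is a BGR frame read as above:
!pip install mtcnn
from mtcnn import MTCNN
detector = MTCNN()  # CNN-based face detector
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # MTCNN expects RGB input
for face in detector.detect_faces(rgb):
    x, y, w, h = face['box']
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2_imshow(frame)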
Output
Q4. (b) Write a program to demonstrate hyperparameter tuning in CNN.
Code
!pip install keras-tuner
!pip install tensorflow
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import keras_tuner as kt
import numpy as np
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
def build_model(hp):
    model = keras.Sequential()
    model.add(layers.Conv2D(
        filters=hp.Int('conv1_filters', min_value=32, max_value=128, step=32),
        kernel_size=hp.Choice('conv1_kernel', values=[3, 5]),
        activation='relu', input_shape=(32, 32, 3)
    ))
    model.add(layers.MaxPooling2D(pool_size=(2, 2)))
    model.add(layers.Conv2D(
        filters=hp.Int('conv2_filters', min_value=64, max_value=256, step=64),
        kernel_size=hp.Choice('conv2_kernel', values=[3, 5]),
        activation='relu'
    ))
    model.add(layers.MaxPooling2D(pool_size=(2, 2)))
    model.add(layers.Flatten())
    model.add(layers.Dense(
        units=hp.Int('dense_units', min_value=64, max_value=256, step=64),
        activation='relu'
    ))
    model.add(layers.Dense(10, activation='softmax'))
    model.compile(
        optimizer=keras.optimizers.Adam(
            learning_rate=hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])
        ),
        loss='sparse_categorical_crossentropy',  # no need to one-hot encode in this case
        metrics=['accuracy']
    )
    return model
tuner = kt.RandomSearch(
    build_model,
    objective='val_accuracy',
    max_trials=5,            # number of different hyperparameter combinations to try
    executions_per_trial=1,  # number of times to train each model
    directory='my_tuner_dir',
    project_name='cnn_tuning'
)
tuner.search(x_train, y_train, epochs=10, validation_split=0.2)
best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]
print(f"""
Best hyperparameters found:
Conv1 Filters: {best_hps.get('conv1_filters')}
Conv1 Kernel Size: {best_hps.get('conv1_kernel')}
Conv2 Filters: {best_hps.get('conv2_filters')}
Conv2 Kernel Size: {best_hps.get('conv2_kernel')}
Dense Units: {best_hps.get('dense_units')}
Learning Rate: {best_hps.get('learning_rate')}
""")
best_model = tuner.hypermodel.build(best_hps)
best_model.fit(x_train, y_train, epochs=10, validation_split=0.2)
test_loss, test_acc = best_model.evaluate(x_test, y_test)
print(f"Test accuracy: {test_acc:.4f}")
Output
Q4. (c) Predicting Bike-Sharing Patterns: build and train neural networks from scratch to predict the number of bike-share users on a given day.
Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import tensorflow as tf
from tensorflow.keras import layers, models
import zipfile
import urllib.request
import os
url = "[Link]
zip_path = "[Link]"
if not [Link](zip_path):
[Link](url, zip_path)
with [Link](zip_path, 'r') as zip_ref:
zip_ref.extractall("bike_data")
df = pd.read_csv("bike_data/[Link]")
X = df[['season', 'yr', 'mnth', 'holiday', 'weekday', 'workingday', 'weathersit', 'temp', 'atemp', 'hum', 'windspeed']]
y = df['cnt']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
model = models.Sequential([
    layers.Dense(64, activation='relu', input_shape=(X_train.shape[1],)),
    layers.Dense(32, activation='relu'),
    layers.Dense(1)  # output layer for regression
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
history = model.fit(X_train, y_train, epochs=100, validation_data=(X_test, y_test), batch_size=32)
test_loss, test_mae = model.evaluate(X_test, y_test)
print(f"Test MAE: {test_mae}")
y_pred = model.predict(X_test)
plt.scatter(y_test, y_pred)
plt.xlabel('Actual Rentals')
plt.ylabel('Predicted Rentals')
plt.title('Actual vs Predicted Rentals')
plt.show()
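The MAE is in rental-count units and can be hard to judge on its own; the coefficient of determination gives a scale-free score. A quick check using the y_pred computed above:
from sklearn.metrics import r2_score
print(f"R^2 on test set: {r2_score(y_test, y_pred):.3f}")  # 1.0 would be a perfect fit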
Output
Q5. Write a program to build auto-encoder in Keras.
Code
!pip install tensorflow matplotlib
import tensorflow as tf
from tensorflow.keras import layers, models
import matplotlib.pyplot as plt
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), 28, 28, 1))
x_test = x_test.reshape((len(x_test), 28, 28, 1))
input_img = tf.keras.Input(shape=(28, 28, 1))
encoded = layers.Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
encoded = layers.MaxPooling2D((2, 2), padding='same')(encoded)
encoded = layers.Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
encoded = layers.MaxPooling2D((2, 2), padding='same')(encoded)
decoded = layers.Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
decoded = layers.UpSampling2D((2, 2))(decoded)
decoded = layers.Conv2D(16, (3, 3), activation='relu', padding='same')(decoded) # Fixed padding
decoded = layers.UpSampling2D((2, 2))(decoded)
decoded = layers.Conv2D(1, (3, 3), activation='sigmoid', padding='same')(decoded)
autoencoder = models.Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(x_train, x_train,
                epochs=10,
                batch_size=128,
                shuffle=True,
                validation_data=(x_test, x_test))
decoded_imgs = autoencoder.predict(x_test)
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[i].reshape(28, 28), cmap='gray')
    plt.axis('off')
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28), cmap='gray')
    plt.axis('off')
plt.show()
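The trained autoencoder can also be split so the compressed representation is usable on its own; the encoder half simply reuses the layers already built above:
encoder = models.Model(input_img, encoded)  # maps a 28x28 image to its 7x7x8 code
codes = encoder.predict(x_test[:5])
print(codes.shape)  # expected: (5, 7, 7, 8)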
Output
Q6. Write a program to implement a basic reinforcement learning algorithm to teach a bot to reach its destination.
Code
import numpy as np
import random
grid_size = 5
start = (0, 0)
goal = (4, 4)
actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
Q = np.zeros((grid_size, grid_size, len(actions)))
alpha = 0.1 # Learning rate
gamma = 0.9 # Discount factor
epsilon = 0.1 # Exploration factor
episodes = 1000 # Number of episodes
def reward(state):
    if state == goal:
        return 100  # reward for reaching the goal
    return -1       # small penalty for each step
def choose_action(state):
    if random.uniform(0, 1) < epsilon:
        return random.choice(range(len(actions)))  # explore
    else:
        return np.argmax(Q[state[0], state[1]])    # exploit
for episode in range(episodes):
    state = start
    done = False
    while not done:
        action_idx = choose_action(state)
        action = actions[action_idx]
        next_state = (state[0] + action[0], state[1] + action[1])
        next_state = (max(0, min(next_state[0], grid_size - 1)),
                      max(0, min(next_state[1], grid_size - 1)))
        r = reward(next_state)
        Q[state[0], state[1], action_idx] = Q[state[0], state[1], action_idx] + \
            alpha * (r + gamma * np.max(Q[next_state[0], next_state[1]]) - Q[state[0], state[1], action_idx])
        state = next_state
        if state == goal:
            done = True
state = start
path = [state]
while state != goal:
    action_idx = np.argmax(Q[state[0], state[1]])  # choose the best action (exploit)
    action = actions[action_idx]
    state = (state[0] + action[0], state[1] + action[1])
    state = (max(0, min(state[0], grid_size - 1)),
             max(0, min(state[1], grid_size - 1)))
    path.append(state)
print(f"Path to goal: {path}")
Output
Q7. (a) Write a program to implement a Recurrent Neural Network (RNN).
Code
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import SimpleRNN, Dense
import matplotlib.pyplot as plt
def generate_data(seq_length=100):
    x = np.linspace(0, seq_length, seq_length)
    y = np.sin(x)
    return y
def create_dataset(data, time_step=10):
    X, y = [], []
    for i in range(len(data) - time_step):
        X.append(data[i:(i + time_step)])
        y.append(data[i + time_step])
    return np.array(X), np.array(y)
data = generate_data(100)
X, y = create_dataset(data, time_step=10)
X = X.reshape((X.shape[0], X.shape[1], 1))
train_size = int(len(X) * 0.8)
X_train, X_test = X[:train_size], X[train_size:]
y_train, y_test = y[:train_size], y[train_size:]
model = Sequential()
model.add(SimpleRNN(10, activation='relu', input_shape=(X_train.shape[1], 1)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(X_train, y_train, epochs=20, batch_size=16, verbose=1)
predictions = model.predict(X_test)
plt.plot(y_test, label='True Data')
plt.plot(predictions, label='Predicted Data')
plt.legend()
plt.show()
Output
Q7. (b) Write a program to implement LSTM and perform time series analysis using LSTM.
Code
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
time = np.linspace(0, 100, 1000)
data = np.sin(time) + np.random.normal(0, 0.1, 1000)
def create_dataset(data, time_step=50):
    X, y = [], []
    for i in range(len(data) - time_step - 1):
        X.append(data[i:(i + time_step)])
        y.append(data[i + time_step])
    return np.array(X), np.array(y)
time_step = 50
X, y = create_dataset(data)
X = X.reshape((X.shape[0], X.shape[1], 1))
train_size = int(len(X) * 0.8)
X_train, X_test = X[:train_size], X[train_size:]
y_train, y_test = y[:train_size], y[train_size:]
model = Sequential([
LSTM(50, input_shape=(X_train.shape[1], 1)),
Dense(1)
])
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(X_train, y_train, epochs=10, batch_size=32)
predictions = model.predict(X_test)
plt.plot(y_test, label='True Data')
plt.plot(predictions, label='Predictions')
plt.legend()
plt.show()
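A deeper recurrent stack is a common next step; every LSTM layer except the last must return the full sequence so the following layer receives one vector per time step. A sketch with illustrative layer sizes:
model = Sequential([
    LSTM(50, return_sequences=True, input_shape=(time_step, 1)),  # emits one vector per time step
    LSTM(25),                                                     # consumes that sequence
    Dense(1)
])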
Output
Q8. (a) Write a program to perform object detection using Deep Learning
Code
!pip install ultralytics
from ultralytics import YOLO
from google.colab import files
from glob import glob
uploaded = files.upload()
model = YOLO('yolov8n.pt')  # a larger model such as yolov8s.pt or yolov8m.pt can be swapped in
if len(uploaded) == 1 and any(file.lower().endswith(('.png', '.jpg', '.jpeg')) for file in uploaded.keys()):
    image_path = list(uploaded.keys())[0]
    results = model(image_path)
    results[0].show()
    results[0].save(filename='output.jpg')
    files.download('output.jpg')
elif len(uploaded) == 1 and any(file.lower().endswith(('.mp4', '.avi', '.mov')) for file in uploaded.keys()):
    video_path = list(uploaded.keys())[0]
    model.predict(source=video_path, save=True)
    output_files = glob('runs/detect/*/*.mp4')
    if output_files:
        output_path = output_files[0]  # get the first .mp4 file
        files.download(output_path)
    else:
        print("No .mp4 files found in the output directory.")
else:
    print("Please upload a valid image or video.")
Output
Q8. (b) Dog-Breed Classifier: design and train a convolutional neural network to analyze images of dogs and correctly identify their breeds. Use transfer learning and well-known architectures to improve this model.
Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
import cv2
import tensorflow as tf
from tensorflow import keras
from keras import layers
from functools import partial
import warnings
warnings.filterwarnings('ignore')
AUTO = tf.data.AUTOTUNE
from zipfile import ZipFile
data_path = '/content/drive/MyDrive/dog-breed-identification.zip'  # adjust the path/filename to your copy of the Kaggle data
with ZipFile(data_path, 'r') as zip:
    zip.extractall()
    print('The dataset has been extracted.')
df = pd.read_csv('labels.csv')
print(df.head())
print(df.shape)
print(f"Number of unique breeds: {df['breed'].nunique()}")
plt.figure(figsize=(10, 5))
df['breed'].value_counts().plot.bar()
plt.axis('off')
plt.show()
df['filepath'] = 'train/' + df['id'] + '.jpg'
print(df.head())
plt.figure(figsize=(10, 10))
for i in range(12):
    plt.subplot(4, 3, i + 1)
    k = np.random.randint(0, len(df))
    img = cv2.cvtColor(cv2.imread(df.loc[k, 'filepath']), cv2.COLOR_BGR2RGB)
    plt.imshow(img)
    plt.title(df.loc[k, 'breed'])
    plt.axis('off')
plt.show()
le = LabelEncoder()
df['breed'] = le.fit_transform(df['breed'])
print(df.head())
features = df['filepath']
target = df['breed']
X_train, X_val, Y_train, Y_val = train_test_split(features, target, test_size=0.15, random_state=10)
print(f"Training set size: {X_train.shape}, Validation set size: {X_val.shape}")
import albumentations as A
transforms_train = A.Compose([
    A.VerticalFlip(p=0.2),
    A.HorizontalFlip(p=0.7),
    A.CoarseDropout(p=0.5),
    A.RandomGamma(p=0.5),
    A.RandomBrightnessContrast(p=1)
])
img = cv2.cvtColor(cv2.imread(df.loc[0, 'filepath']), cv2.COLOR_BGR2RGB)  # any sample training image
plt.imshow(img)
plt.show()
augments = [A.VerticalFlip(p=1), A.HorizontalFlip(p=1), A.CoarseDropout(p=1), A.RandomGamma(p=1)]
plt.figure(figsize=(10, 10))
for i, aug in enumerate(augments):
    plt.subplot(2, 2, i + 1)
    aug_img = aug(image=img)['image']
    plt.imshow(aug_img)
plt.show()
def aug_fn(img):
    aug_data = transforms_train(image=img)
    aug_img = aug_data['image']
    return aug_img
@tf.function
def process_data(img, label):
    aug_img = tf.numpy_function(aug_fn, [img], Tout=tf.float32)
    return aug_img, label  # return the augmented image, not the original
def decode_image(filepath, label=None):
    img = tf.io.read_file(filepath)
    img = tf.image.decode_jpeg(img)
    img = tf.image.resize(img, [128, 128])
    img = tf.cast(img, tf.float32) / 255.0
    if label is None:
        return img
    return img, tf.one_hot(indices=label, depth=120, dtype=tf.float32)
train_ds = (
    tf.data.Dataset
    .from_tensor_slices((X_train, Y_train))
    .map(decode_image, num_parallel_calls=AUTO)
    .map(partial(process_data), num_parallel_calls=AUTO)
    .batch(32)
    .prefetch(AUTO)
)
val_ds = (
    tf.data.Dataset
    .from_tensor_slices((X_val, Y_val))
    .map(decode_image, num_parallel_calls=AUTO)
    .batch(32)
    .prefetch(AUTO)
)
for img, label in train_ds.take(1):
    print(img.shape, label.shape)
from tensorflow.keras.applications.inception_v3 import InceptionV3
pre_trained_model = InceptionV3(input_shape=(128, 128, 3), weights='imagenet', include_top=False)
for layer in pre_trained_model.layers:
    layer.trainable = False
last_layer = pre_trained_model.get_layer('mixed7')
print('Last layer output shape:', last_layer.output_shape)
last_output = last_layer.output
x = layers.Flatten()(last_output)
x = layers.Dense(256, activation='relu')(x)
x = layers.BatchNormalization()(x)
x = layers.Dense(256, activation='relu')(x)
x = layers.Dropout(0.3)(x)
x = layers.BatchNormalization()(x)
output = layers.Dense(120, activation='softmax')(x)
model = keras.Model(pre_trained_model.input, output)
model.compile(
    optimizer='adam',
    loss=keras.losses.CategoricalCrossentropy(),  # the softmax head outputs probabilities, so from_logits must stay False
    metrics=[keras.metrics.AUC()]
)
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau
class myCallback(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        if logs.get('val_auc') is not None and logs.get('val_auc') > 0.99:
            print('\nValidation AUC has reached above 99%. Stopping training.')
            self.model.stop_training = True
es = EarlyStopping(patience=3, monitor='val_auc', restore_best_weights=True, mode='max')
lr = ReduceLROnPlateau(monitor='val_loss', patience=2, factor=0.5, verbose=1)
history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=50,
    verbose=1,
    callbacks=[es, lr, myCallback()]
)
history_df = pd.DataFrame(history.history)
history_df.loc[:, ['loss', 'val_loss']].plot()
history_df.loc[:, ['auc', 'val_auc']].plot()
plt.show()
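Once the new head has converged, results often improve by unfreezing the backbone and continuing training at a much lower learning rate. A minimal fine-tuning sketch (the learning rate and epoch count are illustrative assumptions):
for layer in pre_trained_model.layers:
    layer.trainable = True  # unfreeze the InceptionV3 backbone
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-5),  # small LR so the pre-trained features are not destroyed
    loss=keras.losses.CategoricalCrossentropy(),
    metrics=[keras.metrics.AUC()]
)
model.fit(train_ds, validation_data=val_ds, epochs=5, callbacks=[es, lr])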
Output
Q9. (a) Write a program to demonstrate different activation functions.
Code
import numpy as np
import [Link] as plt
x = np.linspace(-5, 5, 100)
def sigmoid(x): return 1 / (1 + np.exp(-x))
def tanh(x): return np.tanh(x)
def relu(x): return np.maximum(0, x)
def leaky_relu(x, alpha=0.1): return np.where(x > 0, x, alpha * x)
def softmax(x): return np.exp(x) / np.sum(np.exp(x), axis=0)
def swish(x): return x * sigmoid(x)
activation_functions = {
    "Sigmoid": sigmoid(x),
    "Tanh": tanh(x),
    "ReLU": relu(x),
    "Leaky ReLU": leaky_relu(x),
    "Softmax": softmax(x),
    "Swish": swish(x),
}
plt.figure(figsize=(10, 8))
for i, (name, y) in enumerate(activation_functions.items()):
    plt.subplot(2, 3, i + 1)
    plt.plot(x, y)
    plt.title(name)
    plt.axhline(0, color="black", linewidth=0.5, linestyle="--")
    plt.axvline(0, color="black", linewidth=0.5, linestyle="--")
plt.tight_layout()
plt.show()
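For reference, the plotted functions are defined as follows (for softmax, x is a vector and the sum runs over all of its components):

sigmoid(x) = 1 / (1 + e^(-x))
tanh(x) = (e^x − e^(-x)) / (e^x + e^(-x))
ReLU(x) = max(0, x)
Leaky ReLU(x) = x if x > 0, else α·x (α = 0.1 here)
softmax(x)_i = e^(x_i) / Σ_j e^(x_j)
swish(x) = x · sigmoid(x)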
Output
Q9. (b) Write a program in TensorFlow to demonstrate different Loss functions.
Code
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
y_true = np.array([0.0, 1.0, 1.0, 0.0, 1.0])
y_pred = np.array([0.1, 0.9, 0.8, 0.2, 0.6])
loss_functions = {
    "Mean Squared Error": tf.keras.losses.MeanSquaredError(),
    "Mean Absolute Error": tf.keras.losses.MeanAbsoluteError(),
    "Binary Crossentropy": tf.keras.losses.BinaryCrossentropy(),
    "Categorical Crossentropy": tf.keras.losses.CategoricalCrossentropy(from_logits=True)
}
plt.figure(figsize=(12, 8))
for i, (name, loss_fn) in enumerate(loss_functions.items()):
    loss_value = loss_fn(y_true, y_pred).numpy()
    plt.subplot(2, 2, i+1)
    plt.plot([0, 1], [loss_value, loss_value], label=f'{name}: {loss_value:.2f}', color='blue')
    plt.title(name)
    plt.xlabel('Index')
    plt.ylabel('Loss')
    plt.legend()
plt.tight_layout()
plt.show()
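For n predictions ŷ_i against targets y_i, the first three losses above are:

MSE = (1/n) · Σ_i (y_i − ŷ_i)²
MAE = (1/n) · Σ_i |y_i − ŷ_i|
BCE = −(1/n) · Σ_i [ y_i·log(ŷ_i) + (1 − y_i)·log(1 − ŷ_i) ]

Categorical crossentropy is intended for one-hot targets over mutually exclusive classes, so its value on this binary vector is illustrative only.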
Output
Q10. Write a program to build an Artificial Neural Network by implementing the Backpropagation algorithm and test the same using appropriate data sets.
Code
import numpy as np
import pandas as pd
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
data = {
    'feature1': [0.1, 0.2, 0.3, 0.4, 0.5],
    'feature2': [0.5, 0.4, 0.3, 0.2, 0.1],
    'label': [0, 0, 1, 1, 1]  # binary labels
}
df = pd.DataFrame(data)
X = df[['feature1', 'feature2']].values  # input features
y = df['label'].values  # output labels
model = Sequential()
model.add(Dense(8, input_dim=2, activation='relu'))  # 2 input features, 8 neurons in hidden layer
model.add(Dense(1, activation='sigmoid'))  # output layer for binary classification
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=100, batch_size=1, verbose=1)
test_data = np.array([[0.2, 0.4]])  # new input data
prediction = model.predict(test_data)
predicted_label = (prediction > 0.5).astype(int)  # convert probability to binary
print(f"Predicted label: {predicted_label[0][0]}")
Output