
NPTEL Online Certification Courses

Indian Institute of Technology Kharagpur

Deep Learning
Assignment- Week 6
TYPE OF QUESTION: MCQ/MSQ
Number of questions: 10 Total marks: 10 × 1 = 10
______________________________________________________________________________

QUESTION 1:
Which of the following is not true for PCA? Choose the correct option.

a. Rotates the axes to lie along the principal components


b. Is calculated from the covariance matrix
c. Removes some information from the data
d. Eigenvectors describe the length of the principal components

Correct Answer: d

Detailed Solution:

Eigenvectors of the covariance matrix give the directions of the principal components; it is the corresponding eigenvalues that describe their lengths (the variance along each component). Hence (d) is not true. Options (a)–(c) follow directly from the definition of PCA covered in the classroom lecture.

QUESTION 2:
What is the output range of the sigmoid function for an input with dynamic range [0, ∞]?

a. [0, 1]
b. [−1, 1]
c. [0.5, 1]
d. [0.25, 1]

Correct Answer: c

Detailed Solution:

Sigmoid(x) = 1 / (1 + e⁻ˣ)

If x = 0: Sigmoid(0) = 1 / (1 + e⁻⁰) = 1 / (1 + 1) = 0.5

If x → ∞: Sigmoid(∞) = 1 / (1 + e⁻∞) = 1 / (1 + 0) = 1

So the output lies in [0.5, 1].
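The two limiting values above can be checked numerically; a minimal sketch:

```python
import math

def sigmoid(x: float) -> float:
    """Logistic sigmoid: 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))   # 0.5, the lower end of the output range
print(sigmoid(50.0))  # approaches 1 as x grows large
```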

QUESTION 3:

A zero-bias autoencoder has 3 input neurons, 1 hidden neuron and 3 output neurons. The network is perfectly trained on the input x = [2 3 5]ᵀ. What would be the values of the weights in the autoencoder?

a. W1 = [1 1 1], W2 = [2 3 5]ᵀ
b. W1 = [1 1 1], W2 = [0.2 0.3 0.5]ᵀ
c. W1 = [0.2 0.3 0.5], W2 = [1 1 1]ᵀ
d. W1 = [2 3 5], W2 = [1 1 1]ᵀ

Correct Answer: b

Detailed Solution:

y = W2 · W1 · x ……… (1)

where W1 is the encoder weight and W2 is the decoder weight.

If the network is perfectly trained, y = x = [2 3 5]ᵀ.

Among the options, equation (1) is satisfied only by W1 = [1 1 1] and W2 = [0.2 0.3 0.5]ᵀ: the hidden activation is W1 · x = 2 + 3 + 5 = 10, and 10 · [0.2 0.3 0.5]ᵀ = [2 3 5]ᵀ = x.
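Option (b) can be verified in a few lines of NumPy, assuming a linear, zero-bias autoencoder as stated in the question:

```python
import numpy as np

x = np.array([2.0, 3.0, 5.0])         # training input
W1 = np.array([[1.0, 1.0, 1.0]])      # 1x3 encoder weights (option b)
W2 = np.array([[0.2], [0.3], [0.5]])  # 3x1 decoder weights (option b)

h = W1 @ x   # hidden activation: [10.]
y = W2 @ h   # reconstruction
print(y)     # [2. 3. 5.] -- perfect reconstruction of x
```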

QUESTION 4:
An autoencoder with a single hidden layer and no biases has 100 input neurons and 10 hidden neurons. What will be the number of parameters associated with this autoencoder?

a. 1000
b. 2000
c. 2110
d. 1010

Correct Answer: b

Detailed Solution:

For a single-hidden-layer autoencoder with no biases:

Input neurons = 100, hidden neurons = 10, so output neurons = 100 (the output reconstructs the input).

Total number of parameters = 100 × 10 + 10 × 100 = 2000
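The count generalizes to any no-bias, single-hidden-layer autoencoder; a quick sketch (the helper name is illustrative, not from the assignment):

```python
def autoencoder_params(n_in: int, n_hidden: int) -> int:
    """Weight count of a no-bias autoencoder:
    encoder (n_in * n_hidden) + decoder (n_hidden * n_in)."""
    return n_in * n_hidden + n_hidden * n_in

print(autoencoder_params(100, 10))  # 2000
```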



QUESTION 5:
Consider the 2-layer neural network shown below. The weights are represented as follows: wᵏₘₙ = weight between the n-th node of the k-th layer and the m-th node of the (k−1)-th layer. The 0th node is the bias node, fixed at 1, as depicted in the diagram.

E.g. w¹₃₂ = weight between the 2nd node of the hidden layer and the 3rd node of the input layer. Refer to the diagram. Not all weights are shown, to maintain clarity.

The sigmoid activation function is applied to both the hidden layer and the output layer. The loss function is defined as J(·) = 0.5(y − t)², where t is the true label.

The initial weights are given as:

W1 = [−0.4   0.2   0.4  −0.5
       0.2  −0.3   0.1   0.2]

W2 = [0.1  −0.3  −0.2]

Find the outputs at nodes a1 and a2 for the given input {x1 = 1, x2 = 0, x3 = 1}.

a. 0.13, 0.54
b. 0.33, 0.52
c. 0.23, 0.51
d. 0.13, 0.51

Correct Answer: b

Detailed Solution:
Let the input vector be X = [1 1 0 1]ᵀ (bias node prepended).

a = σ(W1 X)

W1 X = [−0.4  0.2  0.4  −0.5; 0.2  −0.3  0.1  0.2] · [1 1 0 1]ᵀ = [−0.7  0.1]ᵀ

σ([−0.7  0.1]ᵀ) = [0.33  0.52]ᵀ

So a1 = 0.33 and a2 = 0.52.
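The hidden-layer computation above can be reproduced with NumPy:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = np.array([[-0.4,  0.2, 0.4, -0.5],
               [ 0.2, -0.3, 0.1,  0.2]])
X = np.array([1.0, 1.0, 0.0, 1.0])  # bias node 1 prepended to {x1=1, x2=0, x3=1}

a = sigmoid(W1 @ X)                 # hidden activations a1, a2
print(np.round(a, 2))               # [0.33 0.52]
```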

QUESTION 6:
Consider the 2-layer neural network shown below. The weights are represented as follows: wᵏₘₙ = weight between the n-th node of the k-th layer and the m-th node of the (k−1)-th layer. The 0th node is the bias node, fixed at 1, as depicted in the diagram.

E.g. w¹₃₂ = weight between the 2nd node of the hidden layer and the 3rd node of the input layer. Refer to the diagram. Not all weights are shown, to maintain clarity.

The sigmoid activation function is applied to both the hidden layer and the output layer. The loss function is defined as J(·) = 0.5(y − t)², where t is the true label.

The initial weights are given as:

W1 = [−0.4   0.2   0.4  −0.5
       0.2  −0.3   0.1   0.2]

W2 = [0.1  −0.3  −0.2]

Find the final output at node y for the given input {x1 = 1, x2 = 0, x3 = 1}. Choose the closest answer.

a. 0.13
b. 0.33
c. 0.48

d. 0.51

Correct Answer: c

Detailed Solution:
Let the hidden vector be A = [1 0.33 0.52]ᵀ (bias node plus a1, a2), as calculated in Question 5.

y = σ(W2 A)

W2 A = [0.1  −0.3  −0.2] · [1 0.33 0.52]ᵀ ≈ −0.1

σ(−0.1) ≈ 0.48
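The output-layer step can be sketched in NumPy, continuing from the hidden activations of Question 5:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W2 = np.array([0.1, -0.3, -0.2])
A = np.array([1.0, 0.33, 0.52])  # bias node plus hidden activations a1, a2

y = sigmoid(W2 @ A)
print(round(float(y), 2))        # ~0.47, closest to option (c) 0.48
```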

QUESTION 7:
Consider the 2-layer neural network shown below. The weights are represented as follows: wᵏₘₙ = weight between the n-th node of the k-th layer and the m-th node of the (k−1)-th layer. The 0th node is the bias node, fixed at 1, as depicted in the diagram.

E.g. w¹₃₂ = weight between the 2nd node of the hidden layer and the 3rd node of the input layer. Refer to the diagram. Not all weights are shown, to maintain clarity.

The sigmoid activation function is applied to both the hidden layer and the output layer. The loss function is defined as J(·) = 0.5(y − t)², where t is the true label.

The initial weights are given as:

W1 = [−0.4   0.2   0.4  −0.5
       0.2  −0.3   0.1   0.2]

W2 = [0.1  −0.3  −0.2]

Find the gradient component ∂J/∂w²₁₁ for t = 1 and the given input {x1 = 1, x2 = 0, x3 = 1}. Choose the closest answer.

a. −0.09
b. −0.11
c. −0.13
d. −0.04

Correct Answer: d

Detailed Solution:

J(·) = 0.5(y − t)²

t = 1 and y = 0.48 (from Question 6).

Let a = W2 A = w²₀₁ + w²₁₁ a₁ + w²₂₁ a₂ = [0.1  −0.3  −0.2] · [1 0.33 0.52]ᵀ ≈ −0.1

y = σ(a) and J(·) = 0.5(y − t)²

Using the chain rule,

∂J/∂w²₁₁ = (∂J/∂y) · (∂y/∂a) · (∂a/∂w²₁₁)

∂J/∂y = (y − t),  ∂y/∂a = σ(a) × (1 − σ(a)),  ∂a/∂w²₁₁ = a₁

∂J/∂w²₁₁ = (y − t) × σ(a) × (1 − σ(a)) × a₁ = (0.48 − 1) × 0.48 × (1 − 0.48) × 0.33 ≈ −0.04
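The same gradient can be computed numerically; a minimal sketch using the values derived above:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

A = np.array([1.0, 0.33, 0.52])  # [bias, a1, a2]
W2 = np.array([0.1, -0.3, -0.2])
t = 1.0

a = W2 @ A
y = sigmoid(a)
# dJ/dw2_11 = (y - t) * sigma'(a) * a1, with sigma'(a) = y * (1 - y)
grad_w11 = (y - t) * y * (1.0 - y) * A[1]
print(round(float(grad_w11), 2))  # ~ -0.04
```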

QUESTION 8:
Consider the 2-layer neural network shown below. The weights are represented as follows: wᵏₘₙ = weight between the n-th node of the k-th layer and the m-th node of the (k−1)-th layer. The 0th node is the bias node, fixed at 1, as depicted in the diagram.

E.g. w¹₃₂ = weight between the 2nd node of the hidden layer and the 3rd node of the input layer. Refer to the diagram. Not all weights are shown, to maintain clarity.

The sigmoid activation function is applied to both the hidden layer and the output layer. The loss function is defined as J(·) = 0.5(y − t)², where t is the true label.

The initial weights are given as:

W1 = [−0.4   0.2   0.4  −0.5
       0.2  −0.3   0.1   0.2]

W2 = [0.1  −0.3  −0.2]

Find the updated value of w²₂₁ after 1 iteration for t = 1, learning rate η = 0.9, and the given input {x1 = 1, x2 = 0, x3 = 1}. Choose the closest answer.

a. −0.29
b. −0.1
c. −0.14
d. −0.04

Correct Answer: c

Detailed Solution:

J(·) = 0.5(y − t)²

t = 1 and y = 0.48 (from Question 6).

Let a = W2 A = w²₀₁ + w²₁₁ a₁ + w²₂₁ a₂ = [0.1  −0.3  −0.2] · [1 0.33 0.52]ᵀ ≈ −0.1

y = σ(a) and J(·) = 0.5(y − t)²

Using the chain rule,

∂J/∂w²₂₁ = (∂J/∂y) · (∂y/∂a) · (∂a/∂w²₂₁)

∂J/∂y = (y − t),  ∂y/∂a = σ(a) × (1 − σ(a)),  ∂a/∂w²₂₁ = a₂

∂J/∂w²₂₁ = (y − t) × σ(a) × (1 − σ(a)) × a₂ = (0.48 − 1) × 0.48 × (1 − 0.48) × 0.52 ≈ −0.07

Updated w²₂₁ = w²₂₁ − η · ∂J/∂w²₂₁ = −0.2 − 0.9 × (−0.07) = −0.2 + 0.06 = −0.14
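The gradient-descent update above, sketched in NumPy with the same intermediate values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

A = np.array([1.0, 0.33, 0.52])  # [bias, a1, a2]
W2 = np.array([0.1, -0.3, -0.2])
t, eta = 1.0, 0.9

y = sigmoid(W2 @ A)
grad_w21 = (y - t) * y * (1.0 - y) * A[2]  # dJ/dw2_21
w21_new = W2[2] - eta * grad_w21           # gradient-descent step
print(round(float(w21_new), 2))            # ~ -0.14
```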

QUESTION 9:
y = min(a, b) and a > b. What are the values of dy/da and dy/db?

a. 1, 0
b. 0, 1
c. 0, 0
d. 1, 1

Correct Answer: b

Detailed Solution:

y = min(a, b) and a > b, so y = b.

Since y does not depend on a here, dy/da = 0 and dy/db = 1.
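This is how gradients flow through min/max nodes in backpropagation: only the selected argument receives a nonzero gradient. A finite-difference check (the helper name is illustrative):

```python
def min_grads(a: float, b: float, eps: float = 1e-6):
    """Numerical dy/da and dy/db for y = min(a, b) via central differences."""
    dy_da = (min(a + eps, b) - min(a - eps, b)) / (2 * eps)
    dy_db = (min(a, b + eps) - min(a, b - eps)) / (2 * eps)
    return dy_da, dy_db

print(min_grads(3.0, 1.0))  # a > b, so b is selected: dy/da = 0, dy/db = 1
```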

QUESTION 10:
Let’s say vectors a⃗ = {2, 4} and b⃗ = {n, 1} form the first two principal components after applying PCA. Under such circumstances, which among the following can be a possible value of n?

a. 2
b. -2
c. 0
d. 1

Correct Answer: b

Detailed Solution:

Principal components are mutually orthogonal, so a⃗ · b⃗ = 2n + 4 × 1 = 0, which gives n = −2. Only option (b) makes the two vectors orthogonal.
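A quick orthogonality check over the four candidate values of n:

```python
import numpy as np

a = np.array([2.0, 4.0])
for n in [2, -2, 0, 1]:        # the four options
    b = np.array([float(n), 1.0])
    print(n, np.dot(a, b))     # dot product 2n + 4; only n = -2 gives 0
```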

____________________________________________________________________________

************END*******
