Assignment_6_2024
Deep Learning
Assignment- Week 6
TYPE OF QUESTION: MCQ/MSQ
Number of questions: 10; Total marks: 10 × 1 = 10
______________________________________________________________________________
QUESTION 1:
Which of the following is not true for PCA? Choose the correct option.
Correct Answer: d
Detailed Solution:
QUESTION 2:
What is the output range of the sigmoid function for an input with dynamic range [0, ∞]?
a. [0, 1]
b. [−1, 1]
c. [0.5, 1]
d. [0.25, 1]
Correct Answer: c
Detailed Solution:
Sigmoid(x) = 1 / (1 + e^(−x))
If x = 0, Sigmoid(0) = 1 / (1 + e^(−0)) = 1 / (1 + 1) = 0.5
NPTEL Online Certification Courses
Indian Institute of Technology Kharagpur
If x = ∞, Sigmoid(∞) = 1 / (1 + e^(−∞)) = 1 / (1 + 0) = 1
Hence, over the input range [0, ∞], the sigmoid output lies in [0.5, 1].
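The endpoints of this range can be checked numerically; a minimal sketch (the function name `sigmoid` is mine, and 50 stands in for ∞ since e^(−50) underflows to effectively zero in double precision):

```python
import math

def sigmoid(x):
    # logistic sigmoid: 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0))    # 0.5, the lower end of the range
print(sigmoid(50))   # 1.0 to double precision, the upper end as x -> infinity
```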
QUESTION 3:
A zero-bias autoencoder has 3 input neurons, 1 hidden neuron and 3 output neurons. If the network is perfectly trained using the input x = [2, 3, 5]^T, what would be the values of the weights in the autoencoder?
a. W1 = [1 1 1], W2 = [2, 3, 5]^T
b. W1 = [1 1 1], W2 = [0.2, 0.3, 0.5]^T
c. W1 = [0.2 0.3 0.5], W2 = [1, 1, 1]^T
d. W1 = [2 3 5], W2 = [1, 1, 1]^T
Correct Answer: b
Detailed Solution:
y = W2 · W1 · x ......... (1)
If the network is perfectly trained, y = x = [2, 3, 5]^T.
With W1 = [1 1 1], the hidden code is W1 · x = 2 + 3 + 5 = 10, and W2 = [0.2, 0.3, 0.5]^T gives W2 · 10 = [2, 3, 5]^T = x, so Equation (1) is satisfied.
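The reconstruction for option (b) can be verified with a short NumPy sketch (variable names are mine):

```python
import numpy as np

x = np.array([2.0, 3.0, 5.0])          # training input
W1 = np.array([[1.0, 1.0, 1.0]])       # encoder weights: 3 inputs -> 1 hidden unit
W2 = np.array([[0.2], [0.3], [0.5]])   # decoder weights: 1 hidden unit -> 3 outputs

h = W1 @ x        # hidden code: 2 + 3 + 5 = 10
y = W2 @ h        # reconstruction through the bottleneck
print(y.ravel())  # [2. 3. 5.] -- perfect reconstruction, y = x
```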
QUESTION 4:
An autoencoder with a single hidden layer and no bias terms has 100 input neurons and 10 hidden neurons. What will be the number of parameters associated with this autoencoder?
a. 1000
b. 2000
c. 2110
d. 1010
Correct Answer: b
Detailed Solution:
Encoder weights: 100 × 10 = 1000; decoder weights: 10 × 100 = 1000. With no bias terms, the total is 1000 + 1000 = 2000.
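The count can be expressed as a short calculation (a sketch; names are mine):

```python
# Parameter count for a no-bias autoencoder: every input unit connects to every
# hidden unit (encoder), and every hidden unit to every output unit (decoder).
n_in, n_hidden = 100, 10
encoder_params = n_in * n_hidden        # 100 * 10 = 1000
decoder_params = n_hidden * n_in        # 10 * 100 = 1000
total = encoder_params + decoder_params
print(total)                            # 2000
```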
QUESTION 5:
Consider the 2-layer neural network shown below. The weights are represented as follows:
w^k_mn = weight between the nth node of the kth layer and the mth node of the (k−1)th layer. The 0th node is the bias node, fixed at 1, as depicted in the diagram.
e.g. w^1_32 = weight between the 2nd node of the hidden layer and the 3rd node of the input layer. Refer to the diagram; not all weights are shown, to maintain clarity.
The sigmoid activation function is applied to both the hidden layer and the output layer. The loss function is defined as J(·) = 0.5(y − t)^2, where t is the true label.
Find the outputs at the two hidden-layer nodes for the given input {x1 = 1, x2 = 0, x3 = 1}. Choose the closest answer.
a. 0.13, 0.54
b. 0.33, 0.52
c. 0.23, 0.51
d. 0.13, 0.51
Correct Answer: b
Detailed Solution:
Let the input vector (with the bias node prepended) be X = [1 1 0 1]^T.
a = σ(W1 X)
W1 X = [−0.4 0.2 0.4 −0.5; 0.2 −0.3 0.1 0.2] · [1 1 0 1]^T = [−0.7, 0.1]^T
σ([−0.7, 0.1]^T) = [0.33, 0.52]^T
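The hidden-layer forward pass above can be reproduced directly (a sketch using the weight values from the solution; names are mine):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([1.0, 1.0, 0.0, 1.0])            # bias node + {x1=1, x2=0, x3=1}
W1 = np.array([[-0.4, 0.2, 0.4, -0.5],        # weights into hidden node 1
               [ 0.2, -0.3, 0.1,  0.2]])      # weights into hidden node 2

a = sigmoid(W1 @ X)    # pre-activations W1 X = [-0.7, 0.1]
print(np.round(a, 2))  # [0.33 0.52]
```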
QUESTION 6:
Consider the 2-layer neural network shown below. The weights are represented as follows:
w^k_mn = weight between the nth node of the kth layer and the mth node of the (k−1)th layer. The 0th node is the bias node, fixed at 1, as depicted in the diagram.
e.g. w^1_32 = weight between the 2nd node of the hidden layer and the 3rd node of the input layer. Refer to the diagram; not all weights are shown, to maintain clarity.
The sigmoid activation function is applied to both the hidden layer and the output layer. The loss function is defined as J(·) = 0.5(y − t)^2, where t is the true label.
Find the final output at node y for the given input {x1 = 1, x2 = 0, x3 = 1}. Choose the closest answer.
a. 0.13
b. 0.33
c. 0.48
d. 0.51
Correct Answer: c
Detailed Solution:
The hidden vector is A = [1, 0.33, 0.52]^T, as calculated in Question 5.
y = σ(W2 A)
W2 A = [0.1 −0.3 −0.2] · [1, 0.33, 0.52]^T = −0.1
σ(−0.1) ≈ 0.48
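The full forward pass can be checked end to end (a sketch; names are mine). Note that the course solution rounds the hidden activations to two decimal places; without intermediate rounding y ≈ 0.47, which still makes option c (0.48) the closest answer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([1.0, 1.0, 0.0, 1.0])            # bias node + {x1=1, x2=0, x3=1}
W1 = np.array([[-0.4, 0.2, 0.4, -0.5],
               [ 0.2, -0.3, 0.1,  0.2]])
W2 = np.array([0.1, -0.3, -0.2])              # weights into the output node

A = np.concatenate(([1.0], sigmoid(W1 @ X)))  # prepend the bias node to the hidden vector
y = sigmoid(W2 @ A)
print(y)                                      # approximately 0.474; closest option is c (0.48)
```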
QUESTION 7:
Consider the 2-layer neural network shown below. The weights are represented as follows:
w^k_mn = weight between the nth node of the kth layer and the mth node of the (k−1)th layer. The 0th node is the bias node, fixed at 1, as depicted in the diagram.
e.g. w^1_32 = weight between the 2nd node of the hidden layer and the 3rd node of the input layer. Refer to the diagram; not all weights are shown, to maintain clarity.
The sigmoid activation function is applied to both the hidden layer and the output layer. The loss function is defined as J(·) = 0.5(y − t)^2, where t is the true label.
Find the value of ∂J/∂w^2_11 for the given input {x1 = 1, x2 = 0, x3 = 1} and true label t = 1. Choose the closest answer.
a. −0.09
b. −0.11
c. −0.13
d. −0.04
Correct Answer: d
Detailed Solution:
t = 1 and y = 0.48.
Let a = W2 A = w^2_01 + w^2_11 · a1 + w^2_21 · a2 = [0.1 −0.3 −0.2] · [1, 0.33, 0.52]^T = −0.1
∂J/∂w^2_11 = (∂J/∂y) · (∂y/∂a) · (∂a/∂w^2_11)
∂J/∂y = (y − t), ∂y/∂a = σ(a) × (1 − σ(a)), ∂a/∂w^2_11 = a1
∂J/∂w^2_11 = (y − t) × σ(a) × (1 − σ(a)) × a1 = (0.48 − 1) × 0.48 × (1 − 0.48) × 0.33 = −0.04
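The chain-rule computation above can be verified numerically (a sketch using the rounded hidden activations from the solution; names are mine):

```python
import numpy as np

t = 1.0                                 # true label
A = np.array([1.0, 0.33, 0.52])         # bias + hidden activations (rounded, per the solution)
W2 = np.array([0.1, -0.3, -0.2])        # output-layer weights

a = W2 @ A                              # pre-activation, approximately -0.1
y = 1.0 / (1.0 + np.exp(-a))            # sigmoid output, approximately 0.48

# chain rule: dJ/dw^2_11 = (y - t) * sigma'(a) * a1, with sigma'(a) = y * (1 - y)
grad_w11 = (y - t) * y * (1 - y) * A[1]
print(round(float(grad_w11), 2))        # -0.04
```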
QUESTION 8:
Consider the 2-layer neural network shown below. The weights are represented as follows:
w^k_mn = weight between the nth node of the kth layer and the mth node of the (k−1)th layer. The 0th node is the bias node, fixed at 1, as depicted in the diagram.
e.g. w^1_32 = weight between the 2nd node of the hidden layer and the 3rd node of the input layer. Refer to the diagram; not all weights are shown, to maintain clarity.
The sigmoid activation function is applied to both the hidden layer and the output layer. The loss function is defined as J(·) = 0.5(y − t)^2, where t is the true label.
Using learning rate η = 0.9, find the updated value of w^2_21 after one step of gradient descent for the given input {x1 = 1, x2 = 0, x3 = 1} and true label t = 1. Choose the closest answer.
a. −0.29
b. −0.1
c. −0.14
d. −0.04
Correct Answer: c
Detailed Solution:
t = 1 and y = 0.48.
Let a = W2 A = w^2_01 + w^2_11 · a1 + w^2_21 · a2 = [0.1 −0.3 −0.2] · [1, 0.33, 0.52]^T = −0.1
∂J/∂w^2_21 = (∂J/∂y) · (∂y/∂a) · (∂a/∂w^2_21)
∂J/∂y = (y − t), ∂y/∂a = σ(a) × (1 − σ(a)), ∂a/∂w^2_21 = a2
∂J/∂w^2_21 = (y − t) × σ(a) × (1 − σ(a)) × a2 = (0.48 − 1) × 0.48 × (1 − 0.48) × 0.52 = −0.07
Updated w^2_21 = w^2_21 − η · ∂J/∂w^2_21 = −0.2 − 0.9 × (−0.07) = −0.2 + 0.063 ≈ −0.14
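The gradient and the weight update can be checked together (a sketch with the rounded hidden activations; η = 0.9 is taken from the solution, and names are mine):

```python
import numpy as np

t, eta = 1.0, 0.9                       # true label and learning rate
A = np.array([1.0, 0.33, 0.52])         # bias + hidden activations (rounded, per the solution)
W2 = np.array([0.1, -0.3, -0.2])        # output-layer weights; W2[2] is w^2_21

a = W2 @ A                              # pre-activation, approximately -0.1
y = 1.0 / (1.0 + np.exp(-a))            # sigmoid output, approximately 0.48

grad_w21 = (y - t) * y * (1 - y) * A[2]   # approximately -0.07
w21_new = W2[2] - eta * grad_w21          # gradient descent step
print(round(float(w21_new), 2))           # -0.14
```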
QUESTION 9:
y = min(a, b) and a > b. What are the values of dy/da and dy/db?
a. 1, 0
b. 0, 1
c. 0, 0
d. 1, 1
Correct Answer: b
Detailed Solution:
Since a > b, y = min(a, b) = b. Therefore dy/da = 0 and dy/db = 1.
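A finite-difference check makes this concrete (a sketch; the helper `num_grad` and the sample values a = 3, b = 1 are mine): perturbing a leaves y unchanged, while perturbing b moves y one-for-one.

```python
def num_grad(f, x, eps=1e-6):
    # central finite-difference approximation of df/dx
    return (f(x + eps) - f(x - eps)) / (2 * eps)

a, b = 3.0, 1.0                             # a > b, so min(a, b) = b
dy_da = num_grad(lambda v: min(v, b), a)    # y does not depend on a here
dy_db = num_grad(lambda v: min(a, v), b)    # y tracks b locally
print(dy_da, dy_db)                         # dy/da ~ 0, dy/db ~ 1
```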
QUESTION 10:
Let’s say the vectors a⃗ = {2, 4} and b⃗ = {n, 1} form the first two principal components after applying PCA. Under such circumstances, which among the following can be a possible value of n?
a. 2
b. -2
c. 0
d. 1
Correct Answer: b
Detailed Solution:
Principal components are mutually orthogonal, so a⃗ · b⃗ = 2n + 4 = 0, which gives n = −2.
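The orthogonality condition can be checked for each option (a sketch; names are mine):

```python
import numpy as np

a = np.array([2.0, 4.0])
# principal components must be orthogonal, i.e. their dot product is zero
for n in (2, -2, 0, 1):
    b = np.array([float(n), 1.0])
    print(n, a @ b)   # only n = -2 gives a dot product of 0
```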
____________________________________________________________________________
************END*******