Deep Learning For Computer Vision Week 4 Nptel Answers 2024

Are you looking for the Deep Learning For Computer Vision Week 4 Nptel Answers (July-Dec 2024)? You’ve come to the right place! Below you will find the latest Deep Learning For Computer Vision Week 4 Nptel Answers.

Course link on Nptel Website: Visit Here



Deep Learning For Computer Vision Week 4 Nptel Answers (July-Dec 2024)


1. Which one of the following statements is true:
a) Weight change criterion is a method of ‘early stopping’ that checks whether or not the error is dropping over epochs to decide whether to continue training or stop.
b) L2 norm tends to create more sparse weights than L1 norm.
c) During the training phase, for each iteration, Dropout ignores a random fraction, p, of nodes, and accounts for it in the test phase by scaling down the activations by a factor of p.
d) A single McCulloch-Pitts neuron is capable of modeling AND, OR, XOR, NOR, and NAND functions

Answer: c) During the training phase, for each iteration, Dropout ignores a random fraction, p, of nodes, and accounts for it in the test phase by scaling down the activations by a factor of p.
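As context for option (c): here is a minimal NumPy sketch of the train/test asymmetry in dropout, following the convention of Srivastava et al. (2014) in which p is the probability of retaining a unit (that convention is an assumption of this sketch, not part of the question):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.8  # probability of KEEPING a unit (Srivastava et al. convention)

def dropout_train(a, p, rng):
    # Training: each activation is kept with probability p, else zeroed.
    mask = rng.random(a.shape) < p
    return a * mask

def dropout_test(a, p):
    # Test: nothing is dropped; activations are scaled by p so their
    # expected magnitude matches what the next layer saw in training.
    return a * p

a = np.arange(1.0, 9.0)
print(dropout_train(a, p, rng))  # a few units zeroed at random
print(dropout_test(a, p))        # every unit scaled by 0.8
```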


2. For a neural network f, let w_ij be the weight connecting neuron a_i in hidden layer-1 to neuron b_j in the adjacent hidden layer-2. Consider the following statements:
Statement-1: ∂L/∂w_ij = (∂L/∂b_j) · a_i, where L is the loss function of f.
Statement-2: w_ij is not the only weight connecting neurons a_i and b_j.

Choose the most appropriate answer:
a) Statement-1 and Statement-2 are false
b) Statement-1 and Statement-2 are true
c) Statement-1 is true but Statement-2 is false
d) Statement-1 is false but Statement-2 is true

Answer: c) Statement-1 is true but Statement-2 is false
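Why: if b_j denotes the pre-activation of neuron j (as the formula assumes), then b_j = Σ_i w_ij · a_i + bias, so ∂b_j/∂w_ij = a_i and the chain rule gives ∂L/∂w_ij = (∂L/∂b_j) · a_i, making Statement-1 true. Statement-2 is false because w_ij is, by definition, the unique weight on the edge from a_i to b_j.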


3. Which of the following statements are true? (Select all that apply)
a) Sigmoid activation function σ(⋅) can be represented in terms of the tanh activation function as below:
σ(x) = (tanh(x/2) + 1)/2
b) The derivative of the sigmoid activation function is symmetric around the y-axis
c) Gradient of a sigmoid neuron vanishes at saturation.
d) Sigmoid activation is centered around 0 whereas tanh activation is centered around 0.5

Answer: a) Sigmoid activation function σ(⋅) can be represented in terms of the tanh activation function as below: σ(x) = (tanh(x/2) + 1)/2
c) Gradient of a sigmoid neuron vanishes at saturation.
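A quick numeric check of the identity in (a) and the saturation claim in (c), as a minimal NumPy sketch:

```python
import numpy as np

x = np.linspace(-6, 6, 13)
sigmoid = 1 / (1 + np.exp(-x))

# Identity from option (a): sigma(x) = (tanh(x/2) + 1) / 2
print(np.allclose(sigmoid, (np.tanh(x / 2) + 1) / 2))  # True

# Option (c): sigma'(x) = sigma(x) * (1 - sigma(x)) -> 0 as |x| grows.
grad = sigmoid * (1 - sigmoid)
print(grad[6], grad[0], grad[12])  # 0.25 at x = 0, ~0.0025 at x = +/-6
```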




4. Consider two 3×3 images x1 and x2 such that x1 = [[2, 4, 1], [3, 7, 7], [4, 12, 6]] and x2 = [[10, 8, 2], [6, 12, 3], [2, 8, 6]]. Their corresponding one-hot encoded label vectors are y1 = [0, 1, 0] and y2 = [0, 0, 1]. Perform mixup data augmentation between x1 and x2 given that λ = 0.4.
a) x̃ = [[6.8, 6.4, 1.6], [4.8, 10.0, 4.6], [2.8, 9.6, 6.0]]; ỹ = [0, 0.6, 0.4]
b) x̃ = [[5.2, 5.6, 1.4], [4.2, 9.0, 5.4], [3.2, 10.4, 6.0]]; ỹ = [0, 0.6, 0.4]
c) x̃ = [[6.8, 6.4, 1.6], [4.8, 10.0, 4.6], [2.8, 9.6, 6.0]]; ỹ = [0, 0.4, 0.6]
d) x̃ = [[5.2, 5.6, 1.4], [4.2, 9.0, 5.4], [3.2, 10.4, 6.0]]; ỹ = [0, 0.4, 0.6]

Answer: c) x̃ = [[6.8, 6.4, 1.6], [4.8, 10.0, 4.6], [2.8, 9.6, 6.0]]; ỹ = [0, 0.4, 0.6]
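Mixup takes the same convex combination of the images and of the labels: x̃ = λ·x1 + (1−λ)·x2 and ỹ = λ·y1 + (1−λ)·y2. A quick NumPy check with the matrices above:

```python
import numpy as np

x1 = np.array([[2, 4, 1], [3, 7, 7], [4, 12, 6]], dtype=float)
x2 = np.array([[10, 8, 2], [6, 12, 3], [2, 8, 6]], dtype=float)
y1 = np.array([0.0, 1.0, 0.0])
y2 = np.array([0.0, 0.0, 1.0])

lam = 0.4
print(lam * x1 + (1 - lam) * x2)  # [[6.8 6.4 1.6] [4.8 10. 4.6] [2.8 9.6 6. ]]
print(lam * y1 + (1 - lam) * y2)  # [0.  0.4 0.6]
```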


5. Consider the following statements P and Q regarding AlexNet and choose the correct option:
(P) In AlexNet, Response Normalization Layers were introduced to emulate the competitive nature of real neurons, where highly active neurons suppress the activity of neighboring neurons, creating competition among different kernel outputs.
(Q) Convolutional layers contain only about 5% of the total parameters, and hence account for the least computation.

Choose the correct option:
a) Only statement P is true
b) Only statement Q is true
c) Both statements are true
d) None of the statements is true

Answer: a) Only statement P is true
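P restates the motivation the AlexNet paper gives for Local Response Normalization. Q is false: the convolutional layers do hold only a small fraction of AlexNet's parameters (most sit in the fully connected layers), but they account for the bulk of the computation, not the least.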


6. Given an input image of shape (10,10,3), you want to use one of the following two layers:
- a fully connected layer with 2 neurons (with biases), or
- a convolutional layer with three 2×2 filters (with biases), 0 padding, and a stride of 2.
If you use the fully connected layer, the input volume is “flattened” into a column vector before being fed into the layer. What is the difference in the number of trainable parameters between these two layers?
a) The fully connected layer has 566 fewer parameters
b) The convolutional layer has 518 fewer parameters
c) The convolutional layer has 570 fewer parameters
d) None of the above

Answer: d) None of the above
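Counting parameters directly shows why none of the stated differences fits (a sketch of the arithmetic):

```python
# Input volume: 10 x 10 x 3.
h, w, c = 10, 10, 3

# Fully connected layer: each of the 2 neurons sees the full
# flattened input (10*10*3 weights) plus one bias.
fc_params = (h * w * c) * 2 + 2        # 602

# Convolutional layer: three 2x2 filters spanning the input depth,
# each with 2*2*3 weights plus one bias.
conv_params = 3 * (2 * 2 * c + 1)      # 39

print(fc_params - conv_params)         # 563 -> not among options a-c
```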




7. Which of the following statements is false?
a) For a fixed padding, the bigger the size of the kernel, the smaller the output after convolution.
b) To get the output with the same size as that of the input, padding used is ⌊k/2⌋ where k×k is the kernel used.
c) The number of feature maps obtained after a convolution operation depends on the depth of the input but not on the number of filters.
d) Stride is a hyper-parameter

Answer: c) The number of feature maps obtained after a convolution operation depends on the depth of the input but not on the number of filters.
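Options (a) and (b) are easy to verify with the standard output-size formula (a minimal sketch; n = input size, k = kernel size, p = padding, s = stride):

```python
def conv_out(n, k, p, s):
    # Spatial output size of a convolution: floor((n + 2p - k) / s) + 1.
    return (n + 2 * p - k) // s + 1

# Option (b): stride 1 with padding floor(k/2) preserves size for odd k.
print(conv_out(32, 3, 3 // 2, 1), conv_out(32, 5, 5 // 2, 1))  # 32 32

# Option (a): for fixed padding, a bigger kernel shrinks the output.
print(conv_out(32, 3, 0, 1), conv_out(32, 7, 0, 1))  # 30 26
```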


8. Compute the value of the expression ELU(tanh(x)), where x = -1.3 and α = 0.3 (round to 2 decimal places).

Answer: -0.17
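A one-line check, assuming the usual ELU definition (z for z > 0, α·(e^z − 1) otherwise):

```python
import math

x, alpha = -1.3, 0.3
z = math.tanh(x)                                  # ~ -0.8617
elu = z if z > 0 else alpha * (math.exp(z) - 1)   # ELU applied to z
print(round(elu, 2))                              # -0.17
```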


9. Using RMSProp-based Gradient Descent, find the new value of parameter θ_{t+1}, given the old value θ_t = 1.2, aggregated gradient Δθ_t = 0.85, gradient accumulation r_{t−1} = 0.7, learning rate α = 0.9, decay rate ρ = 0.3, and small constant δ = 10⁻⁷ (round to 3 decimal places).

Answer: 0.516
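For reference, a minimal sketch of one common RMSProp formulation (the Goodfellow et al. form: r_t = ρ·r_{t−1} + (1−ρ)·(Δθ_t)², then θ_{t+1} = θ_t − α·Δθ_t/(√r_t + δ)); texts differ on where δ enters and how r is accumulated, so plug in the convention used in the lectures:

```python
import math

def rmsprop_step(theta, grad, r_prev, lr, rho, delta):
    # Exponentially decaying average of squared gradients...
    r = rho * r_prev + (1 - rho) * grad ** 2
    # ...then take a step scaled by the root of that accumulation.
    return theta - lr * grad / (math.sqrt(r) + delta), r

theta_next, r = rmsprop_step(theta=1.2, grad=0.85, r_prev=0.7,
                             lr=0.9, rho=0.3, delta=1e-7)
print(round(theta_next, 3))  # result under this particular convention
```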




If we convolve a feature map of size 32×32×6 with a filter of size 7×7×3, with a stride of 1 across all dimensions and a padding of 0, the width of the output volume is A, the height of the output volume is B, and the depth of the output volume is C.

10) A: 26
11) B: 26
12) C: 3
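Worked out with the usual size formula: width = height = (32 + 2·0 − 7)/1 + 1 = 26, and the output depth equals the number of filters applied, which the given answer takes to be 3 (reading the 7×7×3 filter as three 7×7 kernels).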


Assume that the feature map given in the assignment figure is generated from a convolution layer in a CNN, after which a 2×2 Max Pooling layer with stride 2 is applied to it. During backpropagation, we receive the gradient shown for the pooling layer. Assign the appropriate gradient value to each of the following locations in the feature map:


13) Location (1,1): 8


14) Location (1,4): 10


15) Location (2,2): 0





16) Location (3,1): 0


17) Location (3,3): 2


18) Location (4,3): 0
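In max-pooling backprop, each 2×2 window's upstream gradient is routed entirely to the position that held the window's maximum; every other position gets 0. The assignment's feature map and gradient appear only as figures, so the arrays below are hypothetical values chosen to be consistent with the answers above (upstream gradient [[8, 10], [14, 2]]):

```python
import numpy as np

# Hypothetical 4x4 feature map; only its argmax pattern matters here.
fmap = np.array([[9, 1, 3, 12],
                 [2, 5, 4, 6],
                 [0, 7, 11, 8],
                 [3, 5, 2, 6]], dtype=float)
# Upstream gradient for the 2x2 pooled output, implied by the answers.
grad_out = np.array([[8.0, 10.0], [14.0, 2.0]])

grad_in = np.zeros_like(fmap)
for i in range(2):
    for j in range(2):
        win = fmap[2*i:2*i+2, 2*j:2*j+2]
        # Route this window's whole gradient to its max position.
        r, c = np.unravel_index(np.argmax(win), win.shape)
        grad_in[2*i + r, 2*j + c] = grad_out[i, j]

print(grad_in)
# [[ 8.  0.  0. 10.]
#  [ 0.  0.  0.  0.]
#  [ 0. 14.  2.  0.]
#  [ 0.  0.  0.  0.]]
```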




For the same setup as the previous question, assign the appropriate gradient value to each location in the feature map, but use an Average Pooling layer instead of the Max Pooling layer:


19) Location (1,1): 2





20) Location (1,4): 2.5


21) Location (2,2): 2


22) Location (3,1): 3.5


23) Location (3,3): 0.5


24) Location (4,3): 0.5
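Average pooling, by contrast, splits each window's gradient equally among its inputs, so every cell of a 2×2 window receives one quarter of that window's gradient. With the same upstream gradient [[8, 10], [14, 2]] implied by the answers:

```python
import numpy as np

grad_out = np.array([[8.0, 10.0], [14.0, 2.0]])
# Upsample each pooled gradient to its 2x2 window and divide by 4.
grad_in = np.repeat(np.repeat(grad_out, 2, axis=0), 2, axis=1) / 4
print(grad_in)
# [[2.  2.  2.5 2.5]
#  [2.  2.  2.5 2.5]
#  [3.5 3.5 0.5 0.5]
#  [3.5 3.5 0.5 0.5]]
```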



All Weeks Answers of Deep Learning for Computer Vision: Click here

For answers to additional Nptel courses, please refer to this link: NPTEL Assignment Answers