Deep Learning Week 10 NPTEL Assignment Answers
Are you looking for Deep Learning Week 10 NPTEL Assignment Answers? You’ve come to the right place! Access the most accurate answers at Progiez.
NPTEL Deep Learning Week 10 Assignment 10 Answers (Jan-Apr 2025)
Course Link: Click Here
1) What is not a reason for using batch-normalization?
a. Prevent overfitting
b. Faster convergence
c. Faster inference time
d. Prevent covariate shift
2) A neural network has 3 neurons in a hidden layer. The activations of the neurons for three batches are 2, 2, and 9, respectively. What will be the value of the mean if we use batch normalization in this layer?
a.
b.
c.
d.
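As a worked check: batch normalization computes the mean of the activations over the batch, so here

$$\mu = \frac{2 + 2 + 9}{3} = \frac{13}{3} \approx 4.33$$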
3) How can we prevent underfitting?
a. Increase the number of data samples
b. Increase the number of features
c. Decrease the number of features
d. Decrease the number of data samples
4) How do we generally calculate mean and variance during testing?
a. Batch normalization is not required during testing
b. Mean and variance based on test image
c. Estimated mean and variance statistics during training
d. None of the above
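For context on option (c): frameworks typically keep an exponential moving average of the batch statistics during training and reuse those fixed estimates at test time. A minimal sketch of that bookkeeping, assuming NumPy (the momentum value 0.1 mirrors common framework defaults):

```python
import numpy as np

MOMENTUM = 0.1  # weight given to the newest batch statistics
running_mean, running_var = 0.0, 1.0

def train_step(batch):
    """Normalize with batch statistics and update the running estimates."""
    global running_mean, running_var
    batch_mean, batch_var = batch.mean(), batch.var()
    running_mean = (1 - MOMENTUM) * running_mean + MOMENTUM * batch_mean
    running_var = (1 - MOMENTUM) * running_var + MOMENTUM * batch_var
    return (batch - batch_mean) / np.sqrt(batch_var + 1e-5)

def test_step(x):
    """At test time, reuse the estimates accumulated during training."""
    return (x - running_mean) / np.sqrt(running_var + 1e-5)
```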
5) Which one of the following is not an advantage of dropout?
a. Regularization
b. Prevent Overfitting
c. Improve Accuracy
d. Reduce computational cost during testing
6) What is the main advantage of layer normalization over batch normalization?
a. Faster convergence
b. Lesser computation
c. Useful in recurrent neural networks
d. None of these
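A quick way to see why option (c) holds: layer normalization computes its statistics per sample, so it works even with batch size 1 and with the variable-length sequences typical of recurrent networks, where batch statistics are ill-defined. A sketch assuming PyTorch:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 16)         # a single-sample batch

layer_norm = nn.LayerNorm(16)  # statistics over features, per sample
print(layer_norm(x).shape)     # torch.Size([1, 16]) -- works fine

batch_norm = nn.BatchNorm1d(16)
try:
    batch_norm(x)              # cannot estimate a variance from one sample
except ValueError as err:
    print("BatchNorm fails:", err)
```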
7) While training a neural network for an image recognition task, we plot the graphs of training error and validation error. Which point is best for early stopping?
a. A
b. B
c. C
d. D
8) Which among the following is NOT a data augmentation technique?
a. Random horizontal and vertical flip of image
b. Randomly shuffle all the pixels of an image
c. Random color jittering
d. All the above are data augmentation techniques
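Regarding option (b): flips and color jitter preserve the image’s label, while shuffling all pixels destroys the spatial structure and turns the image into noise. A sketch of the legitimate augmentations, assuming torchvision:

```python
import torchvision.transforms as T

# Label-preserving augmentations: the image still depicts the same object.
augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomVerticalFlip(p=0.5),
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    T.ToTensor(),
])
```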
9) Which of the following is true about model capacity (where model capacity means the ability of a neural network to approximate complex functions)?
a. As the number of hidden layers increases, model capacity increases
b. As the dropout ratio increases, model capacity increases
c. As the learning rate increases, model capacity increases
d. None of these
10) Batch Normalization is helpful because
a. It normalizes all the input before sending it to the next layer
b. It returns back the normalized mean and standard deviation of weights
c. It is a very efficient back-propagation technique
d. None of these
More weeks of Deep Learning: Click Here
More Nptel Courses: https://progiez.com/nptel
NPTEL Deep Learning Week 10 Assignment 10 Answers
Q1. In the case of Group Normalization, if the group number = 1, Group Normalization behaves like
a. Batch Normalization
b. Layer Normalization
c. Instance Normalization
d. None of the above
Answer: b. Layer Normalization
Q2. In the case of Group Normalization, if the group number = the number of channels, Group Normalization behaves like
a. Batch Normalization
b. Layer Normalization
c. Instance Normalization
d. None of the above
Answer: c. Instance Normalization
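Both answers can be verified directly: in PyTorch, nn.GroupNorm reduces to layer normalization with one group and to instance normalization with one group per channel (the channel count 8 below is arbitrary):

```python
import torch
import torch.nn as nn

x = torch.randn(2, 8, 4, 4)  # (N, C, H, W) with C = 8 channels

ln_like = nn.GroupNorm(num_groups=1, num_channels=8)  # == Layer Norm
in_like = nn.GroupNorm(num_groups=8, num_channels=8)  # == Instance Norm

print(ln_like(x).shape, in_like(x).shape)  # both torch.Size([2, 8, 4, 4])
```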
Q3. When will you do early stopping?
a. Minimum training loss point
b. Minimum validation loss point
c. Minimum test loss point
d. None of these
Answer: b. Minimum validation loss point
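A minimal sketch of the idea, assuming a generic training loop (the patience of 3 epochs and the stand-in loss functions are illustrative, not part of the question):

```python
import random

def train_one_epoch():         # stand-in for a real training epoch
    return random.random()

def validate():                # stand-in for a real validation pass
    return random.random()

best_val, bad_epochs, patience = float("inf"), 0, 3

for epoch in range(100):
    train_one_epoch()
    val_loss = validate()
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0  # new validation minimum
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break                           # stop: validation stopped improving
```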
Q4. What is the use of the learnable parameters in a batch-normalization layer?
a. Calculate mean and variances
b. Perform normalization
c. Renormalize the activations
d. No learnable parameter is present
Answer: c. Renormalize the activations
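The learnable parameters in question are the per-channel scale γ and shift β that re-scale and re-shift (renormalize) the standardized activations. In PyTorch they show up as the layer’s weight and bias:

```python
import torch.nn as nn

bn = nn.BatchNorm2d(num_features=64)
print(bn.weight.shape, bn.bias.shape)  # gamma and beta: torch.Size([64]) each

# The running mean/variance are buffers, not learnable parameters:
print(sum(p.numel() for p in bn.parameters()))  # 128 (= 64 gammas + 64 betas)
```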
Q5. Which one of the following is not a procedure to prevent overfitting?
a. Reduce feature size
b. Use dropout
c. Use Early stopping
d. Increase training iterations
Answer: d. Increase training iterations
Q6. An autoencoder with 5 hidden layers has 10,004 parameters. If we use dropout in each layer with a 50% drop rate, what will be the number of parameters of that autoencoder?
a. 2,501
b. 5,002
c. 10,004
d. 20,008
Answer: c. 10,004
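Dropout only zeroes activations at training time; it adds no weights of its own, which is why the count stays at 10,004. A sketch illustrating this, assuming PyTorch (the layer sizes below are arbitrary, not the autoencoder from the question):

```python
import torch.nn as nn

def n_params(model):
    return sum(p.numel() for p in model.parameters())

plain = nn.Sequential(nn.Linear(10, 20), nn.Linear(20, 10))
dropped = nn.Sequential(nn.Linear(10, 20), nn.Dropout(0.5), nn.Linear(20, 10))

print(n_params(plain), n_params(dropped))  # identical: 430 430
```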
Q7. Suppose you have used a batch-normalization layer after a convolution block, and you then train the model on a standard dataset. Will the extracted feature distribution after the batch-normalization layer have zero mean and unit variance for any input image?
a. Yes. Because batch-normalization normalizes the features into zero mean and unit variance
b. No. It is not possible to normalize the features into zero mean and unit variance
c. Can’t Say. Because batch normalization renormalizes the features using trainable parameters, the output after training may or may not have zero mean and unit variance.
d. None of the above
Answer: c. Can’t Say. Because batch normalization renormalizes the features using trainable parameters, the output after training may or may not have zero mean and unit variance.
Q8. Which one of the following regularization methods induces sparsity among the trained weights?
a. L1 regularizer
b. L2 regularizer
c. Both L1 & L2
d. None of the above
Answer: a. L1 regularizer
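The intuition: the L1 gradient has constant magnitude, so it keeps pushing small weights all the way to exactly zero, while the L2 gradient shrinks in proportion to the weight and only makes weights small. A sketch of adding an L1 penalty to a loss, assuming PyTorch (the strength 1e-4 is an illustrative value):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
criterion = nn.MSELoss()
l1_lambda = 1e-4  # illustrative regularization strength

x, y = torch.randn(32, 10), torch.randn(32, 1)
l1_penalty = sum(p.abs().sum() for p in model.parameters())
loss = criterion(model(x), y) + l1_lambda * l1_penalty
loss.backward()   # the penalty's gradient drives weights toward exact zeros
```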
Q9. Which one of the following is not an advantage of dropout?
a. Regularization
b. Prevent Overfitting
c. Improve Accuracy
d. Reduce computational cost during testing
Answer: d. Reduce computational cost during testing
Q10. A batch-normalization layer takes an input $x \in \mathbb{R}^{N \times C \times W \times H}$. The batch mean is computed as $\mu_c = \frac{1}{NWH} \sum_{i=1}^{N} \sum_{j=1}^{W} \sum_{k=1}^{H} x_{icjk}$, and the batch variance as $\sigma_c^2 = \frac{1}{NWH} \sum_{i=1}^{N} \sum_{j=1}^{W} \sum_{k=1}^{H} (x_{icjk} - \mu_c)^2$. After normalization, $\hat{x} = \frac{x - \mu_c}{\sqrt{\sigma_c^2 + \epsilon}}$. What is the purpose of $\epsilon$ in this expression?
a. There is no such purpose
b. It helps to converge faster
c. It is the decay rate in normalization
d. It prevents division by zero for inputs with zero variance
Answer: d. It prevents division by zero for inputs with zero variance
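A minimal NumPy sketch of why ε matters: if a channel is constant within a batch, its variance is exactly zero and the normalization would otherwise divide by zero (ε = 1e-5 mirrors common framework defaults):

```python
import numpy as np

x = np.array([3.0, 3.0, 3.0])  # constant input: variance is exactly zero
mu, var = x.mean(), x.var()
eps = 1e-5

# Without eps this would be 0 / 0 -> nan; with it, the result is simply 0.
x_hat = (x - mu) / np.sqrt(var + eps)
print(x_hat)                   # [0. 0. 0.]
```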
These are NPTEL Deep Learning Week 10 Assignment 10 Answers