Deep Learning | Week 10

Course Name: Deep Learning

Course Link: Click Here

These are NPTEL Deep Learning Week 10 Assignment 10 Answers


Q1. In the case of Group Normalization, if the number of groups = 1, Group Normalization behaves like
a. Batch Normalization
b. Layer Normalization
c. Instance Normalization
d. None of the above

Answer: b. Layer Normalization


Q2. In the case of Group Normalization, if the number of groups = the number of channels, Group Normalization behaves like
a. Batch Normalization
b. Layer Normalization
c. Instance Normalization
d. None of the above

Answer: c. Instance Normalization
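A quick way to verify both of these equivalences is to compare nn.GroupNorm against nn.LayerNorm and nn.InstanceNorm2d on the same tensor. This is a minimal sketch; the tensor shape and channel count are arbitrary assumptions for illustration.

```python
import torch
import torch.nn as nn

x = torch.randn(4, 8, 16, 16)  # (N, C, H, W), arbitrary example shape

# GroupNorm with num_groups=1 normalizes over all of (C, H, W) per sample,
# which matches LayerNorm applied over the (C, H, W) dimensions.
gn_as_ln = nn.GroupNorm(num_groups=1, num_channels=8, affine=False)
ln = nn.LayerNorm(normalized_shape=[8, 16, 16], elementwise_affine=False)
print(torch.allclose(gn_as_ln(x), ln(x), atol=1e-5))  # True

# GroupNorm with num_groups = number of channels normalizes each channel of each
# sample separately, which matches Instance Normalization.
gn_as_in = nn.GroupNorm(num_groups=8, num_channels=8, affine=False)
inorm = nn.InstanceNorm2d(num_features=8, affine=False)
print(torch.allclose(gn_as_in(x), inorm(x), atol=1e-5))  # True
```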




Q3. When will you do early stopping?
a. Minimum training loss point
b. Minimum validation loss point
c. Minimum test loss point
d. None of these

Answer: b. Minimum validation loss point
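A minimal early-stopping loop sketch illustrating why the validation loss is the stopping criterion. The train_one_epoch and evaluate functions are placeholders you would supply, and the patience value is an arbitrary assumption.

```python
import copy

def fit_with_early_stopping(model, train_one_epoch, evaluate, max_epochs=100, patience=5):
    """Stop training when validation loss has not improved for `patience` epochs."""
    best_val = float("inf")
    best_state = copy.deepcopy(model.state_dict())
    epochs_without_improvement = 0

    for epoch in range(max_epochs):
        train_one_epoch(model)       # placeholder: one pass over the training set
        val_loss = evaluate(model)   # placeholder: loss on the validation set

        if val_loss < best_val:
            best_val = val_loss
            best_state = copy.deepcopy(model.state_dict())
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # stop at (near) the minimum validation loss point

    model.load_state_dict(best_state)  # restore the best-validation checkpoint
    return model
```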


Q4. What is the use of the learnable parameters in a batch-normalization layer?
a. Calculate mean and variances
b. Perform normalization
c. Renormalize the activations
d. No learnable parameter is present

Answer: c. Renormalize the activations
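The learnable parameters are the per-channel scale (gamma) and shift (beta) that rescale the normalized activations as y = gamma * x_hat + beta. A small PyTorch sketch (the channel count is an arbitrary assumption):

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(num_features=16)  # affine=True by default

# gamma (weight) and beta (bias) are the only learnable parameters;
# they renormalize the zero-mean / unit-variance activations.
for name, p in bn.named_parameters():
    print(name, p.shape)   # weight: torch.Size([16]), bias: torch.Size([16])

# running_mean / running_var are buffers used for statistics, not learnable parameters.
for name, b in bn.named_buffers():
    print(name, b.shape)
```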




Q5. Which one of the following is not a procedure to prevent overfitting?
a. Reduce feature size
b. Use dropout
c. Use Early stopping
d. Increase training iterations

Answer: d. Increase training iterations




Q6. An autoencoder with 5 hidden layers has 10,004 parameters. If we use dropout in each layer with a 50% drop rate, what will be the number of parameters of that autoencoder?
a. 2,501
b. 5,002
c. 10,004
d. 20,008

Answer: c. 10,004
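Dropout has no weights of its own; it only zeroes activations at training time, so the parameter count is unchanged. A sketch using an assumed toy autoencoder (the layer sizes are arbitrary, not the 10,004-parameter model from the question):

```python
import torch.nn as nn

def count_params(model):
    return sum(p.numel() for p in model.parameters())

# Toy autoencoder without and with dropout.
plain = nn.Sequential(nn.Linear(32, 16), nn.Linear(16, 8),
                      nn.Linear(8, 16), nn.Linear(16, 32))
with_dropout = nn.Sequential(nn.Linear(32, 16), nn.Dropout(0.5), nn.Linear(16, 8),
                             nn.Dropout(0.5), nn.Linear(8, 16), nn.Dropout(0.5),
                             nn.Linear(16, 32))

# Dropout layers contribute zero parameters, so both counts are identical.
print(count_params(plain) == count_params(with_dropout))  # True
```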




Q7. Suppose you have used a batch-normalization layer after a convolution block and then trained the model on a standard dataset. Will the extracted feature distribution after the batch-normalization layer have zero mean and unit variance when we feed in any input image?
a. Yes. Because batch-normalization normalizes the features into zero mean and unit variance
b. No. It is not possible to normalize the features to zero mean and unit variance
c. Can't say. Because batch normalization renormalizes the features using trainable parameters, after training the output may or may not have zero mean and unit variance.
d. None of the above

Answer: c. Can't say. Because batch normalization renormalizes the features using trainable parameters, after training the output may or may not have zero mean and unit variance.
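A quick numeric check of why the answer is "can't say": once gamma and beta have been trained away from 1 and 0, the batch-norm output is shifted and rescaled. The values below are arbitrary assumptions used only to make the effect visible.

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(num_features=4)

# Pretend training has moved the learnable parameters away from their initial values.
with torch.no_grad():
    bn.weight.fill_(2.0)   # gamma
    bn.bias.fill_(3.0)     # beta

x = torch.randn(8, 4, 16, 16)
y = bn(x)  # normalize with batch statistics, then apply gamma and beta

# The output is no longer zero mean / unit variance: mean is about 3, std about 2.
print(y.mean().item(), y.std().item())
```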




Q8. Which one of the following regularization methods induces sparsity among the trained weights?
a. L1 regularizer
b. L2 regularizer
c. Both L1 & L2
d. None of the above

Answer: a. L1 regularizer
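The L1 regularizer adds the sum of absolute weight values to the loss, which pushes many weights exactly to zero. A minimal sketch of adding an L1 penalty to a training loss; the model, data, and lambda value are arbitrary assumptions.

```python
import torch
import torch.nn as nn

model = nn.Linear(20, 1)
criterion = nn.MSELoss()
l1_lambda = 1e-3  # assumed regularization strength

x, y = torch.randn(64, 20), torch.randn(64, 1)

# L1 regularizer: sum of absolute weight values, added to the data loss.
l1_penalty = sum(p.abs().sum() for p in model.parameters())
loss = criterion(model(x), y) + l1_lambda * l1_penalty
loss.backward()

# The constant gradient of |w| drives small weights to exactly zero (sparsity),
# whereas the L2 gradient (proportional to w) only shrinks weights without zeroing them.
```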




Q9. Which one of the following is not an advantage of dropout?
a. Regularization
b. Prevent Overfitting
c. Improve Accuracy
d. Reduce computational cost during testing

Answer: d. Reduce computational cost during testing
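At test time dropout is simply switched off (in PyTorch's inverted-dropout scheme, activations are rescaled during training instead), so the full network is still evaluated and there is no computational saving. A small sketch:

```python
import torch
import torch.nn as nn

layer = nn.Dropout(p=0.5)
x = torch.ones(8)

layer.train()
print(layer(x))   # training: roughly half the units zeroed, survivors scaled by 1/(1-p)

layer.eval()
print(layer(x))   # testing: identity pass-through, every unit is still computed
```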




Q10. A Batch-Normalization layer takes an input $x \in \mathbb{R}^{N \times C \times W \times H}$. The batch mean is computed as $\mu_c = \frac{1}{NWH} \sum_{i=1}^{N} \sum_{j=1}^{W} \sum_{k=1}^{H} x_{icjk}$ and the batch variance as $\sigma_c^2 = \frac{1}{NWH} \sum_{i=1}^{N} \sum_{j=1}^{W} \sum_{k=1}^{H} (x_{icjk} - \mu_c)^2$. After normalization, $\hat{x} = \frac{x - \mu_c}{\sqrt{\sigma_c^2 + \epsilon}}$. What is the purpose of $\epsilon$ in this expression?
a. There is no such purpose
b. It helps to converge faster
c. It is the decay rate in normalization
d. It prevents division by zero for inputs with zero variance


Answer: d. It prevents division by zero for inputs with zero variance
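A minimal numeric sketch of why $\epsilon$ matters: for a constant-valued input the per-channel batch variance is exactly zero, and the normalization would produce NaNs without it. The $\epsilon$ value below matches PyTorch's default of 1e-5; the input shape and constant are arbitrary assumptions.

```python
import torch

x = torch.full((4, 3, 8, 8), 5.0)  # constant input: per-channel variance is exactly zero
eps = 1e-5

mean = x.mean(dim=(0, 2, 3), keepdim=True)
var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)

x_hat_no_eps = (x - mean) / var.sqrt()        # 0 / 0 -> NaN
x_hat_eps = (x - mean) / (var + eps).sqrt()   # well-defined, equals 0

print(x_hat_no_eps.isnan().any().item())  # True
print(x_hat_eps.abs().max().item())       # 0.0
```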




More weeks of Deep Learning: Click Here

More Nptel Courses: https://progiez.com/nptel

