Deep Learning IIT Ropar Week 7 Nptel Answers

Are you looking for the Deep Learning IIT Ropar Week 7 NPTEL Assignment Answers? You’ve come to the right place! Access the most accurate and up-to-date solutions for your Week 7 assignment in the Deep Learning course offered by IIT Ropar.

Course Link: Click Here


Deep Learning IIT Ropar Week 7 Nptel Assignment Answers (Jan-Apr 2025)


Que.1 Which of the following statements about L2 regularization is true?
a) It adds a penalty term to the loss function that is proportional to the absolute value of the weights
b) It results in sparse solutions for w
c) It adds a penalty term to the loss function that is proportional to the square of the weights
d) It is equivalent to adding Gaussian noise to the weights
View Answer
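As a quick illustration (a minimal NumPy sketch, not part of the assignment): the L2 penalty added to the loss is proportional to the square of the weights, whereas a penalty proportional to the absolute value of the weights would be L1.

```python
import numpy as np

def l2_regularized_loss(data_loss, weights, lam):
    """L2 regularization adds lam * sum(w**2) to the loss --
    proportional to the SQUARE of the weights (an absolute-value
    penalty, lam * sum(|w|), would be L1 instead)."""
    return data_loss + lam * np.sum(weights ** 2)

w = np.array([1.0, -2.0, 3.0])
loss = l2_regularized_loss(0.5, w, lam=0.01)  # 0.5 + 0.01 * 14 = 0.64
```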


Que.2 Consider two models:
    f¹(x) = w₀ + w₁x
    f²(x) = w₀ + w₁x + w₂x² + w₃x³ + w₄x⁴ + w₅x⁵

Which of these models has higher complexity?
a) f¹(x)
b) f²(x)
c) It is not possible to decide without knowing the true distribution of data points in the dataset
View Answer


Que.3 We generate the data using the following model:
    y = 7x³ + 12x² + x + 2.

We fit the two models f¹(x) and f²(x) on this data and train them using a neural network.

a) f¹(x) has a higher bias than f²(x)
b) f²(x) has a higher bias than f¹(x)
c) f²(x) has a higher variance than f¹(x)
d) f¹(x) has a higher variance than f²(x)
View Answer


Que.4 Suppose that we apply Dropout regularization to a feedforward neural network. Suppose further that the mini-batch gradient descent algorithm is used for updating the parameters of the network. Choose the correct statement(s):

a) The dropout probability p can be different for each hidden layer
b) Batch gradient descent cannot be used to update the parameters of the network
c) Dropout with p = 0.5 acts as an ensemble regularizer
d) The weights of the neurons that were dropped during forward propagation at the t-th iteration will not get updated during the (t+1)-th iteration
View Answer
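For intuition, here is a minimal inverted-dropout sketch in NumPy (the layer shapes and probabilities are hypothetical). Note that the dropout probability can indeed be chosen independently for each hidden layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p, rng):
    """Inverted dropout: zero each unit with probability p and
    rescale the survivors by 1/(1-p) to preserve the expected activation."""
    mask = rng.random(h.shape) >= p
    return h * mask / (1.0 - p)

h1 = np.ones((4, 10))   # activations of hidden layer 1 (toy values)
h2 = np.ones((4, 20))   # activations of hidden layer 2

# p can differ per hidden layer
out1 = dropout(h1, 0.5, rng)
out2 = dropout(h2, 0.2, rng)
```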


Que.5 We have trained four different models on the same dataset using various hyperparameters. The training and validation errors for each model are provided below. Based on this information, which model is likely to perform best on the test dataset?

a) Model 1
b) Model 2
c) Model 3
d) Model 4
View Answer


Que.6 Consider a function L(w,b) = 0.4w² + 7b² + 1 and its contour plot. What is the value of L(w*, b*), where w* and b* are the values that minimize the function?
View Answer


Que.7 What is the sum of the elements of ∇L(w*,b*)?
View Answer


Que.8 What is the determinant of H_L(w*,b*), where H_L is the Hessian of the function?
View Answer


Que.9 Compute the Eigenvalues and Eigenvectors of the Hessian. According to the eigenvalues of the Hessian, which parameter is the loss more sensitive to?
a) b
b) w
View Answer
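The quantities asked for in Que.6–9 can be checked numerically for L(w, b) = 0.4w² + 7b² + 1 with a short NumPy sketch (this is a verification aid, not the official solution):

```python
import numpy as np

# L(w, b) = 0.4*w**2 + 7*b**2 + 1
# Gradient: dL/dw = 0.8*w, dL/db = 14*b  -> zero at (w*, b*) = (0, 0)
w_star, b_star = 0.0, 0.0
L_min = 0.4 * w_star**2 + 7 * b_star**2 + 1   # value of L at the minimum

grad = np.array([0.8 * w_star, 14.0 * b_star])  # gradient at (w*, b*)

# The Hessian is constant: diag(0.8, 14)
H = np.array([[0.8, 0.0],
              [0.0, 14.0]])
det_H = np.linalg.det(H)                 # 0.8 * 14
eigvals, eigvecs = np.linalg.eigh(H)     # eigenvalues in ascending order
# The larger eigenvalue (14) pairs with the b direction,
# so the loss curves most steeply along b.
```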


Que.10 Consider the problem of recognizing a letter (uppercase or lowercase) of the English language in an image. There are 26 letters in the language. A team decided to use a CNN to solve this problem. Suppose that data augmentation is being used for regularization. Which of the following transformations on the training images is (are) appropriate?

a) Rotating the images by ±10°
b) Rotating the images by ±180°
c) Translating the image by 1 pixel in all directions
d) Cropping
View Answer


These are NPTEL Deep Learning Week 7 Assignment 7 Answers


Q1. Select the correct option about Autoencoder.
Statement 1: Autoencoder can be used for image compression
Statement 2: Autoencoder can be used for unsupervised pre-training for image classification

a. Both statements are true
b. Statement 1 is true, but Statement 2 is false
c. Statement 1 is false, but Statement 2 is true
d. Both statements are false

Answer: a. Both statements are true


Q2. What is not a purpose of the stacked autoencoder?
a. Memory Efficient Training
b. Better Convergence
c. Faster inference
d. All of the above are purposes of using a stacked autoencoder

Answer: c. Faster inference




Q3. Which autoencoder is the most effective for the dimensionality reduction of the data?
a. Overcomplete Denoising Autoencoder
b. Overcomplete Stacked Autoencoder
c. Undercomplete Denoising Autoencoder
d. Undercomplete Stacked Autoencoder

Answer: d. Undercomplete Stacked Autoencoder


Q4. An overcomplete autoencoder generally learns the identity function. How can we prevent such autoencoders from learning the identity function and make them learn useful representations instead?
a. Stacked-autoencoder-based layer-wise training
b. Train the autoencoder for a large number of epochs to learn more useful representations
c. Add noise to the data and train the network to reconstruct the noise-free data from the noisy data
d. It is not possible to train an overcomplete autoencoder; it always converges to the identity function.

Answer: c. Add noise to the data and train the network to reconstruct the noise-free data from the noisy data
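This is exactly the denoising-autoencoder setup. A minimal sketch of how the training pairs are built (the batch size, input width, and noise level below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

x_clean = rng.random((8, 100))                    # toy batch of "clean" inputs
noise = rng.normal(0.0, 0.1, size=x_clean.shape)  # Gaussian corruption
x_noisy = x_clean + noise                         # corrupted input fed to the encoder

# Training pairs for a denoising autoencoder: input = x_noisy, target = x_clean.
# The reconstruction loss would be, e.g.,
#   mean((decoder(encoder(x_noisy)) - x_clean) ** 2)
# which forces the network to learn structure in the data
# rather than the identity map.
```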




Q5. Under which conditions does an autoencoder generalize better than Principal Component Analysis (PCA) when performing dimensionality reduction?
a. Undercomplete Linear Autoencoder
b. Overcomplete Linear Autoencoder
c. Undercomplete Non-linear Autoencoder
d. Overcomplete Non-Linear Autoencoder

Answer: c. Undercomplete Non-linear Autoencoder


Q6. An autoencoder consists of 100 input neurons and 50 hidden neurons. If the network weights are represented using single-precision floating-point numbers, what will be the size of the weight matrices?
a. 10,000 Bytes
b. 10,150 Bits
c. 40,000 Bytes
d. 40,600 Bytes

Answer: d. 40,600 Bytes
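The stated answer of 40,600 bytes is consistent with counting both the encoder and decoder weight matrices plus the bias vectors, at 4 bytes per single-precision value; a quick arithmetic check:

```python
# Sizes from the question: 100 input units, 50 hidden units,
# single-precision floats (4 bytes each).
n_in, n_hidden = 100, 50
bytes_per_float = 4

encoder_weights = n_in * n_hidden     # 100 * 50 = 5,000
decoder_weights = n_hidden * n_in     # 50 * 100 = 5,000
biases = n_hidden + n_in              # hidden + output biases = 150

total_params = encoder_weights + decoder_weights + biases  # 10,150
total_bytes = total_params * bytes_per_float               # 40,600
```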




Q7. Which of the following is not a purpose of the cost function in training denoising autoencoders?
a. Dimension reduction
b. Error minimization
c. Weight Regularization
d. Image denoising

Answer: a. Dimension reduction


Q8. What is the role of sparsity constraint in a sparse autoencoder?
a. Control the number of active nodes in a hidden layer
b. Control the noise level in a hidden layer
c. Control the hidden layer length
d. Not related to sparse autoencoder

Answer: a. Control the number of active nodes in a hidden layer
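One common way to impose this constraint is a KL-divergence penalty between a target sparsity ρ and each hidden unit's mean activation; a minimal sketch (the target ρ and activation shapes are illustrative assumptions):

```python
import numpy as np

def kl_sparsity_penalty(activations, rho=0.05):
    """KL divergence between a target sparsity rho and the mean
    activation rho_hat of each hidden unit; it penalizes units that
    are active more (or less) often than the target, which keeps the
    number of active nodes in the hidden layer under control."""
    rho_hat = np.clip(activations.mean(axis=0), 1e-8, 1 - 1e-8)
    kl = (rho * np.log(rho / rho_hat)
          + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return kl.sum()

acts = np.full((32, 10), 0.05)          # hidden units already at target sparsity
penalty = kl_sparsity_penalty(acts)     # penalty vanishes at the target
```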




Q9. Which of the following autoencoders is not a regularization autoencoder?
a. Sparse autoencoder
b. Denoising autoencoder
c. Both a and b
d. Stacked autoencoder

Answer: d. Stacked autoencoder


Q10. Which of the following is NOT an application of an autoencoder?
a. Dimensionality reduction
b. Feature learning
c. Image compression
d. Image segmentation

Answer: d. Image segmentation




More weeks of Deep Learning: Click Here

More Nptel Courses: https://progiez.com/nptel

