Deep Learning IIT Ropar Week 6 NPTEL Answers

Are you looking for the Deep Learning IIT Ropar Week 6 NPTEL Assignment Answers 2025 (Jan-Apr)? You’ve come to the right place! Access the most accurate and up-to-date solutions for your Week 6 assignment in the Deep Learning course offered by IIT Ropar.



Deep Learning IIT Ropar Week 6 NPTEL Assignment Answers (Jan-Apr 2025)

Course Link: Click Here


  1. What is/are the primary advantages of Autoencoders over PCA?
    a. Autoencoders are less prone to overfitting than PCA.
    b. Autoencoders are faster and more efficient than PCA.
    c. Autoencoders require fewer input data than PCA.
    d. Autoencoders can capture nonlinear relationships in the input data.



  2. Which of the following is a potential advantage of using an overcomplete autoencoder?
    a. Reduction of the risk of overfitting
    b. Faster training time
    c. Ability to learn more complex and nonlinear representations
    d. To compress the input data



  3. We are given an autoencoder A. The average activation value of neurons in this network is 0.015. The given autoencoder is:
    a. Contractive autoencoder
    b. Sparse autoencoder
    c. Overcomplete neural network
    d. Denoising autoencoder



  4. Suppose we build a neural network for a 5-class classification task. Suppose for a single training example, the true label is [0 1 0 0 1] while the predictions by the neural network are [0.4 0.25 0.2 0.1 0.6]. What would be the value of cross-entropy loss for this example? (Answer up to two decimal places; use base 2 for log-related calculations)

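Assuming the standard cross-entropy formulation L = −Σᵢ yᵢ log₂ pᵢ (only the positions where the true label is 1 contribute), the value can be verified with a short script:

```python
import math

y_true = [0, 1, 0, 0, 1]             # true labels
y_pred = [0.4, 0.25, 0.2, 0.1, 0.6]  # predicted probabilities

# Cross-entropy with base-2 logs: only classes with y_i = 1 contribute.
loss = -sum(y * math.log2(p) for y, p in zip(y_true, y_pred))
print(round(loss, 2))  # 2.74
```

Here the loss is −(log₂ 0.25 + log₂ 0.6) = 2 + 0.74 ≈ 2.74.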


  5. If an under-complete autoencoder has an input layer with a dimension of 5, what could be the possible dimension of the hidden layer?
    a. 5
    b. 4
    c. 2
    d. 0
    e. 6



  6. Which of the following networks represents an autoencoder?
    a.–d. The options are network diagrams that are not reproduced here; see the original assignment for the figures.



  7. What is the primary reason for adding corruption to the input data in a denoising autoencoder?
    a. To increase the complexity of the model.
    b. To improve the model’s ability to generalize to unseen data.
    c. To reduce the size of the training dataset.
    d. To increase the training time.



  8. Suppose for one data point we have features x₁, x₂, x₃, x₄, x₅ as −4, 6, 2.8, 0, 17.3. Then, which of the following functions should we use on the output layer (decoder)?
    a. Linear
    b. Logistic
    c. Relu
    d. Tanh

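Since the features include a negative value (−4) and a value greater than 1 (17.3), the decoder's output activation must be able to produce any real number. A quick range check (an illustrative sketch, not part of the assignment) makes this concrete:

```python
features = [-4, 6, 2.8, 0, 17.3]

# Output range of each candidate activation:
#   logistic -> (0, 1), tanh -> (-1, 1), ReLU -> [0, inf), linear -> (-inf, inf)
can_reproduce = {
    "logistic": all(0 < x < 1 for x in features),
    "tanh": all(-1 < x < 1 for x in features),
    "relu": all(x >= 0 for x in features),
    "linear": True,  # the identity covers all real values
}
print(can_reproduce)  # only 'linear' is True
```

Only a linear output layer can reconstruct features that are negative, zero, and larger than 1 at the same time.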


  9. Which of the following statements about overfitting in overcomplete autoencoders is true?
    a. Reconstruction error is very high while training
    b. Reconstruction error is very low while training
    c. Network fails to learn good representations of input
    d. Network learns good representations of input



  10. What is the purpose of a decoder in an autoencoder?
    a. To reconstruct the input data
    b. To generate new data
    c. To compress the input data
    d. To extract features from the input data



Deep Learning IIT Ropar Week 6 NPTEL Assignment Answers (July-Dec 2024)

Course Link: Click Here


1. We are given an autoencoder A. The average activation value of neurons in this network is 0.01. The given autoencoder is:

A) Contractive autoencoder
B) Overcomplete neural network
C) Denoising autoencoder
D) Sparse autoencoder

Answer: D) Sparse autoencoder
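A very small average activation (0.01 here) is the signature of a sparse autoencoder: it keeps the mean hidden activation ρ̂ near a small sparsity target ρ by adding a KL-divergence penalty to the loss. A minimal sketch of that penalty term (function and variable names are illustrative, not from any specific library):

```python
import math

def kl_sparsity_penalty(rho, rho_hat):
    """KL divergence between target sparsity rho and observed mean activation rho_hat."""
    return (rho * math.log(rho / rho_hat)
            + (1 - rho) * math.log((1 - rho) / (1 - rho_hat)))

rho = 0.01  # desired average activation (sparsity target)
# The penalty is near zero when the observed average is close to the target...
low = kl_sparsity_penalty(rho, 0.015)
# ...and grows sharply as the hidden units become less sparse.
high = kl_sparsity_penalty(rho, 0.5)
print(low < high)  # True
```

Summing this penalty over all hidden units and adding it to the reconstruction loss pushes the network toward mostly inactive neurons.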


2. If an under-complete autoencoder has an input layer with a dimension of 7, what could be the possible dimension of the hidden layer?

A) 6
B) 8
C) 0
D) 7
E) 2

Answer: A) 6 and E) 2 (a hidden layer cannot have dimension 0, and an under-complete hidden layer must be smaller than the input dimension of 7)


3. What is the primary reason for adding corruption to the input data in a denoising autoencoder?


A) To increase the complexity of the model.
B) To improve the model’s ability to generalize to unseen data.
C) To reduce the size of the training dataset.
D) To increase the training time.

Answer: Updating soon.
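For reference, the defining step of a denoising autoencoder is to corrupt the input (for example, by randomly masking features to zero) and train the network to reconstruct the clean version, which forces it to learn robust features rather than the identity map. A minimal sketch of the corruption step (illustrative, not the assignment's code):

```python
import random

def corrupt(x, drop_prob=0.3, seed=0):
    """Mask each feature to 0 with probability drop_prob (masking noise)."""
    rng = random.Random(seed)
    return [0.0 if rng.random() < drop_prob else v for v in x]

x_clean = [0.5, 1.2, -0.3, 2.0, 0.8]
x_noisy = corrupt(x_clean)
# The autoencoder is then trained to map x_noisy back to x_clean.
print(x_noisy)
```

Each output element is either the original feature or zero; the reconstruction target is always the uncorrupted input.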


4. Suppose for one data point we have features x1, x2, x3, x4, x5 as −3, 7, 2.1, 0, 12.5. Then, which of the following functions should we use on the output layer (decoder)?

A) Logistic
B) Linear
C) ReLU
D) Tanh

Answer: B) Linear


These are Deep Learning IIT Ropar Week 6 Nptel Assignment Answers


5. What is/are the primary advantages of Autoencoders over PCA?

A) Autoencoders are less prone to overfitting than PCA.
B) Autoencoders are faster and more efficient than PCA.
C) Autoencoders can capture nonlinear relationships in the input data.
D) Autoencoders require fewer input data than PCA.

Answer: C) Autoencoders can capture nonlinear relationships in the input data.


6. What type of autoencoder is it when the hidden layer’s dimensionality is less than that of the input layer?

A) Under-complete autoencoder
B) Complete autoencoder
C) Overcomplete autoencoder
D) Sparse autoencoder

Answer: A) Under-complete autoencoder (a hidden layer smaller than the input layer is, by definition, under-complete)


7. Which of the following statements about overfitting in overcomplete autoencoders is true?

A) Reconstruction error is very low while training
B) Reconstruction error is very high while training
C) Network fails to learn good representations of input
D) Network learns good representations of input

Answer: Updating soon.


8. Which of the following statements about regularization in autoencoders is always true?

A) Regularisation reduces the search space of weights for the network.
B) Regularisation helps to reduce the overfitting in overcomplete autoencoders.
C) Regularisation shrinks the size of weight vectors learned.
D) All of these.


Answer: Updating soon.




9. We are using the following autoencoder (figure in the original assignment) with a linear encoder and a linear decoder. The eigenvectors of the covariance matrix of our data X are V1, V2, V3, V4, V5. Which representations is our hidden layer H most likely to learn? (Eigenvectors are listed in decreasing order of their associated eigenvalues.)

A) V1, V2
B) V4, V5
C) V1, V3
D) V1, V2, V3, V4, V5

Answer: A) V1, V2
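A linear autoencoder trained with squared-error loss learns the same subspace as PCA, so a two-unit hidden layer converges to the span of the top two eigenvectors V1, V2. A small NumPy sketch (the synthetic data and dimensions are illustrative assumptions) shows why projecting onto the top eigenvectors minimizes reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 5-D data with most variance along the first two axes
X = rng.normal(size=(500, 5)) * np.array([5.0, 3.0, 0.5, 0.2, 0.1])
X -= X.mean(axis=0)

# Eigenvectors of the covariance matrix, sorted by decreasing eigenvalue
cov = X.T @ X / len(X)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigh returns ascending order
order = np.argsort(eigvals)[::-1]
V = eigvecs[:, order]                       # columns: V1, V2, ..., V5

# Reconstruction through the top-2 subspace (what a 2-unit linear AE learns)
P = V[:, :2]                                # V1, V2
err_top2 = np.mean((X - X @ P @ P.T) ** 2)
# Compare against reconstructing through the bottom-2 subspace (V4, V5)
Q = V[:, 3:]
err_bottom2 = np.mean((X - X @ Q @ Q.T) ** 2)
print(err_top2 < err_bottom2)  # True: V1, V2 give the smallest error
```

The hidden layer therefore ends up representing the directions of largest variance, exactly as in PCA.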


10. What is the primary objective of sparse autoencoders that distinguishes them from a vanilla autoencoder?

A) They learn a low-dimensional representation of the input data
B) They minimize the reconstruction error between the input and the output
C) They capture only the important variations/features in the data
D) They maximize the mutual information between the input and the output

Answer: C) They capture only the important variations/features in the data



Check all Deep Learning IIT Ropar NPTEL Assignment Answers here: Click here

For answers to additional Nptel courses, please refer to this link: NPTEL Assignment Answers