Deep Learning IIT Ropar Week 6 NPTEL Answers
Are you looking for the Deep Learning IIT Ropar Week 6 NPTEL Assignment Answers 2024 (July-Dec)? You’ve come to the right place! Access the most accurate and up-to-date solutions for your Week 6 assignment in the Deep Learning course offered by IIT Ropar.
Course Link: Click Here
Deep Learning IIT Ropar Week 6 NPTEL Assignment Answers (July-Dec 2024)
1. We are given an autoencoder A. The average activation value of neurons in this network is 0.01. The given autoencoder is:
A) Contractive autoencoder
B) Overcomplete neural network
C) Denoising autoencoder
D) Sparse autoencoder
Answer: D) Sparse autoencoder
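The giveaway here is the tiny average activation: a sparse autoencoder keeps the mean activation of its hidden neurons close to a small target value such as 0.01. A minimal numpy sketch of how that is measured and penalized (the variable names and simulated activations below are illustrative assumptions, not NPTEL-provided code):

```python
import numpy as np

# Simulate hidden-layer activations for a batch of 1000 inputs and 50
# neurons: most entries are zero, a few units fire -> a sparse code.
rng = np.random.default_rng(0)
activations = rng.random((1000, 50)) * (rng.random((1000, 50)) < 0.01)

# Average activation across the batch and all neurons (should be ~0.01 or less).
rho_hat = activations.mean()
print(f"average activation: {rho_hat:.4f}")

# A common sparsity penalty is the KL divergence between the target
# sparsity rho and each neuron's observed average activation rho_hat_j.
rho = 0.01
rho_hat_j = activations.mean(axis=0).clip(1e-8, 1 - 1e-8)
kl = rho * np.log(rho / rho_hat_j) + (1 - rho) * np.log((1 - rho) / (1 - rho_hat_j))
print(f"sparsity penalty (sum of KL terms): {kl.sum():.4f}")
```

Adding this penalty to the reconstruction loss is what pushes the average activation down toward the target.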
2. If an under-complete autoencoder has an input layer with a dimension of 7, what could be the possible dimension of the hidden layer?
A) 6
B) 8
C) 0
D) 7
E) 2
Answer: A) 6, E) 2
(An under-complete autoencoder needs a hidden dimension strictly smaller than the input dimension of 7, and 0 is not a valid layer size.)
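The condition can be stated in two lines of code. This is a hypothetical helper for illustration, assuming a valid layer dimension must be a positive integer:

```python
def is_valid_undercomplete(input_dim: int, hidden_dim: int) -> bool:
    # Under-complete: bottleneck strictly smaller than the input,
    # and a layer must have at least one neuron (ruling out 0).
    return 0 < hidden_dim < input_dim

input_dim = 7
results = {h: is_valid_undercomplete(input_dim, h) for h in (6, 8, 0, 7, 2)}
print(results)  # only 6 and 2 qualify
```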
3. What is the primary reason for adding corruption to the input data in a denoising autoencoder?
A) To increase the complexity of the model.
B) To improve the model’s ability to generalize to unseen data.
C) To reduce the size of the training dataset.
D) To increase the training time.
Answer: B) To improve the model’s ability to generalize to unseen data.
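Corrupting the input prevents the network from simply copying it; the model must learn structure that survives the noise, which is what improves generalization. A sketch of the two common corruption schemes (the noise levels and array shapes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=(4, 8))  # a small batch of clean inputs

# Additive Gaussian corruption.
x_gauss = x + 0.1 * rng.normal(size=x.shape)

# Masking (dropout-style) corruption: zero out ~30% of entries.
mask = rng.random(x.shape) > 0.3
x_masked = x * mask

# Crucially, the training loss is measured against the *clean* input:
# loss = || decoder(encoder(x_corrupted)) - x ||^2
```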
4. Suppose for one data point we have features (x1, x2, x3, x4, x5) = (−3, 7, 2.1, 0, 12.5). Which of the following functions should we use on the output layer (decoder)?
A) Logistic
B) Linear
C) ReLU
D) Tanh
Answer: B) Linear
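The reason is the range of each activation: logistic outputs lie in (0, 1) and tanh in (−1, 1), so neither can reproduce 7 or 12.5; ReLU clips the −3 to zero. Only a linear output unit covers the full real line. A quick numpy check:

```python
import numpy as np

x = np.array([-3.0, 7.0, 2.1, 0.0, 12.5])

logistic = 1 / (1 + np.exp(-x))   # bounded in (0, 1): cannot reach 7 or 12.5
tanh = np.tanh(x)                 # bounded in (-1, 1): same problem
relu = np.maximum(0.0, x)         # clips -3 to 0, losing information
linear = x                        # identity: reconstructs every value exactly

print("logistic max:", logistic.max())          # stays below 1
print("relu loses -3:", relu[0])                # prints 0.0
print("linear exact:", np.allclose(linear, x))  # prints True
```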
These are Deep Learning IIT Ropar Week 6 Nptel Assignment Answers
5. What is/are the primary advantages of Autoencoders over PCA?
A) Autoencoders are less prone to overfitting than PCA.
B) Autoencoders are faster and more efficient than PCA.
C) Autoencoders can capture nonlinear relationships in the input data.
D) Autoencoders require fewer input data than PCA.
Answer: C) Autoencoders can capture nonlinear relationships in the input data.
6. What type of autoencoder is it when the hidden layer’s dimensionality is less than that of the input layer?
A) Under-complete autoencoder
B) Complete autoencoder
C) Overcomplete autoencoder
D) Sparse autoencoder
Answer: A) Under-complete autoencoder
7. Which of the following statements about overfitting in overcomplete autoencoders is true?
A) Reconstruction error is very low while training
B) Reconstruction error is very high while training
C) Network fails to learn good representations of input
D) Network learns good representations of input
Answer: A) Reconstruction error is very low while training; C) Network fails to learn good representations of input
(An overcomplete autoencoder can overfit by learning a near-identity mapping: training reconstruction error is very low, yet the representations are not useful.)
8. Which of the following statements about regularization in autoencoders is always true?
A) Regularisation reduces the search space of weights for the network.
B) Regularisation helps to reduce the overfitting in overcomplete autoencoders.
C) Regularisation shrinks the size of weight vectors learned.
D) All of these.
Answer: Update in progress.
9. We are using the following autoencoder with a linear encoder and linear decoder. The eigenvectors associated with the covariance matrix of our data X are (V1, V2, V3, V4, V5), written in decreasing order of their associated eigenvalues. What representations is our hidden layer H most likely to learn?
A) ( V1, V2 )
B) ( V4, V5 )
C) ( V1, V3 )
D) ( V1, V2, V3, V4, V5 )
Answer: A) ( V1, V2 )
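A linear autoencoder trained with squared error recovers the same subspace as PCA, and projecting onto the eigenvectors with the largest eigenvalues minimizes reconstruction error. The numpy sketch below demonstrates this on synthetic data (the data and variances are illustrative assumptions) by comparing reconstruction error for the top-2 versus bottom-2 eigenvectors:

```python
import numpy as np

# Synthetic data with decreasing variance along 5 axes, then centered.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5)) * np.array([5.0, 3.0, 1.0, 0.5, 0.1])
X = X - X.mean(axis=0)

# Eigen-decomposition of the covariance matrix, sorted by decreasing eigenvalue,
# so the columns of V are V1 ... V5.
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
V = eigvecs[:, np.argsort(eigvals)[::-1]]

def recon_error(X, basis):
    """Squared error after projecting X onto the span of the basis columns."""
    P = basis @ basis.T
    return np.sum((X - X @ P) ** 2)

err_top = recon_error(X, V[:, :2])      # project onto V1, V2
err_bottom = recon_error(X, V[:, -2:])  # project onto V4, V5
print(err_top < err_bottom)  # -> True: V1, V2 give far better reconstruction
```

Since the hidden layer of a linear autoencoder minimizes exactly this reconstruction error, a 2-unit bottleneck ends up spanning (V1, V2).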
10. What is the primary objective of sparse autoencoders that distinguishes them from a vanilla autoencoder?
A) They learn a low-dimensional representation of the input data
B) They minimize the reconstruction error between the input and the output
C) They capture only the important variations/features in the data
D) They maximize the mutual information between the input and the output
Answer: C) They capture only the important variations/features in the data
Check all Deep Learning IIT Ropar NPTEL Assignment Answers here: Click here
For answers to additional NPTEL courses, please refer to this link: NPTEL Assignment Answers