Deep Learning IIT Ropar Week 5 Nptel Answers
Are you looking for the Deep Learning IIT Ropar Week 5 NPTEL Assignment Answers 2025 (July-Dec)? You’ve come to the right place! Access the most accurate and up-to-date solutions for your Week 5 assignment in the Deep Learning course offered by IIT Ropar.
Deep Learning IIT Ropar Week 5 Nptel Assignment Answers (July-Dec 2025)
Question 1. Let $A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}$, $x = \begin{bmatrix} 1 \\ 3 \end{bmatrix}$. You apply the transformation $Ax$. Which of the following best describes the geometric effect of this matrix transformation on the vector $x$?
a) The vector $x$ is scaled but remains in the same direction — no rotation occurs.
b) The vector $x$ is transformed into a vector orthogonal to itself.
c) The vector $x$ is mapped to the origin — it lies in the null space of $A$.
d) The vector $x$ is rotated to a new direction and scaled — the direction changes.
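To verify which option holds, multiply it out (a quick hand computation):

$$Ax = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 3 \end{bmatrix} = \begin{bmatrix} 7 \\ 5 \end{bmatrix}$$

Since $(7, 5)$ is not a scalar multiple of $(1, 3)$, the direction of $x$ changes under $A$.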
Question 2. Let $A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}$, $x = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$. A student claims that $x$ is an eigenvector of matrix $A$. Is the student’s claim correct? If so, what is the corresponding eigenvalue?
a) Yes, $x$ is an eigenvector of $A$ with eigenvalue $\lambda = 2$.
b) No, the transformation changes the direction of $x$, so it cannot be an eigenvector.
c) Yes, $x$ is an eigenvector with eigenvalue $\lambda = 3$.
d) No, $x$ maps to a zero vector under $A$, indicating it lies in the null space.
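Here the multiplication settles the claim directly:

$$Ax = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \\ 3 \end{bmatrix} = 3x,$$

so $x$ is an eigenvector with eigenvalue $\lambda = 3$.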
Question 3. A company models user transition between two online platforms, AppX and AppY, using a matrix $M = \begin{bmatrix} 0.7 & 0.3 \\ 0.2 & 0.8 \end{bmatrix}$. If on day 0, 300 users are on AppX and 200 on AppY, which of the following best describes what will eventually happen to the distribution of users?
a) The number of users will keep oscillating without settling
b) The users will all eventually shift to AppX
c) The distribution will stabilize in the ratio of the dominant eigenvector
d) The user numbers will keep increasing exponentially
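A minimal NumPy sketch of the long-run behavior. Since the rows of $M$ sum to 1, the user-count vector is evolved here as $u_{k+1} = M^T u_k$ (the usual row convention $M_{ij} = P(i \to j)$; this convention is an assumption, as the question does not spell it out):

```python
import numpy as np

# Transition matrix from Question 3; rows sum to 1 (row-stochastic).
M = np.array([[0.7, 0.3],
              [0.2, 0.8]])
u = np.array([300.0, 200.0])  # day 0: 300 users on AppX, 200 on AppY

for _ in range(50):
    u = M.T @ u  # assumption: transpose convention, see lead-in

print(u)  # -> approximately [200. 300.]
```

The total of 500 users is preserved at every step, and the split stabilizes at 200 : 300, the ratio of the dominant eigenvector, regardless of the starting split.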
Question 4. Given matrix $A = \begin{bmatrix} 4 & 2 \\ 1 & 3 \end{bmatrix}$, find the dominant eigenvalue of $A$.
Fill in the answer: ____________
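The eigenvalues follow from the characteristic polynomial:

$$\det(A - \lambda I) = \lambda^2 - 7\lambda + 10 = (\lambda - 5)(\lambda - 2) = 0,$$

so the eigenvalues are 5 and 2, and the dominant one is $\lambda = 5$.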
Question 5. Which of the following statements are true?
a) The eigenvectors corresponding to different eigenvalues are linearly independent.
b) The eigenvectors of a square symmetric matrix are orthogonal.
c) The eigenvectors of a square symmetric matrix can thus form a convenient basis.
d) A statement which is wrong.
Question 6. Consider a sequence of vectors with $\nu_0 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and matrix $A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$. If we compute $\nu_n = A^n \nu_0$, what will $\nu_n$ converge to?
a) A random vector
b) A zero vector
c) A multiple of the dominant eigenvector of A
d) The vector with the smallest eigenvalue
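In this particular case $\nu_0$ is itself the dominant eigenvector of $A$:

$$A\nu_0 = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \\ 3 \end{bmatrix} = 3\nu_0, \qquad \nu_n = A^n \nu_0 = 3^n \nu_0,$$

so every iterate remains a (growing) multiple of the dominant eigenvector.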
Question 7. A data scientist is simulating a simplified web graph with 3 webpages (A, B, C). The transition probability matrix is: $M = \begin{bmatrix} 0.5 & 0.3 & 0.2 \\ 0.2 & 0.5 & 0.3 \\ 0.3 & 0.3 & 0.4 \end{bmatrix}$
The scientist initializes a visitor distribution vector $\nu_0 = \begin{bmatrix} 0.3 \\ 0.4 \\ 0.3 \end{bmatrix}$. She repeatedly applies $\nu_{k+1} = M\nu_k$.
What kind of matrix is $M$, and why is this classification important?
a) $M$ is an orthogonal matrix, so the vectors $\nu_k$ remain unchanged in length.
b) $M$ is a symmetric matrix, which ensures all eigenvalues are real.
c) $M$ is a stochastic matrix, so it preserves probability distributions and has a dominant eigenvalue equal to 1.
d) $M$ is a diagonal matrix, so it converges faster under repeated multiplication.
Question 8. Suppose after multiple iterations the vector $\nu_k$ converges to: $\nu^* = \begin{bmatrix} 0.34 \\ 0.39 \\ 0.27 \end{bmatrix}$
Which of the following best interprets this result in web ranking?
a) Page B has the highest rank, followed by A and C.
b) Page A is most likely to be visited, so it should be assigned lowest priority.
c) All pages are equally ranked because the initial vector had equal probability.
d) This indicates convergence failed, as stochastic matrices must lead to uniform distributions.
Question 9. Suppose instead of starting with $\nu_0 = \begin{bmatrix} 0.3 \\ 0.4 \\ 0.3 \end{bmatrix}$, the scientist starts with $\nu_0 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$. Will the final ranking vector after many iterations be different?
a) No, because the convergence behavior is governed only by the dominant eigenvalue and eigenvector.
b) Yes, because the initial vector strongly favors page A.
c) No, because the matrix is symmetric, so initial vector doesn’t matter.
d) Yes, because the matrix has complex eigenvalues and depends on initial phase.
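A minimal NumPy sketch for Questions 7–9, showing that the limit does not depend on the starting distribution. Since the rows of $M$ sum to 1, the sketch evolves distributions with $M^T$ so that probability mass is conserved (a common convention; treating the quiz's $\nu_{k+1} = M\nu_k$ this way is an assumption made here):

```python
import numpy as np

# Web-graph transition matrix from Question 7 (rows sum to 1).
M = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])

def iterate(v, steps=200):
    """Repeatedly push a distribution through the chain."""
    for _ in range(steps):
        v = M.T @ v  # assumption: transpose convention, see lead-in
    return v

print(iterate(np.array([0.3, 0.4, 0.3])))  # original start
print(iterate(np.array([1.0, 0.0, 0.0])))  # start that favors page A
# Both starts print the same stationary vector: the limit is fixed by the
# dominant eigenpair (eigenvalue 1), not by the initial distribution.
```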
Question 10. A weather model predicts transitions between Sunny and Rainy using: $A = \begin{bmatrix} 0.6 & 0.4 \\ 0.2 & 0.8 \end{bmatrix}, \quad x_0 = \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix}$
The meteorologist applies $x_{k+1} = Ax_k$. What best describes the long-term behavior of $x_k$?
a) The sequence will vanish toward zero.
b) The sequence will oscillate between states.
c) The sequence will converge to a steady-state probability vector.
d) The sequence will explode due to growing norm.
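For this particular $x_0$ the steady state is reached immediately:

$$Ax_0 = \begin{bmatrix} 0.6 & 0.4 \\ 0.2 & 0.8 \end{bmatrix}\begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix} = \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix} = x_0.$$

The eigenvalues of $A$ are $1$ and $0.4$, so from any starting vector the component along the second eigenvector decays like $0.4^k$ and $x_k$ converges to the steady-state vector.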
Question 11. A student studies the behavior of transformation: $A = \begin{bmatrix} 0.5 & 0 \\ 0 & 0.3 \end{bmatrix}, \quad x_0 = \begin{bmatrix} 2 \\ 3 \end{bmatrix}$
She computes $x_1 = Ax_0, x_2 = Ax_1, \dots$. What is the nature of this sequence as $k \to \infty$?
a) The sequence will converge to a zero vector because both eigenvalues have magnitude < 1.
b) The sequence will oscillate due to eigenvalue signs.
c) The sequence will explode since the matrix has entries > 1.
d) The vector will converge to a steady state with unit magnitude.
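Because $A$ is diagonal, the iterates have a closed form:

$$x_k = A^k x_0 = \begin{bmatrix} 2 \cdot (0.5)^k \\ 3 \cdot (0.3)^k \end{bmatrix} \to \begin{bmatrix} 0 \\ 0 \end{bmatrix} \text{ as } k \to \infty,$$

since both eigenvalues ($0.5$ and $0.3$) have magnitude less than 1.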
Question 12. Given three vectors $\nu_1 = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$, $\nu_2 = \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix}$, $\nu_3 = \begin{bmatrix} 7 \\ 8 \\ 9 \end{bmatrix}$. To determine if $\{\nu_1, \nu_2, \nu_3\}$ can serve as a basis for $\mathbb{R}^3$, which step is most appropriate?
a) Check if the determinant of $A = [\nu_1 \; \nu_2 \; \nu_3]$ is non-zero.
b) Check if $\nu_1 + \nu_2 + \nu_3 = 0$.
c) Verify if any vector can be written as a linear combination of the others.
d) Both A and C.
Question 13. Given: $A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}$
What does the determinant of $A$ tell us about the vectors?
a) The vectors form a basis for $\mathbb{R}^3$.
b) The vectors are linearly dependent and hence do not form a basis.
c) The vectors span $\mathbb{R}^3$ but are not independent.
d) Cannot conclude anything from the determinant.
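Expanding along the first row settles it:

$$\det A = 1(5 \cdot 9 - 6 \cdot 8) - 2(4 \cdot 9 - 6 \cdot 7) + 3(4 \cdot 8 - 5 \cdot 7) = -3 + 12 - 9 = 0,$$

so the columns are linearly dependent and cannot form a basis.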
Question 14. Which of the following statements is always true for a square matrix $A \in \mathbb{R}^{n \times n}$ with distinct eigenvalues?
a) The eigenvectors of $A$ form a linearly independent set.
b) All eigenvectors of $A$ are orthogonal.
c) The eigenvectors of $A$ are linearly dependent.
d) $A$ must be a symmetric matrix.
Question 15. Why are the eigenvectors of a square symmetric matrix considered special?
a) They are always zero vectors.
b) They are complex even when the matrix has real entries.
c) They are orthogonal to each other and can form a basis.
d) They cannot be used to diagonalize the matrix.
Question 16. Let $A$ be a real symmetric matrix. Which of the following statements is true?
a) The vector that maximizes $x^T A x$ under the constraint $\|x\| = 1$ is the eigenvector of $A$ corresponding to its smallest eigenvalue.
b) The vector that minimizes $x^T A x$ under the constraint $\|x\| = 1$ is the eigenvector of $A$ corresponding to its largest eigenvalue.
c) The maximum and minimum of $x^T A x$ under $\|x\| = 1$ are both achieved by arbitrary unit vectors.
d) The vector that maximizes $x^T A x$ under the constraint $\|x\| = 1$ is the eigenvector corresponding to the largest eigenvalue of $A$.
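This is the Rayleigh quotient result for real symmetric matrices:

$$\max_{\|x\| = 1} x^T A x = \lambda_{\max}, \qquad \min_{\|x\| = 1} x^T A x = \lambda_{\min},$$

with the extrema attained at the corresponding unit eigenvectors.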
Question 17. Which of the following statements are true regarding eigenvectors and symmetric matrices?
a) Eigenvectors corresponding to different eigenvalues are linearly independent.
b) Eigenvectors of any matrix are always orthogonal.
c) Eigenvectors of a square symmetric matrix are orthogonal.
d) Eigenvectors of a square symmetric matrix can form a basis.
Question 18. What does the correlation coefficient $\rho_{yz}$ tell us about the relationship between modules visited ($y$) and clicks ($z$)?
a) They are unrelated
b) They are moderately correlated
c) They are strongly positively correlated
d) They are inversely related
Question 19. How should the analytics team handle column $z$ when building a predictive model?
a) Drop one of y or z to reduce redundancy
b) Keep both y and z to increase accuracy
c) Use only z because it is slightly higher in some rows
d) Drop both y and z since they are similar
Question 20. You are working on a dataset with 100 features. After applying PCA, you reduce it to 2 dimensions. These new dimensions:
a) Are linear combinations of original features and orthogonal
b) Represent the mean of the original features
c) Are highly correlated with each other
d) Have lower variance than all other dimensions
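A minimal scikit-learn sketch of the idea (the data here is a random stand-in for the 100-feature dataset; names and shapes are illustrative only):

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy stand-in for the 100-feature dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100))

pca = PCA(n_components=2)
Z = pca.fit_transform(X)   # shape (500, 2): the two new dimensions

# Each new dimension is a linear combination of the 100 original features,
# and the two component directions are orthogonal:
print(pca.components_.shape)                           # (2, 100)
print(np.dot(pca.components_[0], pca.components_[1]))  # ~0.0
```

The rows of `pca.components_` are the new axes: each mixes all 100 original features, and their dot product is numerically zero.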
Question 21. What is the most principled justification for discarding the 90 components in PCA?
a) The discarded components correspond to the directions where the data varies the most, so they are likely to be noisy.
b) The retained components are likely to contain only noise, so discarding the rest ensures the signal is preserved.
c) The discarded components represent directions with very low variance, which often contain noise rather than meaningful structure.
d) All PCA components contribute equally to reconstruction, so discarding any of them does not affect signal quality.
Question 22. Which mathematical property of the eigenvectors ensures that the new PCA basis does not mix signal and noise directions?
a) Eigenvectors of a matrix are always parallel to each other, preserving component alignment.
b) Eigenvectors of a symmetric matrix are orthogonal, allowing projection without interference.
c) Eigenvectors are complex-valued, making them useful for separating signal from noise.
d) Eigenvectors are unit vectors, ensuring normalization of signal power.
Question 23. What is the most likely visual outcome of reconstructing images using only the 3946 low-variance PCA components?
a) The images will appear distorted or noisy, lacking meaningful structure.
b) The images will retain essential facial features with only minor loss in brightness.
c) The images will be highly detailed, as fine-grained features are captured in low-variance components.
d) The images will look the same, since orthogonality ensures perfect reconstruction from any subset of components.
Question 24. Why is it mathematically sound to use only the top-k eigenvectors for dimensionality reduction in PCA?
a) Because the top-k eigenvectors span the entire null space of the original matrix.
b) Because eigenvectors with smaller eigenvalues are linearly dependent and redundant.
c) Because PCA only works with orthogonal matrices of full rank.
d) Because these directions correspond to maximum variance, preserving most information with minimal components.
Question 25. Which of the following best explains why PCA was effective in the fraud detection case?
a) PCA minimized the variance within each individual feature, making the data easier to compress.
b) PCA transformed the features into a new space where components are uncorrelated and high-variance directions are preserved, reducing dimensionality with minimal loss of information.
c) PCA converted all non-linear features into linear ones, which improved model interpretability.
d) PCA forced all features to have zero mean and unit variance, which improved classification accuracy.
Question 26. What mathematical guarantee offered by PCA ensures that the model is not negatively affected by redundant or overlapping features?
a) PCA maximizes correlation between transformed features to preserve group structure.
b) PCA aligns features with the original feature axes, making reconstruction error zero.
c) PCA ensures that the covariance between the new dimensions is minimized, thereby removing redundancy.
d) PCA clusters features into distinct groups based on their variance contribution.
Question 27. What is the role of eigenfaces in compressing the face image dataset?
a) They duplicate all faces to save space.
b) They reduce resolution of images for faster display.
c) They form a lower-dimensional basis to represent face images efficiently.
d) They are used to colorize grayscale images.
Question 28. You have 500 images, each originally 10,000 dimensions. After projecting onto the top 100 eigenfaces, how many scalar values are needed to store all 500 compressed images?
a) 10,000
b) 50,000
c) 5,000,000
d) 600
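The arithmetic behind the options: each image is represented by its 100 projection coefficients, so storing all 500 compressed images takes $500 \times 100 = 50{,}000$ scalars (the 100 eigenfaces themselves, $100 \times 10{,}000$ values, are stored once and shared across all images).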
Question 29. What is the correct form of the Singular Value Decomposition (SVD) of a real matrix $A \in \mathbb{R}^{m \times n}$?
a) $A = U \Sigma V^T$
b) $A = U D U^T$
c) $A = V \Sigma V^T$
d) $A = P \nabla P^T$
Question 30. In the SVD of a real matrix $A = U \Sigma V^T$, what is the nature of the matrix $\Sigma$?
a) A diagonal matrix with real values
b) A diagonal matrix with non-negative real numbers
c) An upper triangular matrix
d) A symmetric matrix
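A quick NumPy check of both SVD questions (the example matrix is arbitrary):

```python
import numpy as np

# Any real m x n matrix will do for the check.
A = np.array([[3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(s)                                    # singular values: non-negative reals
print(np.allclose(A, U @ np.diag(s) @ Vt))  # True: A = U Sigma V^T
```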
Deep Learning IIT Ropar Week 5 Nptel Assignment Answers (Jan-Apr 2025)
- Which of the following is the most appropriate description of the method used in PCA to achieve dimensionality reduction?
A. PCA achieves this by discarding a random subset of features in the dataset
B. PCA achieves this by selecting those features in the dataset along which the variance of the dataset is maximised
C. PCA achieves this by retaining the features in the dataset along which the variance of the dataset is minimised
D. PCA achieves this by looking for those directions in the feature space along which the variance of the dataset is maximised
- What is/are the limitations of PCA?
A. It can only identify linear relationships in the data.
B. It can be sensitive to outliers in the data.
C. It is computationally less efficient than autoencoders
D. It can only reduce the dimensionality of a dataset by a fixed amount.
- The following are possible numbers of linearly independent eigenvectors for a 7×7 matrix. Choose the incorrect option.
A. 1
B. 3
C. 9
D. 5
E. 8
- Find the singular values of the following matrix: $\begin{bmatrix} -4 & 3 \\ -6 & -8 \end{bmatrix}$
A. σ1 = 10, σ2 = 5
B. σ1 = 1, σ2 = 0
C. σ1 = 100, σ2 = 25
D. σ1 = σ2 = 0
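The singular values are the square roots of the eigenvalues of $A^T A$:

$$A^T A = \begin{bmatrix} 52 & 36 \\ 36 & 73 \end{bmatrix}, \qquad \det(A^T A - \lambda I) = \lambda^2 - 125\lambda + 2500 = (\lambda - 100)(\lambda - 25),$$

giving $\sigma_1 = \sqrt{100} = 10$ and $\sigma_2 = \sqrt{25} = 5$.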
- PCA is performed on a mean-centred dataset in $\mathbb{R}^3$. If the first principal component is $\frac{1}{\sqrt{6}}(1, -1, 2)$, which of the following could be the second principal component?
A. (1, −1, 2)
B. (0, 0, 0)
C. $\frac{1}{\sqrt{5}}(0, 1, 2)$
D. $\frac{1}{\sqrt{2}}(-1, -1, 0)$
- What is the mean of the given data points $x_1, x_2, x_3$?
A. [11]
B. [1.67]
C. [2]
D. [0.33]
- The covariance matrix $C = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^T$, where $\bar{x}$ is the mean of the data points, is given by:
A. [8.66 −7.33; −7.33 8.66]
B. [2.88 −2.44; −2.44 2.88]
C. [0.22 −0.22; −0.22 0.22]
D. [5.33 −5.33; −0.33 0.33]
- The maximum eigenvalue of the covariance matrix C is:
A. 1
B. 5.33
C. 0.44
D. 0.5
- The eigenvector corresponding to the maximum eigenvalue of the given matrix C is:
A. [1, 1]
B. [−1, 1]
C. [0.670]
D. [−1.481]
- Given that A is a 2×2 matrix, what is the determinant of A, if its eigenvalues are 6 and 7?
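Since the determinant of a matrix equals the product of its eigenvalues, $\det A = 6 \times 7 = 42$.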
Deep Learning IIT Ropar Week 5 Nptel Assignment Answers (July-Dec 2024)
1. Which of the following is a measure of the amount of variance explained by a principal component in PCA?
a) Covariance
b) Correlation
c) Mean absolute deviation
d) Eigenvalue
Answer: d) Eigenvalue
2. What is/are the limitations of PCA?
a) It is computationally less efficient than autoencoders
b) It can only reduce the dimensionality of a dataset by a fixed amount.
c) It can only identify linear relationships in the data.
d) It can be sensitive to outliers in the data.
Answer: d) It can be sensitive to outliers in the data.
3. Which of the following is a property of eigenvalues of a symmetric matrix?
a) Eigenvalues are always positive
b) Eigenvalues are always negative
c) Eigenvalues are always real
d) Eigenvalues can be complex numbers with imaginary parts non-zero
Answer: c) Eigenvalues are always real
4. The eigenvalues of A are 3, 4. Which of the following are the eigenvalues of A³?
a) 3, 4
b) 9, 16
c) 27, 64
d) √3, √4
Answer: c) 27, 64 (if $Av = \lambda v$, then $A^3 v = \lambda^3 v$, so the eigenvalues of $A^3$ are $3^3 = 27$ and $4^3 = 64$)
5. If we have a 12×12 matrix having entries from R, how many linearly independent eigenvectors corresponding to real eigenvalues are possible for this matrix?
a) 10
b) 24
c) 12
d) 6
Answer: c) 12
6. What is the mean of the given data points x₁, x₂, x₃?
a) [5.5]
b) [1.67, 1.67]
c) [2.2]
d) [1.5, 1.5]
Answer: b) [1.67, 1.67]
7. The covariance matrix $C = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^T$ is given by:
a) [0.22, −0.11; −0.11, 0.22]
b) [0.33, −0.17; −0.17, 0.33]
c) [0.22, −0.22; −0.22, 0.22]
d) [0.33, −0.33; −0.33, 0.33]
Answer: a) [0.22, −0.11; −0.11, 0.22]
8. The maximum eigenvalue of the covariance matrix C is:
a) 0.33
b) 0.67
c) 1
d) 0.5
Answer: d) 0.5
9. The eigenvector corresponding to the maximum eigenvalue of the given matrix C is:
a) [0.71, 0.71]
b) [−0.71, 0.71]
c) [−1, 1]
d) [1, 1]
Answer: a) [0.71, 0.71]
10. What is the determinant of a 2×2 matrix that has eigenvalues of 4 and 5?
Answer: 20
These are the Deep Learning IIT Ropar Week 5 Nptel Assignment Answers.