Deep Learning IIT Ropar Week 11 NPTEL Assignment Answers

Are you looking for the Deep Learning IIT Ropar Week 11 NPTEL Assignment Answers (Jan-Apr 2025)? You’ve come to the right place! Access the most accurate and up-to-date solutions for your Week 11 assignment in the Deep Learning course offered by IIT Ropar.



Deep Learning IIT Ropar Week 11 NPTEL Assignment Answers (Jan-Apr 2025)


1) For which of the following problems are RNNs suitable?

a) Generating a description from a given image.
b) Forecasting the weather for the next N days based on historical weather data.
c) Converting a speech waveform into text.
d) Identifying all objects in a given image.


2) Suppose that we need to develop an RNN model for sentiment classification. The input to the model is a sentence composed of five words, and the output is the sentiment (positive or negative). Assume that each word is represented as a vector of size 100×1 and the output labels are one-hot encoded. Further, the state vector sₜ is initialized with all zeros and has size 30×1. How many parameters (including biases) are there in the network?
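
As a quick sanity check, here is a minimal parameter-count sketch in Python, assuming the standard vanilla-RNN formulation sₜ = σ(U·xₜ + W·sₜ₋₁ + b) with a softmax output layer y = softmax(V·sₜ + c); the variable names are illustrative, not from the question:

    # Assumed shapes: input x_t is 100x1, state s_t is 30x1,
    # output y is 2x1 (one-hot over positive/negative).
    input_dim, hidden_dim, output_dim = 100, 30, 2

    n_U = hidden_dim * input_dim    # input-to-hidden weights: 30*100 = 3000
    n_W = hidden_dim * hidden_dim   # hidden-to-hidden weights: 30*30 = 900
    n_b = hidden_dim                # hidden bias: 30
    n_V = output_dim * hidden_dim   # hidden-to-output weights: 2*30 = 60
    n_c = output_dim                # output bias: 2

    print(n_U + n_W + n_b + n_V + n_c)  # 3992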


3) Select the correct statements about GRUs:

a) GRUs have fewer parameters compared to LSTMs.
b) GRUs use a single gate to control both input and forget mechanisms.
c) GRUs are less effective than LSTMs in handling long-term dependencies.
d) GRUs are a type of feedforward neural network.
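
For reference, here is a minimal NumPy sketch of one GRU step under one common formulation (variable names are illustrative; biases omitted for brevity). Note how a single update gate z does double duty, gating the new candidate with z and "forgetting" the old state with (1 − z), and how the cell needs three weight pairs where an LSTM needs four:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gru_step(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
        z = sigmoid(Wz @ x + Uz @ h_prev)              # update gate
        r = sigmoid(Wr @ x + Ur @ h_prev)              # reset gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))  # candidate state
        return (1 - z) * h_prev + z * h_tilde          # one gate plays both roles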


4) What is the main advantage of using GRUs over traditional RNNs?

a) They are simpler to implement.
b) They solve the vanishing gradient problem.
c) They require less computational power.
d) They can handle non-sequential data.


5) The statement that LSTM and GRU solve both the vanishing and the exploding gradient problems in RNNs is:

a) True
b) False


6) What is the vanishing gradient problem in training RNNs?

a) The weights of the network converge to zero during training.
b) The gradients used for weight updates become too large.
c) The network becomes overfit to the training data.
d) The gradients used for weight updates become too small.
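
To see the effect numerically, here is a tiny illustrative NumPy experiment (all shapes and scales made up): backpropagating through 20 time steps of sₜ = σ(W·sₜ₋₁ + …) multiplies the gradient by one Jacobian Wᵀ·diag(σ′) per step, and when each factor has norm below 1 the gradient shrinks toward 0:

    import numpy as np

    rng = np.random.default_rng(0)
    W = 0.1 * rng.standard_normal((30, 30))       # small recurrent weights
    grad = rng.standard_normal(30)                # pretend gradient at the last step
    print("before:", np.linalg.norm(grad))

    for _ in range(20):                           # one Jacobian per time step
        sigma_prime = rng.uniform(0.0, 0.25, 30)  # logistic derivative is <= 1/4
        grad = W.T @ (sigma_prime * grad)         # grad_{t-1} = W^T diag(sigma') grad_t
    print("after:", np.linalg.norm(grad))         # many orders of magnitude smaller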


7) What is the role of the forget gate in an LSTM network?

a) To determine how much of the current input should be added to the cell state.
b) To determine how much of the previous time step’s cell state should be retained.
c) To determine how much of the current cell state should be output.
d) To determine how much of the current input should be output.
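
For reference, one step of a standard LSTM cell as a minimal NumPy sketch (biases omitted, names illustrative). The line computing c shows the forget gate f scaling the previous cell state cₜ₋₁, i.e., deciding how much of it is retained:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x, h_prev, c_prev, Wf, Wi, Wo, Wc):
        z = np.concatenate([h_prev, x])   # gates read [h_prev; x]
        f = sigmoid(Wf @ z)               # forget gate: keep how much of c_prev?
        i = sigmoid(Wi @ z)               # input gate: add how much new content?
        o = sigmoid(Wo @ z)               # output gate: expose how much of the state?
        c_tilde = np.tanh(Wc @ z)         # candidate cell content
        c = f * c_prev + i * c_tilde      # forget gate retains the previous cell state
        h = o * np.tanh(c)
        return h, c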


8) How does LSTM prevent the problem of vanishing gradients?

a) Different activation functions, such as ReLU, are used instead of sigmoid in LSTM.
b) Gradients are normalized during backpropagation.
c) The learning rate is increased in LSTM.
d) Forget gates regulate the flow of gradients during backpropagation.
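
A one-line way to see this: along the cell-state path, cₜ = fₜ ⊙ cₜ₋₁ + iₜ ⊙ c̃ₜ, so ∂cₜ/∂cₜ₋₁ contains the term diag(fₜ). When the forget gate stays near 1, the gradient flows backward through the cell state almost unattenuated rather than being squashed through a saturating activation at every step; this is the sense in which the gates regulate gradient flow.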


9) We are given an RNN with ∥W∥ = 2.5. The activation function used in the RNN is logistic. What can we say about ∇ = ∥∂s₂₀/∂s₁∥?

a) Value of ∇ is very high.
b) Value of ∇ is close to 0.
c) Value of ∇ is 2.5.
d) Insufficient information to say anything.
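
For intuition, the standard BPTT bound applies here: with sₜ = σ(W·sₜ₋₁ + …) and the logistic σ, whose derivative is at most 1/4, each of the 19 Jacobians between s₁ and s₂₀ has norm at most ∥W∥ × 1/4, so

    ∇ = ∥∂s₂₀/∂s₁∥ ≤ (2.5 × 0.25)¹⁹ = 0.625¹⁹ ≈ 1.3 × 10⁻⁴,

which is driven toward 0 as the gap between the two time steps grows.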


10) Select the true statements about BPTT:

a) The gradients of Loss with respect to parameters are added across time steps.
b) The gradients of Loss with respect to parameters are subtracted across time steps.
c) The gradient may vanish or explode if the number of time steps is too large.
d) The gradient may vanish or explode if the number of time steps is too small.
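
To illustrate the first point, here is a minimal BPTT sketch (shapes made up; inputs and biases omitted) for sₜ = tanh(W·sₜ₋₁). The gradient of the loss with respect to the shared weight matrix W is accumulated, i.e., added, across time steps:

    import numpy as np

    rng = np.random.default_rng(0)
    T, d = 20, 8
    W = 0.1 * rng.standard_normal((d, d))

    # Forward pass: keep all states for the backward pass.
    s = [rng.standard_normal(d)]
    for _ in range(T):
        s.append(np.tanh(W @ s[-1]))

    grad_s = np.ones(d)                   # pretend upstream gradient dL/ds_T
    grad_W = np.zeros_like(W)
    for t in range(T, 0, -1):
        pre = grad_s * (1 - s[t] ** 2)    # backprop through tanh
        grad_W += np.outer(pre, s[t - 1]) # contributions ADDED across time steps
        grad_s = W.T @ pre                # pass gradient to the previous state

    print(np.linalg.norm(grad_W))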


More NPTEL Courses: https://progiez.com/nptel