Deep Learning IIT Ropar Week 12 NPTEL Assignment Answers


Are you looking for the Deep Learning IIT Ropar Week 12 NPTEL Assignment Answers 2025 (Jan-Apr)? You've come to the right place! Access the most accurate and up-to-date solutions for your Week 12 assignment in the Deep Learning course offered by IIT Ropar.



Deep Learning IIT Ropar Week 12 NPTEL Assignment Answers (Jan-Apr 2025)

Course Link: Click Here


1. What is the primary purpose of the attention mechanism in neural networks?

a) To reduce the size of the input data
b) To increase the complexity of the model
c) To eliminate the need for recurrent connections
d) To focus on specific parts of the input sequence

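To build intuition for question 1: attention lets a model focus on specific positions of the input sequence by weighting encoder states. Below is a minimal, illustrative NumPy sketch (not from the course; the shapes and random values are hypothetical).

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical encoder states for a 4-token input (hidden size 3).
H = np.random.randn(4, 3)
# A decoder query vector of the same size.
s = np.random.randn(3)

scores = H @ s            # one score per input position
alpha = softmax(scores)   # attention weights, sum to 1
context = alpha @ H       # weighted sum: focuses on the relevant positions
print(alpha, context)
```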


2. Which of the following are the benefits of using attention mechanisms in neural networks?

a) Improved handling of long-range dependencies
b) Enhanced interpretability of model predictions
c) Ability to handle variable-length input sequences
d) Reduction in model complexity



3. If we build the vocabulary for an encoder-decoder model from the sentence given below, what will be the size of our vocabulary?
Sentence: Attention mechanisms dynamically identify critical input components, enhancing contextual understanding and boosting performance

a) 13
b) 14
c) 15
d) 16

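A quick way to sanity-check question 3 is to count the distinct tokens in the sentence. The sketch below assumes simple whitespace tokenization with punctuation stripped; note that some vocabulary conventions also add special tokens such as <sos>/<eos>, which would change the count.

```python
sentence = ("Attention mechanisms dynamically identify critical input "
            "components, enhancing contextual understanding and boosting performance")

# Lowercase and strip punctuation before splitting into tokens.
tokens = [w.strip(",.").lower() for w in sentence.split()]
vocab = set(tokens)
print(len(tokens), len(vocab))  # 13 tokens, all of them distinct
```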


4. We are performing the task of machine translation using an encoder-decoder model. Choose the equation representing the encoder model.

a) hₜ = CNN(xᵢ)
b) sₜ = RNN(sₜ₋₁, e(ŷₜ₋₁))
c) hₜ = RNN(xᵢₜ)
d) hₜ = RNN(hₜ₋₁, xᵢₜ)

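As a reminder of the notation behind question 4: in the usual encoder-decoder setup, the encoder updates its state from the previous state and the current input token, hₜ = RNN(hₜ₋₁, xᵢₜ), while the decoder runs sₜ = RNN(sₜ₋₁, e(ŷₜ₋₁)) starting from s₀ = h_T. A toy NumPy sketch of one step of each (weight shapes and values are hypothetical):

```python
import numpy as np

d = 4                                   # hidden size (hypothetical)
W, U = np.random.randn(d, d), np.random.randn(d, d)

def rnn_step(h_prev, x):
    # One vanilla-RNN step: new state from previous state and current input.
    return np.tanh(W @ h_prev + U @ x)

# Encoder: h_t = RNN(h_{t-1}, x_it) over the input sequence.
h = np.zeros(d)
for x_t in np.random.randn(5, d):       # 5 input tokens (as vectors)
    h = rnn_step(h, x_t)

# Decoder: s_0 = h_T, then s_t = RNN(s_{t-1}, e(y_hat_{t-1})).
s = h                                   # encoder summary initializes the decoder
prev_output_embedding = np.zeros(d)     # e(y_hat_0), hypothetical start token
s = rnn_step(s, prev_output_embedding)
```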


5. Which of the following attention mechanisms is most commonly used in the Transformer model architecture?

a) Additive attention
b) Dot product attention
c) Multiplicative attention
d) None of the above

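For context on question 5: the Transformer uses scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V, a scaled form of dot-product attention. A minimal NumPy sketch with illustrative shapes:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

Q = np.random.randn(2, 8)   # 2 queries, d_k = 8
K = np.random.randn(5, 8)   # 5 keys
V = np.random.randn(5, 8)   # 5 values
print(scaled_dot_product_attention(Q, K, V).shape)  # (2, 8)
```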


6. Which of the following is NOT a component of the attention mechanism?

a) Decoder
b) Key
c) Value
d) Query
e) Encoder



7. In a hierarchical attention network, what are the two primary levels of attention?

a) Character-level and word-level
b) Word-level and sentence-level
c) Sentence-level and document-level
d) Paragraph-level and document-level

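Background for question 7: in a hierarchical attention network (Yang et al., 2016), word-level attention pools word vectors into a sentence vector, and sentence-level attention pools sentence vectors into a document vector. The toy sketch below uses a simple dot-product pooling; the query vectors and dimensions are hypothetical:

```python
import numpy as np

def attend(H, q):
    # Weighted average of the rows of H, scored against query vector q.
    scores = H @ q
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ H

d = 4
word_query, sent_query = np.random.randn(d), np.random.randn(d)

# Document = 3 sentences, each with 5 word vectors (hypothetical).
doc = [np.random.randn(5, d) for _ in range(3)]

# Level 1: word-level attention -> one vector per sentence.
sent_vecs = np.stack([attend(sent, word_query) for sent in doc])

# Level 2: sentence-level attention -> one document vector.
doc_vec = attend(sent_vecs, sent_query)
print(doc_vec.shape)  # (4,)
```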


8. Which of the following are the advantages of using attention mechanisms in encoder-decoder models?

a) Reduced computational complexity
b) Ability to handle variable-length input sequences
c) Improved gradient flow during training
d) Automatic feature selection
e) Reduced memory requirements



9. In the encoder-decoder architecture with attention, where is the context vector typically computed?

a) In the encoder
b) In the decoder
c) Between the encoder and decoder
d) After the decoder

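On question 9: the context vector mixes encoder states using weights that depend on the decoder state, so it is computed between the encoder and the decoder. In the standard formulation (sketched here with a generic score function):

```latex
e_{tj} = \operatorname{score}(s_{t-1}, h_j), \qquad
\alpha_{tj} = \frac{\exp(e_{tj})}{\sum_{k} \exp(e_{tk})}, \qquad
c_t = \sum_{j} \alpha_{tj}\, h_j
```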


10. Which of the following output functions is most commonly used in the decoder of an encoder-decoder model for translation tasks?

a) Softmax
b) Sigmoid
c) ReLU
d) Tanh

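Context for question 10: at each decoding step, translation requires a probability distribution over the entire target vocabulary, which is exactly what softmax provides. A minimal NumPy sketch with hypothetical sizes:

```python
import numpy as np

vocab_size, d = 10, 4               # hypothetical sizes
W_out = np.random.randn(vocab_size, d)

s_t = np.random.randn(d)            # decoder state at step t
logits = W_out @ s_t
probs = np.exp(logits - logits.max())
probs /= probs.sum()                # softmax: probabilities over the vocabulary
print(probs.sum(), probs.argmax())  # 1.0, index of the most likely next word
```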



More weeks of Deep Learning: Click Here

More NPTEL Courses: https://progiez.com/nptel
