Deep Learning IIT Ropar Week 9 Nptel Assignment Answers

Are you looking for the Deep Learning IIT Ropar Week 9 NPTEL Assignment Answers 2025 (Jan-Apr)? You’ve come to the right place! Access the most accurate and up-to-date solutions for your Week 9 assignment in the Deep Learning course offered by IIT Ropar.




Deep Learning IIT Ropar Week 9 Nptel Assignment Answers (Jan-Apr 2025)

Course Link: Click Here


1) What is the disadvantage of using Hierarchical Softmax?

a) It requires more memory to store the binary tree
b) It is slower than computing the softmax function directly
c) It is less accurate than computing the softmax function directly
d) It is more prone to overfitting than computing the softmax function directly

View Answer
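
Background note: in hierarchical softmax the vocabulary is arranged as a binary tree, and the probability of a word is computed as a product of roughly log2(|V|) sigmoid decisions along the root-to-word path instead of normalizing over all |V| output words, so the tree (and its internal node parameters) has to be stored alongside the model.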


2) Consider the following corpus: “AI driven user experience optimization. Perception of AI decision making speed. Intelligent interface adaptation system. AI system engineering for enhanced processing efficiency.” What is the size of the vocabulary of the above corpus?

a) 18
b) 20
c) 22
d) 19

View Answer
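
To sanity-check a vocabulary count like the one in question 2, here is a minimal Python sketch, assuming whitespace tokenization with lowercasing and the periods stripped (other tokenization choices may give a different count):

# Count the number of distinct tokens in the corpus from question 2.
# Assumption: lowercase everything, drop the periods, split on whitespace.
corpus = ("AI driven user experience optimization. "
          "Perception of AI decision making speed. "
          "Intelligent interface adaptation system. "
          "AI system engineering for enhanced processing efficiency.")
tokens = corpus.lower().replace(".", "").split()
print(len(set(tokens)))  # vocabulary size under these assumptions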


3) We add incorrect pairs into our corpus to maximize the probability of words that occur in the same context and minimize the probability of words that occur in different contexts. This technique is called:

a) Negative sampling
b) Hierarchical softmax
c) Contrastive estimation
d) Glove representations

View Answer




4) Let X be the co-occurrence matrix such that the (i,j)-th entry of X captures the PMI between the i-th and j-th words in the corpus, so the i-th row of X is the representation of the i-th word. Suppose each row of X is normalized (i.e., the L2 norm of each row is 1); then the (i,j)-th entry of XX^T captures the:


a) PMI between word i and word j
b) Euclidean distance between word i and word j
c) Probability that word i occurs in the context of word j
d) Cosine similarity between word i and word j

View Answer
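
A small NumPy sketch of the idea behind question 4: once every row of X has unit L2 norm, the (i,j)-th entry of XX^T is simply the dot product of rows i and j, which equals their cosine similarity. This is a toy illustration with a random matrix, not the course's data:

import numpy as np

# Toy matrix standing in for the PMI-based co-occurrence matrix X (random values).
X = np.random.rand(5, 3)
X = X / np.linalg.norm(X, axis=1, keepdims=True)  # L2-normalize every row

S = X @ X.T  # (i, j)-th entry is the dot product of rows i and j

i, j = 0, 1
cosine = X[i] @ X[j] / (np.linalg.norm(X[i]) * np.linalg.norm(X[j]))
print(np.isclose(S[i, j], cosine))  # True: dot product of unit-norm rows = cosine similarity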


5) Suppose that we use the continuous bag of words (CBOW) model to find vector representations of words. Suppose further that we use a context window of size 3 (that is, given the 3 context words, predict the target word P(wt|(wi,wj,wk))). The size of word vectors (vector representation of words) is chosen to be 100, and the vocabulary contains 20,000 words. The input to the network is the one-hot encoding (also called 1-of-V encoding) of word(s). How many parameters (weights), excluding bias, are there in Wword? Enter the answer in thousands. For example, if your answer is 50,000, then just enter 50.

View Answer
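
For question 5, the arithmetic can be sketched directly, under the assumption that Wword is one of the two weight matrices of the CBOW network, of size |V| x d (whether it sits on the input or the output side, its size is vocabulary size times embedding dimension):

# Number of weights in Wword under the assumption above: |V| x d.
vocab_size = 20_000      # |V|
embedding_dim = 100      # d
params = vocab_size * embedding_dim
print(params, "weights, i.e.", params // 1000, "thousand")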




6) You are given the one-hot representation of two words below: GEMINI = [1, 0, 0, 0, 1], CLAUDE = [0, 0, 0, 1, 0]. What is the Euclidean distance between GEMINI and CLAUDE?

View Answer
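
A quick check for question 6, using the two vectors exactly as given:

import math

gemini = [1, 0, 0, 0, 1]
claude = [0, 0, 0, 1, 0]
# Euclidean distance: square root of the sum of squared component-wise differences.
distance = math.sqrt(sum((g - c) ** 2 for g, c in zip(gemini, claude)))
print(distance)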


7) Let count(w,c) be the number of times the words w and c appear together in the corpus (i.e., occur within a window of a few words around each other). Further, let count(w) and count(c) be the total number of times the words w and c appear in the corpus, respectively, and let N be the total number of words in the corpus. The PMI between w and c is then given by:

a) log(count(w,c) * count(w) / (N * count(c)))
b) log(count(w,c) * count(c) / (N * count(w)))
c) log(count(w,c) * N / (count(w) * count(c)))

View Answer
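
For reference, the usual definition is PMI(w,c) = log( p(w,c) / (p(w) * p(c)) ), where the probabilities are estimated from counts as p(w,c) ≈ count(w,c)/N, p(w) ≈ count(w)/N and p(c) ≈ count(c)/N; substituting these estimates and simplifying gives an expression purely in terms of the counts and N.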


8) Consider a skip-gram model trained using hierarchical softmax for analyzing scientific literature. We observe that the word embeddings for ‘Neuron’ and ‘Brain’ are highly similar. Similarly, the embeddings for ‘Synapse’ and ‘Brain’ also show high similarity. Which of the following statements can be inferred?


a) ‘Neuron’ and ‘Brain’ frequently appear in similar contexts
b) The model’s learned representations will indicate a high similarity between ‘Neuron’ and ‘Synapse’
c) The model’s learned representations will not show a high similarity between ‘Neuron’ and ‘Synapse’
d) According to the model’s learned representations, ‘Neuron’ and ‘Brain’ have a low cosine similarity

View Answer




9) Suppose we are learning the representations of words using GloVe representations. If we observe that the cosine similarity between two representations vi and vj for words ‘i’ and ‘j’ is very high, which of the following statements is true? (parameters bi = 0.02 and bj = 0.07)

a) Xij = 0.04
b) Xij = 0.17
c) Xij = 0
d) Xij = 0.95

View Answer
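
Background note for question 9: GloVe fits the word vectors and biases so that vi^T vj + bi + bj ≈ log Xij, where Xij is the co-occurrence count of words i and j, so the similarity learned between vi and vj is tied to how often the two words co-occur.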


10) Which of the following is an advantage of using the skip-gram method over the bag-of-words approach?

a) The skip-gram method is faster to train
b) The skip-gram method performs better on rare words
c) The bag-of-words approach is more accurate
d) The bag-of-words approach is better for short texts

View Answer



More weeks of Deep Learning: Click Here

More Nptel Courses: https://progiez.com/nptel
