ML Deep Learning Fundamentals Applications Week 3 Answers
Are you searching for reliable ML Deep Learning Fundamentals Applications Week 3 Answers 2024? Look no further! Our solutions are designed to provide clear, detailed answers, helping you navigate your NPTEL course with confidence.
ML Deep Learning Fundamentals Applications Week 3 Answers (July-Dec 2024)
Q1. The bandwidth parameter in the Parzen window method determines:
The number of neighbors to consider for classification
The size of the neighborhood around a test instance
The dimensionality of the feature space
The complexity of the classifier
Answer: Updating Soon (in progress)
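While the official answer is pending, here is a minimal sketch (hypothetical 1-D samples, rectangular kernel assumed) illustrating how the bandwidth h sets the size of the neighborhood around the query point that contributes to the density estimate:

```python
# Minimal Parzen-window sketch with a rectangular (uniform) kernel:
# p(x) = (1 / (n*h)) * #{samples with |x - xi| / h <= 1/2}
def parzen_rect(x, samples, h):
    return sum(abs((x - xi) / h) <= 0.5 for xi in samples) / (len(samples) * h)

data = [0.1, 0.4, 0.45, 0.5, 0.9]   # hypothetical 1-D samples
for h in (0.2, 1.0):                # small h: narrow neighborhood; large h: wide, smoother
    print(h, parzen_rect(0.5, data, h))
```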
Q2. If the number of data samples becomes very large:
Bayesian estimation is worse than MLE
Maximum likelihood estimates are slightly worse
Bayesian estimation performs the same as MLE
None
Answer: Updating Soon (in progress)
For answers or the latest updates, join our Telegram channel: Click here to join
These are ML Deep Learning Fundamentals Applications Week 3 Answers
Q3. What happens when k = 1 in the k-nearest neighbor algorithm?
Underfitting
Overfitting
High testing accuracy
All of the above
Answer: Updating Soon (in progress)
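As a quick intuition check (hypothetical noisy data, not part of the question): with k = 1 every training point is its own nearest neighbor, so the classifier memorizes the training set perfectly even when the labels are pure noise:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((20, 2))          # random 2-D points
y = rng.integers(0, 2, 20)       # random (noise) labels

def knn1_predict(p):
    # label of the single nearest training point (k = 1)
    return y[np.argmin(np.linalg.norm(X - p, axis=1))]

train_acc = np.mean([knn1_predict(p) == t for p, t in zip(X, y)])
print(train_acc)                 # 1.0 -> perfect memorization of noise
```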
Q4. There are 18 points in the plane:
(0.8, 0.8)ᵗ, (1, 1)ᵗ, (1.2, 0.8)ᵗ, (0.8, 1.2)ᵗ, (1.2, 1.2)ᵗ belong to class 1;
(4, 3)ᵗ, (3.8, 2.8)ᵗ, (4.2, 2.8)ᵗ, (3.8, 3.2)ᵗ, (4.2, 3.2)ᵗ, (4.4, 2.8)ᵗ, (4.4, 4.4)ᵗ belong to class 2;
(3.2, 0.4)ᵗ, (3.2, 0.7)ᵗ, (3.8, 0.5)ᵗ, (3.5, 1)ᵗ, (4, 1)ᵗ, (4, 0.7)ᵗ belong to class 3.
A new point P = (4.2, 1.8)ᵗ is introduced into the map. To which class does P belong? Use the k-nearest neighbor technique with k = 5 to calculate the result.
Class 1
Class 2
Class 3
None of the above
Answer: Updating Soon (in progress)
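Until the answer is posted, you can check Q4 yourself; the sketch below runs a plain Euclidean-distance k-NN majority vote with k = 5 over the 18 points copied from the question:

```python
import numpy as np

class1 = [(0.8, 0.8), (1, 1), (1.2, 0.8), (0.8, 1.2), (1.2, 1.2)]
class2 = [(4, 3), (3.8, 2.8), (4.2, 2.8), (3.8, 3.2), (4.2, 3.2), (4.4, 2.8), (4.4, 4.4)]
class3 = [(3.2, 0.4), (3.2, 0.7), (3.8, 0.5), (3.5, 1), (4, 1), (4, 0.7)]

X = np.array(class1 + class2 + class3)
y = np.array([1] * len(class1) + [2] * len(class2) + [3] * len(class3))

P = np.array([4.2, 1.8])
dists = np.linalg.norm(X - P, axis=1)     # Euclidean distance to every point
nearest = y[np.argsort(dists)[:5]]        # labels of the 5 closest points
print(nearest, "->", np.bincount(nearest).argmax())  # majority vote among the 5
```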
Q5. Suppose we have two training data points located at 0.5 and 0.7, and we use a rectangular window of width 0.3. Using the Parzen window technique, what is the estimated probability density at the query point x = 0.5?
0.5
0.75
2.22
1.67
Answer: Updating Soon (in progress)
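A minimal way to verify Q5, assuming the standard rectangular-kernel Parzen estimate p(x) = (1 / (n·h)) · #{xi : |x − xi| / h ≤ 1/2}:

```python
def parzen_rect(x, samples, h):
    inside = sum(abs((x - xi) / h) <= 0.5 for xi in samples)
    return inside / (len(samples) * h)

# Only the sample at 0.5 falls inside the window of width 0.3 around x = 0.5,
# so the estimate is 1 / (2 * 0.3) ≈ 1.67.
print(parzen_rect(0.5, [0.5, 0.7], 0.3))
```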
Q6. Suppose that X is a discrete random variable with the following probability mass function, where θ (0 ≤ θ ≤ 1) is a parameter. The following 10 independent observations were taken from such a distribution: (3, 0, 2, 1, 3, 2, 1, 0, 2, 1). What is the maximum likelihood estimate of θ?
2
1
0.5
0
Answer: Updating Soon (in progress)
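The pmf table itself did not survive on this page, so any computation has to assume one. This question often circulates with the pmf P(X=0) = 2θ/3, P(X=1) = θ/3, P(X=2) = 2(1−θ)/3, P(X=3) = (1−θ)/3; treat that form as an assumption rather than something stated here. Under that assumption, a simple grid search over θ sketches the MLE:

```python
import numpy as np

# ASSUMED pmf (not shown on this page):
# P(X=0)=2θ/3, P(X=1)=θ/3, P(X=2)=2(1−θ)/3, P(X=3)=(1−θ)/3
obs = [3, 0, 2, 1, 3, 2, 1, 0, 2, 1]

def log_likelihood(theta):
    pmf = {0: 2 * theta / 3, 1: theta / 3,
           2: 2 * (1 - theta) / 3, 3: (1 - theta) / 3}
    return sum(np.log(pmf[x]) for x in obs)

thetas = np.linspace(0.001, 0.999, 999)
print(thetas[np.argmax([log_likelihood(t) for t in thetas])])
```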
Q7. Which of the following statements are true about the k-nearest neighbor (KNN) algorithm?
Odd values of k are preferred over even values.
It does more computation at test time than at training time.
It works well in high dimensions.
The optimal value of k for KNN is highly independent of the data.
Answer: Updating Soon (in progress)
Q8. The disadvantage of using k-NN as a classifier:
Fails while handling large datasets
Fails while handling small datasets
Sensitive to outliers
Training is required
Answer: Updating Soon (in progress)
Q9. Consider a single observation X that depends on a random parameter θ. Suppose θ has the prior distribution
fθ(θ) = λe^(−λθ) for θ ≥ 0, λ > 0,
and X given θ has the conditional density
fX|θ(x|θ) = θe^(−θx) for x > 0.
Find the MAP estimate of θ.
1/(λ + X)
1/(λ − X)
λX
X
Answer: Updating Soon (in progress)
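Since this is just calculus on the posterior, a quick symbolic check is possible: maximize the log-posterior log f(x|θ) + log f(θ) in θ. A minimal SymPy sketch:

```python
import sympy as sp

theta, lam, x = sp.symbols('theta lambda x', positive=True)
# log-posterior (up to a constant): log f(x|θ) + log f(θ)
log_post = sp.log(theta * sp.exp(-theta * x)) + sp.log(lam * sp.exp(-lam * theta))
print(sp.solve(sp.diff(log_post, theta), theta))  # [1/(lambda + x)]
```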
Q10. The MLE for the data samples X = {x1, x2, …, xi, …, xk} with the Bernoulli distribution is:
n⋅xk
xk/n
Mean of xi
None
Answer: Updating Soon (in progress)
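For intuition: the Bernoulli likelihood p^(Σxi) · (1−p)^(k−Σxi) is maximized at p̂ = (1/k)Σxi, the sample mean. A minimal numeric sketch (hypothetical draws):

```python
import numpy as np

samples = np.array([1, 0, 1, 1, 0, 1])  # hypothetical Bernoulli draws
print(samples.mean())                   # MLE of the success probability p
```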
All weeks of Introduction to Machine Learning: Click Here
More Nptel Courses: https://progiez.com/nptel-assignment-answers