Introduction To Machine Learning IIT-KGP Nptel Week 2 Assignment Answers

Are you looking for Nptel Introduction To Machine Learning IIT-KGP Week 2 Answers 2024? This guide offers comprehensive assignment solutions tailored to help you master key machine learning concepts such as supervised learning, regression, and classification.

Course Link: Click Here


Introduction To Machine Learning IIT-KGP Week 2 Answers

For answers or latest updates join our telegram channel: Click here to join

Introduction To Machine Learning IIT-KGP Week 2 Answers (July-Dec 2024)


Q1. In a binary classification problem, out of 30 data points, 10 belong to class I and 20 belong to class II. What is the entropy of the data set?
A. 0.97
B. 0
C. 0.91
D. 0.67

Answer: C. 0.91
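The figure can be checked with a short script. Below is a minimal sketch of the binary entropy computation H = −Σ pᵢ log₂ pᵢ using only the Python standard library (the `entropy` helper name is ours, not from the course):

```python
import math

def entropy(counts):
    """Shannon entropy (in bits) of a class distribution given by raw counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

# 10 points in class I, 20 in class II
h = entropy([10, 20])
print(round(h, 3))  # ≈ 0.918 — closest listed option is C (0.91)
```

Note the exact value is ≈0.918; among the options given, 0.91 is the nearest.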


Q2. Which of the following is false?
A. Bias is the true error of the best classifier in the concept class
B. Bias is high if the concept class cannot model the true data distribution well
C. High bias leads to overfitting

Answer: C. High bias leads to overfitting



These are Introduction To Machine Learning IIT-KGP Week 2 Answers


Q3. Decision trees can be used for problems where

  1. the attributes are categorical.
  2. the attributes are numeric valued.
  3. the attributes are discrete valued.
    A. 1 only
    B. 1 and 2 only
    C. 1, 2 and 3

Answer: C. 1, 2 and 3


Q4. In linear regression, our hypothesis is h(x) = θ₀ + θ₁x, and the training data is given in the table. If the cost function is J(θ) = (1/2m) Σᵢ (h(xᵢ) − yᵢ)², where m is the number of training data points, what is the value of J(θ) when θ = (1, 1)?
A. 0
B. 2
C. 1
D. 0.25


Answer: C. 1
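As a sanity check on the cost formula, here is a minimal sketch of J(θ) for the hypothesis h(x) = θ₀ + θ₁x. The question's training table is not reproduced above, so the data below is hypothetical:

```python
def cost(theta0, theta1, xs, ys):
    """J(θ) = (1/2m) Σ (h(x_i) - y_i)^2 for the hypothesis h(x) = θ0 + θ1·x."""
    m = len(xs)
    return sum((theta0 + theta1 * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

# Hypothetical data (the assignment's table is not shown here):
xs, ys = [0, 1, 2], [0, 1, 2]
print(cost(0, 1, xs, ys))  # 0.0 — a perfect fit gives zero cost
```

Plugging the assignment's actual table into `cost(1, 1, xs, ys)` reproduces the graded value.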




Q5. The value of information gain in the following decision tree is:
Root node: Entropy = 0.996, Examples = 30
Child node 1: Entropy = 0.787, Examples = 17
Child node 2: Entropy = 0.391, Examples = 13
A. 0.380
B. 0.620
C. 0.190
D. 0.477

Answer: A. 0.380
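The gain follows from IG = H(parent) − Σ (nᵢ/N)·H(childᵢ), i.e. the parent's entropy minus the example-weighted average of the children's entropies. A minimal sketch (the helper name is ours):

```python
def information_gain(parent_entropy, children):
    """children: list of (entropy, example_count) pairs, one per split branch."""
    total = sum(n for _, n in children)
    weighted = sum(e * n / total for e, n in children)  # weighted child entropy
    return parent_entropy - weighted

ig = information_gain(0.996, [(0.787, 17), (0.391, 13)])
print(round(ig, 2))  # ≈ 0.38, i.e. option A
```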


Q6. What is true for Stochastic Gradient Descent?
A. In every iteration, model parameters are updated based on multiple training samples.
B. In every iteration, model parameters are updated based on one training sample
C. In every iteration, model parameters are updated based on all training samples
D. None of the above

Answer: B. In every iteration, model parameters are updated based on one training sample
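The distinction is easy to see in code: in SGD each parameter update uses the gradient from exactly one training sample, whereas batch gradient descent would sum the gradient over all samples before updating. A minimal sketch for the linear hypothesis h(x) = θ₀ + θ₁x (learning rate and data below are hypothetical):

```python
def sgd_step(theta, sample, lr=0.1):
    """One SGD update: the gradient is computed from a single (x, y) sample."""
    t0, t1 = theta
    x, y = sample
    err = (t0 + t1 * x) - y          # prediction error on this one sample
    return (t0 - lr * err, t1 - lr * err * x)

# Cycle through the samples, updating on one sample at a time
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # generated by y = 2x
theta = (0.0, 0.0)
for _ in range(100):
    for sample in data:
        theta = sgd_step(theta, sample)
# theta ends near (0, 2), recovering y = 2x
```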




Answer Questions 7-8 with the data given below:
ISRO wants to discriminate between Martians (M) and Humans (H) based on the following features: Green ∈ {N, Y}, Legs ∈ {2, 3}, Height ∈ {S, T}, Smelly ∈ {N, Y}. The decision variable is Species. The training data is as follows:

Q7. The entropy of the entire dataset is
A. 0.5
B. 1
C. 0
D. 0.1

Answer: D. 0.1


Q8. Which attribute will be the root of the decision tree (if information gain is used to create the decision tree), and what is the information gain due to that attribute?
A. Green, 0.45
B. Legs, 0.4
C. Height, 0.8
D. Smelly, 0.7


Answer: B. Legs, 0.4




Q9. In linear regression, the output is:
A. Discrete
B. Continuous and always lies in a finite range
C. Continuous
D. May be discrete or continuous

Answer: C. Continuous




Q10. Identify whether the following statement is true or false:
“Overfitting is more likely when the set of training data is small”
A. True
B. False

Answer: A. True




All weeks of Introduction to Machine Learning: Click Here

More Nptel Courses: https://progiez.com/nptel-assignment-answers


Introduction To Machine Learning IIT-KGP Week 2 Answers (Jan-Apr 2024)

Link to Enroll: Click Here

1. In a binary classification problem, out of 30 data points, 12 belong to class I and 18 belong to class II. What is the entropy of the data set?

A. 0.97
B. 0
C. 1
D. 0.67

Answer:- a


2. Decision trees can be used for problems where

A. the attributes are categorical.
B. the attributes are numeric valued.
C. the attributes are discrete valued.
D. In all the above cases.

Answer:- d


3. Which of the following is false?

A. Variance is the error of the trained classifier with respect to the best classifier in the concept class.
B. Variance depends on the training set size.
C. Variance increases with more training data.
D. Variance increases with more complicated classifiers.


Answer:- c


4. In linear regression, our hypothesis is h(x) = θ₀ + θ₁x, and the training data is given in the table. What is the value of J(θ) when θ = (1, 1)?

A. 0
B. 1
C. 2
D. 0.5

Answer:- b


5. The value of information gain in the following decision tree is:

A. 0.380
B. 0.620
C. 0.190
D. 0.477

Answer:- a


6. What is true for Stochastic Gradient Descent?

A. In every iteration, model parameters are updated for multiple training samples
B. In every iteration, model parameters are updated for one training sample
C. In every iteration, model parameters are updated for all training samples
D. None of the above

Answer:- b


7. The entropy of the entire dataset is

A. 0.5
B. 1
C. 0
D. 0.1

Answer:- c


8. Which attribute will be the root of the decision tree?

A. Green
B. Legs
C. Height
D. Smelly

Answer:- b


9. In linear regression, the output is:

A. Discrete
B. Continuous and always lies in a finite range
C. Continuous
D. May be discrete or continuous

Answer:- c


10. Identify whether the following statement is true or false:
“Overfitting is more likely when the set of training data is small”

A. True
B. False

Answer:- a