# Introduction To Machine Learning IIT-KGP Nptel Week 2 Assignment Answers

Are you looking for NPTEL Introduction To Machine Learning IIT-KGP Week 2 Answers 2024? This guide offers assignment solutions to help you work through the key Week 2 concepts: entropy and information gain, decision trees, linear regression, and the bias-variance trade-off.

## Introduction To Machine Learning IIT-KGP Week 2 Answers (July-Dec 2024)

Q1. In a binary classification problem, out of 30 data points, 10 belong to class I and 20 belong to class II. What is the entropy of the data set?
A. 0.97
B. 0
C. 0.91
D. 0.67
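The entropy follows directly from the class proportions. As a quick sanity check, this plain-Python sketch evaluates $H = -\sum_i p_i \log_2 p_i$ for the 10/20 split:

```python
from math import log2

def entropy(counts):
    """Shannon entropy (in bits) of a list of class counts."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

# 10 points in class I, 20 points in class II
print(round(entropy([10, 20]), 3))  # 0.918
```

The result, about 0.918, is closest to option C.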

Q2. Which of the following is false?
A. Bias is the true error of the best classifier in the concept class
B. Bias is high if the concept class cannot model the true data distribution well
C. High bias leads to overfitting


Q3. Decision trees can be used for problems where

1. the attributes are categorical.
2. the attributes are numeric valued.
3. the attributes are discrete valued.
A. 1 only
B. 1 and 2 only
C. 1, 2 and 3

Answer: C. 1, 2 and 3

Q4. In linear regression, our hypothesis is $h_\theta(x) = \theta_0 + \theta_1 x$, and the training data is given in the accompanying table. If the cost function is $J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2$, where $m$ is the number of training data points, what is the value of $J(\theta)$ when $\theta = (1, 1)$?
A. 0
B. 2
C. 1
D. 0.25
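The training-data table referred to in the question is not reproduced above, so the sketch below uses placeholder $(x, y)$ pairs purely to show how $J(\theta)$ is evaluated at $\theta = (1, 1)$; substitute the actual table values to get the graded answer.

```python
def cost(theta0, theta1, data):
    """J(theta) = (1 / 2m) * sum over training pairs of (h(x) - y)^2."""
    m = len(data)
    return sum((theta0 + theta1 * x - y) ** 2 for x, y in data) / (2 * m)

# Placeholder training pairs -- replace with the table from the question.
data = [(1, 2), (2, 3), (3, 4)]
print(cost(1, 1, data))  # 0.0 here, because h(x) = 1 + x fits these placeholder points exactly
```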


Q5. The value of information gain in the following decision tree is:

(Decision tree from the question: the root node has entropy 0.996 over 30 examples and splits into two child nodes, one with entropy 0.787 over 17 examples and one with entropy 0.391 over 13 examples.)

A. 0.380
B. 0.620
C. 0.190
D. 0.477
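From the node statistics recoverable from the figure (root entropy 0.996 over 30 examples; children with entropies 0.787 and 0.391 over 17 and 13 examples), the gain is the root entropy minus the example-weighted average of the child entropies. A small sketch of that arithmetic:

```python
root_entropy, n_total = 0.996, 30
children = [(0.787, 17), (0.391, 13)]  # (entropy, number of examples) per child node

weighted_child_entropy = sum(h * n / n_total for h, n in children)
info_gain = root_entropy - weighted_child_entropy
print(round(info_gain, 2))  # 0.38
```

That value, roughly 0.38, lines up with option A.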

Q6. What is true for Stochastic Gradient Descent?
A. In every iteration, model parameters are updated based on multiple training samples.
B. In every iteration, model parameters are updated based on one training sample
C. In every iteration, model parameters are updated based on all training samples
D. None of the above

Answer: B. In every iteration, model parameters are updated based on one training sample
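As a minimal illustration of the "one sample per update" behaviour, here is a stochastic-gradient-descent sketch for simple linear regression; the learning rate, epoch count, and toy data are arbitrary choices for the example, not part of the question.

```python
import random

def sgd_linear_regression(data, lr=0.01, epochs=1000):
    """Fit y = theta0 + theta1 * x, updating the parameters after every single sample."""
    theta0, theta1 = 0.0, 0.0
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:                      # one training sample per parameter update
            error = (theta0 + theta1 * x) - y  # prediction error for this sample
            theta0 -= lr * error
            theta1 -= lr * error * x
    return theta0, theta1

data = [(1, 3), (2, 5), (3, 7), (4, 9)]  # generated from y = 1 + 2x
print(sgd_linear_regression(data))       # should approach (1.0, 2.0)
```

Batch gradient descent (option C) would instead average the gradient over all samples before each update; mini-batch methods (option A) use a small subset.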


Answer Questions 7-8 with the data given below:
ISRO wants to discriminate between Martians (M) and Humans (H) based on the following features: Green ∈ {N, Y}, Legs ∈ {2, 3}, Height ∈ {S, T}, Smelly ∈ {N, Y}. The decision variable is Species. The training data is as follows:

Q7. The entropy of the entire dataset is
A. 0.5
B. 1
C. 0
D. 0.1

Q8. Which attribute will be the root of the decision tree (if information gain is used to create the decision tree), and what is the information gain due to that attribute?
A. Green, 0.45
B. Legs, 0.4
C. Height, 0.8
D. Smelly, 0.7
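The Martians-vs-Humans training table from the question is not reproduced above, so the sketch below only illustrates the procedure behind Q7 and Q8: compute the entropy of the Species column, compute the information gain of each attribute, and take the attribute with the largest gain as the root. The toy rows are hypothetical placeholders, not ISRO's actual data.

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

def info_gain(rows, labels, attribute):
    """Information gain of splitting `rows` (list of dicts) on `attribute`."""
    base = entropy(labels)
    remainder = 0.0
    for value in set(r[attribute] for r in rows):
        subset = [lab for r, lab in zip(rows, labels) if r[attribute] == value]
        remainder += len(subset) / len(labels) * entropy(subset)
    return base - remainder

# Placeholder rows -- replace with the actual training table from the question.
rows = [
    {"Green": "Y", "Legs": 3, "Height": "S", "Smelly": "Y"},
    {"Green": "N", "Legs": 2, "Height": "T", "Smelly": "N"},
    {"Green": "Y", "Legs": 3, "Height": "T", "Smelly": "N"},
    {"Green": "N", "Legs": 2, "Height": "S", "Smelly": "Y"},
]
labels = ["M", "H", "M", "H"]  # Species column
print("dataset entropy:", entropy(labels))
for attr in ["Green", "Legs", "Height", "Smelly"]:
    print(attr, round(info_gain(rows, labels, attr), 3))
```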


Q9. In linear regression, the output is:
A. Discrete
B. Continuous and always lies in a finite range
C. Continuous
D. May be discrete or continuous


Q10. Identify whether the following statement is true or false:
“Overfitting is more likely when the set of training data is small”
A. True
B. False


## Introduction To Machine Learning IIT-KGP Week 2 Answers (Jan-Apr 2024)

1. In a binary classification problem, out of 30 data points, 12 belong to class I and 18 belong to class II. What is the entropy of the data set?

A. 0.97
B. 0
C. 1
D. 0.67
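With the same entropy formula as in the July-Dec set, the 12/18 split works out to roughly:

$$H = -\tfrac{12}{30}\log_2\tfrac{12}{30} - \tfrac{18}{30}\log_2\tfrac{18}{30} \approx 0.4 \times 1.322 + 0.6 \times 0.737 \approx 0.97$$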

2. Decision trees can be used for problems where

A. the attributes are categorical.
B. the attributes are numeric valued.
C. the attributes are discrete valued.
D. In all the above cases.

3. Which of the following is false?

A. Variance is the error of the trained classifier with respect to the best classifier in the concept class.
B. Variance depends on the training set size.
C. Variance increases with more training data.
D. Variance increases with more complicated classifiers.

4. In linear regression, our hypothesis is $h_\theta(x) = \theta_0 + \theta_1 x$, and the training data is given in the table. What is the value of $J(\theta)$ when $\theta = (1, 1)$?

A. 0
B. 1
C. 2
D. 0.5

5. The value of information gain in the following decision tree is:

A. 0.380
B. 0.620
C. 0.190
D. 0.477

6. What is true for Stochastic Gradient Descent?

A. In every iteration, model parameters are updated for multiple training samples
B. In every iteration, model parameters are updated for one training sample
C. In every iteration, model parameters are updated for all training samples
D. None of the above

7. The entropy of the entire dataset is

A. 0.5
B. 1
C. 0
D. 0.1

8. Which attribute will be the root of the decision tree?

A. Green
B. Legs
C. Height
D. Smelly

9. In linear regression, the output is:

A. Discrete
B. Continuous and always lies in a finite range
C. Continuous
D. May be discrete or continuous