Introduction to Machine Learning Nptel Week 6 Answers

Are you looking for Introduction to Machine Learning Nptel Week 6 Answers? You’ve come to the right place! Access the latest and most accurate solutions for your Week 6 assignment in the Introduction to Machine Learning course.

Course Link: Click Here



Introduction to Machine Learning Nptel Week 6 Answers (July-Dec 2024)


1. Entropy for a 90-10 split between two classes is:

A) 0.469
B) 0.195
C) 0.204
D) None of the above

Answer: A) 0.469
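
As a quick check (our own sketch, not part of the assignment), the two-class entropy H(p) = -p log2(p) - (1-p) log2(1-p) evaluated at p = 0.9:

```python
import math

def binary_entropy(p: float) -> float:
    """Entropy in bits of a two-class split with class probabilities p and 1 - p."""
    if p in (0.0, 1.0):
        return 0.0  # a pure node has zero entropy
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(round(binary_entropy(0.9), 3))  # 0.469
```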


2. Consider a dataset with only one attribute (categorical). Suppose there are 8 unordered values of this attribute. How many possible combinations need to be checked to find the best split point for building the decision tree classifier?

A) 511
B) 1023
C) 512
D) 127

Answer: A) 511


3. Having built a decision tree, we are using reduced error pruning to reduce the size of the tree. We select a node to collapse. At this node, the left branch holds three training data points with outputs 5, 7, 9.6, and the right branch holds four training data points with outputs 8.7, 9.8, 10.5, 11. The response of a branch is the average of the outputs of its data points. Let response_left and response_right denote the original responses along the two branches, and response_new the response after collapsing the node. What are the values of response_left, response_right, and response_new (the numbers in the options are given in the same order)?

A) 9.6, 11, 10.4
B) 7.2, 10, 8.8
C) 5, 10.5, 15
D) Depends on the tree height

Answer: B) 7.2, 10, 8.8
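
The three responses are plain averages, as this small sketch (ours, not part of the assignment) verifies:

```python
left = [5, 7, 9.6]            # outputs on the left branch
right = [8.7, 9.8, 10.5, 11]  # outputs on the right branch

response_left = sum(left) / len(left)                        # 21.6 / 3 = 7.2
response_right = sum(right) / len(right)                     # 40.0 / 4 = 10.0
response_new = sum(left + right) / (len(left) + len(right))  # 61.6 / 7 = 8.8

print(response_left, response_right, response_new)  # 7.2 10.0 8.8
```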


4. Which of the following is a good strategy for reducing the variance in a decision tree?

A) If the improvement from making any split is very small, don't make a split. (Early stopping)
B) Stop splitting a leaf when the number of points is less than a set threshold K.
C) Stop splitting all leaves in the decision tree once any one leaf has fewer than a set threshold of K points.
D) None of the Above.


Answer: B) Stop splitting a leaf when the number of points is less than a set threshold K
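
In practice, option B's rule (don't split a node with fewer than K points) is what scikit-learn exposes as the min_samples_split parameter; the snippet below is our own illustration, not part of the assignment:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# A node containing fewer than K = 20 points will not be split further,
# which keeps the tree smaller and reduces its variance.
tree = DecisionTreeClassifier(min_samples_split=20, random_state=0)
tree.fit(X, y)
print(tree.get_depth(), tree.get_n_leaves())
```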


These are Introduction to Machine Learning Nptel Week 6 Answers


5. Which of the following statements about multiway splits in decision trees with categorical features is correct?

A) They always result in deeper trees compared to binary splits
B) They always provide better interpretability than binary splits
C) They can lead to overfitting when dealing with high-cardinality categorical features
D) They are computationally less expensive than binary splits for all categorical features

Answer: C) They can lead to overfitting when dealing with high-cardinality categorical features
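
To see why, consider the extreme case of a unique-ID feature: a multiway split on it sends every training point to its own pure branch, so the split looks maximally informative on the training set while carrying no information about new data. A small sketch with made-up labels (our own illustration):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy in bits of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

labels = [0, 1, 0, 1, 1, 0, 1, 0]  # hypothetical targets

# A multiway split on a row-ID feature puts each point in its own branch.
# Every branch is pure, so the weighted child entropy is 0 and the apparent
# information gain equals the full parent entropy, purely by overfitting.
parent = entropy(labels)
child = sum((1 / len(labels)) * entropy([y]) for y in labels)
print(f"apparent gain = {parent - child:.1f} bits")  # 1.0, the maximum
```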


6. Which of the following statements about imputation in data preprocessing is most accurate?

A) Mean imputation is always the best method for handling missing numerical data
B) Imputation should always be performed after splitting the data into training and test sets
C) Missing data is best handled by simply removing all rows with any missing values
D) Multiple imputation typically produces less biased estimates than single imputation methods

Answer: D) Multiple imputation typically produces less biased estimates than single imputation methods
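
As an illustration (ours, not part of the assignment): scikit-learn's SimpleImputer performs single mean imputation, while running its IterativeImputer with sample_posterior=True under several random seeds yields multiple plausible completions in the spirit of multiple imputation. The toy matrix X is made up:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, IterativeImputer

X = np.array([[1.0, 2.0],
              [3.0, np.nan],
              [5.0, 6.0],
              [np.nan, 8.0]])

# Single imputation: each missing value is replaced by its column mean.
print(SimpleImputer(strategy="mean").fit_transform(X))

# Multiple imputation (approximate): draw several completions by sampling
# from the posterior; downstream estimates are then pooled across draws.
draws = [IterativeImputer(sample_posterior=True, random_state=s).fit_transform(X)
         for s in range(5)]
print(np.mean(draws, axis=0))
```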


7. Consider the following dataset (not reproduced here):

Which among the following split-points for feature2 would give the best split according to the misclassification error?

A) 186.5
B) 188.6
C) 189.2
D) 198.1

Answer: C) 189.2
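
Since the table is not reproduced here, the sketch below only shows the general procedure: each side of a candidate threshold predicts its majority class, and the split with the lowest weighted misclassification error wins. The feature2 values and labels are hypothetical stand-ins of our own:

```python
def misclassification_error(values, labels, threshold):
    """Weighted misclassification error of a binary split at `threshold`.

    Each side predicts its majority class; the error is the fraction of
    all points that disagree with their side's prediction.
    """
    n = len(values)
    errors = 0
    for side in (True, False):
        side_labels = [y for x, y in zip(values, labels) if (x <= threshold) == side]
        if side_labels:
            majority = max(set(side_labels), key=side_labels.count)
            errors += sum(y != majority for y in side_labels)
    return errors / n

feature2 = [185.0, 187.9, 189.0, 190.5, 197.0, 199.0]  # hypothetical
labels = [0, 0, 0, 1, 1, 1]                            # hypothetical
for t in (186.5, 188.6, 189.2, 198.1):
    print(t, misclassification_error(feature2, labels, t))
```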



All weeks of Introduction to Machine Learning: Click Here

For answers to additional Nptel courses, please refer to this link: NPTEL Assignment Answers


Introduction to Machine Learning Nptel Week 6 Answers (JAN-APR 2024)

Course name: Introduction to Machine Learning

Course Link: Click Here

For answers or latest updates join our telegram channel: Click here to join


These are Introduction to Machine Learning Week 6 Assignment Answers


Q1. From the given dataset, choose the optimal decision tree learned by a greedy approach:
a) (shown as an image in the original assignment)
b) (shown as an image in the original assignment)
c) (shown as an image in the original assignment)
d) None of the above.

Answer: c)


Q2. Which of the following properties are characteristic of decision trees?
A) High bias
B) High variance
C) Lack of smoothness of prediction surfaces
D) Unbounded parameter set

Answer: B, C, D





Q3. Entropy for a 50-50 split between two classes is:
0
0.5
1
None of the above

Answer: 1 (a 50-50 split is maximally impure: -0.5 log2(0.5) - 0.5 log2(0.5) = 1)


Q4. Having built a decision tree, we are using reduced error pruning to reduce the size of the tree. We select a node to collapse. At this node, the left branch holds three training data points with outputs 5, 7, 9.6, and the right branch holds four training data points with outputs 8.7, 9.8, 10.5, 11. What were the original responses (branch averages) for data points along the two branches (left & right respectively), and what is the new response after collapsing the node?
10.8, 13.33, 14.48
10.8, 13.33, 12.06
7.2, 10, 8.8
7.2, 10, 8.6

Answer: 7.2, 10, 8.8





Q5. Given that the same feature can be selected multiple times during the recursive partitioning of the input space, and that trees are allowed to grow to their maximum size, is it always possible to achieve 100% accuracy on the training data when building decision trees?
Yes
No

Answer: No. If two training points have identical feature values but different labels, no tree can classify both correctly, as the sketch below shows.
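
A minimal demonstration (ours, with made-up data):

```python
from sklearn.tree import DecisionTreeClassifier

# x = 0.0 appears twice with different labels, so any tree must get
# at least one of those two points wrong.
X = [[0.0], [0.0], [1.0]]
y = [0, 1, 1]
tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print(tree.score(X, y))  # 2/3, not 1.0
```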


Q6. Suppose on performing reduced error pruning, we collapsed a node and observed an improvement in the prediction accuracy on the validation set. Which among the following statements are possible in light of the performance improvement observed?
A) The collapsed node helped overcome the effect of one or more noise-affected data points in the training set
B) The validation set had one or more noise-affected data points in the region corresponding to the collapsed node
C) The validation set did not have any data points along at least one of the collapsed branches
D) The validation set did not contain data points which were adversely affected by the collapsed node.


Answer: A, B, C





Q7. Consider the following data set (not reproduced here):
Considering 'profitable' as the binary-valued attribute we are trying to predict, which of the attributes would you select as the root of a decision tree with multi-way splits, using the cross-entropy impurity measure?

price
maintenance
capacity
airbag

Answer: capacity
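
The root is chosen as the attribute whose multiway split gives the largest information gain, i.e. the lowest weighted child entropy. The sketch below shows the computation; the profitable/capacity/price columns are hypothetical stand-ins of ours, not the assignment's actual table:

```python
import math
from collections import Counter, defaultdict

def entropy(labels):
    """Shannon entropy in bits of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def multiway_gain(attribute, labels):
    """Information gain of a multiway split on a categorical attribute."""
    groups = defaultdict(list)
    for a, y in zip(attribute, labels):
        groups[a].append(y)
    n = len(labels)
    child = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - child

profitable = [1, 1, 0, 0, 1, 0]                       # hypothetical
capacity = [4, 4, 2, 2, 5, 2]                         # hypothetical
price = ["low", "med", "med", "high", "low", "high"]  # hypothetical
print(multiway_gain(capacity, profitable))  # 1.0 here: every group is pure
print(multiway_gain(price, profitable))     # ~0.67
```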


Q8. For the same data set, suppose we decide to construct a decision tree using binary splits and the Gini index impurity measure. Which among the following feature and split point combinations would be the best to use as the root node assuming that we consider each of the input features to be unordered?
price – {low, med}|{high}
maintenance – {high}|{med, low}
maintenance – {high, med}|{low}
capacity – {2}|{4, 5}

Answer: maintenance – {high, med}|{low}
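
For an unordered feature with k values there are 2^(k-1) - 1 candidate binary partitions, each scored by the weighted Gini impurity of its two sides (lower is better). A sketch with hypothetical data of ours, since the assignment's table is not reproduced here:

```python
from itertools import combinations

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def binary_partitions(values):
    """All 2^(k-1) - 1 ways to split k unordered values into two non-empty sets."""
    values = list(values)
    first = values[0]
    for r in range(len(values) - 1):
        for combo in combinations(values[1:], r):
            left = {first, *combo}
            yield left, set(values) - left

def weighted_gini(attribute, labels, left):
    """Weighted Gini impurity after splitting rows on membership of `left`."""
    n = len(labels)
    sides = ([y for a, y in zip(attribute, labels) if a in left],
             [y for a, y in zip(attribute, labels) if a not in left])
    return sum(len(s) / n * gini(s) for s in sides if s)

maintenance = ["high", "high", "med", "low", "low", "low"]  # hypothetical
profitable = [0, 0, 0, 1, 1, 1]                             # hypothetical
for left, right in binary_partitions(["high", "med", "low"]):
    print(left, right, round(weighted_gini(maintenance, profitable, left), 3))
```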




More Weeks of Introduction to Machine Learning: Click here

More Nptel Courses: https://progiez.com/nptel-assignment-answers