# Introduction to Machine Learning | Week 8

**Session: JAN-APR 2024**

**Course name: Introduction to Machine Learning**

**Course Link: Click Here**


#### These are Introduction to Machine Learning Week 8 Assignment 8 Answers

**Q1. Consider the Bayesian network given below. Which of the following statement(s) is/are correct?**

a. B is independent of F, given D.

b. A is independent of E, given C.

c. E and F are not independent, given D.

d. A and B are not independent, given D.

**Answer: a), d)**

**Q2. Select the correct statement(s) from the ones given below.**

a. Naive Bayes models are a special case of Bayesian networks.

b. Naive Bayes models are a generalization of Bayesian networks.

c. With no independence among the variables, a Bayesian network representing a distribution over n variables would have n(n−1)/2 edges.

d. With no independence among the variables, a Bayesian network representing a distribution over n variables would have n−1 edges.

**Answer: a), c)**
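The n(n−1)/2 count follows from a pairing argument: with no independences, every pair of variables must share an edge, so a fully connected DAG has one edge per unordered pair of nodes. A minimal Python sketch (illustrative only, not part of the assignment) confirms the closed form:

```python
from itertools import combinations

def max_dag_edges(n):
    """Edges in a fully connected DAG over n variables:
    exactly one edge per unordered pair of nodes."""
    return len(list(combinations(range(n), 2)))

# The pairwise count matches the closed form n*(n-1)/2.
for n in range(2, 8):
    assert max_dag_edges(n) == n * (n - 1) // 2
```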


**Q3. A decision tree classifier learned from a fixed training set achieves 100% accuracy. Which of the following models trained using the same training set will also achieve 100% accuracy? (Assume P(xi|c) as Gaussians)**

I. Logistic Regressor.

II. A polynomial of degree one kernel SVM.

III. A linear discriminant function.

IV. Naive Bayes classifier.

I

I and II

IV

III

None of the above.

**Answer: None of the above.**

**Q4. Which of the following points would Bayesians and frequentists disagree on?**

a. The use of a non-Gaussian noise model in probabilistic regression.

b. The use of probabilistic modelling for regression.

c. The use of prior distributions on the parameters in a probabilistic model.

d. The use of class priors in Gaussian Discriminant Analysis.

e. The idea of assuming a probability distribution over models.

**Answer: c), e)**


**Q5. Consider the following data for 500 instances of home, 600 instances of office, and 700 instances of factory type buildings. Suppose a building has a balcony and power-backup but is not multi-storied. According to the Naive Bayes algorithm, it is of type:**

Home

Office

Factory

**Answer: Factory**

**Q6. In AdaBoost, we re-weight data points, giving more weight to points misclassified in previous iterations. Suppose we introduced a limit or cap on the weight that any point can take (for example, a restriction that prevents any point’s weight from exceeding a value of 10). Which among the following would be an effect of such a modification? (Multiple options may be correct)**

a. We may observe the performance of the classifier reduce as the number of stages increases.

b. It makes the final classifier robust to outliers.

c. It may result in lower overall performance.

d. It will make the problem computationally infeasible.

**Answer: b), c)**
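To make the robustness-to-outliers effect concrete, here is a minimal NumPy sketch of one AdaBoost-style reweighting step with a hypothetical cap of 10 on any single weight (the function name and all numbers are illustrative, not the course's code):

```python
import numpy as np

def adaboost_reweight(weights, misclassified, alpha, cap=10.0):
    """One AdaBoost-style reweighting step: upweight misclassified
    points by exp(alpha), clip each weight at `cap` (the hypothetical
    limit from the question), then renormalize to a distribution."""
    w = weights * np.exp(alpha * misclassified.astype(float))
    w = np.minimum(w, cap)   # the cap stops any one point from dominating
    return w / w.sum()

w = np.full(5, 0.2)                         # uniform initial weights
miss = np.array([True, False, False, False, False])
w = adaboost_reweight(w, miss, alpha=5.0)   # point 0 hits the cap
```

Without the cap, the single misclassified point would end up with nearly all of the total weight; with it, later stages spend less effort on hard (possibly noisy) points, which is exactly the robustness/performance trade-off in options b) and c).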


**Q7. While using Random Forests, if the input data is such that it contains a large number (> 80%) of irrelevant features (the target variable is independent of these features), which of the following statements are TRUE?**

a. Random Forests have reduced performance as the fraction of irrelevant features increases.

b. Random Forests have increased performance as the fraction of irrelevant features increases.

c. The fraction of irrelevant features doesn’t impact the performance of Random Forests.

**Answer: a) Random Forests have reduced performance as the fraction of irrelevant features increases.**

**Q8. Suppose you have a 6 class classification problem with one input variable. You decide to use logistic regression to build a predictive model. What is the minimum number of (β0,β) parameter pairs that need to be estimated?**

6

12

5

10

**Answer: 5**
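The count of 5 comes from using one class as a reference: with K classes under a softmax parameterization, only K−1 logits need their own (β0, β) pair, since the reference class's logit can be fixed at 0. A small NumPy sketch with hypothetical parameter values:

```python
import numpy as np

def softmax_with_reference(x, params):
    """Class probabilities for K classes from K-1 (beta0, beta) pairs;
    the last class is the reference with its logit fixed at 0."""
    logits = [b0 + b * x for (b0, b) in params] + [0.0]  # reference class
    e = np.exp(np.array(logits))
    return e / e.sum()

# 6 classes, one input variable -> 5 parameter pairs suffice.
params = [(0.1, 1.0), (0.2, -0.5), (0.0, 0.3), (-0.1, 0.7), (0.4, 0.0)]
p = softmax_with_reference(2.0, params)
```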


More Weeks of Introduction to Machine Learning: Click here

More Nptel Courses: https://progiez.com/nptel-assignment-answers

**Session: JULY-DEC 2023**

**Course Name: Introduction to Machine Learning**

**Course Link: Click Here**


**Q1. The figure below shows a Bayesian Network with 9 variables, all of which are binary. Which of the following is/are always true for the above Bayesian Network?**

P(A,B|G)=P(A|G)P(B|G)

P(A,I)=P(A)P(I)

P(B,H|E,G)=P(B|E,G)P(H|E,G)

P(C|B,F)=P(C|F)

**Answer: P(A,I)=P(A)P(I)**

**Q2. Consider the following data for 20 budget phones, 30 mid-range phones, and 20 high-end phones. Consider a phone with 2 SIM card slots and NFC but no 5G compatibility. Calculate the probabilities of this phone being a budget phone, a mid-range phone, and a high-end phone using the Naive Bayes method. What is the correct ordering of the phone types from the highest to the lowest probability?**

Budget, Mid-Range, High End

Budget, High End, Mid-Range

Mid-Range, High End, Budget

High End, Mid-Range, Budget

**Answer: Mid-Range, High End, Budget**


**Q3. A dataset with two classes is plotted below. Does the data satisfy the Naive Bayes assumption?**

Yes

No

The given data is insufficient

None of these

**Answer: No**


**Q4. A company hires you to look at their classification system for whether a given customer would potentially buy their product. When you check the existing classifier on different folds of the training set, you find that it manages a low accuracy of usually around 60%. Sometimes, it’s barely above 50%. With this information in mind, and without using additional classifiers, which of the following ensemble methods would you use to increase the classification accuracy effectively?**

Committee Machine

AdaBoost

Bagging

Stacking

**Answer: AdaBoost**


**Q5. Which of the following algorithms don’t use learning rate as a hyperparameter?**

A. Random Forests

B. AdaBoost

C. KNN

D. PCA

**Answer: A, C, D**


**Q6. Consider the two statements: Statement 1: Bayesian networks need not always be Directed Acyclic Graphs (DAGs). Statement 2: Each node in a Bayesian network represents a random variable, and each edge represents conditional dependence. Which of these are true?**

Both the statements are True.

Statement 1 is true, and statement 2 is false.

Statement 1 is false, and statement 2 is true.

Both the statements are false.

**Answer: Statement 1 is false, and statement 2 is true.**


**Q7. A dataset with two classes is plotted below. Does the data satisfy the Naive Bayes assumption?**

Yes

No

The given data is insufficient

None of these

**Answer: Yes**


**Q8. Consider the dataset below. Suppose you have to classify the test example “The ball won the race to the boundary” and are asked to compute P(Cricket | “The ball won the race to the boundary”). What is an issue that you will face if you are using a Naive Bayes classifier, and how will you work around it? Assume you are using word frequencies to estimate all the probabilities.**

There won’t be a problem, and the probability P(Cricket | “The ball won the race to the boundary”) will be equal to 1.

Problem: A few words that appear at test time do not appear in the dataset.

Solution: Smoothing.

Problem: A few words that appear at test time appear more than once in the dataset.

Solution: Remove those words from the dataset.

None of these

**Answer: Problem: A few words that appear at test time do not appear in the dataset. Solution: Smoothing.**
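A minimal sketch of the add-one (Laplace) smoothing mentioned in the answer; the word counts and vocabulary size below are made up for illustration:

```python
from collections import Counter

def smoothed_likelihood(word, class_counts, vocab_size, alpha=1.0):
    """Add-alpha (Laplace) smoothed P(word | class): an unseen
    test-time word gets a small nonzero probability instead of zero,
    so the Naive Bayes product does not collapse to 0."""
    total = sum(class_counts.values())
    return (class_counts.get(word, 0) + alpha) / (total + alpha * vocab_size)

cricket = Counter({"ball": 3, "boundary": 2, "race": 1})
p_unseen = smoothed_likelihood("won", cricket, vocab_size=10)  # (0+1)/(6+10)
```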


More Weeks of INTRODUCTION TO MACHINE LEARNING: Click here

More Nptel Courses: Click here

**Session: JAN-APR 2023**

**Course Name: Introduction to Machine Learning**

**Course Link: Click Here**


**Q1. The Naive Bayes classifier makes the assumption that the ___ are independent given the ___.**

a. features, class labels

b. class labels, features

c. features, data points

d. there is no such assumption

**Answer: a. features, class labels**

**Q2. Can the decision boundary produced by the Naive Bayes algorithm be non-linear?**

a. no

b. yes

**Answer: b. yes**


**Q3. A major problem of using the one vs. rest multi-class classification approach is:**

a. class imbalance

b. increased time complexity

**Answer: a. class imbalance**

**Q4. Consider the problem of learning a function X→Y, where Y is Boolean. X is an input vector (X1,X2), where X1 is categorical and takes 3 values, and X2 is a continuous variable (normally distributed). What would be the minimum number of parameters required to define a Naive Bayes model for this function?**

a. 8

b. 10

c. 9

d. 5

**Answer: c. 9**
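The count of 9 can be reconstructed as follows (assuming one Bernoulli prior on Y and, per class, a 3-value categorical for X1 and a univariate Gaussian for X2):

```latex
\underbrace{2 \times 2}_{P(X_1 \mid Y):\ 2\ \text{free per class}}
+ \underbrace{2 \times 2}_{P(X_2 \mid Y):\ \mu,\ \sigma^2\ \text{per class}}
+ \underbrace{1}_{P(Y)}
= 9
```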


**Q5. In boosting, the weights of data points that were misclassified are ___ as training progresses.**

a. decreased

b. increased

c. first decreased and then increased

d. kept unchanged

**Answer: b. increased**


**Q6. In a random forest model, let m << p be the number of randomly selected features that are used to identify the best split at any node of a tree. Which of the following are true? (p is the original number of features; multiple options may be correct.)**

a. increasing m reduces the correlation between any two trees in the forest

b. decreasing m reduces the correlation between any two trees in the forest

c. increasing m increases the performance of individual trees in the forest

d. decreasing m increases the performance of individual trees in the forest

**Answer: b, c**


**Q7. Consider the following graphical model. Which of the following are false about the model? (Multiple options may be correct.)**

a. A is independent of B when C is known

b. D is independent of A when C is known

c. D is not independent of A when B is known

d. D is not independent of A when C is known

**Answer: a, b**

**Q8. Consider the Bayesian network given in the previous question. Let ‘A’, ‘B’, ‘C’, ‘D’ and ‘E’ denote the random variables shown in the network. Which of the following can be inferred from the network structure?**

a. ‘A’ causes ‘D’

b. ‘E’ causes ‘D’

c. ‘C’ causes ‘A’

d. options (a) and (b) are correct

e. none of the above can be inferred

**Answer: e. none of the above can be inferred**


More Weeks of Introduction to Machine Learning: Click Here

More Nptel courses: https://progiez.com/nptel

**Session: JUL-DEC 2022**


Course Name: INTRODUCTION TO MACHINE LEARNING

Link to Enroll: Click Here

**Q1. The figure below shows a Bayesian Network with 9 variables, all of which are binary. Which of the following is/are always true for the above Bayesian Network?**

a. P(A,B|G)=P(A|G)P(B|G)

b. P(A,I)=P(A)P(I)

c. P(B,H|E,G)=P(B|E,G)P(H|E,G)

d. P(C|B,F)=P(C|F)

**Answer: c, d**

**Q2. Consider the following data for 20 budget phones, 30 mid-range phones, and 20 high-end phones. Consider a phone with 2 SIM card slots and NFC but no 5G compatibility. Calculate the probabilities of this phone being a budget phone, a mid-range phone, and a high-end phone using the Naive Bayes method. What is the correct ordering of the phone types from the highest to the lowest probability?**

a. Budget, Mid-Range, High End

b. Budget, High End, Mid-Range

c. Mid-Range, High End, Budget

d. High End, Mid-Range, Budget

**Answer: c. Mid-Range, High End, Budget**


**Q3. Consider the following dataset where outlook, temperature, humidity, and wind are independent features, and play is the dependent feature. Find the probability that the student will not play given that x = (Outlook=sunny, Temperature=66, Humidity=90, Windy=True) using the Naive Bayes method. (Assume the continuous features are represented as Gaussian distributions.)**

a. 0.0001367

b. 0.0000358

c. 0.0000236

d. 1

**Answer: c. 0.0000236**
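Since the question's data table is not reproduced here, the exact arithmetic cannot be repeated, but the Gaussian likelihood term that Naive Bayes multiplies in for each continuous feature (alongside the categorical terms for Outlook and Windy) can be sketched; the class prior, means, and standard deviations below are placeholders, not values from the question:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Likelihood term Naive Bayes uses for a continuous feature."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical per-class statistics (the question's table is not shown here):
prior_no_play = 5 / 14
score = (prior_no_play
         * gaussian_pdf(66, mu=74.6, sigma=7.9)    # Temperature | no-play
         * gaussian_pdf(90, mu=86.2, sigma=9.7))   # Humidity | no-play
```

The class with the largest such unnormalized score is the Naive Bayes prediction.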

**Q4. Which among Gradient Boosting and AdaBoost is less susceptible to outliers considering their respective loss functions?**

a. AdaBoost

b. Gradient Boost

c. On average, both are equally susceptible.

**Answer: b. Gradient Boost**


**Q5. How do you prevent overfitting in random forest models?**

a. Increasing Tree Depth.

b. Increasing the number of variables sampled at each split.

c. Increasing the number of trees.

d. All of the above.

**Answer: d. All of the above.**

**Q6. A dataset with two classes is plotted below. Does the data satisfy the Naive Bayes assumption?**

a. Yes

b. No

c. The given data is insufficient

d. None of these

**Answer: a. Yes**


**Q7. Ensembling in random forest classifier helps in achieving:**

a. reduction of bias error

b. reduction of variance error

c. reduction of data dimension

d. none of the above

**Answer: c. reduction of data dimension**


More NPTEL Solutions: https://progiez.com/nptel