# INTRODUCTION TO MACHINE LEARNING Week 4

**Session: JAN-APR 2023**

**Course Name: Introduction to Machine Learning**

**Course Link: Click Here**

**These are Introduction to Machine Learning Week 4 Assignment 4 Answers**

**Q1. Consider a Boolean function in three variables, that returns True if two or more variables out of three are True, and False otherwise. Can this function be implemented using the perceptron algorithm?**

a. no

b. yes

**Answer: b. yes**
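The answer follows because majority-of-three is linearly separable, so a single perceptron can realize it. A minimal sketch with one hand-picked separating hyperplane (the weights w = (1, 1, 1) and bias −1.5 are an illustrative choice, not the only one that works):

```python
from itertools import product

def majority_perceptron(x1, x2, x3):
    # Perceptron with weights (1, 1, 1) and bias -1.5:
    # fires exactly when two or more inputs are 1.
    return (1 * x1 + 1 * x2 + 1 * x3 - 1.5) > 0

# Verify against the full truth table of the majority function.
for bits in product([0, 1], repeat=3):
    assert majority_perceptron(*bits) == (sum(bits) >= 2)
```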

**Q2. For a support vector machine model, let xi be an input instance with label yi. If yi(β̂0 + xiᵀβ̂) > 1, where β̂0 and β̂ are the estimated parameters of the model, then**

a. xi is not a support vector

b. xi is a support vector

c. xi is either an outlier or a support vector

d. Depending upon other data points, xi may or may not be a support vector.

**Answer: a. xi is not a support vector**

**Q3. Suppose we use a linear kernel SVM to build a classifier for a 2-class problem where the training data points are linearly separable. In general, will the classifier trained in this manner be always the same as the classifier trained using the perceptron training algorithm on the same training data?**

a. Yes

b. No

**Answer: b. No**

**For Q4,5: Kindly download the synthetic dataset from the following link**

Click here to view the dataset

The dataset contains 1000 points and each input point contains 3 features.

**Q4. Train a linear regression model (without regularization) on the above dataset. Report the coefficients of the best-fit model in the following format: β0, β1, β2, β3. (You can round off the coefficient values to two decimal places.)**

a. -1.2, 2.1, 2.2, 1

b. 1, 1.2, 2.1, 2.2

c. -1, 1.2, 2.1, 2.2

d. 1, -1.2, 2.1, 2.2

e. 1, 1.2, -2.1, -2.2

**Answer: d. 1, -1.2, 2.1, 2.2**
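A sketch of the workflow, assuming sklearn. Since the dataset link is not reproduced in this post, stand-in data is generated here using the answer's coefficients; on the actual file you would instead load the 1000×4 array and fit on its first three columns:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Stand-in for the downloaded dataset: 1000 points, 3 features,
# target built from the coefficients in option d plus small noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = 1 + X @ np.array([-1.2, 2.1, 2.2]) + rng.normal(scale=0.1, size=1000)

model = LinearRegression().fit(X, y)
# With this little noise, the fit recovers beta_0 ≈ 1 and
# (beta_1, beta_2, beta_3) ≈ (-1.2, 2.1, 2.2).
print(round(model.intercept_, 2), np.round(model.coef_, 2).tolist())
```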

**Q5. Train an l2 regularized linear regression model on the above dataset. Vary the regularization parameter from 1 to 10. As you increase the regularization parameter, the absolute values of the coefficients (excluding the intercept) of the model:**

a. increase

b. first increase then decrease

c. decrease

d. first decrease then increase

**Answer: c. decrease**
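The shrinkage can be observed directly with sklearn's Ridge, where `alpha` plays the role of the regularization parameter (stand-in data again, since the dataset link is not reproduced here):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = 1 + X @ np.array([-1.2, 2.1, 2.2]) + rng.normal(scale=0.1, size=1000)

norms = []
for alpha in range(1, 11):                      # regularization parameter 1..10
    coef = Ridge(alpha=alpha).fit(X, y).coef_
    norms.append(np.abs(coef).sum())

# Each increase of alpha shrinks the coefficients a little further.
assert all(a > b for a, b in zip(norms, norms[1:]))
```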

**For Q6,7: Kindly download the modified version of Iris dataset from this link.**

**Available at: (Click here to view the Iris dataset)**

The dataset contains 150 points; each input point contains 4 features and belongs to one of three classes. Use the first 100 points as the training data and the remaining 50 as test data. In the following questions, report accuracy on the test dataset. You can round off the accuracy value to two decimal places. (Note: Do not change the order of data points.)

**Q6. Train an l2 regularized logistic regression classifier on the modified iris dataset. We recommend using sklearn. Use only the first two features for your model. We encourage you to explore the impact of varying different hyperparameters of the model. Kindly note that the C parameter mentioned below is the inverse of the regularization parameter λ. As part of the assignment, train a model with the following hyperparameters: logistic regression with a one-vs-rest classifier, C=1e4. For the above set of hyperparameters, report the best classification accuracy.**

a. 0.88

b. 0.86

c. 0.98

d. 0.68

**Answer: b. 0.86**
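A sketch of the pipeline. The modified dataset from the link is not reproduced in this post, so sklearn's built-in iris, shuffled with a fixed seed, is used as a stand-in; because the row order differs from the actual modified file, the accuracy printed here will not match the quoted 0.86:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

X, y = load_iris(return_X_y=True)
perm = np.random.default_rng(0).permutation(len(y))   # stand-in ordering
X, y = X[perm, :2], y[perm]                           # first two features only

X_tr, y_tr = X[:100], y[:100]                         # fixed split, no reshuffling
X_te, y_te = X[100:], y[100:]

# C = 1e4 means very weak l2 regularization (C is the inverse of lambda).
clf = OneVsRestClassifier(
    LogisticRegression(C=1e4, max_iter=1000)).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(round(acc, 2))
```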

**Q7. Train an SVM classifier on the modified iris dataset. We recommend using sklearn. Use only the first two features for your model. We encourage you to explore the impact of varying different hyperparameters of the model; specifically, try different kernels and the associated hyperparameters. As part of the assignment, train models with the following set of hyperparameters: RBF kernel, gamma=0.5, one-vs-rest classifier, no feature normalization. Try C=0.01, 1, 10. For the above set of hyperparameters, report the best classification accuracy along with the total number of support vectors on the test data.**

a. 0.92, 69

b. 0.88, 40

c. 0.88, 69

d. 0.98, 41

**Answer: c. 0.88, 69**
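The grid over C can be sketched as below, again on shuffled built-in iris as a stand-in for the modified dataset (so the printed numbers will not match the quoted 0.88 / 69). Note that the support vectors reported by `SVC` come from the training set:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
perm = np.random.default_rng(0).permutation(len(y))   # stand-in ordering
X, y = X[perm, :2], y[perm]                           # first two features only

X_tr, y_tr, X_te, y_te = X[:100], y[:100], X[100:], y[100:]

best = None
for C in (0.01, 1, 10):
    clf = SVC(kernel="rbf", gamma=0.5, C=C,
              decision_function_shape="ovr").fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)
    n_sv = clf.support_vectors_.shape[0]              # total support vectors
    if best is None or acc > best[0]:
        best = (acc, n_sv)
print(best)
```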

More Weeks of Introduction to Machine Learning: Click Here

More Nptel courses: https://progiez.com/nptel

**Session: JUL-DEC 2022**

**Course Name: INTRODUCTION TO MACHINE LEARNING**

Link to Enroll: Click Here

**Q1. Consider the 1-dimensional dataset:**

State true or false: The dataset becomes linearly separable after using basis expansion with the basis function ϕ(x) = [1, x³].

a. True

b. False

**Answer: a. True**

**Q2. Consider a linear SVM trained with n labeled points in R² without slack penalties, resulting in k=2 support vectors, where n>100. By removing one labeled training point and retraining the SVM classifier, what is the maximum possible number of support vectors in the resulting solution?**

a. 1

b. 2

c. 3

d. n − 1

e. n

**Answer: b. 2**

**Q3. Which of the following are valid kernel functions?**

a. (1 + ⟨x, x′⟩)^d

b. tanh(K1⟨x, x′⟩ + K2)

c. exp(−γ||x − x′||²)

**Answer: a, b, c**
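One way to sanity-check a candidate kernel empirically: a valid (Mercer) kernel must yield a positive semi-definite Gram matrix on every finite sample. The sketch below runs this check for the RBF kernel in option (c); note that the sigmoid kernel in (b) is only conditionally PSD (for certain K1, K2), although it is accepted as valid in this answer:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))          # small random sample in R^3

# Gram matrix for the RBF kernel exp(-gamma * ||x - x'||^2), gamma = 0.5.
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq_dists)

# All eigenvalues non-negative (up to floating-point error) => PSD.
eigvals = np.linalg.eigvalsh(K)
assert eigvals.min() > -1e-8
```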

**Q4. Consider the following dataset:**

**Which of these is not a support vector when using a Support Vector Classifier with a polynomial kernel with degree=3, C=1, and gamma=0.1?**

a. 3

b. 1

c. 9

d. 10

**Answer: b. 1**

**Q5. Consider an SVM with a second order polynomial kernel. Kernel 1 maps each input data point x to K1(x) = [x, x²]. Kernel 2 maps each input data point x to K2(x) = [3x, 3x²]. Assume the hyper-parameters are fixed. Which of the following options is true?**

a. The margin obtained using K2(x) will be larger than the margin obtained using K1(x).

b. The margin obtained using K2(x) will be smaller than the margin obtained using K1(x).

c. The margin obtained using K2(x) will be the same as the margin obtained using K1(x).

**Answer: c. The margin obtained using K2(x) will be the same as the margin obtained using K1(x).**

**Q6. Train a Linear perceptron classifier on the modified iris dataset. Report the best classification accuracy for l1 and elasticnet penalty terms.**

a. 0.82, 0.64

b. 0.90, 0.71

c. 0.84, 0.82

d. 0.78, 0.64

**Answer: b. 0.90, 0.71**
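A sketch with sklearn's `Perceptron`, which accepts `penalty="l1"` and `penalty="elasticnet"` directly. Shuffled built-in iris is used as a stand-in for the modified dataset, so the printed scores will differ from the quoted 0.90 / 0.71:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron

X, y = load_iris(return_X_y=True)
perm = np.random.default_rng(0).permutation(len(y))   # stand-in ordering
X, y = X[perm], y[perm]
X_tr, y_tr, X_te, y_te = X[:100], y[:100], X[100:], y[100:]

scores = {}
for penalty in ("l1", "elasticnet"):
    clf = Perceptron(penalty=penalty, random_state=0).fit(X_tr, y_tr)
    scores[penalty] = round(clf.score(X_te, y_te), 2)
print(scores)
```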

**Q7. Train an SVM classifier on the modified iris dataset. We encourage you to explore the impact of varying different hyperparameters of the model. Specifically, try different kernels and the associated hyperparameters. As part of the assignment, train models with the following set of hyperparameters: polynomial kernel, gamma=0.4, one-vs-rest classifier, no feature normalization. Report the best classification accuracy.**

a. 0.98

b. 0.96

c. 0.92

d. 0.94

**Answer: a. 0.98**
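A sketch of the polynomial-kernel setup, once more on shuffled built-in iris as a stand-in for the modified dataset (so the printed accuracy will differ from the quoted 0.98):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
perm = np.random.default_rng(1).permutation(len(y))   # stand-in ordering
X, y = X[perm], y[perm]

# Polynomial kernel with gamma=0.4; SVC's degree defaults to 3.
clf = SVC(kernel="poly", gamma=0.4,
          decision_function_shape="ovr").fit(X[:100], y[:100])
acc = clf.score(X[100:], y[100:])
print(round(acc, 2))
```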

More weeks solution of this course: https://progies.in/answers/nptel/introduction-to-machine-learning

More NPTEL Solutions: https://progies.in/answers/nptel

* The material and content uploaded on this website are for general information and reference purposes only. Please attempt the assignments on your own first. COPYING MATERIALS IS STRICTLY PROHIBITED.