Submission instructions
Submit your solutions electronically on the course Gradescope site as PDF files.
If you plan to typeset your solutions, please use the LaTeX solution template. If you must submit scanned handwritten solutions, please use a black pen on blank white paper and a high-quality scanner app.
Recall that the ID3 algorithm iteratively grows a decision tree from the root downwards. On each iteration, the algorithm replaces one leaf node with an internal node that splits the data based on one decision attribute (or feature). In particular, the ID3 algorithm chooses the split that reduces the entropy the most, but there are other choices. For example, since our goal in the end is to have the lowest error, why not instead choose the split that reduces error the most? In this problem, we will explore one reason why reducing entropy is a better criterion.
Consider the following simple setting. Suppose each example is described by n boolean features: X = ⟨X_{1}, …, X_{n}⟩, where X_{i} ∈ {0, 1} and n ≥ 4. Furthermore, the target function to be learned is f : X → Y, where Y = X_{1} ∨ X_{2} ∨ X_{3}. That is, Y = 1 if X_{1} = 1 or X_{2} = 1 or X_{3} = 1, and Y = 0 otherwise. Suppose that your training data contains all of the 2^{n} possible examples, each labeled by f. For example, when n = 4, the data set would be

X1 X2 X3 X4 | Y        X1 X2 X3 X4 | Y
 0  0  0  0 | 0         0  0  0  1 | 0
 1  0  0  0 | 1         1  0  0  1 | 1
 0  1  0  0 | 1         0  1  0  1 | 1
 1  1  0  0 | 1         1  1  0  1 | 1
 0  0  1  0 | 1         0  0  1  1 | 1
 1  0  1  0 | 1         1  0  1  1 | 1
 0  1  1  0 | 1         0  1  1  1 | 1
 1  1  1  0 | 1         1  1  1  1 | 1

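As a sanity check, the full training set can be enumerated directly. The sketch below is plain Python with no dependencies; the function name make_dataset is ours for illustration, not part of any starter code.

```python
from itertools import product

def make_dataset(n):
    """All 2^n boolean examples over X_1..X_n, labeled by f(x) = x1 OR x2 OR x3."""
    data = []
    for x in product([0, 1], repeat=n):
        y = 1 if (x[0] or x[1] or x[2]) else 0
        data.append((x, y))
    return data

data = make_dataset(4)
# Only the 2^(n-3) examples with X1 = X2 = X3 = 0 have Y = 0.
num_negative = sum(1 for _, y in data if y == 0)
```

For n = 4 this gives 16 examples, of which exactly 2 are negative, matching the table above.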
(5 pts) How many mistakes does the best 1-leaf decision tree make over the 2^{n} training examples? (The 1-leaf decision tree does not split the data even once. Make sure you answer for the general case when n ≥ 4.)

(5 pts) Is there a split that reduces the number of mistakes by at least one? (That is, is there a decision tree with 1 internal node with fewer mistakes than your answer to part (a)?) Why or why not?

(5 pts) What is the entropy of the output label Y for the 1-leaf decision tree (no splits at all)?

(5 pts) Is there a split that reduces the entropy of the output Y by a nonzero amount? If so, what is it, and what is the resulting conditional entropy of Y given this split?
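To build intuition for these parts, one can compute, for each candidate split, both the number of training mistakes and the conditional entropy of Y. A minimal sketch (the helper names entropy and split_stats are illustrative, not from any starter code):

```python
from itertools import product
from math import log2

def entropy(labels):
    """H(Y) in bits for a list of 0/1 labels."""
    if not labels:
        return 0.0
    q = sum(labels) / len(labels)
    if q in (0.0, 1.0):
        return 0.0
    return -q * log2(q) - (1 - q) * log2(1 - q)

def split_stats(n, j):
    """Mistakes and conditional entropy after splitting on X_j (1-indexed),
    with each leaf predicting its majority label."""
    branches = {0: [], 1: []}
    for x in product([0, 1], repeat=n):
        y = 1 if (x[0] or x[1] or x[2]) else 0
        branches[x[j - 1]].append(y)
    mistakes = sum(min(b.count(0), b.count(1)) for b in branches.values())
    cond_ent = sum(len(b) / 2 ** n * entropy(b) for b in branches.values())
    return mistakes, cond_ent

all_labels = [1 if (x[0] or x[1] or x[2]) else 0
              for x in product([0, 1], repeat=4)]
h_root = entropy(all_labels)      # entropy of Y with no split
m1, h1 = split_stats(4, 1)        # split on the relevant X_1
m4, h4 = split_stats(4, 4)        # split on the irrelevant X_4
```

Comparing the root entropy with the conditional entropy after each split makes the contrast between the error criterion and the entropy criterion concrete.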

Entropy and Information [5 pts]
The entropy of a Bernoulli (Boolean 0/1) random variable X with p(X = 1) = q is given by
B(q) = -q log q - (1 - q) log(1 - q).
Suppose that a set S of examples contains p positive examples and n negative examples. The entropy of S is defined as H(S) = B(p/(p + n)).
(a) (5 pts) Based on an attribute X_{j}, we split our examples into k disjoint subsets S_{k}, with p_{k} positive and n_{k} negative examples in each. If the ratio p_{k}/(p_{k} + n_{k}) is the same for all k, show that the information gain of this attribute is 0.
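A quick numeric check of the claim (base-2 logarithms assumed; the helper names B and information_gain are illustrative): when every subset has the same positive ratio as the parent, the weighted conditional entropy equals the parent entropy, so the gain vanishes.

```python
from math import log2

def B(q):
    """Entropy (bits) of a Bernoulli(q) variable; B(0) = B(1) = 0."""
    if q in (0.0, 1.0):
        return 0.0
    return -q * log2(q) - (1 - q) * log2(1 - q)

def information_gain(parent, children):
    """parent and each child are (positives, negatives) counts;
    the children partition the parent set."""
    p, n = parent
    total = p + n
    h_parent = B(p / total)
    h_cond = sum((pk + nk) / total * B(pk / (pk + nk)) for pk, nk in children)
    return h_parent - h_cond

# Every subset has ratio p_k / (p_k + n_k) = 2/3, matching the parent.
gain = information_gain((6, 3), [(2, 1), (4, 2)])
```

By contrast, a split into pure subsets, e.g. information_gain((6, 3), [(6, 0), (0, 3)]), gives a strictly positive gain.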

k-Nearest Neighbors and Cross-validation [15 pts]
In the following questions you will consider a k-nearest neighbor classifier that uses the Euclidean distance metric on a binary classification task. We assign the class of a test point to be the class of the majority of its k nearest neighbors. Note that a point can be its own neighbor.
Figure 1: Dataset for KNN binary classification task.

(5 pts) What value of k minimizes the training set error for this dataset? What is the resulting training error?

(5 pts) Why might using too large a value of k be bad for this dataset? Why might too small a value of k also be bad?

(5 pts) What value of k minimizes the leave-one-out cross-validation error for this dataset? What is the resulting error?
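Leave-one-out cross-validation for a k-nearest neighbor classifier can be sketched as follows. The dataset of Figure 1 is not reproduced here, so this uses a toy 1-D stand-in with two well-separated clusters; note also that, unlike the training-error setting, the held-out point is removed from the training set, so it is never its own neighbor.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Toy 1-D stand-in for the Figure 1 data (two well-separated clusters).
X = np.array([[1.0], [2.0], [3.0], [6.0], [7.0], [8.0]])
y = np.array([0, 0, 0, 1, 1, 1])

loo_error = {}
for k in (1, 3, 5):
    clf = KNeighborsClassifier(n_neighbors=k)
    accuracy = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    loo_error[k] = 1 - accuracy
```

On this toy data, k = 1 and k = 3 give zero leave-one-out error, while k = 5 misclassifies every held-out point: with the point itself removed, only two of its five neighbors share its class.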

Programming exercise: Applying decision trees and k-nearest neighbors [60 pts]
Submission instructions
Only provide answers and plots. Do not submit code.
Introduction^{1}
The sinking of the RMS Titanic is one of the most infamous shipwrecks in history. On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. This sensational tragedy shocked the international community and led to better safety regulations for ships.
One of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew. Although there was some element of luck involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper class.
In this problem, we ask you to complete the analysis of what sorts of people were likely to survive. In particular, we ask you to apply the tools of machine learning to predict which passengers survived the tragedy.
Starter Files
code and data
code : titanic.py
data : titanic_train.csv
documentation
Decision Tree Classifier:
http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html
K-Nearest Neighbor Classifier:
http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html
Cross-Validation:
http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.train_test_split.html
Metrics:
http://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html
Download the code and data sets from the course website. For more information on the data set, see the Kaggle description: https://www.kaggle.com/c/titanic/data. (The provided data sets
are modified versions of the data available from Kaggle.^{2})
Note that any portions of the code that you must modify have been indicated with TODO. Do not change any code outside of these blocks.
4.1 Visualization [5 pts]
One of the first things to do before trying any formal machine learning technique is to dive into the data. This can include looking for funny values in the data, looking for outliers, looking at the range of feature values, asking which features seem important, and so on.

(5 pts) Run the code (titanic.py) to make histograms for each feature, separating the examples by class (i.e., survival). This should produce seven plots, one for each feature, and each plot should have two overlapping histograms, with the color of the histogram indicating the class. For each feature, what trends do you observe in the data?
4.2 Evaluation [55 pts]
Now, let us use scikit-learn to train a DecisionTreeClassifier and KNeighborsClassifier on the data.
Using the predictive capabilities of the scikit-learn package is very simple. In fact, it can be carried out in three simple steps: initializing the model, fitting it to the training data, and predicting new values.^{3}
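A minimal sketch of that three-step pattern, here with a DecisionTreeClassifier on a built-in toy dataset (any scikit-learn estimator follows the same interface):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)     # any toy dataset works here

clf = DecisionTreeClassifier()        # 1. initialize the model
clf.fit(X, y)                         # 2. fit it to the training data
preds = clf.predict(X)                # 3. predict new values
train_accuracy = clf.score(X, y)      # training error is 1 - this accuracy
```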

(5 pts) Before trying out any classifier, it is often useful to establish a baseline. We have implemented one simple baseline classifier, MajorityVoteClassifier, that always predicts the majority class from the training set. Read through the MajorityVoteClassifier code and its usage and make sure you understand how it works.
Your goal is to implement and evaluate another baseline classi er, RandomClassifier, that predicts a target class according to the distribution of classes in the training data set. For example, if 60% of the examples in the training set have Survived = 0 and 40% have Survived = 1, then, when applied to a test set, RandomClassifier should randomly predict 60% of the examples as Survived = 0 and 40% as Survived = 1.
Implement the missing portions of RandomClassifier according to the provided specifications. Then train your RandomClassifier on the entire training data set, and evaluate its training error. If you implemented everything correctly, you should have an error of 0.485.
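A standalone sketch of the idea (the real starter code in titanic.py defines its own interface, so treat the method signatures here as illustrative): fit records the empirical class distribution, and predict samples labels from it.

```python
import numpy as np

class RandomClassifier:
    """Predict labels at random according to the training class distribution.
    (Standalone sketch -- the real starter code defines its own interface.)"""

    def fit(self, X, y):
        labels, counts = np.unique(y, return_counts=True)
        self.labels_ = labels
        self.probs_ = counts / counts.sum()   # e.g. [0.6, 0.4]
        return self

    def predict(self, X, seed=1234):
        rng = np.random.default_rng(seed)     # fixed seed for reproducibility
        return rng.choice(self.labels_, size=len(X), p=self.probs_)
```

For a 60/40 training distribution, roughly 60% of the predictions come out as class 0, regardless of the test inputs.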

(5 pts) Now that we have a baseline, train and evaluate a DecisionTreeClassifier (using the class from scikit-learn and referring to the documentation as needed). Make sure you initialize your classifier with the appropriate parameters; in particular, use the 'entropy' criterion discussed in class. What is the training error of this classifier?

^{2}Passengers with missing values for any feature have been removed. Also, the categorical feature Sex has been mapped to {'female': 0, 'male': 1} and Embarked to {'C': 0, 'Q': 1, 'S': 2}. If you are interested more in this process of data munging, Kaggle has an excellent tutorial available at https://www.kaggle.com/c/titanic/details/getting-started-with-python-ii.
^{3}Note that almost all of the model techniques in scikit-learn share a few common named functions, once they are initialized. You can always find out more about them in the documentation for each model. These are somemodelname.fit(…), somemodelname.predict(…), and somemodelname.score(…).
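A sketch of what this step looks like, using synthetic stand-in features (the actual data comes from titanic_train.csv via the starter code):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 4))      # stand-in boolean features
y = X[:, 0] | X[:, 1]                      # stand-in deterministic labels

clf = DecisionTreeClassifier(criterion="entropy")   # the criterion from class
clf.fit(X, y)
train_error = 1 - clf.score(X, y)          # training error = 1 - accuracy
```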

(5 pts) Similar to the previous question, train and evaluate a KNeighborsClassifier (using the class from scikit-learn and referring to the documentation as needed). Use k = 3, 5, and 7 as the number of neighbors and report the training error for each.
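A sketch, again on synthetic stand-in data. Because the training error is evaluated on the same points used to fit the model, each point counts itself among its own k nearest neighbors:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))              # stand-in features
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # stand-in labels

train_errors = {}
for k in (3, 5, 7):
    clf = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    # score() is evaluated on the training points themselves, so each
    # point counts itself among its own k nearest neighbors.
    train_errors[k] = 1 - clf.score(X, y)
```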

(10 pts) So far, we have looked only at training error, but as we learned in class, training error is a poor metric for evaluating classifiers. Let us use cross-validation instead.
Implement the missing portions of error(…) according to the provided specifications. You may find it helpful to use train_test_split(…) from scikit-learn. To ensure that we always get the same splits across different runs (and thus can compare classifier results), set the random_state parameter to be the trial number.
Next, use your error(…) function to evaluate the training error and (cross-validation) test error of each of your four models (for the KNeighborsClassifier, use k = 5). To do this, generate a random 80/20 split of the training data, train each model on the 80% fraction, evaluate the error on either the 80% or the 20% fraction, and repeat this 100 times to get an average result. What are the average training and test errors of each of your classifiers on the Titanic data set?
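One plausible shape for error(…) is sketched below; the exact signature is dictated by the starter code's specifications, so treat this as illustrative. It averages the train and test errors over 100 random 80/20 splits, seeding each split with the trial number.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def error(clf, X, y, ntrials=100, test_size=0.2):
    """Average training and test error over ntrials random 80/20 splits."""
    train_error = test_error = 0.0
    for trial in range(ntrials):
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=test_size, random_state=trial)  # seed = trial no.
        clf.fit(X_train, y_train)
        train_error += 1 - clf.score(X_train, y_train)
        test_error += 1 - clf.score(X_test, y_test)
    return train_error / ntrials, test_error / ntrials

# Demo on synthetic stand-in data with deterministic labels.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 4))
y = X[:, 0] | X[:, 1]
tree_train, tree_test = error(DecisionTreeClassifier(criterion="entropy"), X, y)
```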

(10 pts) One way to find the best value of k for KNeighborsClassifier is n-fold cross-validation. Find the best value of k using 10-fold cross-validation. You may find cross_val_score(…) from scikit-learn helpful. Run 10-fold cross-validation for all odd numbers ranging from 1 to 50 as the number of neighbors. Then plot the validation error against the number of neighbors, k. Include this plot in your writeup, and provide a 1-2 sentence description of your observations. What is the best value of k?
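A sketch of the k-selection loop on synthetic stand-in data (plotting with matplotlib is omitted; cross_val_score returns per-fold accuracies, so the validation error is one minus their mean):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 2))                        # stand-in features
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1).astype(int)    # stand-in labels

ks = list(range(1, 50, 2))                           # odd k from 1 to 49
cv_errors = [1 - cross_val_score(KNeighborsClassifier(n_neighbors=k),
                                 X, y, cv=10).mean()
             for k in ks]
best_k = ks[int(np.argmin(cv_errors))]
```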

(10 pts) One problem with decision trees is that they can overfit the training data, yielding complex classifiers that do not generalize well to new data. Let us see whether this is the case for the Titanic data.
One way to prevent decision trees from overfitting is to limit their depth. Repeat your cross-validation experiments, but for increasing depth limits, specifically 1, 2, …, 20. Then plot the average training error and test error against the depth limit. Include this plot in your writeup, making sure to label all axes and include a legend for your classifiers. What is the best depth limit to use for this data? Do you see overfitting? Justify your answers using the plot.
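The depth-limit sweep can be sketched like this on synthetic data with some label noise (so that overfitting is actually visible); the real experiment substitutes the Titanic features and your error(…) routine:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                          # stand-in features
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)        # XOR-style target
y = np.where(rng.random(300) < 0.1, 1 - y, y)          # 10% label noise

depths = list(range(1, 21))
train_errs, test_errs = [], []
for depth in depths:
    tr = te = 0.0
    for trial in range(100):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.2, random_state=trial)
        clf = DecisionTreeClassifier(criterion="entropy", max_depth=depth)
        clf.fit(X_tr, y_tr)
        tr += 1 - clf.score(X_tr, y_tr)
        te += 1 - clf.score(X_te, y_te)
    train_errs.append(tr / 100)
    test_errs.append(te / 100)
```

Training error falls steadily with depth while, past some point, the noisy labels get memorized and test error stops improving; that divergence is the overfitting signature to look for in your plot.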

(10 pts) Another useful tool for evaluating classifiers is learning curves, which show how classifier performance (e.g. error) relates to experience (e.g. amount of training data). For this experiment, first generate a random 90/10 split of the training data, and do the following experiments treating the 90% fraction as training data and the 10% fraction as testing data.
Run experiments for the decision tree and k-nearest neighbors classifiers with the best depth limit and k value you found above. This time, vary the amount of training data by starting with splits of 0.10 (10% of the data from the 90% fraction) and working up to the full size 1.00 (100% of the data from the 90% fraction) in increments of 0.10. Then plot the decision tree and k-nearest neighbors training and test error against the amount of training data. Include this plot in your writeup, and provide a 1-2 sentence description of your observations.
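A sketch of the learning-curve loop on synthetic stand-in data. The hyperparameters here (max_depth=3, n_neighbors=7) are placeholders; substitute the best depth limit and k you found above, and plot the resulting curves with matplotlib.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                 # stand-in features
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # stand-in labels

# One fixed 90/10 split; then train on growing fractions of the 90%.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)

fractions = [round(0.1 * i, 2) for i in range(1, 11)]
curves = {"tree": [], "knn": []}
for frac in fractions:
    m = int(frac * len(X_tr))                 # number of training examples
    for name, clf in (("tree", DecisionTreeClassifier(max_depth=3)),
                      ("knn", KNeighborsClassifier(n_neighbors=7))):
        clf.fit(X_tr[:m], y_tr[:m])
        curves[name].append(1 - clf.score(X_te, y_te))
```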