
Fitting a Naïve Bayes Model, 40 pts. In this question, we'll fit a Naïve Bayes model to the MNIST digits using maximum likelihood. The starter code will download the dataset and parse it for you: each training sample $(t^{(i)}, x^{(i)})$ is composed of a vectorized binary image $x^{(i)} \in \{0, 1\}^{784}$ and a one-of-10 encoded class label $t^{(i)}$, i.e., $t_c^{(i)} = 1$ means image $i$ belongs to class $c$.
For $p(c \mid \pi) = \pi_c$ and $p(x_j = 1 \mid c, \theta, \pi) = \theta_{jc}$, Naïve Bayes defines the joint probability of each data point $x$ and its class label $c$ as follows:

$$p(x, c \mid \theta, \pi) = p(c \mid \theta, \pi)\, p(x \mid c, \theta, \pi) = p(c \mid \pi) \prod_{j=1}^{784} p(x_j \mid c, \theta_{jc}).$$

Here, $\theta$ is a matrix of probabilities for each pixel and each class, so its dimensions are $784 \times 10$. (Note that in the lecture we simplified notation and didn't write the probabilities conditioned on the parameters, i.e., $p(c \mid \pi)$ is written as $p(c)$ in the lecture slides.)
For binary data, we can write the Bernoulli likelihood as

$$p(x_j \mid c, \theta_{jc}) = \theta_{jc}^{x_j} (1 - \theta_{jc})^{(1 - x_j)}, \tag{1.1}$$

which is just a way of expressing $p(x_j = 1 \mid c, \theta_{jc}) = \theta_{jc}$ and $p(x_j = 0 \mid c, \theta_{jc}) = 1 - \theta_{jc}$ in a compact form. For the prior $p(t \mid \pi)$, we use a categorical distribution (the generalization of the Bernoulli distribution to the multiclass case),

$$p(t_c = 1 \mid \pi) = p(c \mid \pi) = \pi_c, \quad \text{or equivalently} \quad p(t \mid \pi) = \prod_{j=0}^{9} \pi_j^{t_j}, \quad \text{where } \sum_{i=0}^{9} \pi_i = 1,$$

where $p(c \mid \pi)$ and $p(t \mid \pi)$ can be used interchangeably. You will fit the parameters $\theta$ and $\pi$ using MLE and MAP techniques; in both cases below, your fitting procedure can be written as a few simple matrix multiplication operations.
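To make the model concrete, here is a minimal numpy sketch that evaluates the joint log-probability $\log p(x, c \mid \theta, \pi)$ for a single binary image. The arrays `theta` and `pi` below are made-up stand-ins (not fitted values), used only to show how the Bernoulli and categorical factors combine:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (NOT fitted): theta is 784 x 10, pi sums to 1.
theta = rng.uniform(0.05, 0.95, size=(784, 10))
pi = np.full(10, 0.1)

x = rng.integers(0, 2, size=784)  # one binary image
c = 3                             # a class label

# log p(x, c | theta, pi)
#   = log pi_c + sum_j [ x_j log theta_jc + (1 - x_j) log(1 - theta_jc) ]
log_joint = np.log(pi[c]) + np.sum(
    x * np.log(theta[:, c]) + (1 - x) * np.log(1 - theta[:, c])
)
print(log_joint)
```

Working in log space like this avoids underflow, since a product of 785 probabilities is astronomically small.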

(a) First, derive the maximum likelihood estimator (MLE) for the class-conditional pixel probabilities $\theta$ and the prior $\pi$. Hint 1: We saw in lecture that the MLE can be thought of as a 'ratio of counts' for the data, so what should $\hat{\theta}_{jc}$ be counting? Derivations should be rigorous. Hint 2: Similar to the binary case, when calculating the MLE for $\pi_j$ for $j = 0, 1, \ldots, 8$, write $p(t^{(i)} \mid \pi) = \prod_{j=0}^{9} \pi_j^{t_j^{(i)}}$, and in the log-likelihood replace $\pi_9 = 1 - \sum_{j=0}^{8} \pi_j$, and then take derivatives w.r.t. $\pi_j$. This will give you the ratio $\hat{\pi}_j / \hat{\pi}_9$ for $j = 0, 1, \ldots, 8$. You know that the $\hat{\pi}_j$'s sum up to 1.

(b) Derive the log-likelihood $\log p(t \mid x, \theta, \pi)$ for a single training image.

(c) Fit the parameters $\theta$ and $\pi$ using the training set with MLE, and try to report the average log-likelihood per data point, $\frac{1}{N} \sum_{i=1}^{N} \log p(t^{(i)} \mid x^{(i)}, \hat{\theta}, \hat{\pi})$, using Equation (1.1). What goes wrong? (It's okay if you can't compute the average log-likelihood here.)
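One way to organize this computation is via the log-sum-exp trick, since $\log p(t \mid x) = \log p(x, c) - \log \sum_{c'} p(x, c')$. The sketch below uses random stand-in data and parameters (not the fitted MLE values) purely to show the shape of the computation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data and parameters (NOT the fitted MLE values).
N = 5
X = rng.integers(0, 2, size=(N, 784))           # binary images, one per row
t = rng.integers(0, 10, size=N)                 # integer class labels
theta = rng.uniform(0.05, 0.95, size=(784, 10))
pi = np.full(10, 0.1)

# log p(x, c) for every image and every class: an N x 10 matrix.
log_joint = X @ np.log(theta) + (1 - X) @ np.log(1 - theta) + np.log(pi)

# log p(t | x) = log p(x, t) - logsumexp_c log p(x, c), computed stably.
log_max = np.max(log_joint, axis=1, keepdims=True)
log_evidence = log_max[:, 0] + np.log(np.exp(log_joint - log_max).sum(axis=1))
avg_ll = np.mean(log_joint[np.arange(N), t] - log_evidence)
print(avg_ll)
```

Note that the whole likelihood evaluation reduces to two matrix multiplications, as promised in the setup above.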

(d) Plot the MLE estimator $\hat{\theta}$ as 10 separate greyscale images, one for each class.
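Each column of the $784 \times 10$ estimator corresponds to one class and can be reshaped back into a $28 \times 28$ image. A small sketch (using a random stand-in for the fitted $\hat{\theta}$; the actual display call, e.g. matplotlib's `plt.imshow(img, cmap="gray")`, depends on your setup):

```python
import numpy as np

# Stand-in for the fitted estimator (shape 784 x 10, one column per class).
theta_hat = np.random.default_rng(0).uniform(0, 1, size=(784, 10))

# One 28 x 28 greyscale image per class; show each with e.g.
# plt.imshow(images[c], cmap="gray").
images = [theta_hat[:, c].reshape(28, 28) for c in range(10)]
print(len(images), images[0].shape)
```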

(e) Derive the Maximum A Posteriori (MAP) estimator for the class-conditional pixel probabilities $\theta$, using a Beta(3, 3) prior on each $\theta_{jc}$. Hint: it has a simple final form, and you can ignore the Beta normalizing constant.

(f) Fit the parameters $\theta$ and $\pi$ using the training set with the MAP estimators from the previous part, and report both the average log-likelihood per data point, $\frac{1}{N} \sum_{i=1}^{N} \log p(t^{(i)} \mid x^{(i)}, \hat{\theta}, \hat{\pi})$, and the accuracy on both the training and test set. The accuracy is defined as the fraction of examples where the true class is correctly predicted using $\hat{c} = \operatorname{argmax}_c \log p(t_c = 1 \mid x, \hat{\theta}, \hat{\pi})$.
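For the accuracy, note that $\operatorname{argmax}_c \log p(t_c = 1 \mid x)$ equals $\operatorname{argmax}_c \log p(x, c)$, because the evidence $\log p(x)$ is the same for every class and cancels. A sketch with random stand-in data and parameters (not the fitted MAP values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data and parameters (NOT the fitted MAP values).
X = rng.integers(0, 2, size=(100, 784))
t = rng.integers(0, 10, size=100)
theta = rng.uniform(0.05, 0.95, size=(784, 10))
pi = np.full(10, 0.1)

# argmax_c log p(t_c = 1 | x) = argmax_c log p(x, c): the evidence
# log p(x) is constant across classes, so no normalization is needed.
log_joint = X @ np.log(theta) + (1 - X) @ np.log(1 - theta) + np.log(pi)
c_hat = np.argmax(log_joint, axis=1)
accuracy = np.mean(c_hat == t)
print(accuracy)
```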

(g) Plot the MAP estimator $\hat{\theta}$ as 10 separate greyscale images, one for each class.

Generating from a Naïve Bayes Model, 30 pts. Defining a joint probability distribution over the data lets us generate new data, and also lets us answer all sorts of queries about the data. This is why these models are called Generative Models. We will use the Naïve Bayes model trained in the previous question to generate data.

(a) True or false: Given this model's assumptions, any two pixels $x_i$ and $x_j$ where $i \neq j$ are independent given $c$.

(b) True or false: Given this model's assumptions, any two pixels $x_i$ and $x_j$ where $i \neq j$ are independent after marginalizing over $c$.

(c) Using the parameters fit with MAP in Question 1, produce random image samples from the model. That is, randomly sample and plot 10 binary images from the marginal distribution $p(x \mid \hat{\theta}, \hat{\pi})$. Hint: To sample from $p(x \mid \hat{\theta}, \hat{\pi})$, first sample the random variable $c$ from $p(c \mid \hat{\pi})$ using np.random.choice, then, depending on the value of $c$, sample $x_j$ from $p(x_j \mid c, \hat{\theta}_{jc})$ for $j = 1, \ldots, 784$ using np.random.binomial(1, ...). These functions can take matrix probabilities as input, so your solution to this part should be a few lines of code.
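Following the hint, the ancestral sampling can be sketched in a few lines. The `theta` and `pi` arrays below are random stand-ins for the fitted MAP parameters:

```python
import numpy as np

# Stand-in MAP parameters (NOT actual fitted values).
theta = np.random.default_rng(0).uniform(0.05, 0.95, size=(784, 10))
pi = np.full(10, 0.1)

np.random.seed(0)
# Ancestral sampling: first c ~ p(c | pi), then x_j ~ Bernoulli(theta_jc).
c = np.random.choice(10, size=10, p=pi)         # one class per sample
samples = np.random.binomial(1, theta[:, c].T)  # 10 x 784 binary images
print(samples.shape)
```

Because np.random.binomial broadcasts over a matrix of probabilities, all 10 images are drawn in a single call; each row can then be reshaped to $28 \times 28$ for plotting.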

(d) (Optional – 0 pts) One of the advantages of generative models is that they can handle missing data, or be used to answer different sorts of questions about the model. Derive $p(x_{\text{bottom}} \mid x_{\text{top}}, \theta, \pi)$, the marginal distribution of a single pixel in the bottom half of an image given the top half, conditioned on your fit parameters. Hint: you'll have to marginalize over $c$.
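As an outline (not the full derivation), the marginalization the hint points to conditions on the class and then averages over its posterior given the top half, using the model's assumption that pixels are independent given $c$:

```latex
% x_bottom is independent of x_top given c, so
% p(x_bottom | c, x_top, theta) = p(x_bottom | c, theta):
p(x_{\mathrm{bottom}} \mid x_{\mathrm{top}}, \theta, \pi)
  = \sum_{c=0}^{9} p(x_{\mathrm{bottom}} \mid c, \theta)\,
    p(c \mid x_{\mathrm{top}}, \theta, \pi),
\qquad
p(c \mid x_{\mathrm{top}}, \theta, \pi)
  \propto \pi_c \prod_{j \in \mathrm{top}} p(x_j \mid c, \theta_{jc}).
```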

(e) (Optional – 0 pts) For 20 images from the training set, plot the top half of the image concatenated with the marginal distribution over each pixel in the bottom half, i.e., the bottom half of the image should use greyscale to represent the marginal probability of each pixel being 1 (darker for values close to 1).

Principal Component Analysis, 30 pts. Using the numpy data file digits.npy and the utils.py data-loading helper code, you will find 6 sets of 16 × 16 greyscale images in vector format (the pixel intensities are between 0 and 1 and were read into the vectors in a raster-scan manner). The images contain handwritten 2's and 3's, scanned from postal envelopes. train2 and train3 contain examples of 2's and 3's respectively, to be used for training. There are 300 examples of each digit, stored as 300 × 256 matrices. Note that each data vector is a row of the data matrices returned by the load_data function. valid2 and valid3 contain data to be used for validation (100 examples of each digit), and test2 and test3 contain test data to be used for final evaluation only (200 examples of each digit).
(a) Apply the PCA algorithm to the 600 × 256 digit images (computing all 256 of the eigenvalues and eigenvectors; don't forget to center the data). Then plot the (sorted) eigenvalues as a descending curve. This plot shows the spectrum of the data, and roughly tells you how much variance is contained along each eigenvector direction. Then view the first 3 eigenimages (reshape each of the first 3 eigenvectors and use imagesc to see these as images) as well as the mean of the data. This part is for you to gain some intuition about how PCA works. You do not need to write this part up!
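The centering and eigendecomposition step can be sketched as follows; `X` here is a random stand-in for the 600 × 256 matrix you would build from digits.npy:

```python
import numpy as np

# Stand-in for the 600 x 256 digit matrix (the real data comes from digits.npy).
X = np.random.default_rng(0).uniform(0, 1, size=(600, 256))

# Center the data, then eigendecompose the covariance matrix.
mean = X.mean(axis=0)
Xc = X - mean
cov = Xc.T @ Xc / X.shape[0]
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending order

# Sort descending for the spectrum plot; eigvecs[:, k] is then the k-th
# eigenimage (reshape it to 16 x 16 to view it).
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
print(eigvals[:3])
```

Note that `np.linalg.eigh` (for symmetric matrices) returns eigenvalues in ascending order, hence the explicit re-sort.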

(b) For each image in the validation set, subtract the mean of the training data and project it into the low-dimensional space spanned by the first K principal components of the training data. After projection, use a 1-NN classifier on the K-dimensional features (the code vectors) to classify the digit in the low-dimensional space. You need to implement the classifier yourself. You will do the classification under different K values to see the effect of K. Here, let K = 2, 5, 10, 20, 30, and under each K, classify the validation digits using 1-NN. Plot the results: the plot should show the curve of validation-set classification error rates versus the number of eigenvectors you keep, i.e., K.
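A sketch of the projection and 1-NN steps for a single K, using random stand-in arrays in place of the real train/valid matrices from load_data (SVD of the centered data gives the same principal directions as the covariance eigendecomposition):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in arrays; in the assignment these come from load_data.
train = rng.uniform(0, 1, size=(600, 256))
train_labels = np.repeat([2, 3], 300)
valid = rng.uniform(0, 1, size=(200, 256))
valid_labels = np.repeat([2, 3], 100)

# Center with the TRAINING mean, then project onto the top-K components.
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
K = 10
train_code = (train - mean) @ Vt[:K].T   # 600 x K code vectors
valid_code = (valid - mean) @ Vt[:K].T   # 200 x K code vectors

# 1-NN in the K-dimensional code space: nearest training code wins.
d2 = ((valid_code[:, None, :] - train_code[None, :, :]) ** 2).sum(-1)
pred = train_labels[np.argmin(d2, axis=1)]
error_rate = np.mean(pred != valid_labels)
print(error_rate)
```

Looping this over K = 2, 5, 10, 20, 30 and plotting `error_rate` against K gives the requested curve. The validation images must be centered with the training mean, not their own.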

(c) If you wanted to choose a particular model from your experiments as the best, which model (number of eigenvectors) would you select? Why?

(d) Report the performance of your final classifier over the test data.