
## Overview

In this assignment, we’ll implement a probabilistic model which we can apply to the task of image completion. Basically, we observe the top half of an image of a handwritten digit, and we’d like to predict what’s in the bottom half. An example is shown in Figure 1.

This assignment is meant to give you practice with the techniques covered in the lectures on probabilistic modeling. It’s not as long as it looks from this handout: the solution requires about 8–10 lines of code.

## Mixture of Bernoullis model

The images we’ll work with are all 28 × 28 binary images, i.e. the pixels take values in {0, 1}. We ignore the spatial structure, so the images are represented as 784-dimensional binary vectors.

A mixture of Bernoullis model is like the other mixture models we’ve discussed in this course. Each of the mixture components consists of a collection of independent Bernoulli random variables. I.e., conditioned on the latent variable z = k, each pixel x_j is an independent Bernoulli random variable with parameter θ_{k,j}:

p(x^{(i)} | z = k) = ∏_{j=1}^{D} p(x_j^{(i)} | z = k)    (1)

                   = ∏_{j=1}^{D} θ_{k,j}^{x_j^{(i)}} (1 − θ_{k,j})^{1 − x_j^{(i)}}    (2)

Try to understand where this formula comes from. You’ll find it useful when you do the derivations.
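
As a quick sanity check of Eqn 2, here is a minimal NumPy sketch (with made-up toy numbers) that evaluates the per-component likelihood of a single binary vector:

```python
import numpy as np

# Toy example (made-up numbers): D = 4 pixels, one mixture component k.
x = np.array([1, 0, 0, 1])                 # a binary observation x^(i)
theta_k = np.array([0.9, 0.2, 0.1, 0.7])   # Bernoulli parameters theta_{k,j}

# Eqn 2: product over pixels of theta^x * (1 - theta)^(1 - x)
p_x_given_z = np.prod(theta_k ** x * (1.0 - theta_k) ** (1 - x))
print(p_x_given_z)   # 0.9 * 0.8 * 0.9 * 0.7 = 0.4536
```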

Figure 1: An example of the observed data (left) and the predictions about the missing part of the image (right).

This can be written out as the following generative process:

• Sample z from a multinomial distribution with parameter vector π.

• For j = 1, ..., D: sample x_j from a Bernoulli distribution with parameter θ_{k,j}, where k is the value of z.

It can also be written mathematically as:

z ∼ Multinomial(π)    (3)

x_j | z = k ∼ Bernoulli(θ_{k,j})    (4)
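
If it helps to see this generative process as code, here is a small NumPy sketch of Eqns 3 and 4 (the sizes and parameter values below are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up sizes and parameters for illustration only.
K, D = 3, 784
pi = np.ones(K) / K                  # mixing proportions, shape (K,)
Theta = rng.uniform(size=(K, D))     # Bernoulli parameters, shape (K, D)

# Eqn 3: sample the component z ~ Multinomial(pi).
z = rng.choice(K, p=pi)

# Eqn 4: given z = k, each pixel x_j ~ Bernoulli(Theta[k, j]).
x = (rng.uniform(size=D) < Theta[z]).astype(int)   # binary vector, shape (D,)
```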

## Summary of notation

We will refer to three dimensions in our model:

• N = 60,000, the number of training cases. The training cases are indexed by i.

• D = 28 × 28 = 784, the dimension of each observation vector. The dimensions are indexed by j.

• K , the number of components. The components are indexed by k.

The inputs are represented by X, an N × D binary matrix. In the E–step, we compute R, the matrix of responsibilities, which is an N × K matrix. Each row gives the responsibilities for one training case.

The trainable parameters of the model, written out as vectors and matrices, are:

• π, the K-dimensional vector of mixing proportions, with entries π_k.

• Θ, the K × D matrix of Bernoulli parameters, with entries θ_{k,j}.

The rows of Θ correspond to mixture components, and columns correspond to input dimensions.

## Part 1: Learning the parameters (4 marks)

In the first step, we’ll learn the parameters of the model given the responsibilities, using the MAP criterion. This corresponds to the M–step of the E–M algorithm.

In lecture, we discussed the E–M algorithm in the context of maximum likelihood (ML) learning. This is discussed in detail in Section 5.2 of the Lecture 18 notes, and you should read that section carefully before starting this part. The MAP case is only slightly different from ML: the only difference is that we add a prior probability term to the objective function in the M–step. In particular, recall that in the context of ML, the M–step maximizes the objective function:

Σ_{i=1}^{N} Σ_{k=1}^{K} r_k^{(i)} [ log Pr(z^{(i)} = k) + log p(x^{(i)} | z^{(i)} = k) ],    (5)

where the r_k^{(i)} are the responsibilities computed during the E–step. In the MAP formulation, we add the (log) prior probability of the parameters:

Σ_{i=1}^{N} Σ_{k=1}^{K} r_k^{(i)} [ log Pr(z^{(i)} = k) + log p(x^{(i)} | z^{(i)} = k) ] + log p(π) + log p(Θ)    (6)

Our prior for Θ is as follows: every entry is drawn independently from a beta distribution with parameters a and b. The beta distribution is discussed in Section 3.1 of the Lecture 17 notes, but here it is again for reference:

p(θ_{k,j}) ∝ θ_{k,j}^{a−1} (1 − θ_{k,j})^{b−1}    (7)

Recall that ∝ means “proportional to.” I.e., the distribution has a normalizing constant which we’re ignoring because we don’t need it for the M–step.

For the prior over mixing proportions π, we’ll use the Dirichlet distribution, which is the conjugate prior for the multinomial distribution. It is a distribution over the probability simplex, i.e. the set of vectors which define a valid probability distribution. The distribution takes the form

p(π) ∝ π_1^{a_1 − 1} π_2^{a_2 − 1} ··· π_K^{a_K − 1}.    (8)

For simplicity, we use a symmetric Dirichlet prior where all the a_k parameters are assumed to be equal. Like the beta distribution, the Dirichlet distribution has a normalizing constant which we don’t need when updating the parameters. The beta distribution is actually the special case of the Dirichlet distribution for K = 2. You can read more about it on Wikipedia if you’re interested.
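
For the derivation below, it may help to note that taking the log of Eqns 7 and 8 (and dropping the normalizing constants) gives the prior terms that appear in Eqn 6:

log p(Θ) = Σ_{k=1}^{K} Σ_{j=1}^{D} [ (a − 1) log θ_{k,j} + (b − 1) log(1 − θ_{k,j}) ] + const

log p(π) = Σ_{k=1}^{K} (a_k − 1) log π_k + const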

Your tasks for this part are as follows:

1. (2 marks) Derive the M–step update rules for Θ and π by setting the partial derivatives of Eqn 6 to zero. Your final answers should have the form:

π_k = ···

θ_{k,j} = ···

Be sure to show your steps. You may want to refer to Example 5 from the Lecture 17 notes, and Example 4 from the Lecture 18 notes.

Note: both of these optimization problems involve inequality constraints, i.e. 0 ≤ θ_{k,j} ≤ 1 and π_k ≥ 0. You can ignore the inequality constraints, since they will never be tight. On the other hand, the optimization for π involves a normalization constraint π_1 + ··· + π_K = 1. You can deal with this using Lagrange multipliers, similarly to Example 4 from Lecture 18. Alternatively, you can define π_K = 1 − π_1 − ··· − π_{K−1} and then solve the unconstrained optimization problem for π_1, ..., π_{K−1}. Either approach will work, but the first one is slightly shorter.

2. (2 marks) Take these formulas and use them to implement the functions Model.update_pi and Model.update_theta in mixture.py. Each one should be implemented in terms of NumPy matrix and vector operations. Each one requires only a few lines of code, and should not involve any for loops. (A sketch of the kind of vectorized operations intended here appears at the end of this part.)

To help you check your solution, we have provided the function checking.check_m_step. If this check passes, you’re probably in good shape.

To convince us of the correctness of your implementation, please include the output of running mixture.print_part_1_values(). Note that we also require you to submit mixture.py through MarkUs.

3. (0 marks) The function learn_from_labels learns the parameters of the model from the labeled MNIST images. The values of the latent variables are chosen based on the digit class labels, i.e. the latent variable z^{(i) } is set to k if the ith training case is an example of digit class k. In terms of the code, this means the matrix R of responsibilities has a 1 in the (i, k) entry if the ith image is of class k, and 0 otherwise.

Run learn_from_labels to train the model. It will show you the learned components (i.e. rows of Θ) and print the training and test log-likelihoods. You do not need to submit anything for this part. It is only for your own satisfaction.
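
The following sketch illustrates the kind of vectorized NumPy operations item 2 is asking for. It only computes the sufficient statistics that typically appear in these updates, not the update rules themselves (which you need to derive); the array names and toy sizes are hypothetical:

```python
import numpy as np

# Toy shapes matching the notation above: X is the (N, D) binary data matrix,
# R is the (N, K) matrix of responsibilities. Values here are placeholders.
N, D, K = 100, 784, 10
X = np.random.randint(0, 2, size=(N, D))
R = np.full((N, K), 1.0 / K)

# Per-component responsibility totals: sum over i of r_k^(i), shape (K,).
Nk = R.sum(axis=0)

# Responsibility-weighted pixel counts: sum over i of r_k^(i) x_j^(i), shape (K, D).
weighted_counts = R.T @ X

# The M-step updates for pi and Theta can then be written as elementwise
# operations on Nk and weighted_counts (together with the prior parameters
# a, b, and a_k from Eqns 7 and 8) -- no for loops are needed.
```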

## Part 2: Posterior inference (3 marks)

Now we derive the posterior probability distribution p(z | x_{obs}), where x_{obs} denotes the subset of the pixels which are observed. In the implementation, we represent partial observations in terms of variables m_j^{(i)}, where m_j^{(i)} = 1 if the jth pixel of the ith image is observed, and 0 otherwise. The m_j^{(i)}’s are organized into a matrix M which is the same size as X.

1. (1 mark) Derive the rule for computing the posterior probability distribution p(z | x). Your final answer should look something like

Pr(z = k | x) = ···    (9)

where the ellipsis represents something you could actually implement. Note that the image may be only partially observed.

Hints: For this derivation, you probably want to express the observation probabilities in the form of Eqn 2. You may also wish to refer to Example 1 from the Lecture 18 notes.

2. (1 mark) Implement the method Model.compute_posterior using your solution to the previous question. While your answer to Question 1 was probably given in terms of probabilities, we do the computations in terms of log probabilities for numerical stability. We’ve already filled in part of the implementation, so your job is to compute log p(z, x) as described in the method’s doc string. (A sketch of the standard log-space normalization trick appears at the end of this part.)

Your implementation should use NumPy matrix and vector operations, rather than a for loop. Hint: There are two lines in Model.log_likelihood which are almost a solution to this question. You can reuse these lines as part of the solution, except you’ll need to modify them to deal with partial observations.

To help you check your solution, we’ve provided the function checking.check_e_step. Note that this check only covers the case where the image is fully observed, so it doesn’t fully verify your solution to this part.

3. (1 mark) Implement the method Model.posterior_predictive_means, which computes the posterior predictive means of the missing pixels given the observed ones. Hint: this requires only two very short lines of code, one of which is a call to Model.compute_posterior. You may wish to refer to Example 2 from the Lecture 18 notes.

To convince us of the correctness of the implementation for this part and the previous part, please include the output of running mixture.print_part_2_values(). Note that we also require you to submit mixture.py through MarkUs.

4. (0 marks) Run the function train_with_em, which trains the mixture model using E–M. It plots the log-likelihood as a function of the number of steps. You can watch how the mixture components change during training. It also shows the model’s image completions after every step. You can watch how they improve over the course of training. At the very end, it outputs the training and test log-likelihoods. The final model for this part should be much better than the one from Part 1. You do not need to submit anything for this part. It’s only for your own satisfaction.
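
For item 2, here is a minimal sketch of the log-space normalization pattern mentioned there. It is generic NumPy, not tied to the internals of mixture.py, and the name log_p_z_and_x is a hypothetical stand-in for the per-component log joint probabilities log p(z = k, x):

```python
import numpy as np

# Hypothetical input: log p(z = k, x^(i)) for each training case i and
# component k, stored in an (N, K) array. Toy values are used here.
log_p_z_and_x = np.log(np.array([[0.10, 0.20, 0.30],
                                 [0.05, 0.05, 0.40]]))

# Normalize in log space: subtract the per-row max before exponentiating so
# that np.exp cannot overflow, then divide by the per-row sum.
shifted = log_p_z_and_x - log_p_z_and_x.max(axis=1, keepdims=True)
posterior = np.exp(shifted)
posterior /= posterior.sum(axis=1, keepdims=True)   # each row sums to 1

# posterior[i, k] is Pr(z = k | x^(i)), as in Eqn 9.
```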

## Part 3: Conceptual questions (3 marks)

This section asks you to reflect on the learned model. We tell you the outcomes of the experiments, so that you can do this part independently of the first 2. Each question can be answered in a few sentences.

1. (1 mark) In the code, the default parameters for the beta prior over Θ were a = b = 2. If we instead used a = b = 1 (which corresponds to a uniform distribution), the MAP learning algorithm would have the problem that it might assign zero probability to images in the test set. Why might this happen? Hint: what happens if a pixel is always 0 in the training set, but 1 in the test image? You may want to re-read Section 2.1 of the Lecture 17 notes.

2. (1 mark) The model from Part 2 gets significantly higher average log probabilities on both the training and test sets, compared with the model from Part 1. This is counterintuitive, since the Part 1 model has access to additional information: labels which are part of a true causal explanation of the data (i.e. what digit someone was trying to write). Why do you think the Part 2 model still does better?

3. (1 mark) The function print_log_probs_by_digit_class computes the average log-probabilities for different digit classes in both the training and test sets. In both cases, images of 1’s are assigned far higher log-probability than images of 8’s. Does this mean the model thinks 1’s are far more common than 8’s? I.e., if you sample from its distribution, will it generate far more 1’s than 8’s? Why or why not?