
In this project, you are going to experience parallel programming with C/C++ using the MPI library. You will implement a parallel algorithm for image denoising with the Ising model using the Metropolis-Hastings algorithm. Throughout the document you will probably encounter models and methods you have never seen before, and there will be things related to probability. But don't be afraid; embrace it, because we will, surprisingly, end up with a very simple algorithm with a single-line equation.

2 The Ising Model

The Ising model, named after the physicist Ernst Ising, is a mathematical model of ferromagnetism in statistical mechanics. The model consists of discrete variables that represent magnetic dipole moments of atomic spins that can be in one of two states (+1 or -1). The spins are arranged in a graph, usually a lattice, allowing each spin to interact with its neighbors.

Did you understand how the Ising model works? No, me neither. But there are two important things to note in the definition, which we wrote in bold:

1. The Ising model models things that have two states (+1 or -1).
2. Things interact with their neighbors.

Not understanding how the Ising model works in physics won't stop us from applying it to images. If we assume that a black and white image is generated by an Ising model, then if we take a random black pixel from the image, it is more likely that this pixel is surrounded by black pixels (and the same for white pixels). This is all we need to know about the Ising model. Now let's apply it to image denoising.

3 Image Denoising

Let Z be an I × J binary matrix that describes an image, where each Z_ij is either +1 or −1 (i.e. white or black). By randomly flipping some of the pixels of Z, we obtain the noisy image X, which is what we observe. We assume that the noise-free image Z is generated from an Ising model (parametrized with β) and that X is generated by flipping each pixel of Z with a small probability π:

Z ∼ p(Z | β)    (1)
F_ij ∼ Be(π)    (2)
X_ij = (−1)^{F_ij} Z_ij    (3)

where

p(Z | β) ∝ e^{−E(Z | β)}    (4)
E(Z | β) = −β ∑_{(i,j)∼(k,l)} Z_ij Z_kl    (5)

Then we want to find the posterior distribution of Z:

p(Z | X, β, π) = p(Z, X | β, π) / p(X | β, π)    (6)
∝ p(Z, X | β, π)    (7)
= p(Z | β) p(X | Z, π)    (8)
∝ exp( γ ∑_{ij} Z_ij X_ij + β ∑_{(i,j)∼(k,l)} Z_ij Z_kl )    (9)

where γ = (1/2) log((1 − π)/π). In this formulation, a higher γ implies lower noise on X; for example, π = 0.1 gives γ = (1/2) log 9 ≈ 1.1, while π = 0.4 gives γ ≈ 0.2. Similarly, we expect more consistency between the neighbouring pixels if β is higher.

"OK, what's going on?", you are probably thinking. "But I really thought you were on my side." Sorry, but these are the equations that you have to work with. … Just kidding. The last part was true, but let's go step by step and understand what is going on:

1. There is a black and white image Z whose pixels we represent with +1 and −1 (i.e. black or white). We don't have this image; it is the image that we want to reach.

2. There is another black and white image X. This is the noisy version of Z. By noisy we mean that some of the pixels of the image Z are flipped. Equations 2 and 3 model this noise relation. π is the probability of a random flip: if π = 0.1, then roughly ten percent of the pixels of Z will be flipped to obtain the noisy image X (a tiny sketch of this flipping process follows this list). We start with this noisy image X and try to reach the noise-free image Z.

3. We assume that the image Z is generated from an Ising model.

4. By combining these three pieces of information, we can state our model statistically as p(Z | X, β, π). It reads as the probability of Z given X, β and π. It returns a score for a candidate Z image that we propose. The score is higher if the candidate image is more similar to the original image (or fits the model better). So it is a distribution over all possible Z images, and we want to find the most probable Z image, in other words the Z image that gets the highest score from the posterior probability. However, this is not a straightforward process, because the posterior probability doesn't have a closed form like a Gaussian. Thus we can't find the most probable image easily. We would have to try all possible Z images, but this is not efficient because there are 2^{I×J} possibilities. Do we give up then? Of course not! Instead of finding the original image in an instant, we will approach it step by step.
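To make the noise model in item 2 concrete, here is a minimal C++ sketch, under our own naming, of how a noisy X could be generated from a clean Z by flipping each pixel with probability π (this is what the provided make_noise.py script does for you):

// Minimal sketch of the noise model in equations (2) and (3): flip each
// pixel of Z with probability pi. All names here are illustrative.
#include <random>
#include <vector>

using Image = std::vector<std::vector<int>>;   // pixels are +1 or -1

Image addNoise(const Image& Z, double pi, unsigned seed) {
    std::mt19937 gen(seed);
    std::bernoulli_distribution flip(pi);      // F_ij ~ Be(pi)
    Image X = Z;
    for (auto& row : X)
        for (int& pixel : row)
            if (flip(gen))
                pixel = -pixel;                // X_ij = (-1)^{F_ij} Z_ij
    return X;
}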

4 Metropolis-Hastings Markov Chain Monte Carlo

We have a function (the posterior distribution) that tells us which candidate Z image is more similar to the original image. Instead of choosing random images, it is obvious that we should start with the image X, which is the only thing given to us. To reach the noise-free image Z we need to make some modifications to X. The Metropolis-Hastings algorithm gives us a very simple way to do this:

1. Choose a random pixel from the image X.
2. Calculate an acceptance probability for flipping that pixel.
3. Flip the pixel with the acceptance probability calculated in the second step.
4. Repeat this process until it converges.

That is basically all you need to do. Simple, as I promised, right? Just one last equation that shows how to calculate the acceptance probability, and this will actually be the only equation you are going to use. At every step we will only calculate how flipping a pixel affects our outcome. As you may guess, we do this by dividing the posterior distributions of the two possibilities (flipping or not):

Assume a bit flip is proposed at pixel (i, j) at time step t (a bit flip on Z^(t)_ij, i.e. Z′_ij ← −Z^(t)_ij). Then

α^(t) = min{ 1, p(Z′ | X, β, π) / p(Z^(t) | X, β, π) }    (10)

and, since Z′ differs from Z^(t) only at pixel (i, j), every factor of the posterior that does not involve Z_ij cancels in the ratio:

p(Z′ | X, β, π) / p(Z^(t) | X, β, π)
= exp( γ Z′_ij X_ij + β ∑_{(i,j)∼(k,l)} Z′_ij Z^(t)_kl ) / exp( γ Z^(t)_ij X_ij + β ∑_{(i,j)∼(k,l)} Z^(t)_ij Z^(t)_kl )    (11)
= exp( −γ Z^(t)_ij X_ij − β ∑_{(i,j)∼(k,l)} Z^(t)_ij Z^(t)_kl ) / exp( γ Z^(t)_ij X_ij + β ∑_{(i,j)∼(k,l)} Z^(t)_ij Z^(t)_kl )    (12)
= exp( −2γ Z^(t)_ij X_ij − 2β ∑_{(i,j)∼(k,l)} Z^(t)_ij Z^(t)_kl )    (13)

Notice that the summation is over all the pixels Z_kl that are connected to the fixed Z_ij. For example, if the randomly chosen pixel is (i, j) = (3, 4), then (k, l) ∈ {(2, 3), (2, 4), (2, 5), (3, 3), (3, 5), (4, 3), (4, 4), (4, 5)}.
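As a concrete illustration, here is a short C++ sketch of how equation (13) could be evaluated for a single pixel; the 8-neighbourhood sum skips positions outside the image, and all names are our own, not a required interface:

// Sketch: acceptance probability for flipping Z[i][j], per equation (13).
// Z and X are n-by-n images of +1/-1 values; names are illustrative.
#include <algorithm>
#include <cmath>
#include <vector>

using Image = std::vector<std::vector<int>>;

double acceptanceProbability(const Image& Z, const Image& X,
                             int i, int j, double beta, double gamma) {
    int n = static_cast<int>(Z.size());
    int neighborSum = 0;
    for (int k = i - 1; k <= i + 1; ++k)                // 8-neighbourhood
        for (int l = j - 1; l <= j + 1; ++l) {
            if (k == i && l == j) continue;             // skip (i, j) itself
            if (k < 0 || k >= n || l < 0 || l >= n) continue; // image border
            neighborSum += Z[k][l];
        }
    double delta = -2.0 * gamma * Z[i][j] * X[i][j]
                 - 2.0 * beta * Z[i][j] * neighborSum;
    return std::min(1.0, std::exp(delta));              // alpha = min(1, ratio)
}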

The pseudocode for the whole process is as follows, and this is actually the only part you need to understand to do this project (a sequential C++ sketch of it follows the remarks below):

1. Initialize your β and π priors. Then γ = (1/2) log((1 − π)/π).
2. Initialize Z^(0) ← X.
3. At time step t:
3.1. Randomly choose a pixel (i, j).
3.2. Propose a bit flip on Z^(t)_ij, i.e. Z′_ij ← −Z^(t)_ij.
3.3. Calculate the acceptance probability α^(t) = min{ 1, p(Z′ | X, β, π) / p(Z^(t) | X, β, π) }.
3.4. Z^(t+1) ← Z′, with probability α^(t).
3.5. Z^(t+1) ← Z^(t), otherwise.

Remarks:

1. We don't know the true values of β and π. According to our prior knowledge (by looking at the image, or via our spidey sense) we are just throwing wild guesses. To recall what these values represent: we expect more consistency between neighbouring pixels if β is higher; π is our prior on the noise probability, so a higher π implies higher noise on X; and γ moves inversely with π and is introduced just to make the equations more readable.

2. A probability can't be higher than 1. That is why we have the min function in step 3.3 while calculating the acceptance probability.

3. Steps 3.4 and 3.5 describe accepting a flip with the calculated acceptance probability. The simplest way to do this is to draw a number from the uniform distribution between 0 and 1: if the drawn number is less than the acceptance probability, flip.

4. The method is guaranteed to converge if it runs for enough iterations, but "enough" can be huge depending on your priors and on how noisy and how big the image is.
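Putting the pseudocode together, here is a minimal sequential sketch in C++; it reuses the acceptanceProbability sketch from above, and the iteration count, like everything else here, is an illustrative assumption rather than a prescribed value:

// Sketch of the sequential Metropolis-Hastings loop (steps 1-3.5 above).
// Reuses acceptanceProbability() from the earlier sketch; all other names
// are illustrative.
#include <cmath>
#include <random>
#include <vector>

using Image = std::vector<std::vector<int>>;

double acceptanceProbability(const Image& Z, const Image& X,
                             int i, int j, double beta, double gamma);

Image denoise(const Image& X, double beta, double pi, long numIterations) {
    int n = static_cast<int>(X.size());
    double gamma = 0.5 * std::log((1.0 - pi) / pi);     // step 1
    Image Z = X;                                        // step 2: Z(0) <- X
    std::mt19937 gen(std::random_device{}());
    std::uniform_int_distribution<int> pick(0, n - 1);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    for (long t = 0; t < numIterations; ++t) {
        int i = pick(gen), j = pick(gen);               // step 3.1
        double alpha = acceptanceProbability(Z, X, i, j, beta, gamma); // 3.2-3.3
        if (unif(gen) < alpha)                          // steps 3.4 / 3.5
            Z[i][j] = -Z[i][j];                         // accept the proposed flip
    }
    return Z;
}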

5 Parallel Image Denoising

In this project, our aim is to implement parallel image denoising using MPI. Two different parallel approaches will be discussed. We will assume that the image is a square of size n × n. There will be one master and p slave processors, and we will assume that n is divisible by the number of slave processors p.

1. In the first approach, each processor is responsible for a group of n/p adjacent rows. Each processor works on n/p × n pixels, and these pixels are stored locally by the processor. When a pixel is inspected, the processor should inspect all of its neighbors. When the pixel is not on the boundary between two adjacent processors, information about the neighbor pixels is already present in the processor. When a pixel on the boundary is inspected, the processor needs to obtain information from the adjacent processor. Consider, for example, a pixel on the last row owned by Processor 1: Processor 1 should communicate with Processor 2 in order to learn the status of the south neighbor of that pixel. Therefore, each processor should communicate with its adjacent processors at every iteration. Information about those neighbor pixels of the boundary can be stored locally and updated at every iteration (a sketch of this exchange follows this list).

2. In the second approach, the grid is divided into n/p × n/p blocks and each process is responsible for a block. Now the updates are trickier, since each process has more than one adjacent process.
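As a rough illustration of the communication in the first approach, here is a sketch of how adjacent processors might exchange their boundary rows once per iteration using MPI_Sendrecv, which pairs each send with a receive and so avoids the deadlock warned about in the Implementation section. The ghost-row layout and all names are our own assumptions, not the required design:

// Sketch: each worker owns rowsPerProc rows of the image plus two "ghost"
// rows that mirror the neighbours' boundary rows. MPI_Sendrecv pairs each
// send with a receive, so adjacent ranks cannot deadlock.
#include <mpi.h>
#include <vector>

void exchangeGhostRows(std::vector<int>& local, int rowsPerProc, int n,
                       int rank, int numProcs) {
    // local holds (rowsPerProc + 2) * n ints: ghost row, own rows, ghost row.
    int up   = (rank > 1)            ? rank - 1 : MPI_PROC_NULL; // rank 0 = master
    int down = (rank < numProcs - 1) ? rank + 1 : MPI_PROC_NULL;

    // Send my first real row up; receive the lower ghost row from below.
    MPI_Sendrecv(&local[1 * n],                 n, MPI_INT, up,   0,
                 &local[(rowsPerProc + 1) * n], n, MPI_INT, down, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    // Send my last real row down; receive the upper ghost row from above.
    MPI_Sendrecv(&local[rowsPerProc * n],       n, MPI_INT, down, 1,
                 &local[0],                     n, MPI_INT, up,   1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}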

6 Implementation

1. For the MPI environment and tutorials, please visit the web page. You can find many other tutorials on the web.

2. You are expected to implement the first approach. There will be bonus points for those who also implement the second approach.

3. The program will read its input from a text file and print the result to another text file. The input text file will be a 2D array representation of a black and white noisy image. The input file must be read only by the master processor and distributed to the slave (remaining) processors by the master. The whole array should not be stored locally in every processor.

4. Start by distributing the input among the processors and letting each processor work on its own pixels without any communication. Once you accomplish that, add the communication.

5. Any operation that concerns the program as a whole, such as printing the output, should be done by the master processor.

6. When two processors are exchanging information about the boundary cells, be careful to avoid deadlock. Before processing a pixel, you need to share the boundary pixels between the processors. Also, before moving on to the next iteration, make sure that all processors have finished their jobs.

7. The names of the input and output files and the values of the β and π priors will be given on the command line, as well as the number of processes. An example execution for a Windows user that runs on 4 processors and uses input.txt and output.txt with β = 0.6 and π = 0.1 would be:

mpiexec -n 4 project.exe input.txt output.txt 0.6 0.1

A sketch of a matching program skeleton follows this list.
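Here is a minimal sketch, under our own assumptions, of what that skeleton could look like: rank 0 acts as the master, parses the command line, reads the input file, and sends each slave its block of rows. The hard-coded image size, the point-to-point distribution, and all names are illustrative choices, not requirements:

// Sketch of a program skeleton: parse the command line, let the master
// (rank 0) read the input file, and send each slave its n/p rows.
// Error handling is omitted for brevity.
#include <mpi.h>
#include <cstdlib>
#include <fstream>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, numProcs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &numProcs);

    // mpiexec -n 4 project.exe input.txt output.txt 0.6 0.1
    const char* inputFile  = argv[1];
    const char* outputFile = argv[2];
    double beta = std::atof(argv[3]);
    double pi   = std::atof(argv[4]);

    const int n = 200;               // assumed image size; n % p == 0
    int p = numProcs - 1;            // number of slave processors
    int rowsPerProc = n / p;

    if (rank == 0) {                 // master: read and distribute
        std::vector<int> image(n * n);
        std::ifstream in(inputFile);
        for (int k = 0; k < n * n; ++k) in >> image[k];
        for (int r = 1; r <= p; ++r)
            MPI_Send(&image[(r - 1) * rowsPerProc * n], rowsPerProc * n,
                     MPI_INT, r, 0, MPI_COMM_WORLD);
        // ... later: receive denoised rows back and write outputFile ...
    } else {                         // slave: receive own rows, then work
        std::vector<int> local(rowsPerProc * n);
        MPI_Recv(local.data(), rowsPerProc * n, MPI_INT, 0, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        // ... run the denoising loop on `local` using beta and pi,
        //     exchanging ghost rows with neighbours each iteration ...
    }
    MPI_Finalize();
    return 0;
}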

7 Important Nice Things

1. We prepared three Python scripts for you:

image_to_text.py: converts your image into a noise-free, project-ready text input file.

python image_to_text.py input_image output_file

make_noise.py: converts your noise-free, project-ready text input file into a noisy, project-ready text input file. The script takes pi as a parameter that determines the noise rate. Try it with your own images.

python make_noise.py input_file pi_value output_file

text_to_image.py: converts your project-ready text file into an image, so you can see your noisy image or the output of your code.

python text_to_image.py input_file output_image

2. If you say "What is this wall of text with some fancy images? I need code!", then don't worry, we've got you covered too. We prepared a Jupyter Notebook for you which includes a REAL WORKING SEQUENTIAL VERSION of THIS METHOD. Hooray. Go check it out: https://github.com/suyunu/Markov-Chain-Monte-Carlo

3. You need to run the main loop for a fairly long time. In the example in the Jupyter Notebook, completely denoising a 15% (π = 0.15) noisy image with 640 × 640 pixels took nearly 5 million iterations.

4. Because this is a somewhat random algorithm, you may not end up with the same output every time. More importantly, since our assumption that the real image comes from the Ising model is not strictly true, you will not end up with exactly the same image as the original, but with a very close one.

5. Most importantly, I highly recommend taking the course CmpE 548 Monte Carlo Methods to learn more about stuff like this. It is fun.

8 Submission

1. The deadline for submitting the project is 26.12.2018 – 23:59. The deadline is strict. We will have demo sessions on the following days. In the demo session, you are required to bring your own laptop and explain how you implemented the project.

2. This is an individual project. Your code should be original. Any similarity between submitted projects, or to a source from the web, will be treated as cheating.

3. Your code should be sufficiently commented and indented.