Instructions: Solutions to problems 1 and 2 are to be submitted via Quercus and only PDF files will be accepted. (Ideally, you should submit only one file.) You are strongly encouraged to do problems 3 and 4, but these are not to be submitted for grading.

1. In this problem, you will use the discrete cosine transform (DCT) to denoise an image. An R function to compute a two-dimensional DCT is available in the R package dtt; this package contains functions to compute a number of "trigonometric" transforms. The R function from this package that will be used is mvdct.

In lecture, we defined matrix transforms for vectors of length n, focusing on families of matrices {A_n} satisfying A_n^T A_n = D_n where D_n is a diagonal matrix.
(a) Suppose that Z is an m × n pixel image, which we can represent as an m × n matrix. Then using {A_n}, we can define a transform of Z as follows:

Ẑ = A_m Z A_n^T.

Given the m × n matrix Ẑ, show that we can reconstruct the original image by Z = D_m^{-1} A_m^T Ẑ A_n D_n^{-1}.
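As a hint at the structure of the argument (a sketch only, using nothing beyond the definitions above), substituting the definition of Ẑ and applying A_n^T A_n = D_n twice gives:

```latex
D_m^{-1} A_m^T \hat{Z} A_n D_n^{-1}
  = D_m^{-1} A_m^T \left( A_m Z A_n^T \right) A_n D_n^{-1}
  = D_m^{-1} \left( A_m^T A_m \right) Z \left( A_n^T A_n \right) D_n^{-1}
  = D_m^{-1} D_m \, Z \, D_n D_n^{-1} = Z .
```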

To denoise an image using the DCT, we first compute Ẑ and then perform hard- or soft-thresholding to obtain a thresholded (or shrunken) transform Ẑ* (which will typically be "sparser" than Ẑ). Our hope is that the components of Ẑ corresponding to noise in the image are eliminated, and so the denoised image can be obtained by applying the inverse transform to Ẑ*.
(b) Using mvdct, write R functions to perform both hard- and soft-thresholding (dependent on a threshold parameter λ). Your functions should not threshold the (1, 1) component of the DCT matrix. (A simple R function to do hard thresholding will be provided on Quercus; this can be modified to do soft thresholding.)
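As a starting point, here is a minimal sketch of the two thresholding rules applied to a matrix of transform coefficients. The function names are mine, and wrapping these around mvdct and its inverse is left to you; only the thresholding logic is shown here.

```r
# Hard and soft thresholding of a matrix W of DCT coefficients,
# leaving the (1,1) component alone. To denoise, apply one of these
# to the DCT of the image and then invert the transform.
hard.thresh <- function(W, lambda) {
  keep <- W[1, 1]                 # never threshold the (1,1) component
  W[abs(W) < lambda] <- 0         # kill small coefficients outright
  W[1, 1] <- keep
  W
}
soft.thresh <- function(W, lambda) {
  keep <- W[1, 1]
  W <- sign(W) * pmax(abs(W) - lambda, 0)  # shrink coefficients toward 0
  W[1, 1] <- keep
  W
}
```

Hard thresholding zeroes every coefficient smaller than λ in absolute value, while soft thresholding also shrinks the surviving coefficients by λ, which tends to give smoother reconstructions.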

(c) The file boats.txt contains a noisy 256 × 256 pixel grayscale image of sailboats. Its entries are numbers between 0 and 1, where 0 represents black and 1 represents white. The data can be read into R and displayed as an image as follows:

boats <- matrix(scan("boats.txt"), ncol=256, byrow=TRUE)

image(boats, axes=F, col=grey(seq(0,1,length=256)))
Using your functions from part (b), try to denoise the image as well as possible. (This is quite subjective, but try different methods and parameter values.)
Note: You should not expect your noise-reduced image to be a drastic improvement over the noisy image; in fact, connoisseurs of pointillism may prefer the noisy image. There are two issues here: First, we are applying noise reduction to the whole image rather than dividing the image into smaller sub-images and applying noise reduction to each sub-image. Second, even very good noise reduction tends to wash out some details, thereby rendering the noise-reduced image less visually appealing.

2. Suppose that U and V are independent Poisson random variables with means λ_u and λ_v. We then define X = U + 2V, which is said to have a Hermite distribution with parameters λ_u and λ_v. (The Hermite distribution is the distribution of a sum of two correlated Poisson random variables.)
(a) Show that the probability generating function of X is

g(s) = E(s^X) = exp[λ_u(s − 1) + λ_v(s² − 1)].

The distribution of X can, in theory, be obtained exactly in closed form, although exact computation for given λ_u and λ_v is somewhat more difficult. However, we can approximate the distribution very well by combining the exact probability generating function with the discrete Fourier transform. The key to doing this is to find M such that P(X ≥ M) is very small, so that we can use the discrete Fourier transform to compute (to a very good approximation) P(X = x) for x = 0, 1, …, M − 1.
One approach to determining M is to use the probability generating function of X and Markov's inequality. Specifically, if s > 1, we have

P(X ≥ M) = P(s^X ≥ s^M) ≤ E(s^X)/s^M = exp[λ_u(s − 1) + λ_v(s² − 1)]/s^M.

(b) Use this fact to show that for P(X ≥ M) ≤ ε, we can take

M = inf_{s>1} [λ_u(s − 1) + λ_v(s² − 1) − ln(ε)]/ln(s).
Given M (which depends on ε), the algorithm for determining the distribution of X goes as follows:

1. Evaluate the probability generating function g(s) at s = s_k = exp(2πik/M) for k = 0, …, M − 1; the values of s can be created in R as follows:

> s <- exp(2*pi*1i*c(0:(M-1))/M)

2. Evaluate P(X = x) by computing the inverse FFT of the sequence {g(s_k) : k = 0, …, M − 1}:

P(X = x) = (1/M) Σ_{k=0}^{M−1} g(s_k) exp(−2πixk/M).
(c) Write an R function to implement this algorithm where M is determined using the method in part (b) with ε = 10^{−5}. Use this function to evaluate the distribution of X for the following two cases:

λ_u = 1 and λ_v = 5;

λ_u = 0.1 and λ_v = 2.
Note that you do not need to evaluate the bound M with great precision; for example, a simple approach is to take a discrete set of points S = {1 < s_1 < s_2 < ⋯ < s_k} and define

M = min_{s∈S} [λ_u(s − 1) + λ_v(s² − 1) − ln(ε)]/ln(s)

where δ = s_{i+1} − s_i and s_k are determined graphically (that is, by plotting the appropriate function) so that you are convinced that the value of M is close to the actual infimum.
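One possible sketch combining parts (b) and (c) is given below; the function names (choose.M, dhermite) and the particular grid for s are my choices, not prescribed by the handout, and should be adapted to your own solution.

```r
# Grid search for the Markov-inequality bound M with P(X >= M) <= eps.
choose.M <- function(lambda.u, lambda.v, eps, s = seq(1.01, 20, by = 0.01)) {
  bound <- (lambda.u * (s - 1) + lambda.v * (s^2 - 1) - log(eps)) / log(s)
  ceiling(min(bound))
}

# Approximate pmf of X = U + 2V via the pgf and the FFT.
dhermite <- function(lambda.u, lambda.v, eps = 1e-5) {
  M <- choose.M(lambda.u, lambda.v, eps)
  k <- 0:(M - 1)
  s <- exp(2i * pi * k / M)                            # points s_k on the unit circle
  g <- exp(lambda.u * (s - 1) + lambda.v * (s^2 - 1))  # pgf at each s_k
  # R's fft() sums with exp(-2*pi*1i*x*k/M), matching the inversion
  # formula in step 2; dividing by M gives the probabilities.
  p <- Re(fft(g)) / M
  pmax(p, 0)   # clip tiny negative values caused by rounding
}
```

For example, dhermite(1, 5) returns a vector whose (x + 1)-st entry approximates P(X = x); its first entry should be very close to P(X = 0) = exp(−λ_u − λ_v) = e^{−6}.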
Supplemental problems (not to hand in):
3. As noted in lecture, catastrophic cancellation in the subtraction x − y can occur when x and y are subject to round-off error. Specifically, if fl(x) = x(1 + u) and fl(y) = y(1 + v), then

fl(x) − fl(y) = x − y + (xu − yv)

where the absolute error |xu − yv| can be very large if both x and y are large; in some cases, this error may swamp the object we are trying to compute, namely x − y, particularly if |x − y| is relatively small compared to |x| and |y|. For example, if we compute the sample variance using the right-hand side of the identity

Σ_{i=1}^{n} (x_i − x̄)² = Σ_{i=1}^{n} x_i² − (1/n) (Σ_{i=1}^{n} x_i)²,   (1)
a combination of round-off errors from the summations and catastrophic cancellation in the subtraction may result in the computation of a negative sample variance! (In older versions of Microsoft Excel, certain statistical calculations were prone to this unpleasant phenomenon.) In this problem, we will consider two algorithms for computing the sample variance that avoid this catastrophic cancellation. Both are "one-pass" algorithms, in the sense that we only need to cycle once through the data (as is the case if we use the right-hand side of (1)); to use the left-hand side of (1), we need two passes, since we must first compute x̄ before computing the sum on the left-hand side of (1). In parts (a) and (b) below, define x̄_k to be the sample mean of x_1, …, x_k and note that

x̄_k = ((k − 1)/k) x̄_{k−1} + (1/k) x_k

with x̄ = x̄_n.
(a) Show that Σ_{i=1}^{n} (x_i − x̄)² can be computed using the recursion

Σ_{i=1}^{k+1} (x_i − x̄_{k+1})² = Σ_{i=1}^{k} (x_i − x̄_k)² + (k/(k + 1)) (x_{k+1} − x̄_k)²

for k = 1, …, n − 1. (This is known as West's algorithm.)
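A minimal one-pass sketch of the recursion above (the function name onepass.var is mine; it divides the final sum of squared deviations by n − 1 to return the usual sample variance):

```r
# One-pass sample variance via West's algorithm: update the running mean
# and the running sum of squared deviations at each new observation.
onepass.var <- function(x) {
  n <- length(x)
  xbar <- x[1]    # running mean xbar_k, starting at x_1
  S <- 0          # running value of sum_{i=1}^{k} (x_i - xbar_k)^2
  for (k in 1:(n - 1)) {
    S <- S + (k / (k + 1)) * (x[k + 1] - xbar)^2
    xbar <- xbar + (x[k + 1] - xbar) / (k + 1)   # mean recursion from above
  }
  S / (n - 1)
}
```

Because each squared term involves x_{k+1} − x̄_k rather than a difference of two large sums, this avoids the catastrophic cancellation that plagues the right-hand side of (1).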

(b) A somewhat simpler one-pass method replaces x̄ by some estimate x_0 and then corrects for the error in estimation. Specifically, if x_0 is an arbitrary number, show that

Σ_{i=1}^{n} (x_i − x̄)² = Σ_{i=1}^{n} (x_i − x_0)² − n(x_0 − x̄)².
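The identity in part (b) is easy to check numerically; the sketch below does exactly that (the function name shifted.ss and the particular x_0 are mine, chosen only for illustration):

```r
# Sum of squared deviations about the mean, computed from an arbitrary
# shift x0 via the identity in part (b):
#   sum((x - xbar)^2) = sum((x - x0)^2) - n * (x0 - xbar)^2
shifted.ss <- function(x, x0) {
  sum((x - x0)^2) - length(x) * (x0 - mean(x))^2
}
```

For example, with x = c(2, 4, 6, 8) and x0 = 3, shifted.ss returns 20, the same value as sum((x - mean(x))^2).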

(c) The key in using the formula in part (b) is to choose x_0 to avoid catastrophic cancellation; that is, x_0 should be close to x̄. How might you choose x_0 (without first computing x̄) to minimize the possibility of catastrophic cancellation? Ideally, x_0 should be calculated using o(n) operations.
(An interesting paper on computational algorithms for computing the variance is "Algorithms for computing the sample variance: analysis and recommendations" by Chan, Golub, and LeVeque; this paper is available on Quercus.)

4. (a) Suppose that A, B, C, and D are matrices so that AC and BD are both well-defined. Show that

(AC) ⊗ (BD) = (A ⊗ B)(C ⊗ D)

(Hint: This is easier than it looks; the key is to start with the right-hand side of the identity.)
(b) Use the result of part (a) to show that

(A ⊗ B)^{−1} = A^{−1} ⊗ B^{−1}

for invertible matrices A and B.
(c) Suppose that H_{2^k} is a 2^k × 2^k Hadamard matrix. Prove the claim given in lecture:

H_{2^k} = Π_{j=1}^{k} (I_{2^{j−1}} ⊗ H_2 ⊗ I_{2^{k−j}})

(Hint: Use induction, starting with the fact that the identity holds trivially for k = 1.)
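Before attempting the proof, it can help to check the claim numerically for a small case. The sketch below verifies the factorization for k = 2 using base R's kronecker(), with H2 the standard 2 × 2 Hadamard matrix and H_4 built by the Sylvester construction H_{2^k} = H_2 ⊗ H_{2^{k−1}} (a numerical check, not a proof):

```r
H2 <- matrix(c(1, 1, 1, -1), nrow = 2)   # standard 2x2 Hadamard matrix
# Factors I_{2^{j-1}} %x% H2 %x% I_{2^{k-j}} for k = 2:
F1 <- kronecker(kronecker(diag(1), H2), diag(2))   # j = 1: H2 (x) I_2
F2 <- kronecker(kronecker(diag(2), H2), diag(1))   # j = 2: I_2 (x) H2
H4 <- F1 %*% F2
# Compare with the Sylvester construction H_4 = H2 (x) H2:
all(H4 == kronecker(H2, H2))   # TRUE
```

Note that this check is itself an instance of part (a): (H2 ⊗ I_2)(I_2 ⊗ H2) = (H2 I_2) ⊗ (I_2 H2) = H2 ⊗ H2.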