EE5111 Mini project 3

Study of Expectation Maximization (EM) algorithm

The objective of this exercise is to understand the Expectation Maximization (EM) algorithm. Assume that there are two biased coins, labelled A and B. Coin A lands heads with probability p and coin B lands heads with probability q. An experiment is performed with n independent trials; in each trial, a coin is chosen at random (coin A with probability π and coin B with probability 1 − π) and tossed m times (independently).

Let z(i) ∈ {A, B} be the label of the coin selected and x(i) ∈ {H, T}^m be the observed sequence in the i-th trial of the experiment, i = 1, 2, …, n. The labels of the coins {z(i), i = 1, 2, …, n} remain hidden and the observer can only record the faces {x(i), i = 1, 2, …, n}. Hence, the observations x(i) can be considered as i.i.d. samples from a mixture of two binomial models:

PX(x) = π PX(x | z(i) = A) + (1 − π) PX(x | z(i) = B)

Our aim is to obtain the maximum likelihood estimate of the parameter vector θ using the Expectation Maximization (EM) algorithm. Generate the observations (x(i), z(i)) for the following two cases:

(i) π = 0.50, p = 0.35 and q = 0.60.
(ii) π = 0.25, p = 0.35 and q = 0.60.
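One way to generate these observations is the following NumPy sketch (the function name `generate_data` and the integer coin labels 0/1 for A/B are our own conventions):

```python
import numpy as np

def generate_data(pi, p, q, n, m, seed=None):
    """Generate n trials: pick coin A w.p. pi (heads prob p) or coin B
    w.p. 1 - pi (heads prob q), then toss the chosen coin m times.
    Returns (x, z): x[i] = number of heads in trial i, z[i] = 0 (A) or 1 (B)."""
    rng = np.random.default_rng(seed)
    z = (rng.random(n) > pi).astype(int)      # 0 -> coin A, 1 -> coin B
    heads_prob = np.where(z == 0, p, q)       # per-trial heads probability
    x = rng.binomial(m, heads_prob)           # heads count per trial
    return x, z
```

Since only the number of heads in each trial matters for the binomial mixture, we record the heads count rather than the full H/T sequence.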

Choose m = 1, 10 and n = 10, 1000, 10000. Run the EM algorithm for the following experiments.

Experiment 1: Assume that we know π. Thus, the vector of parameters is given by θ = [p q]^T.

  • Use observations from case (i) and assume that π = 0.50; initial estimate θ0 = [0.45 0.50]^T.
  • Use observations from case (ii) and assume that π = 0.50; initial estimate θ0 = [0.45 0.50]^T.
  • Use observations from case (ii) and assume that π = 0.25; initial estimate θ0 = [0.45 0.50]^T.
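A minimal sketch of the EM iteration for this experiment (π known, θ = [p q]^T; the function name and the learning-curve bookkeeping are our own conventions). In the E-step the binomial coefficient cancels in the responsibility ratio, so only p^x (1 − p)^(m − x) terms are needed:

```python
import numpy as np

def em_known_pi(x, m, pi, p0, q0, tol=1e-6, max_iter=500):
    """EM for theta = [p, q] with pi known. x[i] = heads count in m tosses.
    Returns the final estimates and the learning curve (one row per iteration)."""
    p, q = p0, q0
    history = [(p, q)]
    for _ in range(max_iter):
        # E-step: posterior probability that trial i used coin A
        # (the binomial coefficient cancels between numerator and denominator)
        la = pi * p**x * (1 - p)**(m - x)
        lb = (1 - pi) * q**x * (1 - q)**(m - x)
        g = la / (la + lb)
        # M-step: responsibility-weighted fraction of heads per coin
        p_new = np.sum(g * x) / (m * np.sum(g))
        q_new = np.sum((1 - g) * x) / (m * np.sum(1 - g))
        done = max(abs(p_new - p), abs(q_new - q)) <= tol
        p, q = p_new, q_new
        history.append((p, q))
        if done:
            break
    return p, q, np.array(history)
```

The returned `history` array gives the estimate at every iteration, which is exactly what the learning-curve plot requires.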

Experiment 2: Assume that we do not know π. Thus, the vector of parameters is given by θ = [π p q]^T.

  • Use observations from case (ii); initial estimate θ0 = [0.50 0.45 0.50]^T.
  • Repeat the above experiment with initial estimate θ0 = [0.50 0.50 0.45]^T.
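When π is unknown, the only change from Experiment 1 is an extra M-step update: π becomes the average responsibility. A sketch (function name is our own):

```python
import numpy as np

def em_full(x, m, pi0, p0, q0, tol=1e-6, max_iter=500):
    """EM for theta = [pi, p, q] when the mixing weight pi is also unknown."""
    pi, p, q = pi0, p0, q0
    for _ in range(max_iter):
        # E-step: responsibilities (binomial coefficient cancels)
        la = pi * p**x * (1 - p)**(m - x)
        lb = (1 - pi) * q**x * (1 - q)**(m - x)
        g = la / (la + lb)
        # M-step: pi is the mean responsibility; p, q as before
        pi_new = g.mean()
        p_new = np.sum(g * x) / (m * np.sum(g))
        q_new = np.sum((1 - g) * x) / (m * np.sum(1 - g))
        done = max(abs(pi_new - pi), abs(p_new - p), abs(q_new - q)) <= tol
        pi, p, q = pi_new, p_new, q_new
        if done:
            break
    return pi, p, q
```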

Experiment 3: Now use a Beta prior over π. Derive the update equations for the MAP estimate. Use the following priors: Beta(1, 1), Beta(1, 3), Beta(2, 6), Beta(3, 9).
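As a sketch of how the derivation can go (you should verify it yourself): with a Beta(α, β) prior on π, only the π update changes relative to Experiment 2, because the prior contributes an additive log-density term to the objective maximized in the M-step. Writing γi for the E-step responsibility P(z(i) = A | x(i), θ(k)), the π-dependent part of the objective is

```latex
f(\pi) = \Big(\sum_{i=1}^{n}\gamma_i + \alpha - 1\Big)\log\pi
       + \Big(n - \sum_{i=1}^{n}\gamma_i + \beta - 1\Big)\log(1-\pi),
```

and setting f′(π) = 0 gives the MAP update

```latex
\pi^{(k+1)} = \frac{\sum_{i=1}^{n}\gamma_i + \alpha - 1}{n + \alpha + \beta - 2}.
```

As a sanity check, Beta(1, 1) (the uniform prior) recovers the maximum-likelihood update π = (1/n) Σ γi.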

Make the following inferences from the algorithm for the aforementioned choices of n, m and θ0:


  • Plot the learning curve[1] and show the convergence of the algorithm[2] for each experiment.
  • Observe how the final estimate of θ from the algorithm (call it θEM), and the number of iterations needed for convergence, change when we increase m and n.
  • Report your observations about case (b) and case (c) of experiment 1, where in the former case we assume a wrong value of π and in the latter case the true value of π.
  • Report your observations about experiment 2 when we swap the initial estimates for p and q.
  • Compare the result of experiment 3 with the result of experiment 2.

[1] Plot of the estimate at iteration k, θk, vs. the iteration index k.

[2] You shall consider that the algorithm has converged at iteration k when the update to every parameter is not more than a small tolerance ε.
