
[SOLVED] Ee276 homework #7 p0


1. Polar code encoding and decoding.

Figure 1: Polar code encoding circuit with N = 4 for problem 2. W is a BEC, and Xi’s, Yi’s and Ui’s are binary.
For parts (a) to (c), assume that U1 and U2 are both frozen to 0, while U3 and U4 are the message bits.
(a) What is the rate of the code?
(b) Perform encoding for input message (U3,U4) = (1,1) and find the codeword (X1,X2,X3,X4).
(c) Perform successive cancellation decoding for received vector (Y1,Y2,Y3,Y4) = (1,?,?,1). Does the decoding succeed, and if yes, what is the decoded message (U3,U4)?
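As a sanity check for the encoding in parts (b)–(c), the N = 4 polar transform can be computed as x = u·(F ⊗ F) mod 2 with the Arıkan kernel F = [[1,0],[1,1]]. This is only a sketch under the assumption that Figure 1 wires the circuit without a bit-reversal permutation; if the figure permutes the Ui's, the codeword coordinates come out in a different order.

```python
import numpy as np

# Arikan kernel; for N = 4 the generator is its 2-fold Kronecker power.
F = np.array([[1, 0], [1, 1]])
G4 = np.kron(F, F)  # 4x4 generator matrix over GF(2)

def polar_encode(u):
    """Map input bits (U1, U2, U3, U4) to codeword (X1, X2, X3, X4).

    Assumes x = u @ (F kron F) mod 2 with no bit-reversal; Figure 1's
    circuit may order the inputs differently.
    """
    return (np.asarray(u) @ G4) % 2

# Frozen (U1, U2) = (0, 0), message (U3, U4) = (1, 1):
codeword = polar_encode([0, 0, 1, 1])
```

Because the map is linear over GF(2), the all-zero input always encodes to the all-zero codeword, which is a quick way to test any wiring you transcribe from the figure.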
Now we try to understand how the choice of the frozen bits impacts the decoding. We also look at the suboptimality of successive cancellation decoding. For parts (d) and (e), assume that U2 and U3 are both frozen to 0, while U1 and U4 are the message bits.
(d) Perform successive cancellation decoding for received vector (Y1,Y2,Y3,Y4) = (1,?,?,0) and verify that the decoding fails when decoding U1.
(e) Perform optimal maximum likelihood decoding for the same received vector as in part (d), i.e., (Y1,Y2,Y3,Y4) = (1,?,?,0). In this case, you can perform maximum likelihood decoding by
• First computing the codeword (X1,X2,X3,X4) for all 4 possible input messages (U1,U4).
• Then finding the input message(s) (U1,U4) for which you could receive (Y1,Y2,Y3,Y4) = (1,?,?,0). If more than one such message exists, declare failure.
Does the decoding succeed and, if yes, what is the decoded message (U1,U4)?
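The two-step ML procedure described above can be sketched directly in code: enumerate all messages, encode each, and keep the ones consistent with the non-erased positions. The encoder below assumes the plain x = u·(F ⊗ F) mod 2 transform (Figure 1 may permute inputs), and the helper names are mine.

```python
from itertools import product

def polar_encode(u):
    """x = u G mod 2 for N = 4, with G = F kron F, F = [[1,0],[1,1]].
    (Assumed wiring; Figure 1 may permute the inputs.)"""
    G = [[1, 0, 0, 0], [1, 1, 0, 0], [1, 0, 1, 0], [1, 1, 1, 1]]
    return [sum(ui * gij for ui, gij in zip(u, col)) % 2
            for col in zip(*G)]

def ml_decode_bec(y, frozen):
    """Brute-force ML decoding over a BEC.

    y: received vector, with None marking an erasure ('?').
    frozen: dict {0-indexed position: frozen bit value}.
    Over a BEC, a codeword is consistent iff it agrees with y on every
    non-erased position. Returns the unique consistent message tuple,
    or None if more than one candidate survives (decoding failure).
    """
    n = len(y)
    free = [i for i in range(n) if i not in frozen]
    consistent = []
    for msg in product([0, 1], repeat=len(free)):
        u = [0] * n
        for i, v in frozen.items():
            u[i] = v
        for i, v in zip(free, msg):
            u[i] = v
        x = polar_encode(u)
        if all(yi is None or yi == xi for yi, xi in zip(y, x)):
            consistent.append(msg)
    return consistent[0] if len(consistent) == 1 else None
```

Note that ML decoding here only exploits consistency, since over a BEC every consistent codeword is equally likely; the failure mode is ambiguity, never a wrong unique answer.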
2. Proving polarization for the BEC. In polar coding, we preprocess the input so that n identical uses of a symmetric memoryless channel become n synthetic channel uses with very different capacities. We state a polarization theorem, which says that as n → ∞, the fraction of almost noiseless channels approaches C and the fraction of almost useless channels approaches 1 − C, where C is the capacity of the original channel. In this question, we consider the binary erasure channel (BEC) with erasure probability p and prove the polarization theorem rigorously. For the BEC W with erasure probability p, define M(W) = √(p(1 − p)) as its mediocrity.
(a) When is the mediocrity of a channel 0?
(b) Consider the polarized channels W+ and W− we have seen in the class for m = 1. Are they also BECs? If so, what are M(W+) and M(W−)?
(c) Recall the tree of channel capacities obtained by recursively applying the polarization formula to the BEC. Suppose an ant walks on the tree of channel capacities starting at W and choosing W+ and W− with equal probability 1/2. Upon reaching each channel W˜ (e.g., W˜ = W+), it chooses W˜+ (W++) and W˜− (W+−) with equal probability 1/2. Let Fm denote the distribution of the erasure probabilities for n = 2^m, and let F0 be the point mass at p. What are the distributions F1 and F2?
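The tree in parts (c)–(i) is easy to tabulate numerically: a BEC(q) splits into a BEC(2q − q²) (the minus channel) and a BEC(q²) (the plus channel), so Fm is uniform over the 2^m leaves of the recursion. A minimal sketch (helper names `polarize` and `erasure_probs` are mine):

```python
def polarize(q):
    """One polarization step: BEC(q) splits into BEC(2q - q^2) (W-)
    and BEC(q^2) (W+)."""
    return [2 * q - q * q, q * q]

def erasure_probs(p, m):
    """Erasure probabilities of the 2^m synthetic channels; F_m puts
    mass 2^-m on each entry."""
    probs = [p]
    for _ in range(m):
        probs = [q for r in probs for q in polarize(r)]
    return probs
```

For example, `erasure_probs(0.5, 1)` gives the two level-1 erasure probabilities {0.75, 0.25}, and at every level the average erasure probability stays equal to p, which is the fact the hint in part (g) relies on.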
(d) Let Mm denote the average mediocrity of the channels under the distribution Fm; for instance, M0 = √(p(1 − p)). What is M1? Prove that M1 ≤ (√3/2) M0.
(e) Let ρ = √3/2. Prove that Mm ≤ ρ^m.
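The contraction in parts (d)–(e) can be spot-checked numerically. This sketch uses ρ = √3/2, the constant consistent with the bound in part (d); the function names are mine, not from the problem.

```python
import math

def polarize(q):
    """BEC(q) -> BEC(2q - q^2) (W-) and BEC(q^2) (W+)."""
    return [2 * q - q * q, q * q]

def avg_mediocrity(p, m):
    """M_m: average of M(W~) = sqrt(q(1-q)) over the 2^m synthetic
    channels obtained from BEC(p)."""
    probs = [p]
    for _ in range(m):
        probs = [q for r in probs for q in polarize(r)]
    return sum(math.sqrt(q * (1 - q)) for q in probs) / len(probs)

RHO = math.sqrt(3) / 2  # claimed contraction factor
```

Since M0 = √(p(1 − p)) ≤ 1/2, the per-step contraction M_{m+1} ≤ ρ M_m immediately gives Mm ≤ ρ^m, which is the shape of the bound part (e) asks for.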
(f) Let mediocre(m,ϵ) denote the fraction of the n = 2^m channels with mediocrity strictly larger than √(ϵ(1 − ϵ)). Show that mediocre(m,ϵ) → 0 as m → ∞.
(g) Let g(m,ϵ) and b(m,ϵ) denote the fraction of the channels with erasure probability strictly smaller than ϵ (i.e., good channels) and strictly larger than 1−ϵ (i.e., bad channels) respectively. Show that
p ≥ b(m,ϵ)(1 − ϵ).
(Hint: recall that the expected erasure probability under Fm is the same for all m and equal to p.)
(h) Define g(ϵ) := limm→∞ g(m,ϵ). Argue that g(m,ϵ) ≥ g(m,δ) for any ϵ ≥ δ. Conclude that g(ϵ) ≥ g(δ) for any ϵ ≥ δ.
(i) Prove that g(ϵ) ≥ 1 − p. Thus, for any given ϵ ∈ (0,1), the fraction g(ϵ) of good channels becomes at least C = 1 − p as m → ∞.
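Parts (g)–(i) can also be illustrated numerically: tabulate the synthetic erasure probabilities and count the good and bad fractions. Function names are mine; the assertions below check exactly the inequality from part (g) and its mirror image for good channels.

```python
def polarize(q):
    """BEC(q) -> BEC(2q - q^2) (W-) and BEC(q^2) (W+)."""
    return [2 * q - q * q, q * q]

def good_bad_fractions(p, m, eps):
    """g(m, eps), b(m, eps): fractions of the 2^m synthetic BECs whose
    erasure probability is < eps (good) or > 1 - eps (bad)."""
    probs = [p]
    for _ in range(m):
        probs = [q for r in probs for q in polarize(r)]
    n = len(probs)
    good = sum(q < eps for q in probs) / n
    bad = sum(q > 1 - eps for q in probs) / n
    return good, bad
```

Running this for a fixed p and growing m shows the good fraction creeping toward C = 1 − p and the bad fraction toward p, with the middle (mediocre) channels dying out.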
3. Convexity of rate distortion function.
Assume (X,Y ) ∼ p(x,y) = p(x)p(y|x). In this problem, you will show that for fixed p(x), I(X;Y ) is a convex function of p(y|x).
(a) The log sum inequality states that for n positive numbers a1, a2, ..., an and b1, b2, ..., bn, we have

a1 log(a1/b1) + ··· + an log(an/bn) ≥ (a1 + ··· + an) log((a1 + ··· + an)/(b1 + ··· + bn)),

with equality if and only if ai/bi is constant. Using this inequality (you don't have to prove it), show that D(p||q) is convex in the pair (p,q), i.e.,

λD(p1||q1) + (1 − λ)D(p2||q2) ≥ D(λp1 + (1 − λ)p2 || λq1 + (1 − λ)q2).
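A numerical spot-check of this convexity inequality is straightforward (the function names `kl` and `mix` are mine, not from the problem):

```python
import math

def kl(p, q):
    """D(p || q) in nats for finite distributions on the same support.
    Terms with p_i = 0 contribute 0 by convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def mix(u, v, lam):
    """Convex combination lam*u + (1 - lam)*v, componentwise."""
    return [lam * a + (1 - lam) * b for a, b in zip(u, v)]
```

Evaluating λ·kl(p1,q1) + (1−λ)·kl(p2,q2) against kl of the mixed pair for any test distributions should always respect the inequality, with equality when (p1,q1) = (p2,q2).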
(b) Let p1(y|x) and p2(y|x) be two different conditional distributions. For i ∈ {1,2}, let pi(x,y) = pi(y|x)p(x) be the corresponding joint distribution. For 0 ≤ λ ≤ 1, define pλ(y|x) := λp1(y|x) + (1 − λ)p2(y|x). Show that
pλ(y) = λp1(y) + (1 − λ)p2(y)
(c) The mutual information between random variables X and Y can be alternatively written as
I(X;Y ) = D(p(x,y)||p(x)p(y))
Using this in addition to the results of the previous parts show that for fixed p(x), I(X;Y ) is convex in p(y|x).
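The conclusion can be verified numerically by computing I(X;Y) through the divergence identity above and comparing a mixture of two channels against the mixture of their mutual informations. This is a sketch with names of my choosing; `pyx[x][y]` holds p(y|x).

```python
import math

def mutual_info(px, pyx):
    """I(X;Y) = D(p(x,y) || p(x)p(y)) in nats; pyx[x][y] = p(y|x)."""
    ny = len(pyx[0])
    # Marginal p(y) = sum_x p(x) p(y|x).
    py = [sum(px[x] * pyx[x][y] for x in range(len(px))) for y in range(ny)]
    info = 0.0
    for x in range(len(px)):
        for y in range(ny):
            pxy = px[x] * pyx[x][y]
            if pxy > 0:
                info += pxy * math.log(pxy / (px[x] * py[y]))
    return info
```

Mixing two channels entry-by-entry (which is exactly pλ(y|x) from part (b)) and checking λI1 + (1 − λ)I2 ≥ I(mixture) for a fixed p(x) reproduces the convexity claim on concrete numbers.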
(d) Using the previous part, show that the information rate distortion function R^(I)(D) is convex in the distortion parameter D.
