
Fall Semester
Telecommunications Circuits Laboratory EPFL

4 Timing Synchronization
The timing problem is easily understood: the receiver samples the incoming signal at the known sampling rate 1/Ts, but how does it know exactly when to sample? It does not know this; it can only sample the signal with a random time offset, and it is the task of the time synchronization unit to estimate this offset and correct it.
4.1 Effect of a Timing Offset
The effect of the timing offset can be modelled by including the filter δ(t − εT) in the effective channel. The parameter ε denotes the time offset, normalized by the symbol duration T. The matched filter output then becomes

z(t) = \sum_{n} a[n]\, g_{RC}(t - nT - \varepsilon T) + w_0(t), (4.1)

where g_RC(t) is the combined impulse response of the pulse shaping filter and the matched filter and w_0(t) is the noise after matched filtering (see Appendix 4.5.1).
Take a look at Figure 4.1, where several RC pulses are plotted. Imagine that we want to detect the data symbol a[0]. If we sample the signal exactly at t = 0, all pulses except for the middle one are zero, and the received sample is equal to the data symbol plus noise. Now assume that we have a timing error of, say, ε = 0.1, which means that the signal arrives a bit too late, and hence our receiver effectively samples the signal at t = −0.1T relative to the delayed pulse. At this time, however, the centre pulse has not yet reached its maximum amplitude; furthermore, the contributions of the other pulses do not vanish, which leads to inter-symbol interference (ISI). In order to attain ISI-free reception, we must attempt to generate the symbols
z(nT + \hat{\varepsilon} T), \quad n = 0, 1, 2, \ldots, (4.2)

where ε̂ is an estimate of the timing error. If ε̂ = ε, then z(nT + ε̂T) = a[n] + w_0(nT + ε̂T) and the data can be optimally recovered.
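To make the effect of ε concrete, here is a minimal numerical sketch (Python/NumPy; it is not part of the lab material, and the rc_pulse helper, the BPSK symbols and the value ε = 0.1 are assumptions chosen for illustration). It samples the noise-free matched filter output of (4.1) at the nominal instant t = 0, once with ε = 0 and once with ε = 0.1; in the second case the sample is no longer equal to a[0] because of ISI.

```python
import numpy as np

def rc_pulse(t, T=1.0, alpha=0.22):
    """Raised-cosine pulse g_RC(t); the singular points t = +-T/(2*alpha) are patched with their limit."""
    t = np.asarray(t, dtype=float)
    denom = 1.0 - (2.0 * alpha * t / T) ** 2
    regular = np.abs(denom) > 1e-10
    val = np.sinc(t / T) * np.cos(np.pi * alpha * t / T) / np.where(regular, denom, 1.0)
    return np.where(regular, val, (np.pi / 4.0) * np.sinc(1.0 / (2.0 * alpha)))

T = 1.0
n = np.arange(-5, 6)                                  # symbol indices around a[0]
rng = np.random.default_rng(0)
a = rng.choice([-1.0, 1.0], size=n.size)              # BPSK data symbols (assumed for the demo)

def sample_at_zero(eps):
    """Noise-free z(t) of (4.1), sampled at the nominal instant t = 0."""
    return np.sum(a * rc_pulse(0.0 - n * T - eps * T, T))

print(a[n == 0][0], sample_at_zero(0.0))   # equal to a[0] (up to numerical precision)
print(a[n == 0][0], sample_at_zero(0.1))   # attenuated a[0] plus ISI from the neighbouring pulses
```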
4.2 Synchronization via Interpolation
In principle, there are two possibilities to generate the desired symbols z(nT + ε̂T). The first possibility is to adjust the time instant of the A/D conversion. However, this results in an undesired feedback to the analog part of the system. We will therefore use a different method that works entirely in the digital domain: the signal is sampled at fixed instants, and the values z(nT + ε̂T) are computed by interpolating between the available samples, as illustrated in Figure 4.2. The interpolators used in this lab are described in Appendix 4.6.

Figure 4.1: Raised cosine pulses with α = 0.22

Figure 4.2: Time synchronization unit
4.3 The Estimation Algorithm
The remaining question is how to obtain the estimate ε̂. For the derivation of the estimator we use the sequence x[k] = x(kT_s), which is defined as the instantaneous signal power

x(kT_s) = |z(kT_s)|^2. (4.3)
Appendix 4.5 shows the following relation: if there is no time offset (ε = 0), then the expectation of x(kT_s) is a real-valued and even signal. Accordingly, its spectrum (i.e., the expectation of X(f)) is then real-valued for all frequencies f. Now, if we introduce a nonzero offset εT in z(kT_s) and, accordingly, in x(kT_s), the original real spectrum is multiplied by e^{−2πjεTf}. Evaluating the spectrum at the symbol rate f = 1/T then results in a complex value whose phase is −2πε. We can thus use
\hat{\varepsilon} = -\frac{1}{2\pi}\arg\{X(1/T)\} (4.4)
as an estimator of the timing offset (see Appendix 4.5 for details). The spectral component X(1/T) can be calculated by performing a discrete Fourier transform (DFT). In general, the spectrum X(f) is given as
X(f) = \sum_{k=0}^{\infty} x[k]\, e^{-2\pi j f k T_s}, (4.5)
where we have assumed that the signal starts at t = 0. The spectral component at f = 1/T is
thus
(4.6)
arg . (4.7)
The cumulative sum acts as an averager: during the reception, more and more samples become available, and the estimate becomes more and more reliable. It is easily seen that the algorithm (4.7) only works if the true parameter ε is constant during the whole reception, which is the case in this lab. In practice, however, the timing offset will change over time, and the estimation accuracy would be severely degraded if we took all available samples into account (even old ones that were produced by a different ε than the current one). An important measure here is the coherence time, which is the duration over which a parameter can be assumed to be approximately constant. Let the coherence time for the timing offset be denoted as T_{c,ε} = N_{c,ε}·T. Then the estimation algorithm can be implemented as a sliding sum over the 4N_{c,ε} most recently received samples:
\hat{\varepsilon}[n] = -\frac{1}{2\pi}\arg\Bigl\{\sum_{k=4(n-N_{c,\varepsilon})+1}^{4n} x[k]\, e^{-2\pi j k/4}\Bigr\}, (4.8)

where n denotes the index of the most recently received symbol.
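As a sanity check of (4.7), the following sketch (Python/NumPy, not part of the lab; the rc_pulse helper, the QPSK source, the noise level and ε = 0.23 are assumptions) generates an oversampled, pulse-shaped signal with a known timing offset and recovers it from the phase of X(1/T).

```python
import numpy as np

def rc_pulse(t, T=1.0, alpha=0.22):
    # Raised-cosine pulse; singular points t = +-T/(2*alpha) are patched with their limit value.
    d = 1.0 - (2.0 * alpha * t / T) ** 2
    ok = np.abs(d) > 1e-10
    val = np.sinc(t / T) * np.cos(np.pi * alpha * t / T) / np.where(ok, d, 1.0)
    return np.where(ok, val, (np.pi / 4.0) * np.sinc(1.0 / (2.0 * alpha)))

T, osf = 1.0, 4                      # symbol period and oversampling factor T/Ts = 4
Ts = T / osf
eps_true = 0.23                      # timing offset to be estimated (assumed value)
rng = np.random.default_rng(1)
a = np.exp(2j * np.pi * rng.integers(0, 4, size=400) / 4)    # QPSK symbols, |a[n]| = 1

k = np.arange(a.size * osf)                                  # sample index
t = k * Ts
n = np.arange(a.size)
# Matched filter output (4.1) sampled at kTs, plus a little complex AWGN:
z = (a[None, :] * rc_pulse(t[:, None] - n[None, :] * T - eps_true * T)).sum(axis=1)
z += 0.05 * (rng.standard_normal(z.size) + 1j * rng.standard_normal(z.size))

x = np.abs(z) ** 2                                           # instantaneous power, (4.3)
X_1T = np.sum(x * np.exp(-2j * np.pi * k / osf))             # spectral component at f = 1/T, (4.6)
eps_hat = -np.angle(X_1T) / (2.0 * np.pi)                    # estimator (4.7)
print(eps_true, eps_hat)                                     # eps_hat should be close to eps_true
```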
4.4 Your Tasks
A4T1 Implement the estimation of the timing offset ε̂ as described in (4.7). Note that for each new symbol n, the sum gains only four new terms; these can be accumulated onto the sum obtained for symbol n − 1.
A4T2 Implement both a linear and a cubic interpolator as presented in Appendix 4.6. The value of ε̂·(oversampling factor) provides two indications: the integer part indicates between which samples to interpolate, and the fractional part indicates the position at which to evaluate the interpolation (see the sketch after this task list).
A4T3 Take advantage of the preamble samples to obtain a better estimate of ε̂.
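One possible way to organize the interpolation step of A4T2 is sketched below (Python/NumPy; the function names, buffer handling and edge treatment are assumptions, not the official solution). The integer part of ε̂·(oversampling factor) selects the base sample, the fractional part μ is the interpolation point, and the cubic branch evaluates the third-order polynomial through the four neighbouring samples (cf. Appendix 4.6.3).

```python
import numpy as np

def interp_linear(z, base, mu):
    # Linear interpolation between z[base] and z[base + 1], cf. (4.26).
    return (1.0 - mu) * z[base] + mu * z[base + 1]

def interp_cubic(z, base, mu):
    # Cubic polynomial through z[base - 1 .. base + 2], evaluated at the fractional position mu.
    fm1, f0, f1, f2 = z[base - 1], z[base], z[base + 1], z[base + 2]
    a = (-fm1 + 3.0 * f0 - 3.0 * f1 + f2) / 6.0
    b = (fm1 - 2.0 * f0 + f1) / 2.0
    c = (-2.0 * fm1 - 3.0 * f0 + 6.0 * f1 - f2) / 6.0
    d = f0
    return ((a * mu + b) * mu + c) * mu + d

def recover_symbols(z, eps_hat, osf=4, method="cubic"):
    """Pick one corrected value z(nT + eps_hat*T) per symbol from the oversampled signal z[k]."""
    shift = eps_hat * osf                      # offset expressed in samples
    base_off = int(np.floor(shift))            # integer part: which samples to interpolate between
    mu = shift - base_off                      # fractional part: where to interpolate
    interp = interp_cubic if method == "cubic" else interp_linear
    out = []
    for n in range(1, z.size // osf - 1):      # skip the first/last symbol to stay inside the buffer
        out.append(interp(z, n * osf + base_off, mu))
    return np.array(out)
```

For A4T1, the sum in (4.7) can likewise be kept as a single running complex accumulator to which the four new terms of each symbol are added.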
4.5 Appendix
In this section, we show that (4.4) is an asymptotically unbiased estimator of the timing offset ε. (An estimator is asymptotically unbiased if it becomes unbiased as the number of observations goes to infinity.)
4.5.1 Signal Model
We assume M-PSK signaling, which means that the amplitude of the data symbols a[n] is constant. Without loss of generality, set |a[n]| = 1. The symbols are assumed to be uncorrelated, i.e. E{a[m]a*[n]} = δ(m − n). The received signal after matched filtering is given as
z(t) = \sum_{n} a[n]\, g_{RC}(t - nT - \varepsilon T) + w_0(t), (4.9)
where g_RC(t) is the combined impulse response of the pulse shaping filter and the matched filter, and is therefore real-valued and even. In our lab, g_RC(t) is a raised cosine pulse. Furthermore, w_0(t) is the AWGN with variance σ_w² after matched filtering.
For the digital realization, the signal is sampled at four times the symbol rate: T/Ts = 4.
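The assumption E{a[m]a*[n]} = δ(m − n) holds, for instance, for symbols drawn independently and uniformly from the M-PSK alphabet; a quick Monte Carlo check (Python/NumPy, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 200000
a = np.exp(2j * np.pi * rng.integers(0, M, size=N) / M)   # i.i.d. uniform M-PSK symbols, |a[n]| = 1

print(np.mean(np.abs(a) ** 2))            # ~1: E{a[n] a*[n]} = 1
print(np.mean(a[1:] * np.conj(a[:-1])))   # ~0: E{a[m] a*[n]} vanishes for m != n
```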
4.5.2 Continuous Time Consideration
We start by examining the noise-free signal z̃(t) = z(t) − w_0(t). Let

\bar{x}(t) = E\{|\tilde{z}(t)|^2\} (4.10)
denote the expected value of the instantaneous signal power. Since the data symbols are uncorrelated, x̄(t) becomes

\bar{x}(t) = \sum_{n} g_{RC}^2(t - nT - \varepsilon T). (4.11)
The function x̄(t) is periodic with period T,

\bar{x}(t + T) = \sum_{n} g_{RC}^2(t - (n-1)T - \varepsilon T) = \sum_{n'} g_{RC}^2(t - n'T - \varepsilon T) = \bar{x}(t) \quad \text{with } n' = n - 1, (4.12)

and can therefore be expanded into a Fourier series

\bar{x}(t) = \sum_{m=-\infty}^{\infty} c_m\, e^{2\pi j m t/T}. (4.13)
The Fourier coefficients are given as

c_m = \frac{1}{T}\int_0^T \bar{x}(t)\, e^{-2\pi j m t/T}\, dt. (4.14)

The first Fourier coefficient, i.e. the spectral component X̄(1/T), is

c_1 = \frac{1}{T}\int_0^T \sum_n g_{RC}^2(t - nT - \varepsilon T)\, e^{-2\pi j t/T}\, dt
    = \frac{1}{T}\int_{-\infty}^{\infty} g_{RC}^2(t - \varepsilon T)\, e^{-2\pi j t/T}\, dt

(the complex exponential is T-periodic, so the sum over n extends the integration to the whole real axis; now substitute u = t − εT)

    = e^{-2\pi j \varepsilon}\, \frac{1}{T}\int_{-\infty}^{\infty} g_{RC}^2(u)\, e^{-2\pi j u/T}\, du. (4.15)

Since g_RC(u) is real-valued and even, the remaining integral is real (and positive for a raised cosine with nonzero roll-off), so arg{c_1} = −2πε. Thus, we can calculate the timing offset perfectly from the noise-free and time-continuous received signal:

\varepsilon = -\frac{1}{2\pi}\arg\{\bar{X}(1/T)\} = -\frac{1}{2\pi}\arg\{c_1\}. (4.16)
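The relation arg{c_1} = −2πε in (4.15)/(4.16) can also be checked numerically. The sketch below (Python/NumPy; the grid, window and ε = 0.17 are arbitrary illustrative choices, and rc_pulse is the same assumed helper as in the earlier sketches) evaluates the integral on a fine grid.

```python
import numpy as np

def rc_pulse(t, T=1.0, alpha=0.22):
    # Raised-cosine pulse; singular points t = +-T/(2*alpha) are patched with their limit value.
    d = 1.0 - (2.0 * alpha * t / T) ** 2
    ok = np.abs(d) > 1e-10
    val = np.sinc(t / T) * np.cos(np.pi * alpha * t / T) / np.where(ok, d, 1.0)
    return np.where(ok, val, (np.pi / 4.0) * np.sinc(1.0 / (2.0 * alpha)))

T, eps = 1.0, 0.17                          # assumed timing offset for the check
t = np.linspace(-40 * T, 40 * T, 400001)    # fine grid over a wide window (the pulse tails decay fast)
dt = t[1] - t[0]

# c_1 of (4.15): (1/T) * integral of g_RC^2(t - eps*T) * exp(-2*pi*j*t/T) dt
c1 = np.sum(rc_pulse(t - eps * T, T) ** 2 * np.exp(-2j * np.pi * t / T)) * dt / T
print(-np.angle(c1) / (2.0 * np.pi))        # ~0.17, confirming arg{c1} = -2*pi*eps as in (4.16)
```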
4.5.3 Discrete Time Realization
Now we consider the noisy and sampled signal

x[k] = |z(kT_s)|^2 (4.17)

and show that

\hat{\varepsilon} = -\frac{1}{2\pi}\arg\{\hat{X}(1/T)\} (4.18)
is an asymptotically unbiased estimate of ε. We start by calculating the expected value of x[k]:
E\{x[k]\} = E\{|\tilde{z}(kT_s) + w_0(kT_s)|^2\}
          = E\{|\tilde{z}(kT_s)|^2\} + E\{\tilde{z}^*(kT_s)\, w_0(kT_s)\} + E\{\tilde{z}(kT_s)\, w_0^*(kT_s)\} + E\{|w_0(kT_s)|^2\}
          = \bar{x}(kT_s) + 0 + 0 + \sigma_w^2
          = \bar{x}(kT_s) + \sigma_w^2, (4.19)
with the noise-free signal x̄(t) that we examined in the previous section. Assuming that the signal starts at k = 0, the corresponding spectrum is

E\{X(f)\} = \sum_{k=0}^{\infty} E\{x[k]\}\, e^{-2\pi j f k T_s} = \sum_{k=0}^{\infty} \bigl(\bar{x}(kT_s) + \sigma_w^2\bigr)\, e^{-2\pi j f k T_s}. (4.20)
Evaluating this for f = 1/T yields

E\{X(1/T)\} = \sum_{k=0}^{\infty} \bar{x}(kT_s)\, e^{-2\pi j k/4}, (4.21)

since the constant noise contribution σ_w² cancels over every four consecutive samples (\sum_{k=0}^{3} e^{-2\pi j k/4} = 0).
If we approximate E{X(1/T)} at time t = nT by using the 4N_{c,ε} most recently received samples (the N_{c,ε} most recently received symbols), we get the estimate

\hat{X}(1/T) = \sum_{k=4(n-N_{c,\varepsilon})+1}^{4n} x[k]\, e^{-2\pi j k/4}. (4.22)
Since x̄(t) is bandlimited to ±2/T, we can evaluate this sum using the Fourier series of x̄(t) rather than the discrete samples x̄(kT_s): only the coefficient c_1 contributes, and

E\{\hat{X}(1/T)\} = 4 N_{c,\varepsilon}\, c_1, (4.23)

with c_1 calculated in (4.15).
We can now calculate the expected value of ε̂. For a large number of observed symbols the estimate concentrates around the argument of E{X̂(1/T)}, so that

E\{\hat{\varepsilon}\} \to -\frac{1}{2\pi}\arg E\{\hat{X}(1/T)\} = -\frac{1}{2\pi}\arg\{4 N_{c,\varepsilon}\, c_1\} = -\frac{1}{2\pi}(-2\pi\varepsilon) = \varepsilon, (4.24)

i.e. the estimator is asymptotically unbiased.
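The discrete-time result can be checked in the same spirit as the continuous-time one. The sketch below (Python/NumPy; N_{c,ε} = 64 and ε = 0.17 are arbitrary illustrative values, rc_pulse is the same assumed helper) builds x̄(kT_s) as in (4.11), forms the sum of (4.22) over 4N_{c,ε} samples, and recovers ε from its argument as in (4.24).

```python
import numpy as np

def rc_pulse(t, T=1.0, alpha=0.22):
    # Raised-cosine pulse; singular points t = +-T/(2*alpha) are patched with their limit value.
    d = 1.0 - (2.0 * alpha * t / T) ** 2
    ok = np.abs(d) > 1e-10
    val = np.sinc(t / T) * np.cos(np.pi * alpha * t / T) / np.where(ok, d, 1.0)
    return np.where(ok, val, (np.pi / 4.0) * np.sinc(1.0 / (2.0 * alpha)))

T, Ts, eps, Nc = 1.0, 0.25, 0.17, 64
k = np.arange(4 * Nc)                                 # 4*Nc samples = Nc symbols
n = np.arange(-30, Nc + 30)                           # enough pulses to cover the observation window

# Expected instantaneous power xbar(k*Ts) = sum_n g_RC^2(k*Ts - n*T - eps*T), cf. (4.11) and (4.19)
xbar = (rc_pulse(k[:, None] * Ts - n[None, :] * T - eps * T) ** 2).sum(axis=1)

X_hat = np.sum(xbar * np.exp(-2j * np.pi * k / 4))    # expectation of (4.22), cf. (4.21)
print(-np.angle(X_hat) / (2.0 * np.pi))               # ~0.17, i.e. the estimator mean equals eps (4.24)
```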

4.6 Appendix
4.6.1 Perfect Interpolator
Let f(x) be a (possibly complex-valued) function of one real-valued parameter. Assume that we only know f(x) for x ∈ Z, i.e. a series of samples. If f is bandlimited with a maximum frequency of 1/2, then the function is completely described by its samples. The problem of finding f(x) for x ∉ Z is called interpolation.
The ideal interpolator is a lowpass filter, i.e. the function can be calculated from its samples as

f(x) = \sum_{n=-\infty}^{\infty} f(n)\,\mathrm{sinc}(x - n), \qquad \mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x}. (4.25)
However, this interpolator is not realizable because it has infinitely many taps.
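A direct, truncated implementation of (4.25) looks as follows (Python/NumPy sketch; the test signal and the truncation to a finite window are illustrative assumptions):

```python
import numpy as np

def sinc_interpolate(samples, x, n0=0):
    """Truncated version of (4.25): f(x) ~ sum_n f(n) * sinc(x - n), with samples[i] = f(n0 + i)."""
    n = n0 + np.arange(samples.size)
    return np.sum(samples * np.sinc(x - n))

# Reconstruct a bandlimited test signal (maximum frequency 0.2 < 1/2) between its integer samples.
f = lambda x: np.cos(2.0 * np.pi * 0.2 * x)
n = np.arange(-200, 201)
print(f(10.3), sinc_interpolate(f(n), 10.3, n0=-200))   # agree up to the truncation error
```

Because the sinc tails decay only like 1/x, the truncation error of such a direct implementation decreases slowly with the window length, which is another reason to use the short interpolators described next.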
4.6.2 Linear Interpolator
The simplest possible interpolator is the linear one. Assume without loss of generality that we are interested in the intermediate value f(x) with x ∈ (0;1). The linear interpolator then calculates f(x) using the two closest samples f(0) and f(1) according to
f(x) = (1 − x)f(0) + xf(1). (4.26)
The linear interpolator can be implemented very efficiently, but its accuracy can be problematic.
Consider the situation illustrated in Figure 4.3. Given are the function shown there and the samples f(0) = f(1) = 1/√2. Now, for all x ∈ (0;1), the linear interpolator returns f(x) = 1/√2, which is obviously a very bad approximation of the true value if x is around 0.5. On the other hand, if we wanted to estimate a value with x ∈ (−1;0) using the samples f(−1) and f(0), the linear estimation would be quite good. We see that the linear interpolator yields especially bad results if the function has a local maximum or minimum between the two considered samples.

Figure 4.3: Linear interpolator
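This failure mode can be reproduced with any bandlimited function that has these sample values. The sketch below (Python/NumPy) uses the assumed example f(x) = cos(π(x − 0.5)/2), which satisfies f(0) = f(1) = 1/√2 and peaks at x = 0.5; it is not necessarily the function plotted in Figure 4.3.

```python
import numpy as np

f = lambda x: np.cos(0.5 * np.pi * (x - 0.5))           # assumed example: f(0) = f(1) = 1/sqrt(2), maximum at 0.5
linear = lambda mu, f0, f1: (1.0 - mu) * f0 + mu * f1   # eq. (4.26)

# Across the local maximum the linear interpolator is stuck at 1/sqrt(2):
print(f(0.5), linear(0.5, f(0), f(1)))    # 1.000 vs 0.707
# On the nearly linear stretch between x = -1 and x = 0 it does well:
print(f(-0.5), linear(0.5, f(-1), f(0)))  # 0.000 vs 0.000
```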
4.6.3 Cubic Interpolator
The interpolation can be improved by taking one additional sample on each side into account, which leads us to the cubic interpolator. Again we assume that we want to estimate f(x) with x ∈ (0;1), this time using the samples f(−1), f(0), f(1) and f(2). The idea is to approximate f(x) with a cubic polynomial f̃(x) and then evaluate it at the desired point.
In general,

\tilde{f}(x) = a x^3 + b x^2 + c x + d, (4.27)

where the coefficients a, b, c, d must satisfy \tilde{f}(-1) = f(-1), \tilde{f}(0) = f(0), \tilde{f}(1) = f(1) and \tilde{f}(2) = f(2), i.e.

\begin{pmatrix} -1 & 1 & -1 & 1 \\ 0 & 0 & 0 & 1 \\ 1 & 1 & 1 & 1 \\ 8 & 4 & 2 & 1 \end{pmatrix}
\begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix} =
\begin{pmatrix} f(-1) \\ f(0) \\ f(1) \\ f(2) \end{pmatrix}. (4.28)

With a matrix inversion we get the result

\begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix} = \frac{1}{6}
\begin{pmatrix} -1 & 3 & -3 & 1 \\ 3 & -6 & 3 & 0 \\ -2 & -3 & 6 & -1 \\ 0 & 6 & 0 & 0 \end{pmatrix}
\begin{pmatrix} f(-1) \\ f(0) \\ f(1) \\ f(2) \end{pmatrix}. (4.29)
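For completeness, a small sketch (Python/NumPy; the test function is the same assumed example as in the previous sketch) that obtains a, b, c, d by solving the system (4.28) numerically instead of using the closed-form inverse, and evaluates the cubic at x = 0.5:

```python
import numpy as np

def cubic_coeffs(fm1, f0, f1, f2):
    """Solve the 4x4 system f~(x_i) = f(x_i) for x_i = -1, 0, 1, 2, cf. (4.28)."""
    A = np.array([[-1.0, 1.0, -1.0, 1.0],    # f~(-1) = -a + b - c + d
                  [ 0.0, 0.0,  0.0, 1.0],    # f~(0)  = d
                  [ 1.0, 1.0,  1.0, 1.0],    # f~(1)  = a + b + c + d
                  [ 8.0, 4.0,  2.0, 1.0]])   # f~(2)  = 8a + 4b + 2c + d
    return np.linalg.solve(A, np.array([fm1, f0, f1, f2]))

f = lambda x: np.cos(0.5 * np.pi * (x - 0.5))         # assumed test function (see Section 4.6.2 sketch)
a, b, c, d = cubic_coeffs(f(-1.0), f(0.0), f(1.0), f(2.0))
x = 0.5
print(f(x), ((a * x + b) * x + c) * x + d)            # 1.000 vs ~0.88: clearly better than the linear 0.707
```

In a receiver, solving the system once symbolically (as in (4.29)) and hard-coding the resulting coefficient formulas is cheaper than performing a matrix solve for every interpolated sample.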
