March 14, 2005

Brian K. Butler

UCSD ECE 259BN Trellis-Coded Modulation Course Project

Topics on the weight distribution of error correcting codes

Abstract

This project report covers several areas that I found interesting. First, for reference's sake, I plot the performance bounds of convolutional codes for rates 1/2, 1/3, and 1/4. Second, I explore some of the details of computing the weight spectra of convolutional codes with several examples, and begin the process of finding good codes. Finally, I take a very quick look at the rate = 1/3 repeat accumulate code.

Table of Contents

Abstract .......................................... 1
Table of Contents ................................. 1
1. BER performance of convolutional codes ......... 2
2. Computation of the Weight Spectrum ............. 7
3. Eliminating Catastrophic Codes ................. 8
4. Weight Spectrum examples ....................... 9
5. Repeat Accumulate Code ......................... 17
References ........................................ 24
Appendix A – ‘wtspec’ source code ................. 25
Appendix B – ‘iscatas’ source code ................ 25

1. BER performance of convolutional codes

The bounds used

To provide a reference for further discussions in the report, I first compute the BER upper bounds for nearly all of the "noncatastrophic" convolutional codes presented in (Larsen, 1973). These are all defined by polynomial generators. Larsen presents the encoders that generate maximal free distance codes for rates of 1/2, 1/3, and 1/4. He does this for constraint lengths from m = 2 to 13. (Larsen uses ν = m + 1 for the constraint length; a constraint length of m describes a code with 2^m states.) I have left out only m = 13 from my figures, in the interest of time. I have used the tightest union upper bound presented in class, also found in (§12.2 of Lin & Costello, 2004) and (§6.2 of Schlegel & Perez, 2004), before approximations are made.

Pb(E) < (1/k) · Σ_{d = dfree}^{∞} Bd · Q( sqrt( 2 d R Eb / N0 ) )    (1.1)

The Bd values are the coefficients of the information bit WEF (weight enumerator function). The info WEF can be generated from the general transfer function of the encoder. (The notation differs only slightly between references; for instance, the 1/k factor may be included in either (1.1) or (1.2).) The Bd coefficients are the total number of information bit errors among all the paths of codeword weight d.

B(D) = [ (∂/∂I) A(D, L, I) ] evaluated at I = L = 1    (1.2)

Using this bound assumes a system utilizing optimal (i.e., maximum-likelihood sequence) decoding of the BPSK or QPSK signal corrupted by additive white Gaussian noise.

Calculation Details

The first 18 coefficients of the information bit WEFs of interest can be found in (Conan, 1984). Instead of entering all the coefficients by hand, I computed them in MATLAB, using a program to be described later. For rate 1/2, I chose to compute the WEF out to D^34, for rate 1/3 out to D^42, and for rate 1/4 out to D^51 at every constraint length. I chose these limits because they were the maximum used by Conan for this set (i.e., up to m = 12). The results are shown in Figures 1 through 3. Also included in the plots is the Shannon limit for BPSK signaling in AWGN at the appropriate code rate, R.
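For concreteness, the bound (1.1) is easy to evaluate numerically once the Bd coefficients are in hand. Below is a sketch in Python (the project's own code is MATLAB; the function names here are mine, for illustration only):

```python
import math

def q_func(x):
    # Gaussian tail probability Q(x), via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_union_bound(B, d_free, rate, ebno_db, k=1):
    # Eq. (1.1): Pb(E) < (1/k) * sum_{d >= dfree} B_d * Q(sqrt(2 d R Eb/No)),
    # with B a truncated list of bit-WEF coefficients [B_dfree, B_dfree+1, ...]
    ebno = 10.0 ** (ebno_db / 10.0)
    return sum(bd * q_func(math.sqrt(2.0 * d * rate * ebno))
               for d, bd in enumerate(B, start=d_free)) / k
```

Truncating B at D^34 (rate 1/2), D^42 (rate 1/3), or D^51 (rate 1/4) matches the limits above; the dropped tail terms are negligible at moderate-to-high Eb/No.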

R      Shannon limit Eb/No (dB) for AWGN channel with BPSK signaling
0.50    0.188
0.33   -0.579
0.25   -0.793

Table 1. Theoretical limits to performance. (from Lin & Costello, 2004, §1.7)

Observations

Increasing the code's constraint length helps significantly at first, for low constraint lengths. But for the larger constraint lengths, the Eb/No improvement diminishes drastically, becoming about 0.2 dB or less per unit increase in constraint length. With each unit increase of constraint length, we know the complexity of the ACS portion of the Viterbi algorithm doubles. This permits only gradual increases of constraint length over time for a given data rate, as VLSI technology allows.


The performance gaps between the best convolutional code shown (i.e., m = 12) and the Shannon limit are about 3 dB at Pb = 10^-5 for all three rates. This is a significant amount, and it surely inspired a lot of searching for better codes. The rate = 1/4, m = 6 BER curve looks surprisingly weak; it falls nearly on top of the m = 5 curve. I double-checked the bit WEF against Conan and the polynomial generators against (Proakis, §8.2.5). The lousy performance of the m = 6 code is due to the high multiplicity at dfree: A20 = 10 and B20 = 37. This can be shown by breaking down the terms of (Eq. 1.1) at a moderate Eb/No value of 4 dB, as in Table 2, below.

        m = 5, r = 1/4        m = 6, r = 1/4        m = 7, r = 1/4
d       Bd    BER contrib     Bd    BER contrib     Bd    BER contrib
18      6     5.97E-06        0     --              0     --
19      0     --              0     --              0     --
20      17    4.59E-06        37    9.99E-06        0     --
21      0     --              0     --              0     --
22      24    1.77E-06        0     --              2     1.47E-07
23      0     --              0     --              10    3.85E-07
24      60    1.21E-06        94    1.89E-06        10    2.01E-07
25      0     --              0     --              8     8.43E-08
26      118   6.51E-07        0     --              10    5.52E-08
27      0     --              0     --              11    3.18E-08
28      367   5.57E-07        768   1.17E-06        54    8.20E-08
29      0     --              0     --              64    5.10E-08
30      991   4.15E-07        0     --              68    2.85E-08
31      0     --              0     --              140   3.08E-08
32      1980  2.29E-07        5558  6.43E-07        218   2.52E-08
total         1.54E-05              1.37E-05              1.12E-06

Table 2. Partial list of terms for computing the BER bound at an Eb/No of 4 dB for several rate = 1/4 convolutional codes.


[Figure 1: Rate 1/2 Convolutional Code Performance (BPSK in AWGN). BER bounds vs. Eb/No for m = 2 through m = 12, with the Shannon limit marked.]


[Figure 2: Rate 1/3 Convolutional Code Performance (BPSK in AWGN). BER bounds vs. Eb/No for m = 2 through m = 12, with the Shannon limit marked.]


[Figure 3: Rate 1/4 Convolutional Code Performance (BPSK in AWGN). BER bounds vs. Eb/No for m = 2 through m = 12, with the Shannon limit marked.]


2. Computation of the Weight Spectrum

I've created a MATLAB function, "wtspec," to compute the codeword WEF and the information bit WEF of a specific convolutional (polynomial) encoder of rate 1/n. The program works on a trellis. It is important not to call the function with a catastrophic code, as it will run for a while before giving up. It runs something like the Viterbi algorithm, but instead of comparing the incoming branches at a state and keeping the best, this algorithm keeps all the results incoming at every state. Briefly, here are the steps used for computing the Weight Enumerator Function (WEF):

1. At every trellis node a histogram of the accumulated codeword Hamming distance(s) is maintained.

2. The all-zero state is initialized as the best state, that is, to have a histogram of [1 0 0 0 …], indicating one instance of accumulated distance 0. All other states are initialized with the all-zero histogram, indicating they are invalid states.

3. We assume that the all-zero codeword is sent, without loss of generality (as the codes of interest are linear). Branch metrics are computed for every possible state transition. However, the state #0 to state #0 branch metric is overwritten with a very large value, as this transition is not allowed when computing the distances of paths that depart from state #0 and re-merge back to state #0.

4. At each stage, for every state, the two incoming distance histograms are merged after the branch metrics are applied. This preserves all accumulated state metrics in place of the ACS computation. To add a branch metric to a histogram, the entries are simply shifted towards the right by the branch metric. The histogram is of finite length; it can be truncated at the maximum Hamming distance of interest, and the points shifted beyond this are simply dropped. The merging operation on the histograms is simply a summing of the entries.

5. At the end of every stage (e.g., bit time), the final histogram for state #0 is accumulated into the master histogram that eventually forms the WEF.

6. Also, at the end of every stage, state #0 is forced to be an invalid state, and its histogram is reset to the all-zero histogram.

7. The loop can be safely terminated when the histograms of all the states contain nothing but zeros.
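The steps above can be sketched compactly. The following Python version illustrates the same histogram-on-a-trellis idea (the project's actual implementation is the MATLAB ‘wtspec’ of Appendix A; the generator convention here, with bit m as the tap on the current input bit, is my own choice for the sketch):

```python
def weight_spectrum(gens, m, max_d):
    # Codeword WEF coefficients A_0 .. A_max_d for a rate-1/n encoder.
    # gens: generator polynomials as integers (e.g. 0o5, 0o7 for m = 2),
    # bit m = tap on the current input, bit 0 = tap on the oldest bit.
    nstates = 1 << m
    hist = [[0] * (max_d + 1) for _ in range(nstates)]
    hist[0][0] = 1                        # step 2: start at state 0, distance 0
    A = [0] * (max_d + 1)                 # master histogram -> A_d
    while any(any(h) for h in hist):      # step 7: stop when all states empty
        new = [[0] * (max_d + 1) for _ in range(nstates)]
        for s in range(nstates):
            for u in (0, 1):
                if s == 0 and u == 0:
                    continue              # step 3: forbid the 0 -> 0 self-loop
                v = (u << m) | s          # shift-register contents
                w = sum(bin(g & v).count('1') & 1 for g in gens)  # branch weight
                nxt = v >> 1
                for d, c in enumerate(hist[s]):   # step 4: shift and merge
                    if c and d + w <= max_d:
                        new[nxt][d + w] += c
        for d in range(max_d + 1):        # step 5: absorb re-mergers at state 0
            A[d] += new[0][d]
        new[0] = [0] * (max_d + 1)        # step 6: reset state 0 to invalid
        hist = new
    return A
```

For the standard (5,7) octal, m = 2 code this reproduces the well-known spectrum A5 = 1, A6 = 2, A7 = 4, A8 = 8.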

The algorithm runs rather fast as it works on a trellis. See the Appendix for the detailed source code. The program's memory requirements are dominated by histogram storage, which in bytes is of size 2 * 2^m * (maxdist + 1) * wordsize. The first factor of 2 results from double-banking the histograms during calculations, one bank for the previous stage and one for the current stage. The second factor is simply the number of states in the trellis. The third factor is the maximum Hamming distance of interest plus one (e.g., 52 in the previous section). The final factor is 8 bytes per histogram data point in my program, as 64-bit double-precision floating-point values are used. The double-precision floating-point data type is capable of exactly representing integers up to 2^53 - 1 and is supported by more MATLAB routines than any other data type. So for the largest code examined in the previous section (i.e., m = 12), only 3.4 Mbytes of data RAM are used – low by 2005 PC standards. Another factor of 2^6 in memory would be easily tolerable today before a new algorithm or a more elaborate computer would be required.

The above procedure describes computing the codewords' weight spectrum. It can be expanded to carry another set of vectors used to track the accumulated information bit errors at each state for each accumulated distance. These vectors aren't exactly histograms, but the processing bears some similarities to the above. This doubles the storage requirement, but results in the information bit WEF being computed in parallel. This too is implemented in the "wtspec" program.
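As a quick check on the 3.4 Mbyte figure (my own arithmetic, using the factors described above):

```python
m = 12                      # largest constraint length examined
max_dist = 51               # distances 0..51, i.e. 52 histogram bins
wordsize = 8                # bytes per 64-bit double
nbytes = 2 * 2**m * (max_dist + 1) * wordsize   # double-banked histograms
print(nbytes)               # 3407872 bytes, i.e. about 3.4 Mbytes
```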


3. Eliminating Catastrophic Codes

One part of searching for good codes is to eliminate the catastrophic encoders. I've written a MATLAB function, called "iscatast," to test polynomial encoders. It simply checks for common divisors among the generators by incrementing through all possible divisors. Polynomial division is implemented in a separate function called ‘gfpolydiv,’ which returns the remainder.

Definition 1: For the purposes of classifying polynomials during searching, we require at least one of the polynomials to have order m, and none to have order greater than m. Restated, this simply assigns every rate = 1/n generator matrix to a class based upon the maximum order of its constituent polynomials:

G(D) = [ g1(D)  g2(D)  …  gn(D) ]    (3.1)

Theorem 1: The performance properties (transfer function, catastrophic or not, weight spectrum) of the code do not change if the order of the constituent polynomials is permuted (i.e., the symbols are transmitted in a different order).

Proof (sketch): The encoder state-machine labeling does not take into account the ordering of the output code symbols, and hence the transfer function does not depend upon the output symbol ordering at each transition. We may therefore adopt the convention of listing the larger polynomial first in tables of the encoders' properties, recognizing that the entry applies to all reorderings of the polynomials. This convention will also nicely limit the search space as we examine large sets of codes. ■

As an example of this program, I have run all possible m = 3, r = 1/2 encoders through the "iscatast" program and tabulated the results in Table 3. I have created an additional category of "trivial" encoders, which do no real encoding: the input binary bit simply appears as uncoded (but possibly delayed) output on all output code symbols. These trivial encoders are characterized by all the polynomials having a single tap each, and therefore have a dfree of n at any constraint length. Of the 92 m = 3 encoders, 28 are catastrophic and 4 are deemed trivial. This leaves just 60 to examine in more detail.

Theorem 2: Given a 1/n convolutional encoder described by a catastrophic polynomial generator matrix G(D), the polynomial generator matrix G*(D), with binary taps in reverse order, is also catastrophic. (Note examples in Table 3.)

Proof (sketch): Using the binary taps in reverse order is equivalent to reversing all the arrows in the encoder finite state machine, or running the identical trellis in the reverse direction. Hence, if an infinite-weight input can produce a finite-weight output in the forward direction, it can do the same in the reverse direction. Therefore, if one direction is known to be catastrophic, the other is too. ■

Theorem 3: Given a 1/n convolutional encoder described by a catastrophic polynomial generator matrix G(D) = [ g1(D)  g2(D)  …  gn(D) ], with a constituent polynomial satisfying deg[ gj(D) ] < ν, the polynomial generator matrix G′(D), with gj(D) replaced by D·gj(D), is also catastrophic. (Note examples in Table 3.)

Proof: Introducing a factor of D into one of the minors of the 1-by-n G(D) does not affect the outcome of the g.c.d. test of the Massey-Sain theorem. ■
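The g.c.d. test invoked in these proofs can be coded directly. Here is a Python sketch paralleling the MATLAB ‘iscatast’/‘gfpolydiv’ pair (the function names and the integer representation, bit i = coefficient of D^i, are my own for this illustration):

```python
from functools import reduce

def gf2_mod(a, b):
    # Remainder of GF(2) polynomial division (cf. the 'gfpolydiv' helper);
    # polynomials are integers with bit i = coefficient of D^i.
    while b and a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def gf2_gcd(a, b):
    # Euclid's algorithm over GF(2)[D]
    while b:
        a, b = b, gf2_mod(a, b)
    return a

def is_catastrophic(gens):
    # Massey-Sain test for a rate-1/n polynomial encoder: catastrophic
    # iff gcd(g1, ..., gn) is not a pure power of D, which in this
    # integer representation means "not a power of two".
    g = reduce(gf2_gcd, gens)
    return g & (g - 1) != 0
```

For example, the (5,7) octal pair has g.c.d. 1 and tests non-catastrophic, while generators sharing a factor of 1 + D test catastrophic.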

Brian K. Butler

8

March, 2005

rate = 1/2, m = 3, (G1,G2) (octal)

Catastrophic:
11,3   12,5   14,5   16,7   17,11
11,5   12,6   14,6   16,11  17,12
11,6   12,11  14,11  16,16  17,14
11,7   12,12  14,12  17,3   17,17
11,11  13,13  14,14  17,5
12,3   14,3   15,15  17,6

Trivial (but non-catastrophic):
10,1   10,2   10,4   10,10

Non-catastrophic:
10,3   12,10  14,2   15,10  16,12
10,5   13,1   14,4   15,11  16,13
10,6   13,2   14,7   15,12  16,14
10,7   13,3   14,10  15,13  16,15
11,1   13,4   14,13  15,14  17,1
11,2   13,5   15,1   16,1   17,2
11,4   13,6   15,2   16,2   17,4
11,10  13,7   15,3   16,3   17,7
12,1   13,10  15,4   16,4   17,10
12,2   13,11  15,5   16,5   17,13
12,4   13,12  15,6   16,6   17,15
12,7   14,1   15,7   16,10  17,16

Table 3. All rate = 1/2, m = 3 convolutional encoders, categorized.

4. Weight Spectrum examples

I start this section by continuing to examine the possible encoders for m = 3, rate = 1/2 convolutional codes. Table 4 provides a list of all non-catastrophic encoders with dfree of at least 4. The BER column is computed at Eb/No = 6.16 dB, chosen to yield about 10^-6. The best performing codes, (13,17) and (15,17), have the maximum free distance and the smallest weight coefficient at that distance. One will note that 13 is 15 bit-reversed (in octal), and that 17 is itself after bit reversal. One will also note that the weight spectra seem to come in groups, with several encoders performing identically. To further examine the bit-reversal characteristics, look further down the table. The code (13,15) doesn't change after both polynomials are bit-reversed, aside from changing the order of the code symbols. The (7,13) encoder becomes the (15,16) encoder after bit reversal. The (7,15) encoder becomes the (13,16) encoder after bit reversal. The (11,13) encoder becomes the (11,15) encoder after bit reversal. And so on.
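The bit-reversal pairings noted above are easy to reproduce mechanically. A small Python helper (my own illustration; the report tabulates these by hand in octal):

```python
def bit_reverse(g, m):
    # Reverse the m+1 taps of a generator polynomial, held as an integer
    # in the usual (m+1)-bit octal convention.
    return int(format(g, '0{}b'.format(m + 1))[::-1], 2)

# For m = 3 (all values octal): 13 <-> 15, 17 -> 17 (self), 7 <-> 16
```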


rate = 1/2, m = 3 (encoder polynomials are in octal)
(G1,G2)  dfree  BER       Weight spectra (Ad starting at d=1)
13,17    6      1.03E-06  0 0 0 0 0 1 3 5 11 25 55 121 267 589 1299 2865 6319 13937 30739 67797 149
15,17    6      1.03E-06  (same spectrum as above)
13,15    6      1.47E-06  0 0 0 0 0 2 0 10 0 49 0 241 0 1185 0 5827 0 28653 0 140895 0 692821 0 3406
7,13     6      4.49E-06  0 0 0 0 0 5 0 13 0 71 0 287 0 1290 0 5571 0 24416 0 106424 0 464820 0 2028
7,15     6      4.49E-06  (same spectrum as above)
13,16    6      4.49E-06  (same spectrum as above)
15,16    6      4.49E-06  (same spectrum as above)
11,13    5      3.76E-06  0 0 0 0 1 1 2 6 13 28 62 137 302 666 1469 3240 7146 15761 34762 76670 169
11,15    5      3.76E-06  (same spectrum as above)
7,12     5      4.70E-06  0 0 0 0 1 2 4 8 16 33 68 140 288 592 1217 2502 5144 10576 21744 44705 919
5,16     5      4.70E-06  (same spectrum as above)
12,16    5      4.70E-06  (same spectrum as above)
5,13     5      5.61E-06  0 0 0 0 1 2 5 8 13 34 77 151 301 628 1334 2790 5740 11861 24721 51522 107
12,13    5      5.61E-06  (same spectrum as above)
5,15     5      5.61E-06  (same spectrum as above)
12,15    5      5.61E-06  (same spectrum as above)
3,13     5      1.50E-05  0 0 0 0 2 3 3 8 20 36 63 134 281 530 1027 2098 4169 8118 16101 32125 6344
6,13     5      1.50E-05  (same spectrum as above)
13,14    5      1.50E-05  (same spectrum as above)
3,15     5      1.50E-05  (same spectrum as above)
6,15     5      1.50E-05  (same spectrum as above)
14,15    5      1.50E-05  (same spectrum as above)
1,13     4      2.95E-05  0 0 0 1 0 6 0 16 0 69 0 232 0 883 0 3152 0 11624 0 42158 0 154206 0 561607
2,13     4      2.95E-05  (same spectrum as above)
4,13     4      2.95E-05  (same spectrum as above)
10,13    4      2.95E-05  (same spectrum as above)
1,15     4      2.95E-05  (same spectrum as above)
2,15     4      2.95E-05  (same spectrum as above)
4,15     4      2.95E-05  (same spectrum as above)
10,15    4      2.95E-05  (same spectrum as above)
7,17     4      5.15E-05  0 0 0 1 0 2 5 9 21 37 82 166 347 720 1478 3067 6322 13088 27046 55905 115
16,17    4      5.15E-05  (same spectrum as above)
1,17     4      5.37E-05  0 0 0 1 1 2 6 9 18 34 66 131 249 480 926 1789 3459 6670 12870 24841 47952
2,17     4      5.37E-05  (same spectrum as above)
4,17     4      5.37E-05  (same spectrum as above)
10,17    4      5.37E-05  (same spectrum as above)
7,14     4      6.27E-05  0 0 0 1 2 2 5 9 17 32 58 110 204 380 709 1319 2460 4582 8537 15908 29637
3,16     4      6.27E-05  (same spectrum as above)
6,16     4      6.27E-05  (same spectrum as above)
14,16    4      6.27E-05  (same spectrum as above)
7,10     4      7.73E-05  0 0 0 2 0 5 0 17 0 54 0 174 0 559 0 1797 0 5776 0 18566 0 59677 0 191821 0
1,16     4      7.73E-05  (same spectrum as above)
2,16     4      7.73E-05  (same spectrum as above)
4,16     4      7.73E-05  (same spectrum as above)
10,16    4      7.73E-05  (same spectrum as above)

Table 4. All encoders for rate = 1/2, m = 3 with dfree >= 4.


Theorem 4: A convolutional code C described by a non-catastrophic polynomial generator matrix has the identical distance spectrum as the convolutional code C* described by the polynomial generator matrix with binary taps in reverse ordering.

Proof: For code C, with rate k/n, the codewords (i.e., output sequences) are described by C(D) = X(D) G(D), where G(D) is the k-by-n matrix of polynomials with entries g_{i,j}(D), 1 <= i <= k, 1 <= j <= n. Likewise, for code C*, the codewords are described by C*(D) = X(D) G*(D), where G*(D) is the k-by-n matrix with entries g*_{i,j}(D). In either case, the entire set of semi-infinite codewords can be found by multiplying the generator by all possible infinite information sequences, X(D). Due to our reversed ordering of the generators,

g*_{i,j}(D) = D^{ν_i} g_{i,j}(D^{-1}),  for all j; 1 <= i <= k, 1 <= j <= n, where ν_i = max_j deg( g_{i,j}(D) ).

Also note that every valid information pattern of finite length is also a valid information pattern in reverse, so we can define X*(D) = D^N X(D^{-1}), where N is the pattern length. Now we can state, by substitution:

C*(D) = X*(D) G*(D) = D^{N+ν} X(D^{-1}) G(D^{-1}).

Since D can be thought of as just a placeholder variable in polynomial multiplication, it is straightforward to show the following for any non-zero integer m: given a(D) = b(D) c(D), then a(D^m) = b(D^m) c(D^m). In other words, the coefficients are preserved, and therefore

C*(D) = D^{N+ν} C(D^{-1}).

Thus, the final equation shows a one-to-one correspondence between the codewords of C and C*. The corresponding codewords are of equal weight, as the D^{N+ν} term does not affect the Hamming weight, and the corresponding information sequences are of equal weight.

To compute the weight enumerator functions of a convolutional code, we enumerate the Hamming weight of all codewords corresponding to X(D) forcing an immediate departure from the zero state until the first re-mergence to the zero state. For convolutional codes described by a non-catastrophic polynomial generator, we will be satisfied with finite-length codewords. Since the trellis is symmetric backwards-to-forwards and not time-varying, it is equivalent to count only those sequences that have a first re-mergence at a specific stage significantly far in the future and depart from the zero state at various earlier points. Therefore, we have proven that non-catastrophic polynomial generator matrices G(D) and G*(D) generate the same weight spectrum. ■


rate = 1/2, m = 4 (encoder polynomials are in octal)
(G1,G2)  (G1,G2)  dfree  BER       Weight spectra (Ad, starting at d=1)
27,31    23,35    7      1.01E-06  0 0 0 0 0 0 2 3 4 16 37 68 176 432 925 2156 5153 11696 26868 6
23,33    31,33    7      1.03E-06  0 0 0 0 0 0 2 4 6 15 37 83 191 442 1015 2334 5371 12353 28414
23,31    self     6      1.36E-06  0 0 0 0 0 1 0 4 0 22 0 124 0 682 0 3729 0 20390 0 111534 0 6101
23,27    31,35    7      1.43E-06  0 0 0 0 0 0 3 3 3 24 42 63 231 510 937 2539 5989 12341 29802 7
23,25    25,31    6      1.57E-06  0 0 0 0 0 1 0 5 0 31 0 158 0 853 0 4565 0 24438 0 130846 0 7005
15,23    26,31    6      1.86E-06  0 0 0 0 0 1 0 9 0 35 0 198 0 1034 0 5322 0 27856 0 144764 0 753
13,31    23,32    6      1.86E-06  (same spectrum as above)
11,37    22,37    6      2.63E-06  0 0 0 0 0 1 1 3 7 18 40 83 195 462 1064 2422 5578 12856 29551
21,37    self     6      2.63E-06  0 0 0 0 0 1 1 3 5 12 27 61 144 334 789 1847 4347 10203 23963 5
25,37    self     6      2.65E-06  0 0 0 0 0 1 0 6 0 31 0 168 0 898 0 4803 0 25678 0 137300 0 7341
13,37    32,37    6      2.77E-06  0 0 0 0 0 1 0 7 0 40 0 187 0 1007 0 5204 0 27098 0 140730 0 731
15,37    26,37    6      2.77E-06  (same spectrum as above)
23,37    31,37    6      2.78E-06  0 0 0 0 0 1 0 6 0 34 0 174 0 930 0 4928 0 26146 0 138692 0 7357
27,37    35,37    6      3.00E-06  0 0 0 0 0 1 1 3 7 18 40 87 209 476 1096 2521 5805 13363 30740
33,37    self     6      3.03E-06  0 0 0 0 0 1 1 3 6 13 31 70 166 385 902 2103 4903 11421 26583 6
17,25    25,36    6      3.13E-06  0 0 0 0 0 1 2 3 5 16 44 96 205 472 1111 2551 5825 13309 30574
25,27    25,35    6      3.13E-06  0 0 0 0 0 1 2 2 8 19 42 96 218 506 1160 2663 6118 14043 32261
17,23    31,36    6      3.21E-06  0 0 0 0 0 1 2 2 10 27 49 99 232 554 1263 2821 6395 14527 32836
17,31    23,36    6      3.21E-06  (same spectrum as above)
13,25    25,32    6      3.86E-06  0 0 0 0 0 2 0 6 0 43 0 199 0 1089 0 5549 0 29010 0 150376 0 781
15,25    25,26    6      3.86E-06  (same spectrum as above)
17,26    15,36    6      3.91E-06  0 0 0 0 0 1 3 5 11 25 55 122 273 608 1351 3006 6689 14881 3310
17,32    13,36    6      3.91E-06  (same spectrum as above)
26,36    15,17    6      3.91E-06  (same spectrum as above)
32,36    13,17    6      3.91E-06  (same spectrum as above)
7,23     31,34    6      4.70E-06  0 0 0 0 0 2 0 13 0 49 0 259 0 1302 0 6250 0 31089 0 152738 0 75
16,23    16,31    6      4.70E-06  (same spectrum as above)
7,31     23,34    6      4.70E-06  (same spectrum as above)
15,26    self     6      5.33E-06  0 0 0 0 0 2 0 10 0 49 0 245 0 1225 0 6123 0 30605 0 152976 0 76
13,32    self     6      5.33E-06  (same spectrum as above)
26,32    13,15    6      5.33E-06  (same spectrum as above)
5,37     24,37    6      5.46E-06  0 0 0 0 0 2 2 3 11 28 56 116 274 627 1384 3069 6912 15529 3469
12,37    self     6      5.46E-06  (same spectrum as above)

Table 5. Best encoders for rate = 1/2, m = 4 (BER computed for AWGN at Eb/No = 5.73 dB).

To dig a little deeper, we will search through all possible encoders for rate = 1/2, m = 4. Table 5 shows the best ones of this class from an exhaustive search. Encoder pairs with identical weight spectra due to the bit-reversal relationship of the polynomial generators do not get dedicated rows in this table as before; rather, their generators are listed side by side in the first two columns. The entry "self" listed in several places indicates where the first encoder remains unchanged after the bit-reversal operation. Table 5 is long enough to list all encoders with a free distance of 7 and those of free distance 6 that have coefficient A6 <= 2. The first thing to note is that the first pair of equivalent encoders looks like it will clearly be the best performing at practical SNRs, as its weight spectrum is best at all distances. This is indeed the one found in Larsen and Conan. Sometimes a lower-distance code (of the same family) will perform better than a higher-distance code due to lower weight coefficients; for example, the third code (with dfree = 6) performs better than the fourth code pair (with dfree = 7).


rate = 1/2, m = 5 (encoder polynomials are in octal)
(G1,G2)  (G1,G2)  dfree  BER       Weight spectra (Ad, starting at d=1)
45,77    51,77    8      7.87E-07  0 0 0 0 0 0 0 2 3 8 15 41 90 224 515 1239 2896 6879 16203 3837
43,67    61,73    8      8.36E-07  0 0 0 0 0 0 0 3 0 13 0 78 0 425 0 2394 0 13377 0 74746 0 417956
51,67    45,73    8      8.52E-07  0 0 0 0 0 0 0 2 0 20 0 68 0 469 0 2560 0 13978 0 79126 0 439016
53,61    43,65    7      8.80E-07  0 0 0 0 0 0 1 0 1 11 21 35 93 219 491 1226 2960 6846 16166 384
51,57    45,75    8      8.89E-07  0 0 0 0 0 0 0 3 0 16 0 85 0 474 0 2641 0 14715 0 81694 0 454683
57,62    23,75    8      9.03E-07  0 0 0 0 0 0 0 3 0 19 0 90 0 519 0 2856 0 15734 0 87246 0 481307
46,75    31,57    8      9.03E-07  (same spectrum as above)
52,67    25,73    8      9.31E-07  0 0 0 0 0 0 0 3 0 21 0 94 0 534 0 2986 0 16386 0 90472 0 499213
52,73    25,67    8      9.31E-07  (same spectrum as above)
57,61    43,75    8      9.45E-07  0 0 0 0 0 0 0 3 0 17 0 87 0 482 0 2731 0 15092 0 84002 0 467566
57,75    self     8      9.84E-07  0 0 0 0 0 0 0 3 0 12 0 70 0 397 0 2223 0 12497 0 70093 0 393300
57,65    53,75    8      1.01E-06  0 0 0 0 0 0 0 1 8 7 12 48 95 281 605 1272 3334 7615 18131 4319
45,67    51,73    8      1.10E-06  0 0 0 0 0 0 0 3 0 18 0 81 0 513 0 2698 0 15311 0 84748 0 471592
45,53    51,65    7      1.12E-06  0 0 0 0 0 0 1 1 2 9 14 34 111 219 475 1264 2959 6761 16283 387
45,76    37,51    8      1.13E-06  0 0 0 0 0 0 0 4 0 20 0 94 0 582 0 3083 0 17138 0 94194 0 519697
51,76    37,45    8      1.13E-06  (same spectrum as above)
41,67    41,73    7      1.22E-06  0 0 0 0 0 0 1 1 4 8 14 42 86 211 521 1169 2858 6727 15838 3796
55,57    55,75    8      1.24E-06  0 0 0 0 0 0 0 2 7 10 18 49 124 292 678 1576 3694 8692 20419 47
45,63    51,63    7      1.24E-06  0 0 0 0 0 0 1 1 1 8 17 34 92 208 470 1147 2727 6427 15341 3636
57,63    63,75    8      1.25E-06  0 0 0 0 0 0 0 2 6 8 17 45 106 247 591 1399 3284 7743 18245 429
56,61    35,43    7      1.25E-06  0 0 0 0 0 0 1 1 4 11 21 51 115 270 682 1572 3546 8440 20172 47
43,72    27,61    7      1.25E-06  (same spectrum as above)

Table 6: Best encoders for rate = 1/2, m = 5 (BER computed for AWGN at Eb/No = 5.29 dB).

Now we will search through all possible encoders for rate = 1/2, m = 5. Table 6 shows the best ones of this class. This table was created in the same fashion as the previous table. The BER in Table 6 was computed at an Eb/No of 5.29 dB, chosen to yield a 10^-6 error rate for the published code in the AWGN channel. Indeed, the (53,75) code, shown with blue highlight in the table, is the code published by Larsen and Conan, achieving the target BER. Surprisingly, it is not the best code in the table at the BER of interest. Unfortunately, this little discovery translates to an Eb/No savings of just 0.07 dB. Table 7 shows a detailed breakdown of the info WEF of the published m = 5 code and the two best from Table 6. It shows that while the published code has the best multiplicity at B8, its poor multiplicity at B9 dominates its BER performance. Therefore, at higher SNRs the published code should cross over and perform the best.


rate = 1/2, m = 5 (encoder polynomials are in octal)

        [53,75]               [51,77]               [61,73]
d       Bd     BER contrib    Bd     BER contrib    Bd     BER contrib
8       2      1.99E-07       4      3.98E-07       6      5.97E-07
9       36     6.26E-07       11     1.91E-07       0      --
10      32     9.77E-08       36     1.10E-07       60     1.83E-07
11      62     3.34E-08       83     4.47E-08       0      --
12      332    3.17E-08       250    2.38E-08       469    4.47E-08
13      701    1.19E-08       630    1.07E-08       0      --
14      2342   7.07E-09       1776   5.36E-09       3340   1.01E-08
15      5503   2.97E-09       4531   2.44E-09       0      --
16      12506  1.21E-09       11982  1.16E-09       23086  2.23E-09
17      36234  6.28E-10       30474  5.28E-10       0      --
total          1.01E-06              7.88E-07              8.38E-07

Table 7. Partial list of terms for computing the BER bound at an Eb/No of 5.29 dB for several rate = 1/2, m = 5 codes.

Now we will search through all possible encoders for rate = 1/2, m = 6. Table 8 shows the best ones of this class, prepared just like the previous table. Indeed, the (133,171) encoder, shown with blue highlight in the table, is the one published by Larsen and Conan, achieving the target BER. In Tables 6 and 8, I have used green highlight to show some examples of codes where the degree of the generators is less than m. In each case, they are part of a family of 4 encoders of identical performance. This leads to another potential reduction of the search space.

rate = 1/2, m = 6 (encoder polynomials are in octal)
(G1,G2)   (G1,G2)   dfree  BER       Weight spectra (Ad, starting at d=1)
117,155   133,171   10     1.01E-06  0 0 0 0 0 0 0 0 0 11 0 38 0 193 0 1331 0 7275 0 40406 0 234969 0
105,153   121,153   8      1.02E-06  0 0 0 0 0 0 0 1 0 4 0 42 0 188 0 1158 0 6389 0 37318 0 211220 0
107,155   133,161   9      1.04E-06  0 0 0 0 0 0 0 0 2 4 9 15 40 116 250 596 1419 3403 8268 19484 46
107,135   135,161   9      1.05E-06  0 0 0 0 0 0 0 0 1 6 12 24 50 126 302 727 1743 4140 9848 23542 5
115,147   131,163   9      1.08E-06  0 0 0 0 0 0 0 0 2 3 10 25 47 118 272 639 1569 3829 9119 21474 5
121,157   105,173   9      1.09E-06  0 0 0 0 0 0 0 0 2 3 10 28 43 101 287 655 1554 3804 9018 21397 5
127,141   103,165   8      1.12E-06  0 0 0 0 0 0 0 1 0 6 0 40 0 216 0 1203 0 6981 0 39864 0 227402 0
137,155   133,175   9      1.15E-06  0 0 0 0 0 0 0 0 1 6 11 12 45 117 259 629 1513 3570 8494 20497 4
127,161   107,165   9      1.18E-06  0 0 0 0 0 0 0 0 2 5 8 23 47 107 278 660 1611 3813 8944 21450 51
121,147   105,163   8      1.27E-06  0 0 0 0 0 0 0 1 0 6 0 37 0 228 0 1176 0 6966 0 39558 0 226873 0
112,147   51,163    8      1.30E-06  0 0 0 0 0 0 0 1 0 8 0 51 0 270 0 1492 0 8666 0 48899 0 277757 0
122,163   45,147    8      1.30E-06  (same spectrum as above)
105,167   121,167   9      1.34E-06  0 0 0 0 0 0 0 0 2 4 12 23 48 113 255 665 1605 3915 9329 21913 5
135,163   135,147   10     1.35E-06  0 0 0 0 0 0 0 0 0 12 0 53 0 234 0 1517 0 8862 0 48590 0 276334 0
135,157   135,173   9      1.38E-06  0 0 0 0 0 0 0 0 2 4 8 20 43 116 263 619 1484 3544 8534 20377 48
103,135   135,141   8      1.39E-06  0 0 0 0 0 0 0 1 0 6 0 50 0 236 0 1383 0 7845 0 44819 0 255408 0
127,165   self      8      1.47E-06  0 0 0 0 0 0 0 1 0 5 0 35 0 187 0 1074 0 6150 0 35219 0 201519 0
127,142   43,165    8      1.54E-06  0 0 0 0 0 0 0 1 0 10 0 52 0 301 0 1614 0 9435 0 52915 0 300679 0
106,165   61,127    8      1.54E-06  (same spectrum as above)

Table 8: Best encoders for rate = 1/2, m = 6 (BER computed for AWGN at Eb/No = 4.79 dB).

Theorem 5: Given a rate-1/n convolutional code described by a non-catastrophic polynomial generator matrix, G(D) = [g1(D) g2(D) … gn(D)], with a constituent polynomial satisfying deg[gj(D)] < υ, then the polynomial generator matrix G*(D), with gj(D) replaced by D·gj(D), is also non-catastrophic and has the same weight spectrum as G(D).

Proof: That G*(D) is non-catastrophic follows from Theorem 3: introducing a D multiplier into one of the minors of G(D) does not affect the outcome of the g.c.d. test of the Massey-Sain theorem. To show that the weight spectrum is unchanged, we argue that introducing the D multiplier to gj(D) simply delays the code symbols for that branch without increasing the memory of the encoder state machine. Thus, when creating the weight spectrum by examining all possible information sequences that force an immediate departure from the zero state and later remerge with it, the simple delaying of the code symbol does not alter the Hamming weight calculations. ■

rate = 1/2, M = 8 (encoder polynomials are in octal)
(G1,G2)   (G1,G2)   dfree  BER       Weight spectra (Ad, starting at d=10)
467,625   523,731   11     9.41E-07  0 2 3 9 29 63 128 311 776 1884 4537 10845 26097 62755 15151
435,657   561,753   12     1.00E-06  0 0 11 0 50 0 286 0 1630 0 9639 0 55152 0 320782 0 1859184 0
557,631   463,755   12     1.07E-06  0 0 11 0 46 0 287 0 1609 0 9322 0 54055 0 314059 0 1818931 0
545,647   515,713   11     1.10E-06  0 1 4 15 31 58 127 351 855 2013 4937 11719 28026 67734 1634
557,651   453,755   12     1.11E-06  0 0 11 0 48 0 304 0 1671 0 9687 0 55962 0 326719 0 1889667 0
557,611   443,755   11     1.14E-06  0 2 5 10 23 64 152 321 804 1978 4694 11414 27371 65811 1590
455,617   551,743   11     1.16E-06  0 1 4 17 30 57 169 393 894 2196 5185 12501 30667 73415 1765
537,671   473,765   11     1.19E-06  0 1 6 13 27 55 151 331 830 1983 4809 11622 28046 66886 1618
465,627   531,723   11     1.19E-06  0 2 5 15 25 42 126 352 857 2017 4827 11337 26985 65393 1585
511,717   445,747   11     1.20E-06  0 3 4 9 23 57 143 311 781 1945 4572 10964 26519 63719 15343
557,751   457,755   12     1.20E-06  0 0 10 9 30 51 156 340 875 1951 5127 11589 28740 68191 1663
451,637   451,763   11     1.22E-06  0 2 4 16 27 64 161 345 905 2088 5043 12273 29306 71127 1707
473,661   433,671   11     1.24E-06  0 2 9 7 28 56 147 346 871 2093 5065 11944 29127 69276 16822
565,607   535,703   11     1.25E-06  0 2 3 14 34 59 154 383 903 2128 5079 12458 29980 71969 1740
437,675   573,761   10     1.26E-06  1 1 2 14 20 59 151 311 812 1918 4505 11150 26606 63777 1549
471,673   self      12     1.26E-06  0 0 14 0 52 0 302 0 1757 0 10212 0 59186 0 344100 0 1990067 0
467,661   433,731   11     1.27E-06  0 3 5 13 26 52 151 385 916 2112 5095 12418 29579 71152 1724
431,657   461,753   11     1.28E-06  0 4 3 5 33 68 126 342 893 1963 4660 11920 28325 66594 16255
531,657   465,753   12     1.28E-06  0 0 14 0 49 0 288 0 1758 0 10100 0 58357 0 337832 0 1960393 0
471,617   471,743   10     1.30E-06  1 1 3 14 24 58 142 326 801 1960 4629 11142 27171 64997 1568
573,621   423,675   11     1.32E-06  0 3 5 10 24 61 159 335 819 2058 4879 11861 28576 68401 1642
417,675   573,741   10     1.33E-06  1 0 8 0 56 0 289 0 1710 0 9936 0 57314 0 332144 0 1926986 0 1

Table 9: Best encoders for rate = 1/2, m = 8 (BER computed for AWGN at Eb/No = 4.11 dB)

Now we will search through all possible encoders for rate = 1/2, m = 8. I skipped m = 7, as m = 8 is used more often in practice. This time the search takes over a day to complete, even using the insight of Theorem 5. The search space was organized as two loops. The outer loop runs polynomial g1(D) through all possible m = 8 values from 257 to 511. The inner loop runs polynomial g2(D) over the reduced space from 256 to (g1 − 1). Our prior theorems have justified this reduced space. Table 9 shows the best encoders for m = 8. The (561,753) encoder, shown in blue highlight in the table, is the one published by Larsen and Conan, achieving the target BER. Again, the published one appears not to be the best at achieving a BER of 10^-6, but only by about 0.02 dB this time. Table 10 shows a detailed breakdown of the info WEF of the published m = 8 code on the right and the best code from Table 9 on the left. It shows, again, that the published code is hurt by its higher multiplicities.
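The loop structure of this search can be sketched as follows. This is an illustrative Python sketch (the actual search tools were written in Matlab); in the real search, each candidate pair would additionally be screened for catastrophicity ('iscatas') and have its weight spectra computed ('wtspec').

```python
def encoder_search_space(m):
    """Candidate (g1, g2) generator pairs for a rate-1/2, memory-m
    polynomial encoder, using the reduced space described above:
    g1 runs over all (m+1)-bit values with the top bit set,
    and g2 < g1, since swapping g1 and g2 yields an equivalent code."""
    lo, hi = 1 << m, (1 << (m + 1)) - 1   # e.g. 256 .. 511 for m = 8
    for g1 in range(lo + 1, hi + 1):      # outer loop: 257 .. 511
        for g2 in range(lo, g1):          # inner loop: 256 .. g1 - 1
            yield (g1, g2)

pairs = list(encoder_search_space(8))     # 32640 candidate pairs
```

For m = 8 this yields 255·256/2 = 32640 pairs instead of the full 255·256 ordered pairs, and the published (561,753) pair appears (as g1 = 0o753, g2 = 0o561) within the reduced space.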

Brian K. Butler

15

March, 2005

rate = 1/2, M = 8 (encoder polynomials are in octal)
          [523,731]                    [561,753]
  d     Bd       BER contribution    Bd       BER contribution
 11          4   2.04E-07                 0   --
 12         10   1.35E-07                33   4.46E-07
 13         39   1.40E-07                 0   --
 14        140   1.34E-07               281   2.69E-07
 15        411   1.05E-07                 0   --
 16        964   6.58E-08              2179   1.49E-07
 17       2549   4.67E-08                 0   --
 18       6896   3.39E-08             15035   7.39E-08
 19      18468   2.44E-08                 0   --
 20      48418   1.72E-08            105166   3.74E-08
 21     124303   1.19E-08                 0   --
 22     321726   8.32E-09            692330   1.79E-08
 total           9.26E-07                     9.92E-07

Table 10: Partial list of terms for computing the BER bound at an Eb/No of 4.11 dB for two rate = 1/2, m = 8 codes.
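Each entry of Table 10 is a single term of the union bound (Eq. 1.1). As a quick sanity check, the d = 11 term of the [523,731] column can be reproduced with a short Python sketch (illustrative; the report's own computations were done in Matlab):

```python
from math import erfc, sqrt

def q_func(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def ber_term(Bd, d, rate, ebno_db, k=1):
    """One term of the BER union bound: (Bd / k) * Q(sqrt(2 d R Eb/No))."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return (Bd / k) * q_func(sqrt(2.0 * d * rate * ebno))

# d = 11 term of the [523,731] code at Eb/No = 4.11 dB, per Table 10
term = ber_term(Bd=4, d=11, rate=0.5, ebno_db=4.11)   # ~2.04e-07
```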

The results of Table 9 lead me to propose the following heuristic: at the larger constraint lengths, it is not necessary to search even-valued generators. That is, all the good generator polynomials have a tap at each end of the shift register. This cuts the search space down by about a factor of 4. If, in the future, I wished to continue the search at larger constraint lengths, I would apply the following ideas to speed it up. (My searches computed both WEFs out to large distances for every polynomial pair.)

• A first-pass search to find dfree would reduce computation time. Calculating dfree does not require keeping histograms, just a single best metric at every state at every stage. Those codes with dfree within one or two of the maximum can then be investigated further.

• Writing some of the 'wtspec' code in the C language could speed it up significantly. I used the convolution function merely to shift over the elements in the vector, which is extremely wasteful of CPU cycles. Convolution in Matlab costs N*M floating-point multiplies; some pointer management could have accomplished the same shift.

• Use the last heuristic, and possibly search for other such heuristics as the constraint length increases.
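The first idea above can be sketched as a shortest-path (Dijkstra) search over the encoder state diagram, with no histograms kept. This is an illustrative Python sketch (the report's tools were in Matlab); the taps-as-integer-bits convention below is an assumption, though reversing the bit order leaves the weight spectrum unchanged.

```python
import heapq

def free_distance(gens, m):
    """Free distance of a rate-1/n convolutional code with polynomial
    generators `gens` (integers, bit j = tap on D^j) and memory m:
    minimum-weight path that leaves the all-zero state and remerges."""
    mask = (1 << m) - 1

    def step(state, u):
        # window holds inputs u_t .. u_{t-m}; bit j = input j steps ago
        window = (state << 1) | u
        w = sum(bin(window & g).count("1") & 1 for g in gens)
        return window & mask, w

    start, w0 = step(0, 1)          # force an immediate departure with a 1
    dist = {start: w0}
    heap = [(w0, start)]
    best = None
    while heap:
        d, s = heapq.heappop(heap)
        if d > dist.get(s, d):      # stale heap entry
            continue
        for u in (0, 1):
            s2, w = step(s, u)
            if s2 == 0:             # remerged with the zero state
                if best is None or d + w < best:
                    best = d + w
            elif d + w < dist.get(s2, float("inf")):
                dist[s2] = d + w
                heapq.heappush(heap, (d + w, s2))
    return best
```

For the (5,7) m = 2 code this returns 5, and for the (133,171) m = 6 code it returns 10, matching Table 8.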

It’s worth making a fairly obvious conclusion in this section on code selection. To optimize communication system performance, the published codes may not be the most applicable for a particular system design. Depending upon the operating BER and whether block error rate or bit error rate is more critical, finding the optimal convolutional code through searching can sometimes save small amounts of SNR at no additional system complexity.


5. Repeat Accumulate Code

The repeat-accumulate code held a curiosity for me. As was presented in class, the stand-alone accumulate code (rate = 1) merely rearranges the space of possible inputs. That is, there are 2^k possible binary input patterns of word length k, and these input weights are binomially distributed. The stand-alone accumulate code produces every possible binary output pattern of length n = k, so the output weight distribution must also be binomially distributed. So how can putting an outer code with a simple repeat function in front make things much better? (We know the interleaver itself doesn’t do anything to the weight.) I felt a quick look at the repeater’s input and output weight distributions for particular interleavers might shed some light on what is happening. Figure 4 contains a simple simulation of every possible codeword in a (15,5) repeat-accumulate code. It shows that the outer repeat function has spaced the original binomial distribution from the random source apart, leaving gaps. The combination of the interleaver and accumulator has made effective use of these gaps: they have been filled in, nicely shifting the distance upwards and creating something significantly better than a simple repeat code. (This simulation was run with the Oberg interleaver.)

[Figure 4 plot: (15,5) RA Code Weight Histogram (Oberg Intlv); accumulator input count (top) and accumulator output count (bottom) vs. weight 1 – 15]

Figure 4: Weight Distribution of every possible codeword in a (15,5) Repeat Accumulate code
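The brute-force enumeration behind Figure 4 is straightforward to reproduce. Below is an illustrative Python sketch; the identity interleaver here is a placeholder, not the Oberg interleaver of the figure, so the output histogram will differ, but the input histogram is interleaver-independent.

```python
from itertools import product

def ra_codeword(u, q, perm):
    """(q*len(u), len(u)) repeat-accumulate encoder: repeat each info
    bit q times, permute by perm, then accumulate (running XOR)."""
    rep = [b for b in u for _ in range(q)]   # outer repeat-q code
    il = [rep[p] for p in perm]              # interleaver
    out, acc = [], 0
    for b in il:
        acc ^= b                             # 1/(1+D) accumulator
        out.append(acc)
    return out

k, q = 5, 3
perm = list(range(k * q))                    # placeholder identity interleaver
in_hist, out_hist = {}, {}
for u in product((0, 1), repeat=k):
    w_in = q * sum(u)                        # accumulator input weight
    w_out = sum(ra_codeword(u, q, perm))     # codeword (output) weight
    in_hist[w_in] = in_hist.get(w_in, 0) + 1
    out_hist[w_out] = out_hist.get(w_out, 0) + 1
```

The input weights land only on multiples of q, with binomial multiplicities C(5,w); that is exactly the "spacing apart" visible in the top panel of Figure 4.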

Figure 5 contains the simulation extended to a (36,12) RA code, using a specific interleaver that I randomly chose to be [6 34 32 13 5 8 25 20 2 1 11 9 36 18 26 35 16 19 29 12 17 23 30 21 15 33 24 10 28 4 22 27 31 3 7 14]. This shows the effect much more clearly. The output histogram appears to be nearly binomial, centered at 18. One may intuitively speculate that as the block length grows this distribution closely approaches the random distribution used in Shannon’s theorem. Figure 6 shows the weight distribution as a CDF, which emphasizes what is happening at low weights. Upon close


examination, provided by Table 11, we see that the small-distance events have been improved significantly.

Weight                              3    4    5    6    7
Accumulator Input Multiplicity     12    0    0   66    0
Accumulator Output Multiplicity     0    2    0    4    3

Table 11: Weight Distribution of (36,12) Repeat Accumulate code

[Figure 5 plot: (36,12) RA Code Weight Histogram (random intlv); accumulator input count (top) and accumulator output count (bottom) vs. weight 0 – 40]

Figure 5: Weight Distribution of every possible codeword in a (36,12) Repeat Accumulate code


[Figure 6 plot: (36,12) RA Code Weight CDF (random intlv); curves for accumulator input and accumulator output, vertical log scale from 10^0 down to 10^-4]

Figure 6: Cumulative Weight Distribution of every possible codeword in a (36,12) Repeat Accumulate code

From the weight distributions found by exhaustive search, we can use the union bounding techniques taught in class, as applied to block codes, to plot an approximate upper bound on bit error rate. We can also use the theoretical ensemble IOWEF (which assumes the theoretical uniform interleaver) for the (qN, N) RA code (Divsalar, Jin, & McEliece, 1998):

A_{w,h} = \frac{\binom{N}{w} \binom{qN-h}{\lfloor qw/2 \rfloor} \binom{h-1}{\lceil qw/2 \rceil - 1}}{\binom{qN}{qw}}    (5.1)

The BER corresponding to this IOWEF can similarly be bounded and plotted. Both of these BER bounds are shown in Figure 7. Surprisingly, my randomly chosen interleaver seems to be doing significantly better than the average interleaver at high SNR.


[Figure 7 plot: BER bound vs. Eb/No for the (36,12) RA Code; curves for my interleaver and the uniform interleaver, 0 – 10 dB]

Figure 7: The union bound based BER vs. Eb/No for two (36,12) RA Codes

Now that we’ve gotten this far, it’s tempting to see how much better the repeat-accumulate codes get as the block length is increased. For the (3N, N) RA codes, the following values of N have been used to compute the IOWEF (Eq. 5.1): 15, 30, 60, 125, 250, and 500. There are some interesting numeric problems that arise when directly computing the binomial coefficients that appear in (Eq. 5.1). For instance, \binom{1030}{k} is greater than the maximum possible double-precision floating-point value, 1.798×10^308, for some k. Even getting the binomial coefficient to work well up to this point means taking care to order the multiplies and divides such that overflows are avoided as much as possible. My program pushes beyond the 1030 limit by instead collecting all terms to be multiplied and divided from all four binomial coefficients that appear in (Eq. 5.1) and then re-ordering them in some smart way before multiplying them all together. Alternately, one may extend a little further by truncating terms. I chose to go through all terms exhaustively. Following the concepts in class, the average information weight, \bar{w}_h, of the codewords of output Hamming weight h is computed numerically from the IOWEF as:

\bar{w}_h = \frac{\sum_{w=1}^{N} w A_{w,h}}{\sum_{w=1}^{N} A_{w,h}}    (5.2)
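A standard alternative to this careful ordering of multiplies and divides is to evaluate each binomial coefficient in the log domain via the log-gamma function, so intermediate values never overflow. An illustrative Python sketch (the report's own program was in Matlab):

```python
from math import exp, lgamma

def log_comb(n, k):
    """log of C(n, k); finite even where C(n, k) overflows a double."""
    if k < 0 or k > n:
        return float("-inf")
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def iowe(N, q, w, h):
    """Ensemble-average IOWEF A_{w,h} of Eq. 5.1, evaluated in the log domain."""
    lo = (log_comb(N, w)
          + log_comb(q * N - h, (q * w) // 2)       # floor(qw/2)
          + log_comb(h - 1, (q * w + 1) // 2 - 1)   # ceil(qw/2) - 1
          - log_comb(q * N, q * w))
    return exp(lo)
```

For instance, log_comb(1030, 515) is about 710, i.e., C(1030,515) itself just exceeds the double-precision range, yet the ratio A_{w,h} in Eq. 5.1 remains perfectly representable.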

This result is then used in the union bound provided in class to create Figures 8 through 10.


P_b \le \sum_{h \ge 1} \frac{A_h \bar{w}_h}{Nq} \, Q\!\left( \sqrt{\frac{2hRE_b}{N_0}} \right)    (5.3)
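Eqs. 5.1 through 5.3 can be combined into a single bound evaluation. An illustrative Python sketch (exact integer binomials via math.comb stand in here for the multiply/divide ordering tricks described above):

```python
from math import comb, erfc, sqrt

def q_func(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def ra_union_bound(N, q, ebno_db):
    """Union bound on BER for the (qN, N) RA ensemble, per Eq. 5.3,
    using the uniform-interleaver IOWEF of Eq. 5.1."""
    ebno = 10.0 ** (ebno_db / 10.0)
    R = 1.0 / q
    pb = 0.0
    for h in range(1, q * N + 1):
        # A_h * wbar_h collapses to sum over w of w * A_{w,h}  (Eq. 5.2)
        wa = sum(w * comb(N, w) * comb(q * N - h, (q * w) // 2)
                 * comb(h - 1, (q * w + 1) // 2 - 1) / comb(q * N, q * w)
                 for w in range(1, N + 1))
        pb += wa / (N * q) * q_func(sqrt(2.0 * h * R * ebno))   # Eq. 5.3 term
    return pb

pb_4db = ra_union_bound(12, 3, 4.0)   # bound for the (36,12) ensemble
```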

One can see drastic performance improvements with each doubling of block size at Eb/No values above 2.5 dB for rate = 1/3 and above 2 dB for rate = 1/4. These curves look very similar to those of the parallel-concatenated codes studied, with a cliff region and an error floor. However, for RA codes the cliff region is predicted by the union bounds, and the error floor is rather high at these small block sizes. One may speculate that, as the block size continues to grow, the BER curve will look like a cliff around 2.2 – 2.5 dB at rate = 1/3 and around 1.9 dB at rate = 1/4. In fact, the union bound performance threshold found by Divsalar, Jin, & McEliece is 2.2 dB for q = 3 and 1.93 dB for q = 4. Interestingly, for rate = 1/2, the performance improvements are much smaller with each doubling of block size. Before comparing to convolutional codes, there are two things to keep in mind:

• Other bounds, particularly the Viterbi-Viterbi bound, are lower than the union bound. The Viterbi-Viterbi bound predicts the "cutoff threshold" to be 1.112 dB for q = 3 and 0.313 dB for q = 4.

• These bounds assume maximum-likelihood sequence decoding, which is infeasible at even medium block lengths. Suboptimal decoders will therefore sacrifice some performance.

However, Divsalar et al. publish simulation results for an N = 16384, q = 3 RA code achieving a BER of 2×10^-5 at an Eb/No of about 1.0 dB, using a message-passing decoder structure running 30 iterations. Hence, they believe the cutoff for q = 3 to be less than 1 dB. In simulation, the N = 16384, q = 4 RA code achieved a BER of 1×10^-5 at an Eb/No of about 0.5 dB. Thus, at such large block sizes the rate 1/3 and 1/4 RA codes appear to be outperforming the corresponding convolutional codes (see Figs. 2 & 3) at any practical system BER requirement. The performance of the rate = 1/2 RA codes is more difficult to judge at this point. At the block lengths simulated, the convolutional codes are clearly superior. At some block length there may be a crossover. However, we have no q = 2 RA code Viterbi-Viterbi bounds nor simulation results from Divsalar et al. to reference, so our comparison to convolutional codes is inconclusive.


[Figure 8 plot: BER bound vs. Eb/No for several (2N, N) RA Codes; curves from N = 15 through N = 1000, Shannon limit marked]

Figure 8: The union bound based BER vs. Eb/No for several rate = 1/2 RA Codes


[Figure 9 plot: BER bound vs. Eb/No for several (3N, N) RA Codes; curves for N = 15, 30, 60, 125, 250, 500, Shannon limit marked]

Figure 9: The union bound based BER vs. Eb/No for several rate = 1/3 RA Codes


[Figure 10 plot: BER bound vs. Eb/No for several (4N, N) RA Codes; curves for N = 15, 30, 60, 125, 250, Shannon limit marked]

Figure 10: The union bound based BER vs. Eb/No for several rate = 1/4 RA Codes

References

J. Conan, "The Weight Spectra of Some Short Low-Rate Convolutional Codes," IEEE Trans. Commun., vol. COM-32, pp. 1050 – 1053, 1984.

D. Divsalar, H. Jin, and R. J. McEliece, "Coding Theorems for 'Turbo-Like' Codes," Proc. 36th Allerton Conf. on Communication, Control, and Computing, pp. 201 – 220, 1998.

K. Larsen, "Short Convolutional Codes with Maximal Free Distance for Rates 1/2, 1/3, and 1/4," IEEE Trans. Inform. Theory, vol. IT-19, pp. 371 – 372, 1973.

S. Lin and D. Costello, Error Control Coding, Pearson Prentice Hall, Upper Saddle River, NJ, 2004.

J. G. Proakis, Digital Communications, McGraw-Hill, New York, NY, 2001.


C. Schlegel and L. Perez, Trellis and Turbo Coding, IEEE Press, Piscataway, NJ, 2004.

P. Siegel, "ECE 259BN Trellis-Coded Modulation," course lecture notes, Jan – Mar 2005.

Note: J. P. Odenwalder, "Optimal decoding of convolutional codes," Ph.D. dissertation, appears to be an important early reference on searching for good convolutional codes, but I do not yet have a copy.

Appendix A – ‘wtspec’ source code Code submitted to Prof. Siegel, but not posted.

Appendix B – ‘iscatast’ source code Code submitted to Prof. Siegel, but not posted.

