IJRIT International Journal of Research in Information Technology, Volume 2, Issue 3, March 2014, Pg: 554-561

International Journal of Research in Information Technology (IJRIT) www.ijrit.com

ISSN 2001-5569

Implementation of Viterbi Decoder for a High Rate Convolutional Code Using Pre-computational Logic for TCM Systems

K. Ajay Kumar¹, M.Tech, Email id: [email protected]
M. Valarmathi², Assistant Professor (Sr.G), Email id: [email protected]

Abstract— This paper presents an efficient design of a Viterbi decoder (VD) for trellis-coded modulation (TCM). A VD uses the Viterbi algorithm to decode a bit stream that has been encoded by a convolutional encoder. Here, an efficient architecture based on a pre-computation technique incorporated with the T-algorithm is proposed for the VD, which can effectively reduce power consumption without much degradation of the decoding speed. Implementation results of a VD for a rate-3/4 convolutional code show a reduction in power consumption of 70% without any performance loss, while the degradation in clock speed is negligible.

Index Terms— Viterbi decoder (VD), Add-Compare-Select Unit (ACSU), Trellis-coded modulation (TCM), Path metric unit (PMU).

I. INTRODUCTION

The Viterbi decoder is the most efficient decoder for convolutional codes and is commonly used in a wide range of communications and data storage applications. TCM combines error-correcting coding and modulation in digital communication systems, and its decoder is a Viterbi decoder for a high-rate convolutional code, which is complex in nature. In order to decrease the computational complexity as well as the power consumption, low-power techniques should be exploited for the Viterbi decoder in a TCM decoder. Power reduction in VDs can be achieved by using the T-algorithm, which searches only for the optimal path metric (PM), that is, the maximum or minimum value of all PMs [1], [2]. However, searching for the optimal path metric in the feedback loop reduces the decoding speed. In order to achieve high speed, it is possible to implement a fully parallel comparator architecture with 2^(K-1) inputs. However, this introduces significant hardware overhead, which conflicts with the design goal of low power and low complexity. Several schemes have been proposed [2] for high-speed implementation of the T-algorithm, but all of them suffer from severe degradation of bit-error-rate (BER) performance due to the inherent drifting error between the estimated optimal PM and the accurate one. The encoded data in TCM systems are always associated with 8-ary phase shift keying (8PSK) or 16/64-ary quadrature amplitude modulation (16/64-QAM) schemes through a constellation mapper.

The remainder of this paper is organized as follows. Section II gives brief information about the general Viterbi decoder. Section III presents the pre-computation architecture incorporated with the T-algorithm and discusses the choice of pre-computation steps. Section IV presents the design of the low-power high-speed Viterbi decoder. Simulation and synthesis results are reported in Section V.
II. VITERBI DECODER

A typical functional block diagram of a Viterbi decoder is shown in Fig. 1. It consists of three main units. First, the branch metric unit (BMU) calculates the branch metrics from the received symbols; in a TCM decoder, this module is replaced by a transition metric unit (TMU). Second, the add-compare-select unit (ACSU) receives the branch metrics and the stored state metrics. An ACS module adds each incoming branch metric to the corresponding state metric, compares the results, and selects the smaller one, then updates the state metric storage with the selected value. Finally, to decode the source bits along the final survivor path, the decision bits are stored in and retrieved from the survivor path memory unit (SMU).
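The ACS recursion described above can be sketched in a few lines of Python. This is only a behavioral illustration of what the hardware does per state per time step; the metric values are made up for the example.

```python
# Sketch of one add-compare-select (ACS) step for a single trellis state,
# assuming Euclidean-distance metrics, where the smaller metric survives.
# The state/branch values here are illustrative, not the paper's trellis.

def acs_step(prev_pms, bms):
    """prev_pms: path metrics of the p predecessor states;
    bms: branch metrics of the p incoming branches.
    Returns (new_pm, decision), where decision indexes the survivor branch."""
    candidates = [pm + bm for pm, bm in zip(prev_pms, bms)]
    new_pm = min(candidates)
    decision = candidates.index(new_pm)  # stored by the SMU for trace-back
    return new_pm, decision

new_pm, decision = acs_step([5, 3, 7, 4], [2, 6, 1, 3])
# candidates = [7, 9, 8, 7]; the minimum 7 is first reached at index 0
```

The `decision` index is what the SMU records; the `new_pm` value overwrites the state metric storage for the next trellis step.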


Fig. 1. Functional block diagram of the Viterbi decoder

In this paper, we propose an add-compare-select unit architecture based on pre-computation for VDs using the T-algorithm. The T-algorithm requires extra computations in the ACSU loop for calculating the actual optimal path metric and puncturing states; therefore, a straightforward implementation of the T-algorithm will reduce the decoding clock speed. The key to improving the clock speed of the T-algorithm is finding the optimal path metric quickly. Since the optimal path metric obtained here is accurate, the new architecture keeps the same BER performance as the conventional T-algorithm and is well suited for high-rate codes.

III. THE PRE-COMPUTATION ARCHITECTURE

A. Pre-computation Algorithm

The basic idea of the pre-computation algorithm was presented in [3]. Consider a VD for a convolutional code with constraint length k, where each state receives p candidate paths. First, we expand the PMs at the current time slot n, PMs(n), as a function of PMs(n-1) to form a look-ahead computation of the optimal PM, PMopt(n). If the branch metrics are calculated based on the Euclidean distance, PMopt(n) is the minimum value of PMs(n), obtained as

PMopt(n) = min{ PM_0(n), PM_1(n), ..., PM_{2^(k-1)-1}(n) }
         = min{ min[PM_{0,0}(n-1) + BM_{0,0}(n), PM_{0,1}(n-1) + BM_{0,1}(n), ..., PM_{0,p}(n-1) + BM_{0,p}(n)],
                min[PM_{1,0}(n-1) + BM_{1,0}(n), PM_{1,1}(n-1) + BM_{1,1}(n), ..., PM_{1,p}(n-1) + BM_{1,p}(n)],
                ...,
                min[PM_{2^(k-1)-1,0}(n-1) + BM_{2^(k-1)-1,0}(n), ..., PM_{2^(k-1)-1,p}(n-1) + BM_{2^(k-1)-1,p}(n)] }
         = min{ PM_{0,0}(n-1) + BM_{0,0}(n), ..., PM_{0,p}(n-1) + BM_{0,p}(n),
                PM_{1,0}(n-1) + BM_{1,0}(n), ..., PM_{1,p}(n-1) + BM_{1,p}(n),
                ...,
                PM_{2^(k-1)-1,0}(n-1) + BM_{2^(k-1)-1,0}(n), ..., PM_{2^(k-1)-1,p}(n-1) + BM_{2^(k-1)-1,p}(n) }.   (1)

To reduce the computational overhead caused by the look-ahead computation, we group the states into several clusters. The trellis butterflies of a VD usually have a symmetric structure; in other words, the states can be grouped into m clusters, where all clusters have the same number of states and all states in the same cluster are extended by the same BMs. Thus, Eq. (1) can be rewritten as

PMopt(n) = min{ min(PMs(n-1) in cluster 1) + min(BMs(n) for cluster 1),
                min(PMs(n-1) in cluster 2) + min(BMs(n) for cluster 2),
                ...,
                min(PMs(n-1) in cluster m) + min(BMs(n) for cluster m) }.   (2)
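As a quick sanity check of the decomposition in Eq. (2), the following Python sketch compares the flat minimum of Eq. (1) against the clustered form on randomly generated metrics. The cluster count, state count, and metric ranges are illustrative assumptions, not values taken from the paper.

```python
# Numerical check that the clustered form (Eq. 2) matches the flat form (Eq. 1)
# when every state in a cluster is extended by the same set of branch metrics.
import random

random.seed(0)
m = 4                                   # number of clusters (illustrative)
states_per_cluster = 16                 # 64 states in total
pms = [[random.randint(0, 50) for _ in range(states_per_cluster)]
       for _ in range(m)]               # PMs(n-1), grouped by cluster
bms = [[random.randint(0, 20) for _ in range(4)]
       for _ in range(m)]               # BMs(n) shared within each cluster

# Eq. (1): flat minimum over every PM + BM combination
flat = min(pm + bm for c in range(m) for pm in pms[c] for bm in bms[c])

# Eq. (2): per-cluster minima first, then one m-way minimum
clustered = min(min(pms[c]) + min(bms[c]) for c in range(m))

assert flat == clustered
```

The equality holds because, within one cluster, the PM term and the BM term are minimized independently, which is exactly what licenses moving the inner min past the addition.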

The min(BMs) for each cluster can easily be obtained from the BMU or TMU, and the min(PMs) at time n-1 in each cluster can be pre-calculated at the same time the ACSU is updating the new PMs for time n. Theoretically, by continuously decomposing PMs(n-1), PMs(n-2), ..., the pre-computation scheme can be extended to q steps, where q is any positive integer less than n. Hence, PMopt(n) can be calculated directly from PMs(n-q) in q cycles. Pre-computation is usually inefficient for low-rate convolutional codes, because the number of states in the VD is much greater than the number of BMs. In that case, to maintain an acceptable clock speed, at least four steps of pre-computation are needed, which causes computational overhead and requires a large amount of hardware. However, for high-rate convolutional codes, the number of BMs is also large and each state receives more than two paths. In this case only one or two steps of pre-computation are necessary, since the regular update of new path metrics also takes a long time.

B. Choosing Pre-computation Steps

A design example is shown in [3], where q-step pre-computation is pipelined into q stages; as q increases, the logic delay of each stage is continuously reduced, and as a result the decoding speed of the low-power Viterbi decoder is improved. However, after reaching a certain number of steps, qb, further pre-computation does not yield additional benefit because of the inherent iteration bound of the ACSU loop. It is therefore worth discussing the optimal number of pre-computation steps. In a TCM decoder, the convolutional code usually has a coding rate of R/(R+1), R = 2, 3, 4, ..., so that in Eq. (1) p = 2^R, and the logic delay of the ACSU is T_ACSU = T_adder + T_p-in_comp, where T_adder is the logic delay of the adder that computes the PMs of the candidate paths reaching the same state, and T_p-in_comp is the logic delay of the p-input comparator that determines the survivor path (the path with the minimum metric) for each state. If the T-algorithm is employed in the VD, the iteration bound is slightly longer than T_ACSU, because there is another 2-input comparator in the loop to compare the new PMs with a threshold value obtained from the optimal PM and a preset T, as shown in Eq. (3):

T_bound = T_adder + T_p-in_comp + T_2-in_comp.   (3)

For the pre-computation in each pipelining stage, we limit the comparison to be among only p or 2p metrics. To simplify the evaluation, we assume that each stage reduces the number of metrics to 1/p (or 2^-R) of its input metrics, as shown in Fig. 2. The smallest number of pre-computation steps qb meeting the theoretical iteration bound should satisfy (2^R)^qb ≥ 2^(k-1). Therefore qb ≥ (k-1)/R, and qb is expressed in Eq. (4):

qb = ⌈(k-1)/R⌉.   (4)
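Eq. (4) is straightforward to evaluate in code; the small Python helper below (illustrative only) reproduces the two cases discussed in this paper, the rate-3/4 code with k = 7 and the low-rate case with R = 1.

```python
# Eq. (4) as code: the smallest pre-computation depth q_b meeting the
# iteration bound, q_b = ceil((k-1)/R), for a rate-R/(R+1) code with
# constraint length k.
from math import ceil

def qb(k, R):
    return ceil((k - 1) / R)

print(qb(7, 3))  # rate-3/4 code, 64 states -> 2 steps
print(qb(7, 1))  # rate-1/2 case (R = 1): q_b = k - 1 = 6, too deep to be practical
```

This matches the later observation that two-step pre-computation is the optimal choice for the rate-3/4, 64-state code, while low-rate codes would need k-1 steps.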

Fig. 2. Topology of pre-computation pipelining

The most important factor that should be carefully evaluated is the computational overhead. Most of it comes from adding branch metrics (BMs) to the metrics of each stage, as shown in Eq. (2). In other words, if there are m remaining metrics after comparison in a stage, the computational overhead from this stage is at least m addition operations. The exact overhead varies from case to case depending on the convolutional code's trellis diagram. Again, to simplify the evaluation, we consider a convolutional code with constraint length k and q pre-computation steps, and assume that each remaining metric causes a computational overhead of one addition operation. In this case the number of metrics reduces at a ratio of 2^((k-1)/q) per stage, and the overall computational overhead is given as

N_overhead = 2^0 + 2^((k-1)/q) + 2^(2(k-1)/q) + ... + 2^((q-1)(k-1)/q)


           = (2^(k-1) - 1) / (2^((k-1)/q) - 1).   (5)
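The closed form of Eq. (5) can be checked numerically against the geometric-series sum it was derived from. The Python sketch below does this for k = 7, the paper's example; the code itself is only an illustration.

```python
# Check that the geometric-series sum in the text equals the closed form of
# Eq. (5): N_overhead = (2^(k-1) - 1) / (2^((k-1)/q) - 1). Floats are used
# because (k-1)/q need not be an integer.

def overhead_sum(k, q):
    return sum(2 ** (i * (k - 1) / q) for i in range(q))

def overhead_closed(k, q):
    ratio = 2 ** ((k - 1) / q)
    return (2 ** (k - 1) - 1) / (ratio - 1)

for q in range(1, 7):
    assert abs(overhead_sum(7, q) - overhead_closed(7, q)) < 1e-9
```

Printing the values for q = 1, 2, ..., 6 reproduces the exponential growth plotted in Fig. 3: deeper pipelines leave more surviving metrics per stage, each of which costs an addition.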

The estimated computational overhead according to Eq. (5) for k = 7 (64 states) is shown in Fig. 3, where a linear curve is included as a reference. Fig. 3 shows that the computational overhead increases exponentially. In a real design, the overhead increases even faster than what Eq. (5) gives once other factors are taken into consideration. Therefore, a small number of pre-computation steps is preferred even though the iteration bound may not be fully satisfied. In most cases, one-step or two-step pre-computation is a good choice.

Fig. 3. The estimated computational overhead of 64 states as a function of pre-computation steps

The above analysis shows that pre-computation is not suitable for low-rate convolutional codes (rate 1/Rc, Rc = 2, 3, ...), because it needs more than two steps to effectively reduce the critical path (in that case R = 1 in Eq. (4) and qb is k-1). However, for TCM systems, where high-rate convolutional codes are always employed, two steps of pre-computation can achieve the iteration bound or make a big difference in terms of clock speed, while the computational overhead remains small.

IV. DESIGN OF LOW-POWER HIGH-SPEED VITERBI DECODER

The high-rate convolutional code employed in the TCM system is shown in Fig. 4; it is the code used in the 4-dimensional 8PSK TCM system described in [4]. In our preliminary work, the architecture design and BER performance of the ACSU have been discussed in [3].

Fig. 4. Rate-3/4 convolutional encoder

The computational complexity and BER performance of a VD employing the T-algorithm with different threshold values over an additive white Gaussian noise channel are shown in Fig. 5. The simulation is based on the 4-dimensional 8PSK TCM system employing the rate-3/4 code [5]. After taking the other uncoded bits into consideration, the TCM system has an overall coding rate of 11/12. Compared with the ideal Viterbi algorithm, the threshold value T_pm can be lowered to 0.3 with less than 0.1 dB of performance loss, while the computational complexity is reduced by up to 90%.


Fig. 5. BER performance of the T-algorithm

A. Two-Step Pre-computation T-Algorithm

The 2-step pre-computation is the optimal choice for the rate-3/4 code VD. For ease of discussion, we define the left-most register in Fig. 4 as the most significant bit (MSB) and the right-most register as the least significant bit (LSB). The 64 path metrics (PMs) are labeled from 0 to 63. The 2-step pre-computation is expressed as Eq. (6) [3]:

PMopt(n) = min[ min{ min(cluster0(n-2)) + min(BMG0(n-1)),
                     min(cluster1(n-2)) + min(BMG1(n-1)),
                     min(cluster2(n-2)) + min(BMG2(n-1)),
                     min(cluster3(n-2)) + min(BMG3(n-1)) } + min(even BMs(n)),
                min{ min(cluster0(n-2)) + min(BMG1(n-1)),
                     min(cluster1(n-2)) + min(BMG0(n-1)),
                     min(cluster2(n-2)) + min(BMG2(n-1)),
                     min(cluster3(n-2)) + min(BMG3(n-1)) } + min(odd BMs(n)) ],

where
cluster0 = {PMm | 0 ≤ m ≤ 63, m mod 4 = 0};
cluster1 = {PMm | 0 ≤ m ≤ 63, m mod 4 = 1};
cluster2 = {PMm | 0 ≤ m ≤ 63, m mod 4 = 2};
cluster3 = {PMm | 0 ≤ m ≤ 63, m mod 4 = 3};
BMG0 = {BMm | 0 ≤ m ≤ 15, m mod 4 = 0};
BMG1 = {BMm | 0 ≤ m ≤ 15, m mod 4 = 1};
BMG2 = {BMm | 0 ≤ m ≤ 15, m mod 4 = 2};
BMG3 = {BMm | 0 ≤ m ≤ 15, m mod 4 = 3}.

(6)

The functional block diagram of the VD with the 2-step pre-computation T-algorithm is shown in Fig. 6. The branch metric generator (BMG) calculates the minimum value of each BM group in the BMU or TMU, which is passed to the threshold generator unit (TGU) to calculate PMopt + T. The new path metrics (PMs) and PMopt + T are then compared in the purge unit (PU).
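The purge unit's threshold test can be sketched as follows. This is a behavioral Python model rather than the hardware implementation, and the metric values and T are made up; the '1' = enabled flag convention follows the priority-encoder discussion later in the paper.

```python
# Sketch of the purge step: with the T-algorithm, states whose new path
# metric exceeds PM_opt + T are disabled (flag cleared), so their SMU
# registers can be clock-gated to save power.

def purge(pms, pm_opt, T):
    """Return enable flags: 1 keeps a state active, 0 purges it."""
    threshold = pm_opt + T
    return [1 if pm <= threshold else 0 for pm in pms]

flags = purge([10, 14, 11, 25, 30, 12], pm_opt=10, T=5)
# threshold = 15, so states 0, 1, 2, and 5 stay enabled
```

A larger T keeps more states active (better BER, more computation); a smaller T purges more aggressively, which is the complexity/performance trade-off plotted in Fig. 5.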


Fig. 6. VD with 2-step pre-computation T-algorithm

The implementation of the VD with the 2-step pre-computation T-algorithm is shown in Fig. 7, where the "MIN 16" unit for finding the minimum value in each cluster is constructed with two stages of 4-input comparators. Calculating PMopt(n) from PMs(n-2) means the calculation can be completed within two cycles; thus the process is pipelined into two stages, as indicated by the dashed lines in Fig. 7. Again, we need to examine the critical path of each stage. The critical paths of the first and second stages are expressed in Eq. (7) and Eq. (8):

T(stage 1)_2-step-pre-T = T_adder + 2·T_4-in_comp   (7)
T(stage 2)_2-step-pre-T = T_adder + T_4-in_comp + 2·T_2-in_comp   (8)

This architecture has been optimized to meet the iteration bound [3]. Compared with the conventional T-algorithm, the computational overhead of this architecture is 12 addition operations and comparisons, which is slightly more than the number obtained from the evaluation in Eq. (5).

Fig. 7. Implementation of VD with 2-step pre-computation T-algorithm

B. SMU Design

In this section, we discuss the SMU design when the T-algorithm is employed. There are two types of SMU in the literature: register exchange (RE) and trace back (TB). In a regular VD without any low-power techniques,

Fig. 8. Architecture of the 64-to-6 priority encoder


the SMU always decodes the output data from a fixed state (arbitrarily selected in advance) if the RE scheme is used, or traces back the survivor path from the fixed state if the TB scheme is used, for low-complexity purposes. For a VD incorporating the T-algorithm, no state is guaranteed to be active at all clock cycles, so it is impossible to appoint a fixed state for either outputting the decoded bit (RE scheme) or starting the trace-back process (TB scheme). In the conventional implementation of the T-algorithm, the decoder can use the optimal state (the state with PMopt), which is always enabled, to output or trace back data, since the process of searching for PMopt finds the index of the optimal state as a byproduct. However, when the estimated PMopt is used [2], or, as in our case, when PMopt is calculated from the PMs at the previous time slot, it is difficult to find the index of the optimal state. A practical method is to find the index of an enabled state through a 2^(k-1)-to-(k-1) priority encoder. Suppose that we have labeled the states from 0 to 63. The output of the priority encoder would be the unpurged state with the lowest index. Assuming the purged states have the flag '0' and the other states the flag '1', the truth table of such a priority encoder is shown in Table I, where 'flag' is the input and 'index' is the output. Implementing such a table directly is not trivial. In our design, we employ an efficient architecture for the 64-to-6 priority encoder based on three 4-to-2 priority encoders, as shown in Fig. 8. The 64 flags are first divided into 4 groups of 16 flags each. The priority encoder at level 1 detects which group contains at least one '1' and determines index[5:4]. Then MUX2 selects one group of flags based on index[5:4]. The input of the priority encoder at level 2 is computed from the output of MUX2 by 'OR' operations. We can also reuse the intermediate results by introducing another MUX (MUX1).
The output of the priority encoder at level 2 is index[3:2]. Again, index[3:2] selects 4 flags (MUX3) as the input of the priority encoder at level 3. Finally, the last encoder determines index[1:0].

TABLE I. Truth table of the 64-to-6 priority encoder

TABLE II. Truth table of 4-to-2 priority encoder

Implementing the 4-to-2 priority encoder is much simpler than implementing the 64-to-6 priority encoder directly. Its truth table is shown in Table II, and the corresponding logic is given in Eq. (9) and Eq. (10), where ¬ denotes logical inversion:

O[0] = ¬I[0]·(I[1] + ¬I[2]·I[3])   (9)
O[1] = ¬I[0]·¬I[1]·(I[2] + I[3])   (10)
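A behavioral model of the three-level encoder of Fig. 8 can be written directly from this description. The Python sketch below is illustrative only; the all-zero input (no state enabled), which the truth tables leave undefined, is arbitrarily mapped to 0 here.

```python
# Sketch of the hierarchical 64-to-6 priority encoder of Fig. 8, built from
# three 4-to-2 priority encoders (lowest set index wins, matching the
# behavior of Eqs. (9)-(10)).

def enc4(bits):
    """4-to-2 priority encoder: index of the lowest-numbered '1' in bits[0..3]."""
    for idx in range(4):
        if bits[idx]:
            return idx
    return 0  # undefined when no flag is set; 0 chosen arbitrarily here

def enc64(flags):
    """flags: 64 enable bits; returns the lowest-numbered enabled state."""
    groups16 = [flags[16 * g: 16 * (g + 1)] for g in range(4)]
    hi = enc4([int(any(g)) for g in groups16])        # level 1 -> index[5:4]
    sub4 = [groups16[hi][4 * s: 4 * (s + 1)] for s in range(4)]
    mid = enc4([int(any(s)) for s in sub4])           # level 2 -> index[3:2]
    lo = enc4(sub4[mid])                              # level 3 -> index[1:0]
    return (hi << 4) | (mid << 2) | lo

flags = [0] * 64
flags[37] = flags[50] = 1
print(enc64(flags))  # prints 37, the lowest-indexed enabled state
```

The `any(...)` reductions on the 16-flag groups and 4-flag subgroups model the 'OR' operations feeding the level-2 and level-3 encoders, and the slicing by `hi` and `mid` models MUX2 and MUX3.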

V. IMPLEMENTATION RESULTS

The full-trellis VD, the conventional T-algorithm, and the 2-step pre-computation architecture were modeled in Verilog. The RE scheme with a survival length of 42 is used for the SMU, and the register arrays associated with the purged states are clock-gated to reduce power consumption in the SMU. For a more detailed comparison, we implemented the ACSU in FPGA using several different schemes; the synthesis results are summarized in Table III. We then focused on the power comparison between the full-trellis VD and the 2-step pre-computation architecture. The power consumption of these two designs is estimated using Synopsys PrimePower at a decoding rate of 200 Mbps. The results are tabulated in Table IV.

TABLE III. SYNTHESIS RESULTS


TABLE IV. POWER RESULTS

VI. CONCLUSION

In this paper, we proposed a high-speed, low-power VD for TCM decoders. Power consumption in the VD is greatly reduced by using a pre-computation architecture incorporated with the T-algorithm. We discussed the pre-computation algorithm, where the optimal number of pre-computation steps is calculated and analyzed. Finally, FPGA synthesis and power results have been verified and compared with the full-trellis VD, showing that the pre-computation VD can reduce power consumption by 70%.

REFERENCES

[1] J. Jin and C.-Y. Tsui, "Low-power limited-search parallel state Viterbi decoder implementation based on scarce state transition," IEEE Trans. VLSI Syst., vol. 15, no. 10, pp. 1172-1176, Oct. 2007.
[2] F. Sun and T. Zhang, "Low power state-parallel relaxed adaptive Viterbi decoder design and implementation," in Proc. IEEE ISCAS, pp. 4811-4814, May 2006.
[3] J. He, H. Liu, and Z. Wang, "A fast ACSU architecture for Viterbi decoder using T-algorithm," in Proc. 43rd IEEE Asilomar Conf. on Signals, Systems and Computers, pp. 231-235, Nov. 2009.
[4] "Bandwidth-Efficient Modulations," CCSDS 401(3.3.6) Green Book, April 2003.
[5] J. He, Z. Wang, and H. Liu, "An efficient 4-D 8PSK TCM decoder architecture," IEEE Trans. VLSI Syst., vol. 18, no. 5, pp. 808-817, May 2010.
