Soft-Decision List Decoding with Linear Complexity for the First-Order Reed-Muller Codes

Ilya Dumer (1)
University of California, Riverside, CA 92521
Email: [email protected]

Grigory Kabatiansky (2)
Inst. for Info. Transmission Problems, Moscow, Russia, and INRIA, Rocquencourt, France
Email: [email protected]

Cédric Tavernier (3)
THALES Communications, Colombes, France
Email: [email protected]

(1) Research was supported by NSF grants CCF-0622242 and CCF-0635339.
(2) Research was supported by RFFI grant 03-01-00098 of the Russian Foundation for Fundamental Research.
(3) Research was supported by DISCREET, IST project no. 027679 of the European Commission's Information Society Technology 6th Framework Programme.

Abstract—Soft-decision decoding on a memoryless channel is considered for the first-order Reed-Muller codes RM(1, m) of length $2^m$. We assume that different positions $j$ of the received binary vector $y$ can be corrupted by errors of varying weight $w_j$. The generalized Hamming distance between the vector $y$ and any binary vector $c$ is then defined as the sum of the weighted differences $w_j |y_j - c_j|$ taken over all $n$ positions. We obtain a tight upper bound $\hat{L}_T$ on the number of codewords located within generalized Hamming distance $T$ from the vector $y$, and design a decoding algorithm that outputs this list of codewords with complexity $O(n \ln^2 \hat{L}_T)$. In particular, all error weights $w_j$ equal 1 if this combinatorial model is applied to a binary symmetric channel. In this case, the well-known Green algorithm performs full maximum likelihood decoding of RM(1, m) and requires $O(n \ln^2 n)$ bit operations, whereas the Litsyn-Shekhovtsov algorithm operates within the bounded-distance decoding radius $n/4 - 1$ with linear complexity $O(n)$. We close the performance-complexity gap between these two algorithms. Namely, for any fixed $\epsilon \in (0, 1/2)$, our algorithm outputs the complete list of codewords within the decoding radius $n(1/2 - \epsilon)$ with linear complexity of order $n \ln^2 \epsilon$.

I. INTRODUCTION

Decoding of the first-order Reed-Muller codes RM(1, m) has been addressed in many papers since the 1960s. One algorithm, designed by Green [1], performs maximum likelihood decoding of the codes RM(1, m) and finds the closest codeword(s) in the Hamming metric with complexity of order $n \ln^2 n$ bit operations, where $n = 2^m$ is the code length. Another algorithm, designed by Litsyn and Shekhovtsov [2], performs bounded-distance decoding and corrects up to $n/4 - 1$ errors with linear complexity $O(n)$. In the area of probabilistic decoding, a major breakthrough was achieved by Goldreich and Levin [3]. Their algorithm takes any received vector and outputs the list of codewords of RM(1, m) within a decoding radius $n(1/2 - \epsilon)$ with high probability $1 - 2^{-k}$ and low poly-logarithmic complexity $\mathrm{poly}(mk/\epsilon)$ for any $k > 0$ and $\epsilon \in (0, 1/2)$. Later, this algorithm was substantially generalized to non-binary RM codes in [4].

Recent developments for RM codes have turned to efficient deterministic list decoding algorithms. The algorithm of [6] applies the Guruswami-Sudan algorithm [5] to BCH codes (as subfield subcodes of RS codes) and then to the 1-shortened RM(s, m) codes embedded therein. For the codes RM(1, m), this algorithm yields the list of codewords within the decoding radius $n(0.293 - \epsilon)$ and has complexity $O(n^3 \epsilon^{-6})$. Another, more efficient, algorithm [7] corrects up to $n(\frac{1}{2} - \epsilon)$ errors with linear complexity $O(n\epsilon^{-2})$ for any $\epsilon \in (0, 1/2)$, and therefore allows decoding arbitrarily close to the maximum decoding radius of $n/2$.

This paper advances the results of [7] in two different directions. First, the latter algorithm is refined to use only $O(n \ln^2 \epsilon)$ bit operations. It is shown in [7] that for a vanishing $\epsilon$ of order $n^{-1/2}$, the output list can include as many as $O(n)$ codewords, each of which is defined by $\ln n + 1$ information bits. Thus, the newly presented algorithm has complexity that closely approaches the bit size $O(n \ln n)$ of its output. Also, in this specific case of $\epsilon \sim n^{-1/2}$, the algorithm has the same complexity order as the Green algorithm.

Secondly, the new algorithm is extended to an arbitrary memoryless semi-continuous channel. On this channel, an original codeword from $\mathbb{F}_2^n$ is received as some vector $z$ taken from a larger space (usually $\mathbb{R}^n$ or $\mathbb{F}_{2^s}^n$). The decoding result is defined in terms of maximum likelihood. Equivalently, given any symbol $z_j$ in position $j \in \{0, ..., n-1\}$, one can first choose the more probable binary bit $y_j$ instead of its complement $\bar{y}_j$ and define the error weight $w_j$ as
$$w_j = \ln \frac{\Pr\{y_j \mid z_j\}}{\Pr\{\bar{y}_j \mid z_j\}} = \left| \ln \frac{\Pr\{1 \mid z_j\}}{\Pr\{0 \mid z_j\}} \right|.$$
Thus, an output $z$ can be replaced with a binary vector $y \in \mathbb{F}_2^n$ and a vector $w = (w_0, ..., w_{n-1})$ of error weights. Then, given the output pair $y, w$, we introduce the generalized Hamming distance
$$D(y, c) = \sum_{j=0}^{n-1} w_j \cdot |y_j - c_j| \qquad (1)$$

between the received vector $y$ and any codeword $c$. Thus, the codeword $c$ with the maximum posterior probability yields

the smallest distance $D(y, c)$, and soft-decision decoding is performed in terms of the generalized Hamming distance. In the sequel, all sums over the integer $j$ are taken over the entire range $[0, n-1]$, unless stated otherwise. Without loss of generality, we divide all the weights $w_j$ by their maximum $\max_j\{w_j\}$ and assume that $w_j \in [0, 1]$ for every position $j$. We shall also use the notation
$$W_1 = \sum_j w_j, \qquad W_2 = \sum_j w_j^2.$$
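To make the channel model concrete, the following sketch (ours, not part of the paper) derives the pair $(y, w)$ from assumed per-position posteriors and evaluates $D(y, c)$, $W_1$, and $W_2$ for a candidate codeword. The function names and the example posteriors are illustrative assumptions.

```python
import numpy as np

def hard_decisions_and_weights(p1):
    """Given posteriors p1[j] = Pr{bit = 1 | z_j}, return the hard-decision
    vector y and the error weights w, normalized to [0, 1] (illustrative)."""
    p1 = np.asarray(p1, dtype=float)
    y = (p1 > 0.5).astype(int)               # more probable bit per position
    llr = np.abs(np.log(p1 / (1.0 - p1)))    # w_j = |ln(Pr{1|z_j}/Pr{0|z_j})|
    return y, llr / llr.max()

def generalized_distance(y, c, w):
    """D(y, c) = sum_j w_j * |y_j - c_j|, the generalized distance (1)."""
    return float(np.sum(w * np.abs(y - c)))

# toy example with n = 8 assumed posteriors
p1 = np.array([0.9, 0.2, 0.6, 0.99, 0.35, 0.51, 0.1, 0.8])
y, w = hard_decisions_and_weights(p1)
W1, W2 = w.sum(), (w ** 2).sum()
c = np.zeros(8, dtype=int)                   # the all-zero codeword
print(y, w.round(2), generalized_distance(y, c, w), W1, W2)
```

Note that on a binary symmetric channel all posteriors have the same magnitude, so after normalization every $w_j = 1$ and $D$ reduces to the ordinary Hamming distance.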

While working with real numbers $w_j \in \mathbb{R}$, we count our operations in bytes. Below, however, we assume that the bytes have a fixed size $s$; therefore, the complexity has the same order in both byte and bit operations. Our main result reads as follows.

Theorem 1: Let the first-order code RM(1, m) be used on a general memoryless channel with an arbitrary set of error weights $\{w_j\}$. For any received vector $y$ and any decoding radius $T < W_1/2$, this code can be decoded into the complete list of codewords $\{f : D(y, f) \leq T\}$ with complexity
$$O\left(\min\left\{ n \ln^2 \frac{nW_2}{(W_1 - 2T)^2},\; n \ln^2 n \right\}\right). \qquad (2)$$

Now consider a binary symmetric channel, in which case each position $j$ is assigned the same error weight $w_j = 1$.

Corollary 1: Let the first-order code RM(1, m) be used on a binary symmetric channel. For any fixed $\epsilon \in (0, 1/2)$ and any received vector $y$, this code can be list-decoded within the decoding radius $n(1/2 - \epsilon)$ with linear complexity $O(n \ln^2 \epsilon)$ as $m \to \infty$.

II. DETERMINISTIC LIST DECODING FOR CODES RM(1, m)

A. The Johnson bound for the generalized Hamming distance

By definition [8], given any code $C$ and any received vector $y \in \mathbb{F}_2^n$, a list decoding algorithm of radius $T$ produces the list of all codewords within Hamming distance $T$ from $y$:
$$L_T(y) = \{c \in C : d(y, c) \leq T\}.$$
Recall that the size of the list $L_T(y)$ can be upper-bounded using the classic Johnson bound, which reads as follows.

The Johnson bound: Let $B_{n,T}(y) = \{x : d(y, x) \leq T\}$ be the Hamming ball of radius $T = \theta n$ centered at $y$. Let $C$ be a code of minimum distance $d \geq \delta n$ such that $\delta > 2\theta(1 - \theta)$. Then
$$|L_T(y)| = |C \cap B_{n,T}(y)| \leq \frac{\delta}{\delta - 2\theta(1 - \theta)}. \qquad (3)$$

We now consider the Johnson bound for the generalized Hamming distance defined in (1). Consider the conventional inner product of two real vectors $a, b \in \mathbb{R}^n$:
$$\langle a, b \rangle = \sum_j a_j b_j.$$

Given any binary vector $c$, let $\hat{c}$ be the vector with symbols $(-1)^{c_j}$. Given the vector $w$ and a binary output $y$, we shall also consider the vector $w\hat{y} = (w_0 \hat{y}_0, ..., w_{n-1} \hat{y}_{n-1})$. We can now relate the generalized Hamming distance to the inner product as follows:
$$\langle w\hat{y}, \hat{c} \rangle = \sum_j w_j (-1)^{y_j - c_j} = W_1 - 2D(y, c). \qquad (4)$$
We also observe that the list $L_T(y)$ can be written as $L_T(y) = \{c \in C : \langle w\hat{y}, \hat{c} \rangle \geq W_1 - 2T\}$, and we reformulate the Johnson bound for the generalized Hamming distance $D$ as follows.

Lemma 1: Let $y \in \{0,1\}^n$ be a received vector, $w$ be an error-weight vector, and $C$ be an $[n, d]$ code. Then the list $L_T(y) = \{c \in C : \langle w\hat{y}, \hat{c} \rangle \geq W_1 - 2T\}$ has size
$$|L_T(y)| \leq \min\left\{ \frac{2d \cdot W_2}{(W_1 - 2T)^2 - (n - 2d)W_2},\; |C| \right\}, \qquad (5)$$
provided that $(W_1 - 2T)^2 - (n - 2d)W_2 > 0$.

Proof. For brevity, let $L$ denote the list $L_T(y)$ and also its size. Consider the two vectors $a = w\hat{y}$ and $b = \sum_{c \in L} \hat{c}$. The Cauchy-Schwarz inequality $\langle a, b \rangle^2 \leq \langle a, a \rangle \langle b, b \rangle$ implies that
$$\left\langle w\hat{y}, \sum_{c \in L} \hat{c} \right\rangle^2 \leq \langle w\hat{y}, w\hat{y} \rangle \left\langle \sum_{c \in L} \hat{c},\; \sum_{c \in L} \hat{c} \right\rangle.$$
Here $\langle w\hat{y}, w\hat{y} \rangle = \langle w, w \rangle = W_2$. Also, by construction of the list $L$,
$$\left\langle w\hat{y}, \sum_{c \in L} \hat{c} \right\rangle^2 \geq L^2 (W_1 - 2T)^2.$$
On the other hand, given the code $C$ of distance $d$, we have the inequality
$$\left\langle \sum_{c \in L} \hat{c},\; \sum_{c \in L} \hat{c} \right\rangle \leq Ln + L(L-1)(n - 2d),$$
and therefore
$$L^2 (W_1 - 2T)^2 \leq W_2 \left( Ln + L(L-1)(n - 2d) \right),$$
which gives the required bound (5). □
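Since RM(1, m) has distance $d = n/2$, the second term in the denominator of (5) vanishes, so the proviso holds for every $T < W_1/2$. The brute-force sketch below (our illustration, not from the paper) enumerates the small code RM(1, 3), draws random weights, and checks the actual list size against the bound (5).

```python
import itertools
import numpy as np

m, n = 3, 8
# RM(1, m): evaluations of f0 + f1 x1 + ... + fm xm over all x in F_2^m
points = list(itertools.product([0, 1], repeat=m))
code = []
for coeffs in itertools.product([0, 1], repeat=m + 1):
    f0, fs = coeffs[0], coeffs[1:]
    code.append(np.array([(f0 + sum(f * x for f, x in zip(fs, pt))) % 2
                          for pt in points]))

rng = np.random.default_rng(1)
w = rng.uniform(0.1, 1.0, size=n)        # error weights in (0, 1]
y = rng.integers(0, 2, size=n)           # arbitrary received vector
W1, W2 = w.sum(), (w ** 2).sum()
d = n // 2                               # RM(1, m) has distance n/2
T = 0.3 * W1                             # any decoding radius below W1/2

list_size = sum(np.sum(w * np.abs(y - c)) <= T for c in code)
bound = min(2 * d * W2 / ((W1 - 2 * T) ** 2 - (n - 2 * d) * W2), len(code))
print(list_size, bound)                  # list_size never exceeds the bound
assert list_size <= bound
```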

B. The Sums-algorithm for the generalized Hamming distance

Below we consider affine Boolean functions $f : \mathbb{F}_2^m \to \mathbb{F}_2$ defined on all points $x = (x_1, ..., x_m) \in \mathbb{F}_2^m$, where
$$f(x_1, ..., x_m) = f_0 + \sum_{1 \leq i \leq m} f_i x_i.$$
The set of coefficients $(f_0, ..., f_m)$ will be used to retrieve any such function $f$ in the decoding process. For any $f$, the notation $f = f(x_1, ..., x_m)$ will also denote the corresponding vector of length $n = 2^m$ with symbols $f(x_1, ..., x_m)$.

The set of vectors $\{f\}$ defined by all affine functions forms an optimal Reed-Muller code RM(1, m) of length $n$, size $2n$, and code distance $d = n/2$.

Given the received vector $y$ and the decoding radius $T$, consider first the Hamming metric. In decoding, we seek the list
$$L_T(y) := \{ f_{dec} : d(y, f_{dec}) \leq T \} \qquad (6)$$
of required affine functions $f_{dec}$. The Johnson bound shows that $|L_T(y)| \leq (2\epsilon)^{-2}$ for $T = n(1/2 - \epsilon)$. To extend this decoding to the generalized Hamming distance, we first apply Lemma 1 to the code RM(1, m) and obtain the following.

Corollary 2: Let $y \in \{0,1\}^n$ be a received vector and $w$ be an error-weight vector. Then for any $T < W_1/2$, the list
$$L_T(y) = \{f_{dec} : D(y, f_{dec}) \leq T\} \qquad (7)$$
has size
$$|L_T(y)| \leq \hat{L}_T \stackrel{\mathrm{def}}{=} \min\left\{ \frac{nW_2}{(W_1 - 2T)^2},\; n \right\}.$$

Remark. The term $|C|$ of (5) is replaced with $n$ in the last expression. Here we use the fact that the code RM(1, m) contains the all-one codeword, so at most half of the code belongs to $L_T(y)$ for any $T < W_1/2$.

Next, we modify the Sums-algorithm of [7] and apply it to the generalized Hamming distance. The algorithm works recursively: in each step $i$, it finds some list $L_T^{(i)}(y)$ of "candidates"
$$f^{(i)}(x_1, ..., x_i) \stackrel{\mathrm{def}}{=} f_1 x_1 + \ldots + f_i x_i.$$
This list will include the $i$-th prefixes of all affine functions $f_{dec}$ taken from the list (7). Given any $i$-th prefix $f^{(i)}$, below we consider any affine function
$$F^{(i)}(x_1, \ldots, x_m) = f_1 x_1 + \ldots + f_i x_i + \sum_{j=i+1}^{m} f_j x_j$$
with the same set of coefficients $(f_1, \ldots, f_i)$.

The main idea of the Sums-algorithm is to approximate the generalized Hamming distance between the received vector $y$ and an arbitrary "extension vector" $F^{(i)}$. This approximation is done as follows. Consider the sum of generalized Hamming distances taken over all $i$-dimensional "facets." Let $S_\alpha = \{(x_1, \ldots, x_i, \alpha_1, \ldots, \alpha_{m-i})\}$ be an $i$-dimensional facet of the $m$-dimensional Boolean cube. Here the prefixes $(x_1, \ldots, x_i)$ run through $\mathbb{F}_2^i$ and the suffixes $(\alpha_1, \ldots, \alpha_{m-i})$ are fixed. We use the notation $\alpha = \alpha_1 + \ldots + \alpha_{m-i} 2^{m-i-1}$. Also, let $D_\alpha(a, b)$ denote the generalized Hamming distance between two arbitrary vectors $a, b \in \mathbb{F}_2^n$ restricted to a given $i$-dimensional facet $S_\alpha$. Let
$$W_\alpha = \sum_{j=0}^{2^i - 1} w_{j + 2^i \alpha}$$
be the sum of all error weights taken over the facet $S_\alpha$. We now observe that any vector $F^{(i)}(x_1, ..., x_m)$ and its $i$-th prefix $f^{(i)}(x_1, ..., x_i) \in RM(1, i)$ differ on $S_\alpha$ only by a constant vector $\mathbf{h}_\alpha = (h_\alpha, ..., h_\alpha)$:
$$F^{(i)}(x_1, ..., x_m) = f^{(i)}(x_1, ..., x_i) + h_\alpha, \qquad h_\alpha = \sum_{j=i+1}^{m} f_j \alpha_j, \qquad x \in S_\alpha. \qquad (8)$$
Thus, $F^{(i)}$ and $f^{(i)}$ can differ only by the all-one vector $\mathbf{1}$ (if $h_\alpha = 1$). Now for each $S_\alpha$ we define the function
$$\Delta_\alpha(a, b) = \min\{D_\alpha(a, b),\; D_\alpha(a + \mathbf{1}, b)\} = \min\{D_\alpha(a, b),\; W_\alpha - D_\alpha(a, b)\}. \qquad (9)$$
Let
$$\Delta(a, b) = \sum_{\alpha=0}^{2^{m-i} - 1} \Delta_\alpha(a, b). \qquad (10)$$
Given (9) and (10), note that any $i$-th prefix $f^{(i)}(x_1, ..., x_i)$ and its extension $F^{(i)}(x_1, ..., x_m)$ satisfy the inequality
$$D(y, F^{(i)}) = \sum_{\alpha=0}^{2^{m-i} - 1} D_\alpha(y, F^{(i)}) \geq \sum_{\alpha=0}^{2^{m-i} - 1} \Delta_\alpha(y, f^{(i)}) = \Delta(y, f^{(i)}). \qquad (11)$$
Based on (11), a prefix-function $f^{(i)}$ is accepted in step $i$ of our algorithm if and only if
$$\Delta(y, f^{(i)}) \leq T. \qquad (12)$$
Thus, the algorithm retrieves the list of affine functions
$$L_T^{(i)}(y) = \{f^{(i)} : \Delta(y, f^{(i)}) \leq T\}$$
and is called the Sums-algorithm.
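To make the acceptance test (12) concrete, here is a naive (non-recursive) sketch of one decoding step; it is our illustration, assuming the facet layout implied by the indexing $w_{j+2^i\alpha}$ with $x_1$ as the lowest-order bit of the position index. All names are ours.

```python
import numpy as np

def sums_step(y, w, m, i, candidates, T):
    """One step of the Sums-algorithm (naive form): keep each prefix
    f^(i) = (f1, ..., fi) whose estimate Delta(y, f^(i)) in (10) is <= T."""
    size = 1 << i                              # facet size |S_alpha| = 2^i
    survivors = []
    for prefix in candidates:
        # evaluate the prefix at the points of one facet; x1 is the low bit of j
        f = np.array([sum(prefix[k] * ((j >> k) & 1) for k in range(i)) % 2
                      for j in range(size)])
        delta = 0.0
        for a in range(1 << (m - i)):          # run over all facets S_alpha
            wa = w[a * size:(a + 1) * size]
            ya = y[a * size:(a + 1) * size]
            d_alpha = float(np.sum(wa * np.abs(ya - f)))  # D_alpha(y, f^(i))
            delta += min(d_alpha, wa.sum() - d_alpha)     # (9): complement test
        if delta <= T:                         # acceptance criterion (12)
            survivors.append(prefix)
    return survivors
```

Starting from the two candidates $(0,)$ and $(1,)$ at step $i = 1$ and extending every survivor by $f_{i+1} \in \{0, 1\}$, $m$ such steps produce the final list; the recursion (15) derived below replaces the inner distance computations with two additions per facet.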

III. LIST SIZE AND COMPLEXITY OF THE SUMS-ALGORITHM

Inequality (11) shows that the prefixes $f^{(i)}_{dec}$ of all required functions $f_{dec} \in L_T(y)$ will be accepted by the Sums-algorithm. In addition, we need to prove that the algorithm does not accept too many other (incorrect) candidates. To do so, we extend the corresponding statement of [7] to the generalized Hamming distance.

Lemma 2: For any received vector $y$, any error-weight vector $w$, and any decoding radius $T < W_1/2$, the list $L_T^{(i)}(y)$ obtained in decoding step $i \in [1, \ldots, m]$ has size
$$L^{(i)} \stackrel{\mathrm{def}}{=} \left| L_T^{(i)}(y) \right| \leq \min\left\{ \frac{nW_2}{(W_1 - 2T)^2},\; 2^i \right\}. \qquad (13)$$

Proof. Let $f^{(i)}$ be an arbitrary element of the list $L_T^{(i)}(y)$. On every facet $S_\alpha$, we compare the element $f^{(i)}$ and its complement $f^{(i)} + \mathbf{1}$. Namely, we find the two distances $D_\alpha(y, f^{(i)})$ and $D_\alpha(y, f^{(i)} + \mathbf{1})$, and choose the vector $f_{min} \in \{f^{(i)}, f^{(i)} + \mathbf{1}\}$ that is closer to $y$. Let $F_{min}$ be the vector of length $n$ that equals $f_{min}$ on each of the $2^{m-i}$ facets. Obviously, the Sums criterion (12) is equivalent to the condition $D(F_{min}, y) \leq T$. Next, let $f^{(i)}$ and $g^{(i)}$ be two different elements of the list $L_T^{(i)}(y)$. Then it is easy to verify that the corresponding vectors $F_{min}$ and $G_{min}$ are at Hamming distance
$$d(F_{min}, G_{min}) = n/2. \qquad (14)$$

Indeed, on every facet $S_\alpha$, the vectors $F_{min}$ and $G_{min}$ are restricted to the vectors $f^{(i)} + \mathbf{h}_\alpha$ and $g^{(i)} + \mathbf{h}'_\alpha$, where $h_\alpha$ and $h'_\alpha$ are constants defined in (8). The latter vectors belong to the code RM(1, i) of length $2^i$ and therefore have distance $d(f^{(i)} + \mathbf{h}_\alpha, g^{(i)} + \mathbf{h}'_\alpha) = 2^{i-1}$. Now we see that (14) holds. Then we can apply the generalized Johnson bound of Corollary 2 to the set of vectors $\{F_{min}\}$. We also use the fact that at most $2^i$ vectors of the code RM(1, i) belong to $L_T^{(i)}(y)$. □

We now proceed with the proof of Theorem 1. Here we also employ a new recursive procedure that reduces the former complexity estimates of [7].

Proof of Theorem 1. Let $S_\alpha$ be an $i$-dimensional facet. Consider the two associated $(i-1)$-dimensional facets
$$S_{\alpha'} = \{(x_1, \ldots, x_{i-1}, 0, \alpha_1, \ldots, \alpha_{m-i})\}, \qquad S_{\alpha''} = \{(x_1, \ldots, x_{i-1}, 1, \alpha_1, \ldots, \alpha_{m-i})\}.$$
We consider any prefix accepted in step $i-1$ by the Sums-algorithm,
$$f^{(i-1)}(x_1, \ldots, x_{i-1}) = f_1 x_1 + \cdots + f_{i-1} x_{i-1},$$
and its extension to step $i$,
$$f^{(i)}(x_1, \ldots, x_i) = f^{(i-1)}(x_1, \ldots, x_{i-1}) + f_i x_i.$$
Let $\langle w\hat{y}, \hat{f}^{(i)} \rangle_\alpha$ denote the inner product of the vectors $w\hat{y}$ and $\hat{f}^{(i)}$ restricted to the facet $S_\alpha$. Then, similarly to (4), we see that
$$\langle w\hat{y}, \hat{f}^{(i)} \rangle_\alpha = W_\alpha - 2D_\alpha(y, f^{(i)})$$
and
$$\langle w\hat{y}, \hat{f}^{(i)} \rangle_\alpha = \langle w\hat{y}, \hat{f}^{(i-1)} \rangle_{\alpha'} + (-1)^{f_i} \langle w\hat{y}, \hat{f}^{(i-1)} \rangle_{\alpha''}. \qquad (15)$$
In each step $i$, we verify the Sums criterion (12) for a candidate $f^{(i)}$ by computing the quantities $\langle w\hat{y}, \hat{f}^{(i)} \rangle_\alpha$ on all $i$-dimensional facets $S_\alpha$. Recursion (15) shows that these quantities can be calculated from the same quantities obtained in step $i-1$.
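Recursion (15) is essentially a pruned fast Hadamard transform. In the sketch below (our reconstruction, under the same assumed facet layout as above), every surviving prefix carries the array $S[\alpha] = \langle w\hat{y}, \hat{f}^{(i)}\rangle_\alpha$; a step extends it by the two additions (15), and the test (12) reduces to $W_1 - \sum_\alpha |S[\alpha]| \leq 2T$, since $\Delta_\alpha = (W_\alpha - |S[\alpha]|)/2$ by (9).

```python
import numpy as np

def sums_decode(y, w, m, T):
    """Full Sums-algorithm for RM(1, m) via recursion (15); assumes T < W1/2.
    Each prefix (f1, ..., fi) carries S[alpha] = <w*yhat, fhat^(i)>_alpha."""
    W1 = float(w.sum())
    # step 0: facets are single positions, S[j] = w_j * (-1)^(y_j)
    cands = [((), np.where(y == 0, w, -w).astype(float))]
    for i in range(1, m + 1):
        nxt = []
        for prefix, S in cands:
            for fi in (0, 1):
                # (15): combine the x_i = 0 and x_i = 1 half-facets
                S_new = S[0::2] + (-1) ** fi * S[1::2]
                # criterion (12): Delta = (W1 - sum |S_new|) / 2 <= T
                if W1 - np.abs(S_new).sum() <= 2 * T:
                    nxt.append((prefix + (fi,), S_new))
        cands = nxt
    # at i = m one facet remains; recover f0 from the sign of the scalar S[0]
    return [((0 if S[0] >= 0 else 1),) + prefix for prefix, S in cands]
```

For the binary symmetric channel one sets w = np.ones(n); the output is the list of coefficient tuples $(f_0, f_1, \ldots, f_m)$ of all affine functions within generalized distance $T$ of $y$ (since $T < W_1/2$, at most one of $f$, $f + \mathbf{1}$ can qualify on the last step).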

Next, assume that the original weights $w_j$ employ some fixed number $s$ of bits. Then in step $i$, any vector $f^{(i)}$ requires two additions (15) on every facet $S_\alpha$. We assume that these calculations are performed on $(s + i - 1)$-digit integers and require $2(s + i - 1)$ binary operations. The entire verification of $f^{(i)}$ in step $i$ employs $2^{m-i}$ facets. Given the size $L^{(i)}$ of the list in step $i$, we see that the overall bit complexity equals
$$\sum_{i=1}^{m} 2(s + i - 1) L^{(i)} 2^{m-i}.$$
Now we use bound (13) and define the parameter
$$p = \left\lceil \log_2 \frac{nW_2}{(W_1 - 2T)^2} \right\rceil.$$
Then, in the worst case, the list size $L^{(i)}$ doubles in each of the first $p$ steps and stays fixed in the last $m - p$ steps. Thus, $L^{(i)} \leq 2^i$ for $i \leq p$ and $L^{(i)} \leq 2^p$ for $i \geq p$. This gives the total bit complexity bounded by
$$2\sum_{i=1}^{p} (s + i - 1) 2^i \cdot 2^{m-i} + 2\sum_{i=p+1}^{m} (s + i - 1) 2^p \cdot 2^{m-i} = np(2s + p - 1) + 2n\sum_{i=p+1}^{m} (s + i - 1) 2^{p-i} = O(np^2).$$
Now we see that the algorithm has bit complexity (2). □

Finally, observe that our decoding procedure yields linear complexity under very light restrictions. Namely, the following corollary holds.

Corollary 3: Consider a channel where the sum $W_1$ of the $n$ possible error weights $w_j \in [0, 1]$ satisfies the condition
$$W_1 \geq \lambda n \qquad (16)$$
for some $\lambda > 0$. Then for $n \to \infty$ and any $\epsilon \in (0, 1/2)$, the Sums-algorithm decodes the code RM(1, m) within the decoding radius $T = W_1(1/2 - \epsilon)$ with linear complexity
$$O\left(n \ln^2 (\lambda \epsilon)\right).$$
Proof. Recall that $w_j \leq 1$ for all $j$, and therefore $W_2 \leq n$. On the other hand, $W_1 - 2T \geq 2\epsilon\lambda n$. The proof is completed by substituting these two inequalities into expression (2). □

In particular, Corollary 3 holds for conventional decoding within the Hamming sphere of radius $T = n(1/2 - \epsilon)$, since then $W_1 = n$; thus, Corollary 1 holds as well. We also obtain linear complexity if the possible error weights occupy a finite range $w_j \in [\lambda_{min}, \lambda_{max}]$ bounded away from 0. One general example is a quantized channel with $2^s$ fixed nonzero quantization levels.
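As a quick numerical illustration of the estimate $O(np^2)$ (ours, with an assumed weight precision of $s = 8$ bits and a BSC with $\epsilon = 0.05$), the following sketch evaluates the parameter $p$ and the worst-case operation count from the two sums above.

```python
import math

def sums_complexity(n, W1, W2, T, s=8):
    """Worst-case bit operations of the Sums-algorithm (Section III):
    the list size doubles for p steps, then stays at 2^p."""
    p = math.ceil(math.log2(n * W2 / (W1 - 2 * T) ** 2))
    m = round(math.log2(n))
    ops = sum(2 * (s + i - 1) * min(2 ** i, 2 ** p) * (n >> i)
              for i in range(1, m + 1))
    return p, ops

# BSC with epsilon = 0.05: W1 = W2 = n and T = n(1/2 - epsilon)
for m in (16, 18, 20):
    n = 1 << m
    p, ops = sums_complexity(n, n, n, 0.45 * n)
    print(m, p, ops / n)   # ops/n stays bounded: complexity is linear in n
```

For fixed $\lambda$ and $\epsilon$, the parameter $p$ is a constant, which is exactly why the total count stays within a constant factor of $n$.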

REFERENCES

[1] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes, North-Holland, Amsterdam, 1977.
[2] S. Litsyn and O. Shekhovtsov, "Fast decoding algorithm for first order Reed-Muller codes," Problems of Info. Transmission, vol. 19, pp. 87-91, 1983.
[3] O. Goldreich and L. A. Levin, "A hard-core predicate for all one-way functions," Proc. 21st ACM Symp. on Theory of Computing, pp. 25-32, 1989.
[4] O. Goldreich, R. Rubinfeld, and M. Sudan, "Learning polynomials with queries: the highly noisy case," SIAM J. on Discrete Math., pp. 535-570, 2000.
[5] V. Guruswami and M. Sudan, "Improved decoding of Reed-Solomon and algebraic-geometry codes," IEEE Trans. on Information Theory, vol. 45, pp. 1757-1767, 1999.
[6] R. Pellikaan and X.-W. Wu, "List decoding of q-ary Reed-Muller codes," IEEE Trans. on Information Theory, vol. 50, pp. 679-682, 2004.
[7] G. Kabatiansky and C. Tavernier, "List decoding of first order Reed-Muller codes II," Proc. ACCT-10, pp. 131-135, Russia, 2006.
[8] P. Elias, "List decoding for noisy channels," 1957 IRE WESCON Convention Record, pt. 2, pp. 94-104, 1957.
