The Space Complexity of Elimination Theory: Upper Bounds

Guillermo Matera¹,² and José María Turull Torres³,⁴

¹ Depto. de Matemáticas, F.C.E. y N., Univ. de Buenos Aires, Argentina.
² Inst. de Ciencias, Univ. Nac. Gral. Sarmiento, Argentina. [email protected]
³ Universidad Nacional de San Luis, Argentina.
⁴ Depto. de Sistemas, F.R.B.A., Univ. Tecnológica Nacional, Argentina. [email protected]

Abstract: We use a theorem by Borodin relating parallel time with sequential space in order to obtain algorithms that require small space resources. We first apply this idea to some linear algebra problems. Then we reduce several problems of Elimination Theory to linear algebra computations and establish PSPACE bounds for all of them. Finally, we show how this strategy can be improved by means of probabilistic arguments.

1 Introduction

We use standard notions and notations for complexity models and complexity classes, as can be found in [1, 24]. We recall that the classes NC^i are defined as the sets of uniform families of boolean circuits of polynomial size and depth O(log^i n) with bounded fan-in. A family of boolean circuits is said to be uniform iff its standard encoding can be built in deterministic space O(log n). Intuitively, the classes NC^i represent the functions which can benefit significantly from the use of parallel computation. In fact, Borodin [4] showed that uniform circuit depth, i.e. parallel time, translates into sequential space. This means that for many functions the property of being well parallelizable can be re-interpreted, in the scope of sequential computing, as the use of a very small amount of working space in their computation. Moreover, the proof is constructive: given a family of boolean circuits in NC^i and the algorithm which witnesses its uniformity, we can use the proof to build a sequential algorithm which works in space O(log^i n). It is our intention to promote the use of this important tool for the solution of concrete problems in Computer Algebra. We are going to develop here an alternative method to deal with problems coming from algebraic and geometric elimination theory, one that uses the strategy of Borodin's theorem in order to obtain algorithms requiring small space resources. The central idea of the method is to reduce elimination problems to linear algebra computations. Unfortunately, the matrices involved in these reductions have exponential size, probably due to the syntactical aspect of the kind of problems we are considering. By syntactical aspect we refer to the encoding of the multivariate polynomials which represent the basic objects of our language.
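The depth-to-space idea behind Borodin's simulation can be illustrated with a toy sketch (ours, not from the paper; the gate encoding below is hypothetical): evaluating a bounded fan-in circuit by depth-first recursion stores only the path from the output gate down to the current gate, so the working space is proportional to the circuit depth times O(log size) bits per stack frame.

```python
# Hypothetical gate table (our encoding, for illustration only):
# gate id -> ('in', k), ('not', a), ('and', a, b) or ('or', a, b)
def eval_gate(gates, g, x):
    node = gates[g]
    if node[0] == 'in':
        return x[node[1]]
    if node[0] == 'not':
        return 1 - eval_gate(gates, node[1], x)
    a = eval_gate(gates, node[1], x)  # recursion depth = circuit depth,
    b = eval_gate(gates, node[2], x)  # so space ~ depth * log(size)
    return a & b if node[0] == 'and' else a | b

def xor_block(gates, name, a, b):
    # x XOR y = (x OR y) AND NOT(x AND y): a constant-depth gadget
    gates[name + '_or'] = ('or', a, b)
    gates[name + '_and'] = ('and', a, b)
    gates[name + '_nand'] = ('not', name + '_and')
    gates[name] = ('and', name + '_or', name + '_nand')

# parity of 4 bits as a balanced XOR tree: depth O(log n)
gates = {f'i{k}': ('in', k) for k in range(4)}
xor_block(gates, 'x01', 'i0', 'i1')
xor_block(gates, 'x23', 'i2', 'i3')
xor_block(gates, 'root', 'x01', 'x23')

print(eval_gate(gates, 'root', [1, 0, 1, 1]))  # 1 (odd number of ones)
```

Here a logarithmic-depth parity circuit is evaluated in space proportional to its depth; Borodin's actual construction works on the uniform description of the circuit family rather than on an explicit gate table.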

The usual dense or sparse encoding of polynomials inhibits the application of the basic subroutines most frequently used at present, like Gaussian elimination, because those algorithms require an unfeasible amount of space for big matrices. In this paper we develop algorithms which perform linear algebra tasks in very low space (namely polylogarithmic space). For this purpose, we study arithmetic circuits with controlled parallel time, such as Berkowitz's algorithm [3] for the computation of the characteristic polynomial. Based on it, we design arithmetic circuits for the computation of the rank and the resolution of linear equation systems, as explained in Section 2. Then we translate these arithmetic circuits into boolean circuits in order to produce a suitable input for the application of Borodin's theorem. Although it seems natural to make this translation from the basis of operations {+, ∗} over ℤ to the basis {∧, ∨, ¬} over {0, 1}, we show in Section 2 that the translation from the basis {+, ∗, Σ} is more efficient, where the iterated sum Σ is implemented in NC^1 by means of a Carry Save Adder circuit (see [24]). In this way we get linear algebra algorithms which have the same performance in parallel time as the best uniform algorithms known at present, such as those in [5], but with some additional advantages from a programming point of view. The application of Borodin's theorem in this context leads us to the subject of uniformity. In [16], in collaboration with A. Grosso, N. Herrera and M.E. Stefanoni, we exhibit the full program which witnesses the logspace uniformity of our linear algebra algorithms. Then we use the proof of Borodin's theorem to build an O(log^2 n) space sequential algorithm for the computation of the rank of integer matrices and the resolution of linear equation systems over ℤ. In Section 3 we show how to reduce several problems of Elimination Theory to linear algebra computations.
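The Carry Save Adder trick just mentioned can be sketched in a few lines (our illustration, not the circuit of [24]): one CSA layer compresses three addends into a sum word and a carry word using only bitwise, constant-depth operations, so the iterated sum Σ of n integers reduces to a single ordinary addition after O(log n) such layers.

```python
def carry_save(x, y, z):
    # a full adder applied bitwise, in parallel, to every position:
    # sum bit = parity of the three input bits; carry bit = their
    # majority, shifted one place left. No carry ever propagates,
    # so the whole layer has constant depth.
    s = x ^ y ^ z
    c = ((x & y) | (x & z) | (y & z)) << 1
    return s, c

s, c = carry_save(13, 7, 25)
print(s + c)  # 45 = 13 + 7 + 25
```

Repeating the layer reduces n addends to two in O(log n) depth; only the final two-operand addition needs carry propagation (itself in NC^1).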
Then, on the basis of the algorithms developed in the previous section, we build PSPACE algorithms for some of the main problems in elimination theory: the Ideal Triviality Problem, the Ideal Membership Problem for Complete Intersections, the Radical Membership Problem, the General Elimination Problem, Noether Normalization, the Decomposition into Equidimensional Components, the Computation of the Degree of a Variety and an algorithmic version of the Quillen–Suslin Theorem. We do this following the suggestions in [17], where it is conjectured, as well as by other authors (see [9, 10] for instance), that some of these problems are in the complexity class PSPACE. Finally, in Section 4 we show, in the case of the Zero-Dimensional Elimination Problem, how these techniques can be refined in order to obtain a class of probabilistic algorithms with a better time performance. This is done by applying our ideas to the techniques in [20]. The main ingredient for this improvement is the use of correct test sequences for the encoding of polynomials by means of straight-line programs, in the sense of [18]. Let us remark that this last approach allows us to obtain algorithms whose time performance is at least as good as that of the existing software on the subject (which relies mainly on Gröbner basis algorithms), while keeping space resources within reasonable limits, unlike Gröbner basis solving, where the space requirements are exponential.

Some PSPACE results on elimination theory have been announced in [9, 23] (see also [2]). However, in those papers the authors do not prove the membership of their algorithms in PSPACE. In particular, the related aspects of the uniformity of their algorithms are not considered.

2 Linear algebra algorithms in low space

2.1 The computation of the characteristic polynomial and the rank

Let us suppose that we are given an n × m matrix A with integer coefficients and that we want to compute its rank. Since B := A · A^t has the same rank as A, and B is diagonalizable, rk(A) can be read off from the coefficients of the characteristic polynomial of B. This polynomial can be computed by means of Berkowitz's algorithm [3]. By analyzing the whole circuit produced in this way we see that we have a family of boolean circuits of size O(n^6 h^2 log^3 n) and depth O(log(n) log(hn)) which computes the rank of any n × m integer matrix with h-bit coefficients. As is proved in [16], this family of circuits is logspace uniform. Applying Borodin's theorem we obtain:

Theorem 1 There exists a deterministic Turing machine that works in space O(log(n) log(hn)) and computes the rank of any n × m integer matrix (n ≥ m) with h-bit coefficients, for all positive integers n, m, h.

Let us remark that if the matrix A belongs to R^{n×m}, where R is any effective domain (ℤ[X_1, ..., X_n] for instance), the computation of rk(A) can be reduced to the computation of a characteristic polynomial by applying the techniques developed by Mulmuley [21].
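The rank-from-characteristic-polynomial reduction can be sketched as follows (our illustration; for simplicity we compute char(B) with the Faddeev–LeVerrier recurrence rather than Berkowitz's circuit, and a sequential Python function of course says nothing about depth or space):

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def charpoly_coeffs(B):
    # Faddeev-LeVerrier recurrence for det(t*I - B); for an integer
    # matrix every division by k below is exact
    n = len(B)
    M = [[int(i == j) for j in range(n)] for i in range(n)]  # M_1 = I
    c = [1]
    for k in range(1, n + 1):
        BM = matmul(B, M)
        ck = -sum(BM[i][i] for i in range(n)) // k
        c.append(ck)
        M = [[BM[i][j] + (ck if i == j else 0) for j in range(n)]
             for i in range(n)]
    return c  # [1, c_1, ..., c_n]

def rank_via_charpoly(A):
    # B = A*A^t is symmetric, hence diagonalizable, and rk(B) = rk(A);
    # so rk(A) = n minus the multiplicity of the eigenvalue 0 of B,
    # i.e. n minus the number of trailing zero coefficients of char(B)
    n, m = len(A), len(A[0])
    B = [[sum(A[i][k] * A[j][k] for k in range(m)) for j in range(n)]
         for i in range(n)]
    c = charpoly_coeffs(B)
    mult0 = 0
    for ck in reversed(c):
        if ck != 0:
            break
        mult0 += 1
    return n - mult0

print(rank_via_charpoly([[1, 2, 3], [2, 4, 6], [0, 1, 1]]))  # 2
```

The second row is twice the first, so the rank is 2; the constant term of char(B) vanishes while the linear term does not, which is exactly what the trailing-zero count detects.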

2.2 The resolution of linear equation systems

Let A be an integer n × m matrix, X an m × 1 vector of unknowns and b an integer n × 1 vector. Let h be the maximal bit size of all the occurring integers. We want to solve the linear system A · X = b. The idea is to reduce our problem to the resolution of a system with a nonsingular square matrix, which is then solved simply by applying Cramer's rule. The reduction is done by an algorithm which finds a nonsingular square submatrix of A of maximal rank. Let us denote this submatrix by Ã. Deleting from b all the entries which do not correspond to rows of à we obtain a column vector b̃, and deleting from X all the entries which do not correspond to columns of à we obtain a new column vector X̃ of unknowns. Solving the reduced nonsingular square system of linear equations à · X̃ = b̃, we easily obtain a solution of the original system A · X = b. Therefore, we just need to find a square submatrix à of A of maximal rank. Let us denote by A_i the i-th row of A. We compute (in parallel) rk(A_1, ..., A_i) for i = 1, ..., n. Every time rk(A_1, ..., A_{i-1}) < rk(A_1, ..., A_i) occurs, we keep the index i. The indices kept in this procedure correspond to the rows which will occur in our matrix Ã. In the same way we determine the columns which we choose to build Ã. From the results quoted in Section 2.1, we conclude that there exists a uniform family of boolean circuits of size (nh)^{O(1)} and depth O(log n log(nh)) which solves the given system A · X = b. Now, by applying the proof of Borodin's theorem [4] we obtain:

Theorem 2 There exists a deterministic Turing machine that works in space O(log(n) log(hn)) which

1. checks whether the input system A · X = b has a solution, and
2. in case of an affirmative answer in 1, computes numerators and denominators of a particular solution of the system.

Let us remark that a slight modification of the algorithm underlying Theorem 2 allows us to construct a maximal affinely independent set of solutions of the given linear equation system. For this purpose, observe that, in order to compute a representative set of particular solutions of the system A · X = b, we only have to specialize some of the unknowns X_1, ..., X_n to zero, following a selection scheme given by the algorithm.

3 Some PSPACE results in Elimination Theory

In this section we will show upper bounds for the space complexity of some algorithmic problems of classical algebraic and geometric elimination theory. The uniformity of the algorithms which we use to solve these problems will be an easy consequence of the uniformity of the algorithms developed in the previous section. We start by fixing some notation: let X_1, ..., X_n be indeterminates over Q. We are going to consider polynomials F_1, ..., F_s ∈ ℤ[X_1, ..., X_n] (n ≥ 2) whose total degrees deg F_j (1 ≤ j ≤ s) are bounded by a given integer d ≥ 3. We assume that all the coefficients of the F_j's have bit-size bounded by a given number h. We will denote by (F_1, ..., F_s) the ideal generated by F_1, ..., F_s in Q[X_1, ..., X_n] and by V := {F_1 = 0, ..., F_s = 0} := {x ∈ ℂ^n ; F_1(x) = ... = F_s(x) = 0} the algebraic variety of ℂ^n defined by these polynomials.

3.1 The ideal triviality problem

As an example of the general method which we shall apply, we first consider a classical problem, the ideal triviality problem: decide whether V = ∅ holds and, if this is the case, find P_1, ..., P_s ∈ Q[X_1, ..., X_n] such that the identity 1 = P_1F_1 + ··· + P_sF_s is satisfied. The essential tools we use are a single exponential version of the affine Nullstellensatz in characteristic 0 due to D. Brownawell [6] and its generalizations to an arbitrary field in [7, 8, 19]:

Theorem 3 (An effective Nullstellensatz for ideal triviality) The ideal (F_1, ..., F_s) is trivial iff there exist P_1, ..., P_s ∈ Q[X_1, ..., X_n] satisfying the conditions

1 = P_1F_1 + ··· + P_sF_s   and   max_{1 ≤ j ≤ s} deg(P_jF_j) ≤ d^n.

This turns the ideal triviality problem into a problem of algorithmic linear algebra: once we have degree bounds as in Theorem 3 for the polynomials P_1, ..., P_s, the condition 1 = P_1F_1 + ··· + P_sF_s can be rewritten as a linear equation system of size d^{n^2} × s·d^{n^2}, whose coefficients have at most h bits, with the coefficients of the P_j's playing the role of unknowns. Applying the results of Section 2.2 to this linear equation system we get a uniform family of boolean circuits of size (hsd^{n^2})^{O(1)} and depth O(n^4 log^2(hsd)) which decides whether the system is solvable and, if this is the case, computes a particular solution of it. At this stage we apply the proof of Borodin's theorem [4] in order to evaluate the corresponding circuit. In conclusion, we have proved:

Theorem 4 The ideal triviality problem can be solved in polynomial deterministic space, namely in space O(n^4 log^2(hsd)).
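On a toy instance the reduction looks as follows (our illustration: F_1 = X, F_2 = XY − 1 generate the unit ideal, and we use the ad-hoc cofactor degree bound D = 2 where Theorem 3 would prescribe d^n; polynomials are dictionaries mapping exponent pairs to rational coefficients):

```python
from fractions import Fraction

def pmul(p, q):
    # multiply polynomials given as {(deg_X, deg_Y): coefficient}
    r = {}
    for (a, b), c in p.items():
        for (u, v), d in q.items():
            r[(a + u, b + v)] = r.get((a + u, b + v), Fraction(0)) + c * d
    return {k: v for k, v in r.items() if v != 0}

def solve_particular(M, rhs):
    # Gaussian elimination over Q; returns None iff inconsistent
    n = len(M[0])
    T = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    piv, r = [], 0
    for c in range(n):
        p = next((i for i in range(r, len(T)) if T[i][c] != 0), None)
        if p is None:
            continue
        T[r], T[p] = T[p], T[r]
        T[r] = [x / T[r][c] for x in T[r]]
        for i in range(len(T)):
            if i != r and T[i][c] != 0:
                f = T[i][c]
                T[i] = [a - f * b for a, b in zip(T[i], T[r])]
        piv.append(c); r += 1
    if any(all(v == 0 for v in T[i][:-1]) and T[i][-1] != 0
           for i in range(r, len(T))):
        return None
    x = [Fraction(0)] * n
    for i, c in enumerate(piv):
        x[c] = T[i][-1]
    return x

# F1 = X, F2 = X*Y - 1 generate the unit ideal: indeed 1 = Y*F1 - F2
F = [{(1, 0): Fraction(1)}, {(1, 1): Fraction(1), (0, 0): Fraction(-1)}]
D = 2  # ad-hoc degree bound for the cofactors (Theorem 3 gives d^n)

monos = [(a, b) for a in range(D + 1) for b in range(D + 1) if a + b <= D]
cols = [(j, m) for j in range(len(F)) for m in monos]
rws = sorted({(m[0] + u, m[1] + v) for j, m in cols for (u, v) in F[j]})
M = [[F[j].get((r0 - m0, r1 - m1), Fraction(0))
      for (j, (m0, m1)) in cols] for (r0, r1) in rws]
rhs = [Fraction(1) if rw == (0, 0) else Fraction(0) for rw in rws]

x = solve_particular(M, rhs)   # None would mean: not trivial at bound D
P = [{m: x[k] for k, (j, m) in enumerate(cols) if j == j2 and x[k] != 0}
     for j2 in range(len(F))]
total = {}
for Pj, Fj in zip(P, F):
    for k, v in pmul(Pj, Fj).items():
        total[k] = total.get(k, Fraction(0)) + v
total = {k: v for k, v in total.items() if v != 0}
assert total == {(0, 0): Fraction(1)}   # 1 = P1*F1 + P2*F2 verified
```

Each unknown is one coefficient of one cofactor, each equation matches one monomial of the expanded sum against the polynomial 1; the exponential blow-up of the paper comes precisely from the d^{n^2} monomials this system must track.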

3.2 The ideal membership problem for complete intersections

The problem to be considered is the following: suppose that F_1, ..., F_s form a regular sequence in Q[X_1, ..., X_n] and let F be an integer polynomial of height bounded by h. We want to decide whether F belongs to the ideal generated by F_1, ..., F_s in Q[X_1, ..., X_n] and, if this is the case, we want to find P_1, ..., P_s ∈ Q[X_1, ..., X_n] such that F = P_1F_1 + ··· + P_sF_s holds. Here again, the effective affine Nullstellensätze are the key point.

Theorem 5 ([10]) (An effective Nullstellensatz for complete intersections) Suppose that F_1, ..., F_s form a regular sequence in Q[X_1, ..., X_n]. Then F belongs to (F_1, ..., F_s) iff there exist polynomials P_1, ..., P_s ∈ Q[X_1, ..., X_n] such that F = Σ_j P_jF_j and max_j deg(P_jF_j) ≤ deg F + d^s ≤ deg F + d^n hold.

Following the same analysis as above, we find a linear equation system of size O(d^{n^2} × s·d^{n^2}) and bit-size h, whose solutions yield the coefficients of the required polynomials P_j. Then we can conclude:

Theorem 6 The ideal membership problem for complete intersections can be solved in deterministic space O(n^4 log^2(hsd)).

3.3 The radical membership problem

The radical membership problem consists in deciding whether F vanishes on V and, if this is the case, finding N ∈ ℕ and P_1, ..., P_s ∈ Q[X_1, ..., X_n] such that F^N = P_1F_1 + ··· + P_sF_s holds.

In Remark 1.6 of [10] it is shown that this problem can be reduced to the resolution of a linear equation system of size O(d^{n^2}(deg(F) + 1)^n) and bit-size O(d^{3n} h log(deg F)). Hence, by using the strategy and the results of Section 2.2, we can prove:

Theorem 7 There exists a deterministic Turing machine that solves the radical membership problem in space O(n^4 log^2(dh·deg F)).

3.4 The general elimination problem

Let 0 ≤ m < n and let π : ℂ^n → ℂ^m be the projection map π(x_1, ..., x_n) = (x_1, ..., x_m). Our problem is to find polynomials Q_1, ..., Q_t ∈ Q[X_1, ..., X_m] and a quantifier-free formula Φ in the first order language of fields with constants from Q, involving only the polynomials Q_1, ..., Q_t as basic terms, such that Φ defines the set π(V). The procedure starts by checking coherence on all the possible (F_1, ..., F_s)-cells in parallel, which can be achieved by means of rank computations of s^{3n} d^{3n} matrices whose size is O(d^{n^2}) and whose bit-size is O(s^2 hn log d) (see Theorem 2 in [13] or Théorème 3 in [12]). Afterwards, the elimination of the existential quantifiers is performed, again by rank computations, on matrices of similar size whose coefficients belong to ℤ[X_{m+1}, ..., X_n]. Here, applying the techniques of [21] as indicated in the references above, a treatment similar to that of Section 2 allows us to obtain a uniform family of boolean circuits of suitable complexity to solve this problem. We then get the following:

Theorem 8 The general elimination problem can be solved in deterministic space O(n^5 log^2(sdh)).

3.5 Noether normalization and dimension

An essential preprocessing technique used to prepare polynomial data is Noether normalization. Let r := dim(V). We say that the variables X_1, ..., X_n are in Noether position with respect to V if for each r < i ≤ n there exists a polynomial of Q[X_1, ..., X_r, X_i] which is monic in X_i and vanishes on V. The Noether normalization is performed by a linear change of variables such that the new variables are in Noether position. Here we follow the scheme proposed in [10], combined with the techniques of questor sets as in [20]. The first step is to compute a set I ⊆ {1, ..., n} of independent variables with respect to V (we observe that the cardinality of this set is the dimension of V). By means of Proposition 1.7 in [10], this is related to the solvability of a linear equation system of size O(d^{2n} × s·d^{2n}) and bit-size h. Proposition 1.8 of [10] allows us to reduce the decision of which of the variables X_i with i ∈ {1, ..., n} \ I are integral with respect to the independent variables to rank computations of some O(d^{2n} × s·d^{2n}) matrices of bit-size h.

Finally, a recursive procedure over the remaining variables computes the suitable change of variables. Proposition 1.2 of [10] and Corollary 19 of [20] are the basic tools that allow us to restate the problem in terms of linear algebra (see also Proposition 38 in [20]). In fact, one has to solve in each recursive step O(s^3 d^{7n^2}) linear equation systems of size O(d^{3n^2}) and maximal bit-size O(s^{2n^3} d^{3n} h). Applying the same strategy as in Section 2 we can conclude:

Theorem 9 Noether normalization and the computation of the dimension of V can be performed in deterministic space bounded by O(n^4 log^2(hsd)).

3.6 Decomposition into equidimensional components

A fundamental problem for the description of the variety V consists in the computation of its equidimensional decomposition. That is, starting from equations for the variety V, for every intermediate dimension i (0 ≤ i ≤ dim(V)) we have to produce a finite set of polynomials defining the union of the irreducible components of V of dimension i. Let us denote by V_i and W_i the union of the irreducible components of V of dimension i and of dimension less than i, respectively, and set r := dim(V). The algorithm we describe consists of at most r recursive steps. In the i-th step, we compute a system of polynomial equations for V_{r-i+1} and for W_{r-i+2}. The i-th step starts by computing equations describing V_{r-i+1}. This task is reduced by Theorem 4.1.2 and Proposition 4.2.2 in [14] to linear algebra computations. In fact, we have to produce a basis of solutions of an O(sd^{3n^2} × d^{3n^2}) homogeneous linear equation system O(nd^{4n}) times in parallel, with bit-size bounded by (sd)^{O(n^2)} h; and then we reduce the computed equations by solving more linear equation systems of similar features. Afterwards, Proposition 4.2.6 in [14] allows us to compute equations for W_{r-i+2} with a scheme similar to the one explained above. By adding these complexities, and by using the strategy of Section 2, we can prove the following statement:

Theorem 10 There exists a deterministic Turing machine that computes the equidimensional decomposition of V within space bounded by O(n^9 log^2(hsd)).

3.7 Computation of the degree of a variety

Let V = C_1 ∪ ... ∪ C_t be the decomposition of V into irreducible components. For 1 ≤ j ≤ t we define the degree of C_j as usual and denote this quantity by deg C_j. The degree of V is defined as deg V := Σ_j deg C_j. In order to compute the degree of V, we take suitably modified ideas from [8, 14]. First of all, after computing the decomposition of V into equidimensional components as indicated in Section 3.6 above, we may suppose that our variety is equidimensional. Let r be the dimension of V. We intersect the variety with r generic hyperplanes such that the intersection is 0-dimensional. By means of Remark 1.6 of [10], we compute, for every i = 1, ..., n, a univariate polynomial P_i that annihilates X_i on the intersection, as the solution of some d^{O(n^2)} × s·d^{O(n^2)} linear equation system with bit-size h. Further, we compute the minimal polynomial of every X_i on the intersection by means of more linear algebra manipulations. Finally, we have to perform some rank computations (as in Proposition 5.3.1 in [14]) with matrices of size d^{O(n^6)} in order to compute the degree of V. In conclusion, we can state:

Theorem 11 The computation of the degree of V can be done in deterministic space O(n^{10} log^2(hsd)).
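A toy version of the hyperplane-section idea (ours, in the plane rather than for a general variety): the degree of the parabola V = {Y − X² = 0} can be read off as the degree of the univariate polynomial obtained by substituting a random line Y = aX + c into its equation.

```python
from fractions import Fraction
import random

def substitute_line(F, a, c):
    # plug Y = a*X + c into F(X, Y), with F as {(deg_X, deg_Y): coeff};
    # returns the univariate polynomial in X as {degree: coeff}
    out = {}
    for (i, j), coef in F.items():
        lin = {0: Fraction(1)}                 # expand (a*X + c)^j
        for _ in range(j):
            nxt = {}
            for d, v in lin.items():
                nxt[d + 1] = nxt.get(d + 1, Fraction(0)) + v * a
                nxt[d] = nxt.get(d, Fraction(0)) + v * c
            lin = nxt
        for d, v in lin.items():
            out[d + i] = out.get(d + i, Fraction(0)) + coef * v
    return {d: v for d, v in out.items() if v != 0}

random.seed(1)
a, c = (Fraction(random.randint(1, 10**6)) for _ in range(2))
parabola = {(2, 0): Fraction(1), (0, 1): Fraction(-1)}   # X^2 - Y
deg_V = max(substitute_line(parabola, a, c))
print(deg_V)  # 2: the parabola has degree 2
```

For a generic line the substituted polynomial keeps the top-degree term, so its degree equals the number of intersection points counted with multiplicity, i.e. deg V; the paper's algorithm instead derives such univariate polynomials from the linear equation systems quoted above.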

3.8 The Quillen–Suslin Theorem

Let F be a unimodular r × s matrix whose entries are n-variate integer polynomials. Denote by deg(F) the maximum of the degrees of the entries of F and let d = 1 + deg(F). We want to compute a unimodular s × s matrix M such that FM = [I_r, 0], where [I_r, 0] denotes the r × s matrix obtained by adding s − r zero columns to the r × r unit matrix I_r. Following the scheme of [11], we perform a procedure in which every step is divided into four stages. In the first stage, we construct a sequence of O((1 + rd)^{2n}) polynomials of degree bounded by (rd)^2 with certain properties (see Theorem 3.1 in [11]). By means of Proposition 5.6 in [11], this is reduced to determinantal computations with O((r^2 sd^r)^r) polynomial r × r matrices. In the second stage we test ideal triviality and we get a representation of 1 in the ideal generated by the polynomials computed in the first stage. During the third stage, O((1 + rd)^{2n}) s × s matrices are built, which are multiplied in the fourth stage to give rise to the s × s matrix which is the final result of the corresponding recursive step. Here we make use of Lemma 4.5 in [11] to perform the third stage by means of linear algebra computations. After n parallel steps like the one described above, we get a uniform family of boolean circuits which fulfills the conditions required by Borodin's theorem. We can then apply the strategy of Section 2 and conclude:

Theorem 12 The Quillen–Suslin Theorem can be performed algorithmically in deterministic space O(n^4 log^2(shd)).

4 A probabilistic result for the Zero-Dimensional Elimination problem

In this section we show how the scheme of the previous section can be refined in order to improve the time performance of our algorithms. We take the Zero-Dimensional Elimination problem to illustrate the idea: suppose that dim V = 0 and let Y be a given linear form of ℤ[X_1, ..., X_n] with bit-size h. The problem consists in computing a nonzero univariate polynomial Q ∈ ℤ[T] such that Q(Y) vanishes on V.

First of all, we observe that V can be described as the zero set of n equations, say G_1, ..., G_n ∈ ℤ[X_1, ..., X_n], which in turn are linear combinations of the input polynomials F_1, ..., F_s. Under the assumption that we have already computed the polynomials G_1, ..., G_n, we can apply Lemme 3.3.3 of [15] (or Lemma 24 in [20]) in order to translate the problem into a similar one in projective space with "good" conditions. At this point, we apply Lemma 24 and Proposition 23 in [20] (see also Lemme 3.2.1 in [15]) in order to compute the output polynomial Q by means of linear algebra manipulations of controlled size. A key point is that the coefficients appearing in the linear combinations of F_1, ..., F_s that give the polynomials G_1, ..., G_n can be chosen subject to the condition of not satisfying a certain polynomial equation which can be computed in low parallel time. As is proved in [18] (in the version of [20], Section 3.1), we can generate these coefficients at random with a high probability of obtaining a correct sequence of coefficients. We can then perform the strategy of Borodin's theorem on the circuit, generating every bit of every coefficient we need at random in advance. In conclusion, we have the following result:

Theorem 13 The Zero-Dimensional Elimination problem can be solved by means of a probabilistic BPP algorithm requiring working space O(n^2 log(nd) log(nhsd)) and hence time (nd)^{O(n^2 log(nhsd))}.
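The probabilistic ingredient can be sketched as follows (our illustration; the determinant condition below is hypothetical, standing in for the polynomial equation that the coefficients of the linear combinations must avoid): by the Zippel–Schwartz bound, a nonzero polynomial of degree d vanishes at a point drawn uniformly from S^n with probability at most d/|S|, so random coefficients are "good" with high probability.

```python
import random

def random_candidate(bad_poly, nvars, sample=10**9):
    # one random draw; for a degree-d "bad" locus it lands on the locus
    # with probability at most d / sample (Zippel-Schwartz)
    point = [random.randrange(sample) for _ in range(nvars)]
    return point, bad_poly(point) != 0

# hypothetical bad locus: a 2x2 coefficient matrix is "bad" when its
# determinant vanishes (a degree-2 polynomial condition)
random.seed(2)
point, good = random_candidate(lambda v: v[0] * v[3] - v[1] * v[2], 4)
print(good)  # almost surely True (failure probability <= 2e-9 per draw)
```

Drawing the bits of these coefficients in advance and hard-wiring them into the circuit is what turns the deterministic evaluation of Borodin's simulation into the BPP algorithm of Theorem 13.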

Acknowledgements:

The main ideas leading to this work, as well as to [16], are due to Joos Heintz. We also thank Marc Giusti, Joel Marchand, José Luis Montaña and Luis Miguel Pardo for the very useful discussions we had with them.

References

1. Balcázar J., Díaz J. and Gabarró J.: Structural complexity II. EATCS Monographs on Theoretical Computer Science 22, Springer-Verlag (1990).
2. Ben-Or M., Kozen M. and Reif J.: The complexity of elementary algebra and geometry. J. Comp. and Sys. Sciences 32 (1986) 251–264.
3. Berkowitz S.J.: On computing the determinant in small parallel time using a small number of processors. Inf. Proc. Letters 18 (1984) 147–150.
4. Borodin A.: On relating time and space to size and depth. SIAM J. Comput. 6 (1977) 733–744.
5. Borodin A., Cook S. and Pippenger N.: Parallel computation for well-endowed rings and space-bounded probabilistic machines. Inf. and Control 58 (1983) 113–136.
6. Brownawell W.D.: Bounds for the degrees in the Nullstellensatz. Ann. Math. (Second Series) 126 (3) (1987) 577–591.
7. Caniglia L., Galligo A. and Heintz J.: Borne simplement exponentielle pour les degrés dans le théorème des zéros sur un corps de caractéristique quelconque. C. R. Acad. Sci. Paris, Série I, 307 (1988) 255–258.
8. Caniglia L., Galligo A. and Heintz J.: Some new effectivity bounds in computational geometry. Springer LNCS 357 (1989) 131–151.
9. Canny J.: Some algebraic and geometric computations in PSPACE. Proc. 20th STOC (1988) 460–467.
10. Dickenstein A., Giusti M., Fitchas N. and Sessa C.: The membership problem for unmixed polynomial ideals is solvable in single exponential time. Discrete Appl. Math. 33 (1991) 73–94.
11. Fitchas N.: Algorithmic aspects of Suslin's solution of Serre's Conjecture. Computational Complexity 3 (1993) 31–55.
12. Fitchas N., Galligo A. and Morgenstern J.: Algorithmes rapides en séquentiel et parallèle pour l'élimination des quantificateurs en géométrie élémentaire. Sélection d'exposés 1986–1987, Vol. I, Publ. Math. Univ. Paris 7 32, 103–145.
13. Fitchas N., Galligo A. and Morgenstern J.: Precise sequential and parallel complexity bounds for the quantifier elimination over algebraically closed fields. J. Pure Appl. Algebra 67 (1990) 1–14.
14. Giusti M. and Heintz J.: Algorithmes –disons rapides– pour la décomposition d'une variété algébrique en composantes irréductibles et équidimensionnelles. Progress in Mathematics Vol. 94, Birkhäuser (1991) 169–193.
15. Giusti M. and Heintz J.: La détermination des points isolés et de la dimension d'une variété algébrique peut se faire en temps polynomial. Symposia Matematica Vol. XXXIV, Istituto Nazionale di Alta Matematica, Cambridge University Press (1993) 216–256.
16. Grosso A., Herrera N., Matera G., Stefanoni M.E. and Turull Torres J.M.: Un algoritmo para el cálculo del rango de matrices enteras en espacio polilogarítmico. Proc. 25th JAIIO (1996).
17. Heintz J. and Morgenstern J.: On the intrinsic complexity of elimination theory. Journal of Complexity 9 (1993) 471–498.
18. Heintz J. and Schnorr C.P.: Testing polynomials which are easy to compute. Proc. 12th STOC (1980) 262–280.
19. Kollár J.: Sharp effective Nullstellensatz. J. AMS 1 (1988) 963–975.
20. Krick T. and Pardo L.M.: A computational method for diophantine approximation. To appear in Proc. MEGA'94, Birkhäuser Verlag (1995).
21. Mulmuley K.: A fast parallel algorithm to compute the rank of a matrix over an arbitrary field. Proc. 18th STOC (1986) 338–339.
22. Reif J.: Logarithmic depth circuits for algebraic functions. SIAM J. on Comp. 15 (1986) 231–242.
23. Renegar J.: On the computational complexity and geometry of the first-order theory of the reals. J. of Symbolic Comput. 13(3) (1992) 255–352.
24. Wegener I.: The complexity of boolean functions. Wiley–Teubner Series in Computer Science (1987).
