ENTROPY STRUCTURE

Tomasz Downarowicz May 11, 2004

Abstract.

Investigating the emergence of entropy on different scales, we propose an “entropy structure” as a kind of master invariant for the entropy theory of topological dynamical systems. An entropy structure is a sequence of functions hk on the simplex of invariant measures which converges to the entropy function h, and which falls into a distinguished equivalence class defined by a natural equivalence relation capturing the “type of nonuniformity in convergence”. An entropy structure recovers several existing invariants, including the symbolic extension entropy hsex and the Misiurewicz parameter h∗ . Entropy theories of Misiurewicz, Katok, Brin-Katok, Newhouse, Romagnoli, Ornstein-Weiss and others all yield candidate sequences (hk ); we determine which of these exhibit the correct type of convergence and hence become entropy structures. One of the satisfactory sequences arises from a new treatment of entropy theory strictly in terms of continuous functions (in place of partitions or covers). The results allow the computation of symbolic extension entropy without reference to zero dimensional extensions. A new light is shed on the property of asymptotic h-expansiveness.

1991 Mathematics Subject Classification. 37A35, 37B40, 37B10.
Key words and phrases. entropy structure, entropy function, topological entropy, fiber entropy, conditional entropy, symbolic extension entropy.
Supported by the KBN grant 2 P03 A 04622.

1. Introduction

We will consider topological dynamical systems of the form (X, T) where X is a compact metric space and T : X → X is a homeomorphism. In the last section we will also address the case of noninvertible continuous maps. It is intuitively obvious that chaotic behavior in a topological dynamical system may occur on many different scales. Observing the system through a device with limited resolution, we may always be missing some small-scale dynamics. With ever increasing resolution, ever more dynamical complexity may emerge, evenly or unevenly spread over various parts of the system. In this paper we offer a refined quantitative description of this phenomenon in the language of entropy. When properly defined, entropy detectable at a fixed (topological) resolution becomes an upper semicontinuous function hk on the simplex of invariant measures (by which we mean Borel probabilities) of the system. With increasing resolution the functions hk converge to the entropy function h.


As we soon explain, it is essential to know how much this convergence differs from uniform; in other words, we will need to characterize the "type of nonuniformity" of this convergence. A coarse manifestation of such nonuniformity is the defect of upper semicontinuity of the entropy function, which affects the existence of a measure of maximal entropy. This aspect attracted the attention of several authors long ago. In 1976 M. Misiurewicz [M] introduced a parameter h∗ (we insist on changing the original name "topological conditional entropy" to "topological tail entropy"), a well-known and often used upper bound for the mentioned defect. As we shall prove (see 5.1.5), it measures precisely the global fault of uniformity:

h∗ = lim_k ||h − hk||_sup.

Some other authors (e.g. Newhouse [N], Buzzi [B]) considered similarly defined upper bounds of the defect of h; as follows from the considerations in this paper, they have all dealt with the same parameter h∗. The interplay between entropy and scale, however, is much too complex for the single number h∗ to describe. It is the entropy theory of symbolic extensions that tests the effectiveness of the developed tools. This theory reveals, for instance, that a multi-scale topological dynamics may be, in a sense, "more chaotic" than a single-scale dynamics of the same entropy. To be specific: a symbolic extension of (X, T) is a subshift (Y, S) together with a continuous factor map π : Y → X. Associated to a given extension π is its extension entropy function hπext on the simplex of T-invariant measures: hπext(µ) = sup{hν(S) : π(ν) = µ}. Because a symbolic extension is expansive, it must capture at a single finite resolution (specified by the expansive constant) the complexity of the dynamics for T on all scales. The infimum of all such functions over all possible symbolic extensions, called the symbolic extension entropy function hsex, is often (sometimes generically [D-N]) larger than h, revealing the "additional chaos" hidden in the multi-scale structure of the dynamics. We emphasize that the gap hsex − h (the residual entropy function) need not be bounded by h∗. For instance, for systems with finite topological entropy htop, h∗ is never larger than htop, while hsex can be much larger, even infinite (i.e., symbolic extensions may not exist; [B-F-F], [D], [D-N]). The extension entropy functions for a given system T were characterized in [B-D] as the "affine superenvelopes" of a carefully chosen sequence of functions hk converging to the entropy function h on the T-invariant measures. That characterization leads to rich invariants, examples and applications ([B-D], [D-N]) inaccessible from the previous works [B-F-F], [D]. Still, the definition of this sequence is indirect (using some special zero dimensional extensions of (X, T)) and quite complicated. Also, there is no canonical sequence (hk) which provides a topological invariant.


The goal of this work is to introduce a meaningful invariant of topological conjugacy between topological dynamical systems, capturing the relation between entropy and resolution, having a reasonably simple definition, and computable in various natural ways. For this we introduce an elementary equivalence relation (called uniform equivalence) among monotone sequences of nonnegative functions H = (hk)k∈N defined on any domain, reflecting "the faults of uniformity" in the convergence of H. Then we define an entropy structure for a topological dynamical system T to be any nondecreasing sequence of nonnegative functions on the simplex of T-invariant measures uniformly equivalent to the specific sequence (hk) considered in [B-D] (here called the reference entropy structure). From this point "entropy structure" will be used both for the entire equivalence class and for any member sequence H.

To justify entropy structure as a kind of master entropy invariant for topological dynamics, we list a trinity of major entropy characteristics which are determined by an entropy structure H.

(A) The entropy function h on the simplex of T-invariant measures. The function h quantifies the distribution of chaos on the supports of invariant measures, regardless of scale.
(B) The family of all extension entropy functions associated to symbolic extensions. Every such function quantifies the single-scale chaos needed to capture, in a symbolic extension, the multi-scale chaos near each invariant measure.
(C) The transfinite sequence u^H_α. This sequence (defined below, see 2.2.1) reflects the way in which small and large scale chaos accumulates near each measure.

The trinity above determines other entropy invariants, as follows.
• The topological entropy htop equals sup h.
• The symbolic extension entropy function hsex is characterized as the minimal superenvelope of the entropy structure, and also as h + u^H_{α0}.
• The topological symbolic extension entropy hsex(X, T) equals sup hsex.
• The tail entropy function h∗ equals u^H_1.
• The topological tail entropy h∗(X, T) equals sup h∗.
• The order of accumulation of entropy α0 = α0^H equals (by definition) the "last" index in the transfinite sequence (always a countable ordinal).

The last parameter counts, roughly speaking, in how many "layers" small scale dynamics accumulates near large scale dynamics, and classifies all topological dynamical systems into ω1 classes. Class zero (uniform convergence of the entropy structure) coincides with the familiar property of asymptotic h-expansiveness ([M], [B-F-F], [B-D]). A finite value of α0^H together with finite topological entropy implies the existence of symbolic extensions and allows quick estimates of hsex. The main advantage of defining the entropy structure as the relevant equivalence class is that it admits sequences of entropy functions hk conceptually much simpler than the reference entropy structure, and defined directly on the invariant measures of (X, T), which in particular frees us from consulting any special extensions. Moreover, sequences of not necessarily upper semicontinuous functions are admitted.


Entropy structures actually occur among fairly well-known existing notions (here referred to as candidates). Each candidate represents the entropy of the invariant measure with respect to a sequence of certain refining resolutions; for instance, a resolution can be given by a finite family of continuous functions, by a finite open cover, or by ε-balls. This is how the relation between entropy and resolution is captured directly on X. There are actually two types of candidates: increasing (as resolution refines) and decreasing. The latter type is a nonincreasing sequence of functions θk which usually represent some kind of conditional entropy and may be interpreted as measuring the chaos not detected at a given resolution. We say that (θk) is an entropy structure of the decreasing type if the sequence (h − θk) is uniformly equivalent to a reference entropy structure. The convergence of an increasing candidate notion to the entropy function h, usually proved already by the inventors of that notion, is by itself far from sufficient to ensure that it is an entropy structure. Therefore we need to supplement the existing entropy theory with numerous inequalities, variational principles, and other facts connecting these notions (some of those facts are perhaps of independent interest). The main effort arises from difficulties in deriving estimates uniform on invariant measures, and in comparing the dynamics observed on partitions and on open covers. Among the passing increasing candidates we find the entropy with respect to a family of continuous functions. This notion seems to us of independent interest for entropy theory, as it enjoys the generality and technical convenience one might ask of a functional formulation (it has been successfully applied to define entropy of stochastic and Markov operators, see [D-F]). Next comes the entropy introduced in 1980 by A. Katok, based on cardinalities of unions of (n, ε)-balls achieving a certain positive measure value; then a notion introduced three years later by M. Brin and A. Katok, obtained for (n, ε)-balls by analogy to the Shannon-McMillan-Breiman Theorem; then the measure-theoretic entropy of a cover, recently presented by P. P. Romagnoli; and finally, inspired by a work of D. Ornstein and B. Weiss, the entropy evaluated via return times to (n, ε)-balls. Local entropy as defined in 1989 by S. Newhouse is a passing candidate of the decreasing type. The seemingly most natural and simple increasing candidate for an entropy structure, the sequence of entropy functions hµ(Ak, T) with respect to a refining sequence of partitions, is, without any further assumptions on the partitions, not very good. It does converge to h, but possibly essentially faster (more uniformly) than the entropy structure. Consequently, it provides only lower bounds for h∗ and hsex. If the elements of the partitions Ak all have "small boundaries" (measure zero for every T-invariant measure), then the sequence becomes a passing candidate; however, such partitions do not always exist, and are complicated to construct when they do. Another decreasing candidate is Misiurewicz's topological conditional entropy given a cover, modified so that it becomes a function of an invariant measure. This candidate is not an entropy structure; it does determine h∗, but in general gives only upper (often infinite) estimates of hsex.


We refer the reader to [D] and [B-D] for examples exhibiting the variety of possible phenomena related to entropy structure and for the independence of certain invariants. The paper is organized in the following way: In section 2 we consider abstract sequences of functions on a compact domain, introducing the superenvelopes, the associated transfinite sequence, and the uniform equivalence relation. Section 3 contains preliminaries on entropy and major entropy invariants. In section 4 we replicate from [B-D] the necessary background for understanding the reference entropy structure. The definition of the key notion of entropy structure as a uniform equivalence class and the theorems about derivation of major entropy invariants are contained in section 5. Section 6 introduces all candidate notions, both new and already existing in the literature. The passing candidates are identified in section 7, the most elaborate and technical part of the paper. In the presentation of the remaining, failing candidates (section 8) the largest effort is devoted to a fact used later in section 9 to prove that h∗, as defined by Misiurewicz, is in fact an entropy structure-dependent parameter. Section 9 also contains a review of the asymptotic h-expansiveness property. Finally, section 10 addresses the actions of noninvertible continuous maps and verifies that most of the theory passes to this case via natural extensions, almost unchanged, as long as by symbolic extensions we continue to mean subshifts over bi-infinite sequences (see Theorem 10.0.1).

2. Sequences of functions

In this section we conduct an abstract study of certain features of increasing sequences of nonnegative functions on a compact domain. Compactness is clearly not necessary for all our definitions and statements, but we assume it throughout for the sake of establishing the working environment. To avoid a confusing play of words we will use increasing and decreasing in the meaning of nondecreasing and nonincreasing, respectively.

2.1 Preliminaries on superenvelopes

We begin with some basics. A real-valued function f defined on a compact domain P is said to be upper semicontinuous (u.s.c.) if for every x ∈ P,

f(x) ≥ lim sup_{x′→x} f(x′).

It is known that every u.s.c. function on a compact domain is bounded from above. We will also use the following standard fact:

(2.1.1) If a sequence of u.s.c. functions decreases to a continuous limit then the convergence is uniform.
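To see that the u.s.c. assumption in (2.1.1) cannot be dropped, consider on P = [0, 1] the functions θk := 1_{(0,1/k)}, the indicators of the open intervals (0, 1/k). They decrease to the continuous limit 0, yet sup θk = 1 for every k, so the convergence is not uniform; of course, the functions θk are not u.s.c.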


For a function f bounded from above, by f̃ we will denote the u.s.c. envelope of f (for a compound expression g we write (g)∼), defined by

f̃(x) := max{f(x), lim sup_{x′→x} f(x′)}.

In fact, f̃ is the smallest u.s.c. function above f. If f is unbounded from above or infinite, we set f̃ ≡ ∞. For a function f bounded from above, the difference f̃ − f is called the defect of upper semicontinuity of f (or the defect, for short). It is elementary to see that

(2.1.2)    (f + g)∼ ≤ f̃ + g̃, and consequently the defect of f + g does not exceed the sum of the defects of f and g.
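As a simple illustration of these notions, take P = [0, 1] and f = 1_{(0,1]} (so f(0) = 0 and f(x) = 1 for x > 0). Then f̃ ≡ 1, and the defect f̃ − f equals 1 at the point 0 and vanishes elsewhere; it records exactly where, and by how much, f fails to be u.s.c.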

We can now generalize the notion of a superenvelope of an increasing sequence of functions studied in [B-D], where it is defined only for sequences of u.s.c. functions with u.s.c. differences.

2.1.3 Definition. Let H = (hk)k∈N be a sequence of functions defined on a compact space P, increasing to a bounded limit h. By a superenvelope of H we mean any function E ≥ h defined on P which, at every x ∈ P, satisfies the condition

lim_{k→∞} [(E − hk)∼(x) − (E − hk)(x)] = 0.

In any case (including h unbounded or infinite), we admit the constant ∞ function as a superenvelope of H.

2.1.4 Lemma. If h is bounded then, for every bounded superenvelope E of H, the function E − h is u.s.c.

Proof. We have

0 = lim_k [(E − hk)∼ − (E − hk)] = lim_k (E − hk)∼ − (E − h) ≥ (E − h)∼ − (E − h) ≥ 0,

hence the defect of E − h vanishes. ¤

Denote by EH the infimum of all superenvelopes of H. This function is either bounded or it is the constant ∞.

2.1.5 Lemma. EH is itself a superenvelope of H.

Proof. The statement holds if EH ≡ ∞. If not, fix x ∈ P and ε > 0, and let E be some bounded superenvelope of H such that (E − EH)(x) < ε. Next, let k be large enough so that (E − hk)∼(x) − (E − hk)(x) ≤ ε. Then

(EH − hk)∼(x) − (EH − hk)(x) ≤ (E − hk)∼(x) − (E − hk)(x) + (E − EH)(x) < 2ε. ¤

In the special case where every difference hk+1 − hk is u.s.c. we have the following characterization (which has been used in [B-D] as the definition) of bounded superenvelopes:


2.1.6 Lemma. Let H = (hk) be such that hk+1 − hk is u.s.c. for each k, and let E ≥ h be a function on P. Then E is a bounded superenvelope of H if and only if E − hk is u.s.c. for every k.

Proof. If the last condition is satisfied then E is obviously a superenvelope. Conversely, let E be a bounded superenvelope of H. For every k′ > k the function hk′ − hk is u.s.c., so its defect vanishes, and we have

(E − hk)∼ − (E − hk) = ((E − hk′) + (hk′ − hk))∼ − ((E − hk′) + (hk′ − hk)) ≤ [(E − hk′)∼ − (E − hk′)] + [(hk′ − hk)∼ − (hk′ − hk)] = (E − hk′)∼ − (E − hk′) → 0 as k′ → ∞.

Hence E − hk is u.s.c. ¤

If, in addition, at least one function hk is u.s.c., then every bounded superenvelope E can be represented as a sum of u.s.c. functions, E = (E − hk) + hk, so it is itself a u.s.c. function. We quote one more fact from [B-D] (see Theorem 4.3 there):

2.1.7 Fact. If H defined on a Choquet simplex P has u.s.c. differences and consists of affine functions, then EH coincides with the pointwise infimum of all affine superenvelopes. ¤

2.2 Transfinite sequence, order of accumulation

2.2.1 Definition. Let H be an increasing sequence of functions converging to a bounded limit h. Let θk = h − hk. We define the transfinite sequence associated to H by setting

(0)    u0 = u^H_0 ≡ 0,

then, for an ordinal α,

(α)    u_{α+1} = u^H_{α+1} := lim_{k→∞} (uα + θk)∼.

Notice that if uα is bounded then this limit is decreasing and also bounded. Finally, for a limit ordinal β,

(β)    uβ = u^H_β := (sup_{α<β} uα)∼.

(Note that uβ ≡ ∞ can occur.) If h is unbounded or infinite, we set uα ≡ ∞ for all α ≥ 1. It is obvious that uα cannot decrease with α and if uα+1 = uα then uβ = uα for all ordinals β ≥ α. By an easy argument involving well ordered increasing sequences of compact sets it follows that there are at most countably many ordinals with different functions uα (see page 13 in [B-D]). In other words, the transfinite sequence of functions uα increases until it stabilizes at some countable α0 . In this sense α0 is the last (essential) ordinal for this procedure.
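The following elementary example illustrates how nonuniform convergence is recorded by the transfinite sequence. Let P = {0} ∪ {1/n : n ∈ N} and let hk be the indicator function of {1, 1/2, ..., 1/k}. Then H = (hk) increases to h, the indicator of P ∖ {0}, and θk = h − hk is the indicator of {1/n : n > k}. The envelope θ̃k equals 1 on {0} ∪ {1/n : n > k}, so u1 = lim_k θ̃k is the indicator of {0}. One checks that (u1 + θk)∼ = θ̃k, hence u2 = u1 and the transfinite sequence stabilizes already at α = 1: the entire failure of uniformity of the convergence hk → h is concentrated, in a single layer, at the point 0.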


2.2.2 Definition. The smallest (countable) ordinal α0 = α0H for which uα0 +1 = uα0 will be called the order of accumulation of H. The above terminology is motivated by many examples, where α0H appears to be strongly related to the topological order of accumulation. The following fact, proved in [B-D] (see Theorem 3.3 there), establishes the connection between superenvelopes and the transfinite sequence: 2.2.3 Fact. Let H be an increasing sequence of u.s.c. functions with u.s.c. differences, converging to a bounded limit h. Then uα0 = EH − h.

¤

We remark that for H with u.s.c. differences another way of deriving the function EH is provided in [B-D] (see Theorem 2.18 there). We will not use that characterization in this paper. It is immediately seen (by (2.1.2)) that uα ≤ αu1 for any integer α. Thus, with the assumptions of Fact 2.2.3, if the order of accumulation α0 happens to be finite, then

(2.2.4)    EH ≤ h + α0 u1.

2.3 Uniform equivalence

The definition below is crucial in further considerations:

2.3.1 Definition. Let H = (hk) and H′ = (h′k) be two increasing sequences of functions on a compact domain P. We say that H′ uniformly dominates H (we write H′ ≥uni H) if for every index k and every γ > 0 there exists an index k′ such that h′k′ > hk − γ. We say that H and H′ are uniformly equivalent if both H′ ≥uni H and H ≥uni H′. Uniform equivalence of decreasing sequences T = (θk) is defined analogously; T′ uniformly yields to T (we write T′ ≤uni T) if for every index k and every γ > 0 there exists an index k′ such that θ′k′ < θk + γ. Because "uniformly yields to" has a different meaning than "is uniformly dominated by", we will never reverse the order in writing these relations.

It is immediate to see that uniform equivalence is in fact an equivalence relation (separately among increasing and decreasing sequences of functions). If H = (hk) is increasing and has a finite limit h then the sequence of tails T = (θk) with θk := h − hk is decreasing, and two sequences converging to a common limit are uniformly equivalent if and only if their tails are. It is also immediate to see that if the sequences (θk) and (θ′k) are uniformly equivalent then so are the corresponding sequences of u.s.c. envelopes (θ̃k) and (θ̃′k). The following facts will be important:


2.3.2 Theorem. Let H = (hk) and H′ = (h′k) be two uniformly equivalent increasing sequences. Then
(1) lim H = lim H′;
(2) u^H_α = u^{H′}_α for every ordinal α;
(3) α0^H = α0^{H′};
(4) H and H′ have the same superenvelopes;
(5) EH = EH′.

Proof. The first two statements follow straightforwardly from the appropriate definitions. Then (3) follows from (2). To prove (4), let E be a bounded superenvelope of H. Fix x and γ. Let k be large enough so that (E − hk)∼(x) − (E − hk)(x) < γ/2. Let k′ ≥ k be such that h′k′ > hk − γ/2 (at all points). We have

(E − h′k′)∼(x) − (E − h′k′)(x) ≤ [(E − hk)∼(x) − (E − hk)(x)] + [(hk − h′k′)∼(x) − (hk − h′k′)(x)] ≤ γ.

Statement (5) is then derived from (4) and the definition of EH as the minimal superenvelope. ¤

3. Preliminaries on entropy, major entropy invariants

In this section we recall some basics of the entropy theory of both measure-preserving transformations and topological dynamical systems, reduced to the minimum necessary for the further considerations of this work.

3.1 A few notational conventions

Let X be a set and let A and B be two families of nonempty subsets of X (in the future these will be either partitions or covers). One defines A ∨ B as the family of all nonempty intersections A ∩ B with A ∈ A and B ∈ B. We write A < B if every element of A is contained in some element of B. Let T : X → X. We denote

A^n := ⋁_{i=0}^{n−1} T^{−i}A.

For x ∈ X, by A^n_x we denote any (the, if unique) element (cell) of A^n which contains x. We write Ax for A^1_x. We will use letters A, B for finite measurable partitions, while U or V is reserved for finite open covers. For a cover U the cell U^n_x is usually not unique, and we will often use U^n_x (or V^n_x) to designate the union of all cells of U^n containing x (it will be clear from the context which meaning is intended). By Leb(U) we will denote the Lebesgue number of the cover U: every open set of this (hence of any smaller) diameter is contained in an element of U. Talking about partitions or covers we will often skip the adjectives "finite", "measurable" or "open", because no other partitions or covers will be considered. Also, in metric spaces, by "measurable" (partition, set or function) we will always mean "Borel-measurable". If d is a metric in X then diamd(A) (or just diam(A), if the metric d is fixed) denotes the diameter of a set A. For a partition (or cover), diam(A) denotes the


maximal diameter of a member of A. The notation d^n will be used in a topological dynamical system (X, T) for the metric d^n(x, y) = max{d(T^i x, T^i y) : 0 ≤ i < n}. The ball of radius ε around x in this metric will be denoted by B^{(n,ε)}_x. Note that the d^n-diameter of B^{(n,ε)}_x can reach up to 2ε. A set E is called (d^n, ε)-separated if x ≠ y implies d^n(x, y) > ε for any x, y ∈ E. The cardinality of a set E will be denoted by #E. In compact spaces, all (d^n, ε)-separated sets are obviously finite.

Another convention concerns harmonic and supharmonic functions on measures. For a topological dynamical system (X, T), by PT(X) we will denote the set of all invariant probability measures equipped with the topology of weak* convergence. It is well known that PT(X) is a nonempty metrizable compact Choquet simplex. For a probability M on PT(X) one defines the barycenter of M by

µM = ∫_{PT(X)} µ dM(µ)    (the Pettis integral).

For example, each invariant measure µ is the barycenter of a unique probability Mµ supported by the ergodic measures; this is the ergodic decomposition of an invariant measure (see e.g. [D-G-S], chapter 13), and this is also the meaning of PT(X) being a Choquet simplex. Let f be a real-valued function defined on PT(X). We say that f is harmonic or supharmonic if, for any probability M on PT(X),

f(µM) = ∫ f dM    or    f(µM) ≥ ∫ f dM,

respectively (the upper integral is used to admit nonmeasurable functions). Notice that without additional assumptions on f the ordinary notion of being affine (concave) is weaker than harmonic (supharmonic). However, if f is affine and upper (or lower) semicontinuous then it is a monotone limit of affine continuous functions, and hence it is harmonic. Similarly, a u.s.c. concave function is supharmonic. It is customary to use the letter H for the entropy of a partition. Lower case h indicates a limit notion involving averaging along powers of T. To more easily distinguish between measure-theoretic and topological entropy notions we will use boldface H and h when we believe that a given entropy notion is more topological. Throughout this paper "the Shannon-McMillan-Breiman Theorem" will be abbreviated by "S-M-B".

3.2 A subadditivity lemma

The lemma below is purely technical and will be used only in sections 8 and 9. But because it is very general (it uses no notion of entropy), we decided to place it before the exposition on entropy. We say that a sequence is ascending (or descending) if it converges to its supremum (or infimum), admitting infinite limits. Let (P, T) be any space with any self-map. A sequence of nonnegative functions (Hn)n∈N on P is called a subadditive process (or a subadditive cocycle, in other contexts) if, for any x ∈ P and any m, n ∈ N,

H_{n+m}(x) ≤ H_n(x) + H_m(T^n x).


3.2.1 Lemma. (See [D-S1] for similar statements.) If P is a compact convex set and (Hn) is a subadditive process consisting of concave functions, with H1 bounded by a constant a, then for any m < n and x ∈ P,

(1/n) Hn(x) ≤ (1/m) Hm(x_n) + 2am/n,

where x_n = (1/n) Σ_{i=0}^{n−1} T^i x.

Proof. Let n = pm + q with q < m. Then, for any 0 ≤ j < m,

Hn(x) ≤ ja + Σ_{i=0}^{p−2} Hm(T^{im+j} x) + (m − j + q)a.

Averaging over 0 ≤ j < m we get

Hn(x) ≤ (1/m) Σ_{i=0}^{pm−m−1} Hm(T^i x) + (m + q)a ≤ (1/m) Σ_{i=0}^{n−1} Hm(T^i x) + 2ma ≤ (n/m) Hm(x_n) + 2ma,

where the last inequality uses the concavity of Hm. Dividing by n we complete the proof. ¤

We take this opportunity to recall that a positive sequence (a_n)_{n∈N} is called subadditive if a_{n+m} ≤ a_n + a_m. For example, if (Hn) is a subadditive process then a_n = sup_{x∈P} Hn(x) and b_n = ∫ Hn(x) dµ are subadditive sequences (the latter for a T-invariant measure µ). It is well known that for a subadditive sequence a_n the sequence (1/n) a_n descends in n.

3.3 Measure-theoretic entropy, topological entropy and topological tail entropy

Let (X, µ, T) be a measure-preserving transformation, and let A be a finite partition of X. Classically, one denotes

Hµ(A) := − Σ_{A∈A} µ(A) log(µ(A)).

It is known that Hµ(A^n) is a subadditive sequence, hence divided by n it descends, allowing one to define

hµ(A, T) := lim_{n→∞} (1/n) Hµ(A^n),

the entropy of the process on A with respect to µ. Next, the entropy of T with respect to µ is defined as

hµ(T) := sup_A hµ(A, T),

where A ranges over all partitions of X. If X is a metric space then the above supremum is realized as a nondecreasing limit for any refining sequence of partitions. In considerations of a topological dynamical system (X, T ), where the transformation T remains fixed and the measure µ ranges over the set PT (X) we want to view hµ (A, T ) and hµ (T ) as functions of µ. This leads to our first two basic definitions:


3.3.1 Definition. On PT(X) the entropy function with respect to a partition A is defined as h(µ, A) := hµ(A, T). This function is bounded by log #A.

3.3.2 Definition. On PT(X) the entropy function is defined as h(µ) := hµ(T). This function may be unbounded or assume the value ∞.

It is known that h is a harmonic function of the measure (see [D-G-S], chapter 13). We now recall the well-known construction of topological entropy, mainly in order to establish the notation. For a finite open cover U of X we set

H(U) := log min{#W : W ⊂ U, ⋃W ⊃ X},    and    h(U) := lim_n (1/n) H(U^n).

The limit exists by subadditivity. Also, for a metric d on X and an ε > 0, we let

H(d, ε) := log max{#E : E is a (d, ε)-separated set in X},
h(ε) := lim sup_{n→∞} (1/n) H(d^n, ε).

One easily shows that H(d^n, ε) ≤ H(U^n) ≤ H(d^n, ε′) whenever diam(U) < ε and ε′ < (1/2) Leb(U). Hence also h(ε) ≤ h(U) ≤ h(ε′).

3.3.3 Definition. One defines the topological entropy of T as

htop = htop(X, T) := sup_U h(U) = lim_{ε→0} h(ε).

The above supremum can be replaced by an increasing limit along a refining sequence of covers; likewise, the limit as ε → 0 is increasing. The famous relation between topological and measure-theoretic entropy is the

3.3.4 Variational Principle. (Goodwyn [Gw], Goodman [Gm], see also [M1] for a simpler proof.)

htop = sup_{µ∈PT(X)} h(µ). ¤
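For example, for the full shift on l symbols, i.e., X = Λ^Z with #Λ = l and T the shift map, one has htop = log l, while a Bernoulli measure with probability vector (p1, ..., pl) has entropy −Σ pi log pi ≤ log l, with equality exactly for the uniform vector. Thus the supremum in the Variational Principle is attained here, by the uniform Bernoulli measure.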

In [M2] we find the following definition (we have slightly changed the original notation to fit it to our convention):


3.3.5 Definition. Let U, V be open covers of X. Denote
(a) H(n, U|V) := log max{#(minimal subfamily of U^n covering V^n) : V^n ∈ V^n};
(b) h(U|V) := lim_n (1/n) H(n, U|V) (by subadditivity this limit is descending);
(c) h(X|V) := sup_U h(U|V).
The last is called the topological conditional entropy of (X, T) given V.

Exactly the same notion h(X|V) will be obtained if we replace the minimal cardinality of a subfamily of U^n by the maximal cardinality of a (d^n, δ)-separated set (contained in V^n). However,

3.3.6 Definition. If we replace V^n in Definition 3.3.5 by the family of all (n, ε)-balls then we obtain h(X|ε), the topological conditional entropy of (X, T) given ε. We skip the obvious details (see also [M2]).

Clearly, whenever diam(V) < ε and ε′ < (1/2) Leb(V), then by an obvious inclusion argument we have h(X|ε′) ≤ h(X|V) ≤ h(X|ε). The topological tail entropy (originally given the name "topological conditional entropy") of the system (X, T) has been defined by Misiurewicz as

3.3.7 Definition. h∗ = h∗(X, T) := inf_V h(X|V) = lim_{ε→0} h(X|ε).
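For instance, an expansive system (in particular, a subshift) with expansive constant c satisfies h(X|ε) = 0 for every ε < c, and hence h∗(X, T) = 0; this reflects the classical fact that expansive systems are h-expansive. Positive tail entropy thus detects genuinely multi-scale behavior.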

Obviously, the infimum is achieved as a decreasing limit along any refining sequence of covers. 3.4 Extension entropy and symbolic extension entropy. Now suppose that (X, T ) is a topological factor of another topological dynamical system (Y, S), i.e., that there is a continuous surjection π : Y → X such that π ◦ S = T ◦ π. The conjugate map on measures (also denoted by π) sends PS (Y ) onto PT (X). The notion below is the natural way of measuring the chaos appearing in the extension on the fiber of an invariant measure. 3.4.1 Definition. For µ ∈ PT (X) we define hπext (µ) :=

sup

h(ν),

ν∈π −1 (µ)

and we call it the extension entropy function for π. It has been proved in [B-D] (using the fact that π preserves ergodic measures, see Proposition 2.7 there) that the above function is affine on invariant measures. An extension (Y, S) of (X, T ) is called symbolic if S is the shift map on a closed shift-invariant subset Y of ΛZ , where Λ is a finite set called the alphabet. For a symbolic extension the function hπext is also u.s.c., and hence harmonic.


3.4.2 Definition. We introduce the symbolic extension entropy function and the topological symbolic extension entropy:

(a)    hsex(µ) := inf_π hπext(µ),
(b)    hsex = hsex(X, T) := inf_π htop(Y, S),

as π : (Y, S) → (X, T) ranges over all possible symbolic extensions of (X, T). If there are no such extensions, we set hsex ≡ ∞ and hsex(X, T) = ∞. The function hsex is u.s.c.; it needn't be harmonic, but it is always supharmonic.

It has been proved in [B-D] (see Theorem 8.1 there) that

3.4.3 Symbolic Extension Variational Principle. hsex(X, T) = sup_{µ∈PT(X)} hsex(µ). ¤
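For example, if (X, T) is itself a subshift then the identity map is a symbolic extension, so hsex(µ) = h(µ) for every invariant measure and hsex(X, T) = htop(X, T): a single-scale (expansive) system carries no residual entropy. The interesting phenomena occur for systems which are not asymptotically h-expansive, where hsex may strictly exceed h.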

4. Essential partitions, principal extensions, standard product, and the reference entropy structure

This section basically replicates a part of [B-D], producing the reference to special extensions which enables the derivation of the symbolic extension entropy. If (X, T) is a zero dimensional dynamical system then, due to the existence of partitions which are at the same time covers (or covers which are partitions), the topological and measure-theoretic dynamics are easy to compare. The natural definition of an entropy structure in this case is as follows: pick in X a refining sequence (Ak) of partitions into closed and open sets (we call such partitions clopen) and set H = (hk), with hk(µ) = h(µ, Ak). The functions hk are nonnegative, affine and u.s.c., and their differences have the same properties. Obviously, lim H is the entropy function h. In certain nonzero dimensional cases we can use partitions which are "almost" clopen:

4.0.1 Definition. A partition A of X will be called essential if its elements have boundaries of measure zero for all invariant measures.

From the measure-theoretic point of view, these partitions behave like clopen ones; in particular, the corresponding entropy functions are u.s.c. However, topological notions of entropy are harder to view, because we cannot avoid some overlapping in open covers. In the general case (i.e., not necessarily zero dimensional) we will rely on the existence of certain entropy-preserving zero dimensional extensions:

4.0.2 Definition. Let ψ : (X′, T′) → (X, T) be a topological factor map between two topological dynamical systems. Then (X′, T′) is called a principal extension of (X, T) if h(µ′) = h(ψµ′) for every µ′ ∈ PT′(X′), or equivalently, when hψext ≡ h.

It is easy to see that if a system (X, T) has a refining sequence of essential partitions (Ak) then it has a zero dimensional principal extension (X′, T′) in which


the partitions Ak lift (up to invariant measure zero) to clopen partitions. This extension has the additional property that the conjugate map on invariant measures is 1-1, and hence is a homeomorphism. Moreover, the factor map ψ serves as a measure-theoretic isomorphism between (X′, µ′, T′) and (X, µ, T) whenever µ = ψµ′. We briefly recall the construction:

Construction: We view each partition Ak as a finite set of symbols (alphabet) and to each point x ∈ X we associate the double sequence (array)

x := (x_{k,n})_{k∈N, n∈Z} ∈ A_1^Z × A_2^Z × ...

(which we will call the (Ak)-array-name of x), determined by the rule x_{k,n} = A ⟺ A ∈ Ak and T^n x ∈ A. Then we let (X′, T′) be the closure (in the product topology over the discrete topologies in the finite sets Ak) of the collection of all so obtained array-names, with the horizontal shift map (T′x)_{k,n} = x_{k,n+1}. The factor map ψ : X′ → X is such that it sends each cylinder of the form {x : x_{k,n} = A_{k,n}, k ≤ k0, |n| ≤ n0} into the intersection ⋂ T^{−n}(cl A_{k,n}) of the closures of the corresponding sets (over the same set of indices). A point x ∈ X admitting multiple preimages in X′ must visit the boundary of some element of some partition Ak. Everywhere else ψ is injective. Since the set of such points x has measure zero for all invariant measures, the map ψ serves as a measure-theoretic isomorphism for every invariant measure; in particular, it preserves the entropy, as required in the definition of a principal extension.

The statement below has been derived in [B-D] (Theorem 7.6 there) from the work of E. Lindenstrauss and B. Weiss (Theorem 4.2 in [Li-W] and Theorem 6.2 in [Li]).

4.0.3 Fact. Let (X, T) be an arbitrary topological dynamical system with finite topological entropy and admitting a nonperiodic minimal factor. Then there exists a refining sequence of essential partitions Ak of X. ¤

If (X, T) does not have such a minimal factor, then we need one more intermediate system as defined below:

4.0.4 Definition. The direct product (X^s, T^s) := (X × 𝕋, T × τ) of (X, T) with a (fixed throughout this paper) irrational rotation (𝕋, τ) of the unit circle, endowed with the metric d^s((x, t), (y, s)) = max{d(x, y), |t, s|}, where |·, ·| is the arc length metric on 𝕋 normalized by 1/(2π), will be called the standard product.

Clearly, (X^s, T^s) is a principal extension of (X, T), by the projection onto the first axis. However, the conjugate map on invariant measures need not be injective. If (X, T) has finite entropy then (X^s, T^s) satisfies the assumptions of Fact 4.0.3, so it has a zero dimensional principal extension (X′, T′) which is also a principal extension of (X, T). We have just deduced that


4.0.5 Fact. Every finite entropy dynamical system (X, T) admits a zero dimensional principal extension. ¤

4.0.6 Question. By using the standard product we usually enlarge the set of invariant measures. Can one improve the above result so that each invariant measure lifts to exactly one invariant measure in the zero dimensional principal extension?

The zero dimensional principal extension obtained via (A^s_k)-array-names for a refining sequence of essential partitions A^s_k in the standard product (X^s, T^s) will be called the standard principal extension. With the existence of zero dimensional principal extensions granted, we are in a position to define the reference entropy structure.

4.0.7 Definition. By a reference entropy structure for a finite topological entropy dynamical system (X, T) we shall mean the sequence Href = (href_k) of functions defined on PT′(X′), where (X′, T′) is a zero dimensional principal extension of (X, T), and href_k(µ′) = h(µ′, A′k) for a refining sequence of clopen partitions A′k of X′.

It is clear that the functions href_k converge to href, which coincides with the entropy function h lifted from PT(X). The obvious disadvantage of the reference entropy structure is that it is not defined directly on PT(X). Soon we will learn how this can be overcome.

5. The entropy structure

The definition below introduces the key notion of this paper in almost full generality (we temporarily need the finite topological entropy assumption).

5.0.1 Definition. By an entropy structure of a finite topological entropy dynamical system (X, T) we shall mean any increasing sequence H = (hk) of functions defined on PT(X) such that for any choice of a zero dimensional principal extension (X′, T′) and any choice of clopen partitions A′k in X′, the lift of H to PT′(X′) is uniformly equivalent to the reference entropy structure Href = (h(·, A′k)).

It is obvious that if a sequence H as above exists (which is not granted yet), then any other sequence is an entropy structure if and only if it belongs to the same uniform equivalence class on PT(X) as H. Moreover, it then suffices to test it by verifying its uniform equivalence with only one choice of a reference entropy structure. Without causing confusion, "entropy structure" will mean both a particular sequence and the entire uniform equivalence class, depending on the context. Below we state one of the most important features of the entropy structure:

5.0.2 Theorem. The entropy structure (as a class) is a topological invariant in the following sense:
(1) If φ : (Y, S) → (X, T) is a topological conjugacy and H = (hk) is an entropy structure for (X, T) then G = (gk) defined on PS(Y) by gk(ν) = hk ∘ φ(ν) is an entropy structure for (Y, S).
(2) Any two entropy structures of the same system are uniformly equivalent.


Proof. Claim (1) is trivial, because topologically conjugate systems have the same principal extensions. As already noticed, claim (2) becomes obvious once the existence of any entropy structure is established. This will be done in section 7 (see Theorem 7.0.1). ¤

We also summarize some other properties of this type:

5.0.3 Theorem.
(1) If (X, T) is a topological factor of (Y, S), then HY ≥uni HX, where HY and HX denote the entropy structure of (Y, S) and the (lifted to PS(Y)) entropy structure of (X, T), respectively.
(2) If (X′, T′) is a principal extension of (X, T) then HX′ and the (lifted) HX are uniformly equivalent.
(3) If H = (hk) is the entropy structure for (X, T) and m ∈ N then (mhk) is the entropy structure for (X, T^m).
(4) A uniform equivalence class defined on any metrizable Choquet simplex is (up to affine homeomorphism) an entropy structure for some dynamical system (and then also for some minimal zero dimensional one) if and only if it contains an increasing sequence of nonnegative u.s.c. functions with u.s.c. differences.

Proof. Assertion (1) follows from Theorem 7.5 in [B-D], (2) and (3) are obvious, and (4) results in one direction from Theorem 3 in [D-S2]; in the other direction it will follow from Theorem 7.0.1 and the properties of H^fun. ¤

Notice that in spite of (4) no semicontinuity properties are required from a particular entropy structure H. This will save us elaborate, and often vain, efforts when defining candidate notions.

5.1 Master invariant theorems

Note that, by Theorem 2.3.2, objects such as lim H, superenvelopes, EH, u^H_α, and α0^H do not depend on the choice of a particular sequence H representing the entropy structure. They are parameters of the entropy structure as an equivalence class, and thus they also become topological invariants of (X, T). They coincide with the entropy characteristics or resulting invariants (as listed in the introduction) as follows:

(A) The fact that lim H coincides with the entropy function h is obvious.

(B) The symbolic extension entropy versus superenvelope result of [B-D] can now be formulated in terms of entropy structures:

5.1.1 Theorem. A function E : PT(X) → R equals hπext for some symbolic extension π of (X, T) if and only if E is a bounded affine superenvelope of the entropy structure H of (X, T). In particular, hsex = EH = h + u_{α0} and hsex(X, T) = sup EH. The function hsex is attained as hπext for a symbolic extension π if and only if EH is finite and affine.

Proof. Every such superenvelope E lifted to PT′(X′) has two properties: it is constant on fibers of ψ : PT′(X′) → PT(X) and, by our Theorem 2.3.2(4), it is a


(bounded affine) superenvelope of the reference entropy structure Href. By Theorem 5.5 Case Z in [B-D] it represents h^{π′}_ext for some symbolic extension π′ of (X′, T′). Then π = ψ ∘ π′ is a symbolic extension of (X, T) and hπext = E.

Conversely, let π : (Y, S) → (X, T) be a symbolic extension and let E = hπext. We already know that such an E is affine. Then, by Theorem 7.5 in [B-D], there exists a principal symbolic extension ψ′ : (Y′, S) → (Y, S) which is, at the same time, a symbolic extension of (X′, T′), π′ : (Y′, S) → (X′, T′), and the diagram commutes: π ∘ ψ′ = ψ ∘ π′. Then

E(µ) = h^{π∘ψ′}_ext(µ) = h^{ψ∘π′}_ext(µ) = sup_{µ′∈ψ^{−1}(µ)} h^{π′}_ext(µ′).

Now, E′ = h^{π′}_ext is, by Theorem 5.5 Case Z in [B-D], a bounded affine superenvelope of Href. So, by Theorem 2.3.2(4), it is a (bounded affine) superenvelope of H lifted to PT′(X′). Choose H to be a sequence with u.s.c. differences (H^fun, for example). By Lemma 2.1.6, for each k, E′(µ′) − hk(µ) is nonnegative and u.s.c. as a function of µ′ (µ = ψ(µ′)). Therefore

E(µ) − hk(µ) = ( sup_{µ′∈ψ^{−1}(µ)} E′(µ′) ) − hk(µ) = sup_{µ′∈ψ^{−1}(µ)} ( E′(µ′) − hk(µ) )

is nonnegative and also u.s.c. (see Remark 2.6 in [B-D]), which implies that E is a superenvelope of H. The remaining statements now follow directly from Facts 2.1.7, 2.2.3, 3.4.3, and [B-D] Theorem 8.3. ¤

Remark. It is now immediate to see that the characterization of Theorem 5.1.1 can be formulated exclusively in terms of the reference entropy structure, i.e., relying only on the zero dimensional case, as follows: a bounded affine function E on PT(X) is an extension entropy function for a symbolic extension if and only if its lift to PT′(X′) is a superenvelope of the reference entropy structure. The same statement is actually proved in [B-D] (combine Theorem 5.5 and Lemma 7.9 there) without using uniform equivalence. Instead, a rather complicated argument (involving partitions and entropies) has been used to show that Href and the lifted H^Leb (see the next section for definitions) have the same superenvelopes. Apparently, uniform equivalence provides a more convenient and more general tool.

(C) We already noted that the transfinite sequence uα is a topological invariant. We will use u1 to define a new topological invariant h∗ and then we state its connection with the familiar Misiurewicz parameter h∗(X, T).

5.1.2 Definition. We define the tail entropy function by h∗(µ) := u^H_1(µ).

5.1.3 Proposition. For each invariant measure µ, h∗(µ) bounds from above the defect of upper semicontinuity of h at µ.

Proof. By (2.1.2), h̃ − h ≤ (h − hk)∼ + h̃k − h. Using an entropy structure consisting of u.s.c. functions (H^fun, for instance), the last expression equals θ̃k − θk ≤ θ̃k. Passing with k to infinity we obtain h̃ − h ≤ u1. ¤


Remark. In [D-S1] we have used the name "tail entropy function" for a slightly different notion (denoted h∗(µ)) based on topological fiber entropies in zero dimensional systems, which does not coincide with the one defined above. In particular, that function bounds the defect of h from below (see [D-S1], Theorem 6). Because the Tail Variational Principle does not hold for that function, we believe the new notion deserves the name better than the old one.

5.1.4 Tail Variational Principle. We have

h∗(X, T) = sup_{µ∈PT(X)} h∗(µ).

The proof will be provided in section 9. We then also have

(5.1.5)    h∗(X, T) = lim_k sup_{µ∈PT(X)} θk(µ).

This follows from the fact that in the expression sup_µ inf_k θ̃k(µ) we can change the order of supremum and infimum (see Proposition 2.4 in [B-D]), and then skip the u.s.c. envelopes.

Remarks. (1) Ledrappier [Le] proved another variational principle for h∗ equating it to the pointwise supremum of a certain function defined on the measures of the cartesian square of (X, T). He also deduced that a principal extension preserves h∗. Our Tail Variational Principle uses no indirect notions and, via Theorem 5.0.3(2), also implies the preservation of h∗ under principal extensions.
(2) By Proposition 5.1.3 the number sup h∗ bounds the defect of h globally. This is also the most frequently used property of h∗. The fact that sup h∗ and the topological tail entropy h∗(X, T) are exactly the same has a surprisingly complicated proof requiring many technical preparations. Apparently, it is the purely topological nature of Misiurewicz's definition that makes it so distant from our measure-related approach.

Finally, we address the order of accumulation.

5.1.6 Definition. We define the order of accumulation of entropy as the order of accumulation α0^H, where H is the entropy structure.

This new topological invariant allows us to classify all topological dynamical systems into ω1 classes. Examples show that all classes are in fact nonempty. We choose to skip their presentation. As an application of the tail entropy function and the order of accumulation of entropy, we can derive from Theorem 5.1.1, inequality (2.2.4) combined with Theorem 5.0.3(3), and Definition 5.1.2, that systems with finite entropy and finite order of accumulation of entropy α0^H have finite symbolic extension entropy, and

(5.1.7)    hsex ≤ h + α0^H h∗, in particular hsex(X, T) ≤ htop + α0^H h∗(X, T).


It is worth mentioning that, as the studies of smooth dynamical systems in [D-N] revealed, a typical non-Anosov C^r mapping (1 ≤ r < ∞) on a compact Riemannian manifold has an infinite order of accumulation of entropy. See also Fact 9.0.3 for the C^∞ case. In [B-D] and [D-N] we provide more estimates for hsex based on the topological order of accumulation in certain sets (see [B-D] Propositions 3.9, 3.10, and 3.11 and [D-N] Proposition 3.4 and Lemma 5.2).

6. Candidate entropy notions

In subsections 6.1-6.8 we introduce the increasing candidates.

6.1 Entropy with respect to partitions

Let (Ak) be an arbitrary refining sequence of partitions of X. On PT(X) we define H^par := (h(·, Ak))_{k∈N}. It is known that H^par converges nondecreasingly to h. No semicontinuity properties of the component functions are granted in general.

6.2 Entropy with respect to a family of functions

We continue with a relatively new method of calculating entropy in topological dynamical systems, in which partitions are replaced by families of continuous functions. This approach seems more natural in the presence of a topological structure in the phase space X than any other. We remark that the same method allows one to define measure-theoretic entropy not only for continuous transformations but also for a wider class of Markov operators (see [D-F]). We also mention that in [G-L-W] a different definition of entropy with respect to continuous functions (partitions of unity) is considered.

Let f : X → [0, 1] be a continuous function. The sets {(x, t) ∈ X × [0, 1] : t ≤ f(x)} and {(x, t) ∈ X × [0, 1] : t > f(x)} constitute a two-element partition Af of X × [0, 1]. For a finite family F of continuous functions, we let

A_F = ⋁_{f∈F} Af.

Now, for any invariant measure µ on X set H(µ, F) := H(µ × λ, A_F), where λ denotes the Lebesgue measure on the interval. Finally we define

6.2.1 Definition. h^fun(µ, F) := lim_n (1/n) H(µ, F^n)   (descending limit),

where

F^n := ⋃_{i=0}^{n−1} T^i F,    (T^i F = {f ∘ T^i : f ∈ F}).

Note that h^fun(µ, F) is equal to the classical entropy h(µ × λ, A_F) evaluated for the action of T × Id. The following observations are obvious:
(1) H(·, F) is a continuous function of the invariant measure (which is the main advantage of this notion over any other, in the topological context), and h^fun(·, F) is affine and u.s.c. (hence also harmonic).
(2) If F and G are two finite families of continuous functions then h^fun(·, G|F) := h^fun(·, G ∪ F) − h^fun(·, F) is a u.s.c. function (the subadditivity argument applies, for the corresponding partitions; see Lemma 1 in [D-S1]).
(3) We can arrange an increasing (with respect to inclusion) sequence of families Fk such that the partitions A_{Fk} refine in the product X × [0, 1]. Therefore

lim_{k→∞} h^fun(µ, Fk) = h(µ × λ) = h(µ)

for all invariant measures µ on X. With such an arrangement we define

H^fun := (h^fun(·, Fk))_{k∈N}.

6.3 Entropy of the product with the Lebesgue measure

Another candidate is presented below. This notion has been used in [B-D].

6.3.1 Definition. Let A^s be an essential partition in the standard product. On PT(X) we define h^Leb(µ, A^s) := h(µ × λ, A^s), where λ is the normalized Lebesgue measure on the unit circle. This function is obviously harmonic.

Let (A^s_k) be a refining sequence of essential partitions in (X^s, T^s). By the properties of essential partitions, each h^Leb(·, A^s_k) is u.s.c., and so is h^Leb(·, A^s_{k+1}) − h^Leb(·, A^s_k). We now let

H^Leb := (h^Leb(·, A^s_k))_{k∈N}.

6.4 Modified Bowen's entropy

The notion below is also a new concept. It is a modification of Bowen's definition of topological entropy, in such a way that it becomes dependent on the invariant measure. The idea for such a modification is taken from Newhouse's definition of local entropy (see Definition 6.10.1 below).


6.4.1 Definition. For a measurable set F ⊂ X let
(a) H(n, ε|F) := log max{#E : E is a (d^n, ε)-separated set within F};
(b) h(ε|F) := lim sup_n (1/n) H(n, ε|F);
(c) h^Bow(µ, ε|σ) := inf{h(ε|F) : µ(F) > σ}   (0 < σ < 1);
(d) h^Bow(µ, ε) := lim_{σ→1} h^Bow(µ, ε|σ).
The above is applied to ergodic µ. For other invariant measures we use the harmonic extension:

h^Bow(µ, ε) = ∫ h^Bow(ν, ε) dMµ(ν).

We use the upper integral in order to avoid a lengthy argument that the integrated function is in fact measurable. We give up the investigation of semicontinuity properties. It turns out that the above definition contains an excessive feature: the limit in (d) is of a constant function; in other words,

(6.4.2)    h^Bow(µ, ε) = h^Bow(µ, ε|σ)

for any σ strictly between 0 and 1. This claim requires a proof.

Proof. It suffices to consider µ ergodic. Clearly, h^Bow(µ, ε|σ) cannot decrease as σ increases. Consider σ′ > σ, and let F be a set of measure larger than σ. By ergodicity, there exists k such that

F′ = ⋃_{i=0}^{k−1} T^i(F)

exceeds σ′ in measure (invertibility of T intervenes). Let E′ be a maximal (d^n, ε)-separated set in F′. By a simple pigeonhole argument, for some index i the intersection of E′ with T^i(F) has cardinality at least #E′/k. Applying T^{−i} we can see that F contains a (d^{n+i}, ε)-separated (hence (d^{n+k}, ε)-separated) set E of the same cardinality. We have shown that H(n + k, ε|F) ≥ H(n, ε|F′) − log k. Dividing by n + k and letting n go to infinity, we obtain h(ε|F) ≥ h(ε|F′). Applying first the infimum over F′ and then over F we obtain the required inequality h^Bow(µ, ε|σ) ≥ h^Bow(µ, ε|σ′). ¤

Remark. We cannot apply a set F of full measure in the above definition, as this usually leads to topological entropy (it suffices that µ has full topological support; for example this is the case for all invariant measures in minimal systems). It will follow from future considerations that, with εk decreasing to 0, h^Bow(µ, εk) increases to h(µ). We set

H^Bow := (h^Bow(·, εk))_{k∈N}.

6.5 Katok's entropy

The following idea was introduced in 1980 by Katok [K]: Let B^{n,ε}_{σ,µ} be any collection of (n, ε)-balls whose union has µ-measure larger than σ. For µ ergodic put


6.5.1 Definition. h^Kat(µ, ε|σ) := lim sup_n (1/n) log min #B^{n,ε}_{σ,µ}. As before, for nonergodic measures we apply the harmonic extension.

Katok proves that for εk → 0, lim_k h^Kat(µ, εk|σ) = h(µ) for any 0 < σ < 1. Again, the semicontinuity properties are uncertain. We set

H^Kat_σ := (h^Kat(·, εk|σ))_{k∈N}.

6.6 Romagnoli's entropy

Let U be an open cover of X. For an invariant measure µ two notions of entropy of µ with respect to U are considered by Romagnoli [R]:

h^+_µ(U, T) := inf{hµ(A, T) : A < U},
h^−_µ(U, T) := lim_{n→∞} (1/n) inf{Hµ(A) : A < U^n}.

It is obvious that both functions above are supharmonic. Also note that the limit defining h^−_µ(U, T) is descending, because the corresponding sequence is subadditive. By a standard argument involving regularity and Urysohn functions it is easy to show that the infimum over partitions A in both definitions above can be restricted to partitions into sets whose boundaries have zero µ-measure (individually for each invariant measure). For such partitions, Hµ(A) is continuous at µ, and hence both h^+ and h^− are u.s.c. functions. We don't know whether for a pair of inscribed covers the corresponding difference function is u.s.c. If T is fixed we will view the latter entropy with respect to U as a function of the measure:

6.6.1 Definition. h^Rom(µ, U) := h^−_µ(U, T).

Two facts about these notions will be important (Lemma 5 and Proposition 7 in [R]):

6.6.2 Facts.
(1) h^Rom(µ, U) = lim_{n→∞} (1/n) h^+_µ(U^n, T^n)   (descending limit),
(2) If (X′, T′) is an extension of (X, T), µ ∈ PT(X), µ′ is a lift of µ, and U is a cover of X with a lift U′, then h^Rom(µ′, U′) = h^Rom(µ, U). ¤

It is worth mentioning (although we will not use it) that Romagnoli proved another Variational Principle (Theorem 2 in [R]), namely:

sup_{µ∈PT(X)} h^Rom(µ, U) = h(U).


A variation of Romagnoli's definition can be obtained as follows: For a metric d on X, ε > 0, and an invariant measure µ we set

h^+_µ(d, ε, T) := inf{hµ(A, T) : diamd(A) < ε},
h^−_µ(d, ε, T) := lim_{n→∞} (1/n) h^+_µ(d^n, ε, T^n).

The above limit is descending. If d and T are fixed we will view the latter notion as a function of the measure:

6.6.3 Definition. h^Rom(µ, ε) := h^−_µ(d, ε, T).

Again, this is a supharmonic u.s.c. function on invariant measures. Clearly, whenever diam(U) < ε and ε′ < Leb(U), then by an obvious inclusion argument we have

(6.6.4)    h^Rom(µ, ε) ≤ h^Rom(µ, U) ≤ h^Rom(µ, ε′).

Using a refining sequence of covers (Uk) and a sequence (εk) decreasing to zero we obtain two uniformly equivalent candidate sequences:

H^Rom_1 := (h^Rom(·, Uk))_{k∈N}   and   H^Rom_2 := (h^Rom(·, εk))_{k∈N}.

We will use H^Rom to denote either one.

6.6.5 Question. Is it true that sup_µ h^Rom(µ, ε) = h(ε)?

6.7 Brin-Katok local entropy

The definition below introduces entropy by analogy to S-M-B, via the measures of the (n, ε)-balls. The idea is taken from [B-K].

6.7.1 Definition. Let µ be an ergodic measure. Define

(a) I(n, µ, x, ε) := − log µ(B^{(n,ε)}_x);
(b) h^BK(µ, x, ε) := lim sup_n (1/n) I(n, µ, x, ε).

Notice that B^{(n,ε)}_x ⊂ T^{−1}(B^{(n−1,ε)}_{Tx}), which easily implies that h^BK(µ, x, ε) is subinvariant. Thus h^BK(µ, x, ε) is constant µ-almost everywhere. This constant defines h^BK(µ, ε). For nonergodic measures we apply the harmonic extension. Brin and Katok proved that h^BK(µ, ε) → h(µ) as ε → 0.

Notice that for nonergodic µ, − log µ(B^{(n,ε)}_x) may be too large (because − log t is a convex function); this is why we do not define h^BK(µ, ε) as the integral of h^BK(µ, x, ε).


6.7.2 Definition. For a cover U the entropy h^BK(µ, U) is obtained by replacing B_x^{(n,ε)} in Definition 6.7.1 by U_x^n, the union of all cells of the cover U^n which contain x.

Once again, we skip the investigation of semicontinuity. Similarly as for Romagnoli's entropy, whenever diam(U) < ε and ε' < (1/2)Leb(U), we have

(6.7.3)  h^BK(µ, ε) ≤ h^BK(µ, U) ≤ h^BK(µ, ε').

As before, we obtain two uniformly equivalent candidates:

H^BK_1 := (h^BK(·, U_k))_{k∈N}  and  H^BK_2 := (h^BK(·, ε_k))_{k∈N}.

We will use H^BK to denote either one.

6.8 Ornstein-Weiss type entropy

D. Ornstein and B. Weiss [O-W] presented an interesting way of deriving entropy in ergodic (symbolic) systems from the trajectory of a single (almost every) point x, as the limsup (as n → ∞) of one nth of the logarithm of the time of the first reappearance of the initial n-block of x. Clearly, if this is applied to names with respect to a partition A we obtain the function h(µ, A). However, if we apply (as suggested by M. Urbanski) the same idea to a cover or to (n, ε)-balls, we obtain a new, interesting notion:

6.8.1 Definition. Denote
(a) R^1(n, ε, x) := min{k > 0 : T^k(x) ∈ B_x^{(n,ε)}}, the first return time of x to its own (n, ε)-ball (this can be ∞). Then define
(b) h^OW(x, ε) := limsup_n (1/n) log R^1(n, ε, x),
(c) h^OW(µ, ε) := ∫ h^OW(x, ε) dµ(x).

Observe that since R^1(n − 1, ε, Tx) ≤ R^1(n, ε, x), the function h^OW(x, ε) is subinvariant and hence equal to h^OW(µ, ε) µ-almost everywhere, for µ ergodic. Clearly, (c) defines a harmonic function.

6.8.2 Definition. Replacing B_x^{(n,ε)} in Definition 6.8.1 by the set U_x^n we define h^OW(µ, U).

Identically as for H^BK, whenever diam(U) < ε and ε' < (1/2)Leb(U), we have

(6.8.3)  h^OW(µ, ε) ≤ h^OW(µ, U) ≤ h^OW(µ, ε').

We set

H^OW_1 := (h^OW(·, U_k))_{k∈N}  and  H^OW_2 := (h^OW(·, ε_k))_{k∈N}.

We will use H^OW to denote either one.

6.8.4 Question. Does either chain of direct equalities hold:
h^Rom(µ, ε) = h^BK(µ, ε) = h^OW(µ, ε),  h^Rom(µ, U) = h^BK(µ, U) = h^OW(µ, U)?
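Definition 6.8.1 is directly computable along a finite orbit segment. The sketch below is an illustration only; the map, metric, starting point, and parameters are our own choices, and a truncated orbit can only witness returns that happen within the available data.

```python
import math

def first_return_time(orbit_pts, n, eps, d):
    """R^1(n, eps, x) computed along a finite orbit segment:
    smallest k > 0 with d(orbit_pts[k+i], orbit_pts[i]) < eps for all 0 <= i < n.
    Returns None if no return is seen within the available data."""
    limit = len(orbit_pts) - n
    for k in range(1, limit):
        if all(d(orbit_pts[k + i], orbit_pts[i]) < eps for i in range(n)):
            return k
    return None

# toy example: logistic map x -> 4x(1-x), Euclidean metric on [0, 1]
T = lambda x: 4.0 * x * (1.0 - x)
d = lambda a, b: abs(a - b)
x, orbit_pts = 0.3141592653589793, []
for _ in range(200000):
    orbit_pts.append(x)
    x = T(x)

for n in (4, 8, 12):
    R1 = first_return_time(orbit_pts, n, eps=0.05, d=d)
    if R1 is not None:
        print(n, R1, math.log(R1) / n)   # (1/n) log R^1 is of the order of log 2
```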


Now it is time to present some of the decreasing candidates.

6.9 Modified Misiurewicz's conditional entropy

The definition below is a slight modification of Misiurewicz's definition (see 3.3.5) of topological conditional entropy given a cover, leading to a function of the invariant measure.

6.9.1 Definition. Let U, V be open covers of X, and let µ ∈ P_T(X). We set
(a) H(n, U|x, V) := log max{# minimal subfamily of U^n covering V_x^n} (the maximum ranging over all sets V_x^n containing x);
(b) h(U|x, V) := lim_n (1/n) H(n, U|x, V);
(c) h(U|µ, V) := ∫ h(U|x, V) dµ;
(d) h^Mis(X|µ, V) := sup_U h(U|µ, V).

It is not hard to see that H(n, U|x, V) is a subadditive process (i.e., H(n + m, U|x, V) ≤ H(n, U|x, V) + H(m, U|T^n x, V)). So, by the subadditive ergodic theorem (see e.g. Theorem 5.3 in [Kr]), the limit in (b) exists µ-almost everywhere and equals h(U|µ, V) for every ergodic µ. Clearly, (c) defines a harmonic function. Because the supremum in (d) is realized as a monotone limit along a sequence of covers, h^Mis(X|µ, V) is also a harmonic function of the invariant measure µ.

One can also consider variations of the above where the "conditioning resolution" is given by an ε > 0 or by a partition A. More precisely:

6.9.2 Definition. Replacing in Definition 6.9.1 the element V_x^n by B_x^{(n,ε)} or by the element A_x^n of a partition A (the maximum can be dropped in both situations), we define h^Mis(X|µ, ε) and h^Mis(X|µ, A), respectively.

Exactly the same notions h^Mis(X|µ, V), h^Mis(X|µ, ε) and h^Mis(X|µ, A) will be obtained if we replace, in Definitions 6.9.1 and 6.9.2, the minimal cardinality of a subfamily of U^n by the maximal cardinality of a (d_n, δ)-separated set. Clearly, whenever diam(V) < ε and ε' < Leb(V)/2, then

(6.9.3)  h^Mis(X|µ, ε') ≤ h^Mis(X|µ, V) ≤ h^Mis(X|µ, ε).

Also, if A < V then h^Mis(X|µ, A) ≤ h^Mis(X|µ, V).

We obtain two uniformly equivalent decreasing sequences:

T^Mis_1 := (h^Mis(X|·, V_k))_{k∈N}  and  T^Mis_2 := (h^Mis(X|·, ε_k))_{k∈N}.

We will use T^Mis to denote either one.

Remark. The inequalities

(6.9.4)  h(X|V) ≥ sup_µ h^Mis(X|µ, V)  and  h(X|ε) ≥ sup_µ h^Mis(X|µ, ε)


follow immediately from the definitions. Later we will prove a kind of inverted inequality for covers (see the Small Variational Principle 9.0.1).

6.10 Newhouse's local entropy

S. Newhouse [N] defines local entropy as yet another function of an invariant measure. Again, we slightly change the notation to fit our convention.

6.10.1 Definition. Let F ⊂ X be a measurable set. Define
(a) H(n, δ|x, F, ε) := log max{#E : E is a (d_n, δ)-separated set within F ∩ B_x^{(n,ε)}};
(b) H(n, δ|F, ε) := sup_{x∈F} H(n, δ|x, F, ε);
(c) h(δ|F, ε) := limsup_n (1/n) H(n, δ|F, ε);
(d) h(X|F, ε) := lim_{δ→0} h(δ|F, ε); and finally,
(e) h^New(X|µ, ε) := lim_{σ→1} inf{h(X|F, ε) : µ(F) > σ}.

We apply the last notion to ergodic µ, and for other invariant measures we use the harmonic extension. This notion has the same parameters as h^Mis(X|µ, ε) but, as we shall see, the presence of the set F in the definition makes Newhouse's notion much more subtle. Originally Newhouse required F to be closed, but it is easily seen, by regularity, that dropping this requirement does not change the final notion at all.

Exactly the same notion h^New(X|µ, ε) will be obtained if we replace the maximal cardinality of a (d_n, δ)-separated set by the minimal cardinality of a subfamily of U^n covering B_x^{(n,ε)}.

As before, one considers variations of local entropy where the "conditioning resolution" is given by an open cover V or by a partition A:

6.10.2 Definition. h^New(X|µ, V) and h^New(X|µ, A) are obtained by replacing B_x^{(n,ε)} in Definition 6.10.1 by V_x^n or A_x^n, respectively.

Again, diam(V) < ε and ε' < Leb(V)/2 imply

(6.10.3)  h^New(X|µ, ε') ≤ h^New(X|µ, V) ≤ h^New(X|µ, ε),

and A < V implies h^New(X|µ, A) ≤ h^New(X|µ, V). As for the previous notion, we obtain two uniformly equivalent decreasing sequences:

T^New_1 := (h^New(X|·, V_k))_{k∈N}  and  T^New_2 := (h^New(X|·, ε_k))_{k∈N}.

We will use T^New to denote either one.

7. The passing candidates

We are now in a position to formulate our most elaborate theorem:


7.0.1 Theorem. Assume h_top(X) < ∞. Then the following sequences of functions defined on P_T(X) are entropy structures: H^fun, H^Leb, H^Bow, H^Kat_σ, H^Rom, H^BK, H^OW, and H^New := h − T^New.

Remark. The first two entropy structures, H^fun and H^Leb, have additional properties which make them very similar to the reference entropy structure: the component functions are nonnegative, affine, and u.s.c. (hence harmonic), and so are their differences. No other entropy structure discussed in this paper has all these nice properties in full generality (or, if it does, we do not know how to prove it).

Remark. At this point we notice that the definition of H^fun does not require finite topological entropy. Thus we can use the uniform equivalence class of this sequence to define the entropy structure in the case of infinite entropy. Alas, due to the obviously infinite values of many interesting entropy invariants (h_sex ≡ ∞, h* ≡ ∞), the meaning of the entropy structure in this case is marginal (at least from our standpoint).

7.1 Several technical lemmas

We isolate large parts of the proof of Theorem 7.0.1 as separate lemmas. The first one concerns measure-preserving transformations in general.

7.1.1 Lemma. Let (X, µ, T) be an ergodic measure-preserving transformation and let A and B be two partitions of X. Then

h_µ(A ∨ B, T) ≤ h_µ(B, T) + Σ_{B∈B} µ(B) h_{µ_B}(A_B, T_B),

where A_B = {A ∩ B : A ∈ A, A ∩ B ≠ ∅} is the partition of B imposed by A, µ_B denotes the conditional measure on B induced by µ, and T_B is the induced (first return time) transformation on B.

Proof. Because the proof is standard, we only sketch it, skipping most of the technical details. S-M-B implies that in any ergodic system (X, µ, T), for any partition A and sufficiently large n, the cardinality of all initial n-blocks x_{(A,T)}[0, n) of the names with respect to A (and T), of points x ranging over a large set, equals approximately e^{n h(µ,A)}. In the setting of our lemma, every (A ∨ B)-name is determined by the B-name and the A_B-names with respect to all the considered induced transformations. Obviously, the initial n-block uses only an appropriately shorter initial block for the induced transformation on each B, and, by the ergodic theorem, the proportionality factor between the lengths is approximately µ(B) (on a large set of x's). Therefore the variety of all initial n-blocks in the (A ∨ B)-names is estimated by the product of the corresponding varieties:

#{x_{(A∨B,T)}[0, n)} ≤ #{x_{(B,T)}[0, n)} · Π_{B∈B} #{x_{(A_B,T_B)}[0, µ(B)n)},

leading directly to the desired inequality. □

We now return to topological dynamical systems and our candidate notions.


7.1.2 Lemma. Assume h_top(X) < ∞. If F_k and F'_k are two increasing (by inclusion) sequences of finite families of continuous functions from X into [0, 1] such that

lim_{k→∞} h^fun(·, F_k) = lim_{k→∞} h^fun(·, F'_k) = h  (the entropy function),

then the sequences H = (h^fun(·, F_k)) and H' = (h^fun(·, F'_k)) are uniformly equivalent on P_T(X).

Proof. For fixed k we have

lim_{k'→∞} h^fun(·, F_k | F'_{k'}) = lim_{k'→∞} h^fun(·, F_k ∪ F'_{k'}) − h^fun(·, F'_{k'}) = h − h = 0.

As a continuous (constant) and decreasing limit of nonnegative u.s.c. functions, this convergence must be uniform (see 2.1.1), so, for any pre-assigned γ > 0, there exists k' such that

h^fun(·, F'_{k'}) ≥ h^fun(·, F_k ∪ F'_{k'}) − γ ≥ h^fun(·, F_k) − γ.

We have proved H ≥_uni H'. The converse follows by symmetry. □

The remaining statements relate the candidate notions to their respective analogs on the standard product. For an invariant measure µ ∈ P_T(X), by µ^s we will always denote a lift of µ to the standard product (X^s, T^s).

7.1.3 Lemma. h^Bow(µ, ε) = h^Bow(µ^s, ε).

Proof. Since h^Bow(·, ε) is harmonic, it suffices to consider both µ and µ^s ergodic. For the inequality "≤" we will show that

(∀ F^s ⊂ X^s, µ^s(F^s) > σ)(∃ F ⊂ X, µ(F) > σ)(∀ (d_n, ε)-separated E ⊂ F)(∃ (d^s_n, ε)-separated E^s ⊂ F^s)  #E^s ≥ #E.

For such F^s we simply let F be the projection of F^s onto X. Then every (d_n, ε)-separated set E ⊂ F lifts to a (d^s_n, ε)-separated set E^s ⊂ F^s of at least the same cardinality.

Conversely, we will show an analogous statement with the roles of (X, T) and (X^s, T^s) swapped, and with #E ≥ ε·#E^s. Since ε does not depend on n, it will vanish after taking the logarithm, dividing by n, and passing with n to infinity. For a set F ⊂ X with µ(F) > σ let F^s = F × T. If E^s is a (d^s_n, ε)-separated set in F^s, then its intersection with at least one "horizontal strip" X × I (I an arc in T) of width ε has cardinality at least ε·#E^s, and, projected onto X, it produces a (d_n, ε)-separated set (because the rotation is an isometry, points in such a horizontal strip are not sufficiently separated on the second coordinate). In particular the projection is injective, so the cardinality is preserved. □

7.1.4 Lemma. h^Kat(µ, ε|σ) = h^Kat(µ^s, ε|σ).

Proof. The inequality "≤" is obvious: each (n, ε)-ball B^s in X^s projects to an (n, ε)-ball B in X, and the projection increases its measure. The other inequality follows (for ergodic µ) immediately by noticing that for each (n, ε)-ball B in X the "vertical strip" B × T is covered by approximately 1/(2ε) products of B with arcs of length 2ε, which are in fact the (n, ε)-balls B^s in X^s. □


7.1.5 Lemma. h^BK(µ, ε) = h^BK(µ × λ, ε) (the latter evaluated in (X^s, T^s)).

Proof. With the notation as in the previous proof, we obviously have (µ × λ)(B^s) = 2ε·µ(B). The rest of the proof is now immediate. □

7.1.6 Proposition. For ε = 1/K (K ∈ N) there holds h^OW(µ^s, ε) ≤ h^OW(µ, ε²/8).

The proof involves a few more general claims, which might be of independent interest, so we isolate them in three separate lemmas. The first two are easy generalizations of the well-known fact that orbit equivalent maps preserve the same measures.

7.1.7 Lemma. Let Φ : Y → X be a measurable, partly defined, piecewise-power injection, i.e., the domain Y is a measurable subset of X, for each y ∈ Y the image Φ(y) has the form T^{j_y}(y) (j_y ∈ Z), and #Φ^{-1}(x) ≤ 1 for every x ∈ X. Then

µ(Φ^{-1}(C)) ≤ µ(C)

for any measurable C ⊂ X.

Proof. Define Y_j = {Φ = T^j} := {y ∈ Y : Φ(y) = T^j(y)}. These sets are disjoint and measurable (by the well-known fact that in separable metric spaces two measurable maps agree on a measurable set). So are their Φ-images X_j = Φ(Y_j) (because T^j is a homeomorphism and Φ is an injection). Now, Φ^{-1}(C) = ∪_j T^{-j}(C ∩ X_j), hence

µ(Φ^{-1}(C)) = Σ_j µ(T^{-j}(C ∩ X_j)) = Σ_j µ(C ∩ X_j) ≤ µ(C). □

7.1.8 Lemma. Let Φ : X → X be a piecewise-power at-most-K-to-one map, i.e., Φ(x) = T^{j_x}(x) and #Φ^{-1}(x) ≤ K for every x ∈ X. Then

µ(Φ^{-1}(C)) ≤ K·µ(C)

for any measurable C ⊂ X.

Proof. For each x, Φ^{-1}(x) is contained in the T-orbit of x. Using the natural linear order induced from this orbit (invertibility of T is used here), write Φ^{-1}(x) = {y_1, y_2, ..., y_l} (l ≤ K). Denote by Y_i the (possibly empty) set of all points which received the index i in this procedure. Clearly, {Y_i}_{1≤i≤K} is a partition of X. Measurability follows by writing

Y_i = ∪_{F⊂N, #F=i−1} ( (∩_{j∈F} {Φ = Φ∘T^{-j}}) ∩ (∩_{j∉F} {Φ ≠ Φ∘T^{-j}}) ).

The map (graph) Φ is now a union of at most K measurable injections Φ_i defined on the sets Y_i, respectively. The assertion thus follows immediately from the previous lemma. □

We now return to considerations of return times. For K ∈ N denote by R^K(n, ε, x) the K-th return time of x to B_x^{(n,ε)}.
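For finite data, R^K(n, ε, x) can be computed by extending the first-return sketch given after Question 6.8.4; the code below is an illustration under the same assumed map, metric, and parameters, and sees only returns that occur within the stored orbit segment.

```python
def kth_return_time(orbit_pts, n, eps, d, K):
    """R^K(n, eps, x): time of the K-th return of x = orbit_pts[0] to its own
    (n, eps)-Bowen ball, scanned along the available orbit segment.
    Returns None if fewer than K returns are visible in the data."""
    returns = 0
    for k in range(1, len(orbit_pts) - n):
        if all(d(orbit_pts[k + i], orbit_pts[i]) < eps for i in range(n)):
            returns += 1
            if returns == K:
                return k
    return None
```

With the logistic-map orbit from the earlier sketch one can compare kth_return_time(orbit_pts, n, 0.05, d, K) for several K; the proof of Lemma 7.1.9 below bounds R^K at scale ε by a sum of K first-return times at the finer scale ε/2K.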


7.1.9 Lemma. Fix ε > 0. Then, for an ergodic measure µ,

limsup_{n→∞} (1/n) log R^K(n, ε, x) ≤ h^OW(µ, ε/2K)

µ-almost everywhere.

Proof. Denote ε' = ε/2K. Recall (Definition 6.8.1) that

limsup_{n→∞} (1/n) log R^1(n, ε', x) = h^OW(µ, ε')

µ-almost everywhere. So there is a T-invariant set X' of full measure consisting of points x for which R^1(n, ε', x) is finite for arbitrarily large (and hence every) n. Fix some n ∈ N. For x ∈ X' define

Φ̂(x) = T^{R^1(n,ε',x)}(x).

For each y ∈ Φ̂(X'), the preimage Φ̂^{-1}(y) is contained in the backward orbit of y, so, if nonempty, it has a largest element x_0 in the natural linear ordering inherited from this orbit:

Φ̂^{-1}(y) = {..., x_{-2}, x_{-1}, x_0}.

(Such a preimage is in fact finite, because it is (d_n, ε')-separated, but we will not need that.) Define a new map Φ as follows:

Φ(x_i) = x_{i+1} for i < 0,  and  Φ(x_0) = Φ̂(x_0) = y.

This applies to Φ̂^{-1}(y). But every point x ∈ X' belongs to a preimage of this form, so Φ is now defined on all of X'. Clearly, Φ(x) = T^{n_x}(x) with 1 ≤ n_x ≤ R^1(n, ε', x). Observe that Φ is at-most-two-to-one, because Φ^{-1}(y) may contain at most one element from Φ̂^{-1}(y) (namely x_0), and one element (namely x_{i-1}) if y plays the role of x_i in Φ̂^{-1}(Φ̂(y)), and nothing more. Now we need to verify the measurability of Φ. Notice that R^1(n, ε', x) < M on an open set, so the function x ↦ R^1(n, ε', x) is measurable, and so is the map Φ̂. Measurability of Φ now follows easily from the identity

{Φ = T^j} = {Φ̂ = T^j & Φ̂∘T^i ≠ T^j for i = 1, 2, ..., j−1} ∪ ∪_{m>j} {Φ̂ = T^m & Φ̂∘T^j = T^m & Φ̂∘T^i ≠ T^m for i = 1, 2, ..., j−1}.

Next, notice that d_n(x, Φ(x)) < 2ε', so that Φ(x), Φ²(x), ..., Φ^K(x) are all within distance 2Kε' = ε from x, so they represent some K returns of x to B_x^{(n,ε)}. This implies that

R^K(n, ε, x) ≤ Σ_{i=0}^{K-1} R^1(n, ε', Φ^i(x)).


Now fix γ > 0 and let n be large enough that

(1/n) log R^1(n, ε', x) ≤ h^OW(µ, ε') + γ

except on a set F of measure at most γ/2^K. Then R^K(n, ε, x) is estimated from above by K·e^{n(h^OW(µ,ε')+γ)} whenever

x ∉ ∪_{i=0}^{K-1} Φ^{-i}(F).

By Lemma 7.1.8, this union has measure smaller than Σ_{i=0}^{K-1} 2^i µ(F) < γ. We have shown that

(1/n) log R^K(n, ε, x) ≤ (log K)/n + h^OW(µ, ε') + γ

on a set of measure at least 1 − γ. Passing with n to infinity and then removing the arbitrarily small γ, we complete the proof. □

Proof of Proposition 7.1.6. Recall that now ε = 1/K. It suffices to consider µ and µ^s ergodic. Again, we restrict our considerations to a full measure invariant set X' where R^K(n, ε/4, x) is finite. Fix some n ∈ N. Let T^{j_1}(x), T^{j_2}(x), ..., T^{j_K}(x) represent the first K returns of x to B_x^{(n,ε/4)}. Then among the K + 1 points

t, τ^{j_1}(t), τ^{j_2}(t), ..., τ^{j_K}(t)

in T there is at least one pair whose members are less than ε apart. Let 0 ≤ j_x < R^K(n, ε/4, x) be the index of the earlier member of the earliest such pair. Define Ψ̂(x, t) = (T × τ)^{j_x}(x, t). Notice that

R^1(n, ε, Ψ̂(x, t)) ≤ R^K(n, ε/4, x).

Fix a partition of T into K equal arcs (of length ε). This partition induces a partition of each preimage Ψ̂^{-1}(y, s) with y ∈ X' into at most K classes (according to the second coordinate). Notice that within each class all points are less than ε apart (this is obvious for the second coordinate; for the first one notice that in all elements of this preimage the first coordinates are less than 2·(ε/4) away from y). We will use techniques similar to those of the preceding proof: we apply the linear order inherited from the backward orbit of (y, s), but this time separately to each class,

class in Ψ̂^{-1}(y, s) = {..., (x_{-2}, t_{-2}), (x_{-1}, t_{-1}), (x_0, t_0)},

and define Ψ on X' × T as the identity except at points of the form (x_0, t_0), where we set Ψ(x_0, t_0) = Ψ̂(x_0, t_0) = (y, s). Verification of the measurability of Ψ is standard. This map is also easily seen to be at-most-(K + 1)-to-one, because each point (y, s)


receives (in preimage) at most K elements from Ψ̂^{-1}(y, s) (one from each class), and possibly itself. Note that

R^1(n, ε, Ψ(x, t)) < R^K(n, ε/4, x).

This is clear at points where Ψ coincides with Ψ̂ (see above). In the other cases the point Ψ(x, t) = (x, t) has a successor in a class of a Ψ̂-preimage, which is a return to B_{(x,t)}^{(n,ε)} in fewer than R^K(n, ε/4, x) steps. Applying Lemma 7.1.9 to ε/4, for each γ we have

(1/n) log R^K(n, ε/4, x) < h^OW(µ, ε²/8) + γ,

except on a set F with µ(F) < 1/(K+3), if n is large enough. On the other hand, we can find such an n for which also

h^OW(µ^s, ε) − γ < (1/n) log R^1(n, ε, (x, t))

except on a subset of X × T of µ^s-measure at most 1/(K+3). Because the latter return time does not depend on t, this last inequality holds in fact except on a set F' × T, where µ(F') ≤ 1/(K+3). By Lemma 7.1.8, the set F ∪ Ψ^{-1}(F') has measure at most (K+2)/(K+3) < 1, so its complement is nonempty. For (x, t) with x in this complement we have

h^OW(µ^s, ε) − γ < (1/n) log R^1(n, ε, Ψ(x, t)) ≤ (1/n) log R^K(n, ε/4, x) < h^OW(µ, ε²/8) + γ.

Because γ is arbitrary, the proof is now complete. □

Our next lemma is a variant of S-M-B for covers proved in [D-W]; it will be used later with respect to (X^s, T^s). We need only a slight modification of that result. Recall that V_x^n denotes the union of all cells of the cover V^n which contain x.

7.1.10 Lemma. If A is an essential partition in (X, T), then for every γ > 0 there exists a cover V such that for every ergodic measure µ and µ-almost every point x ∈ X,

(1)  liminf_{n→∞} (−1/n) log µ(V_x^n) > h(µ, A) − γ,

and

(2)  liminf_{n→∞} (1/n) log R^1(V_x^n, x) > h(µ, A) − γ,

where R^1(V_x^n, x) is the first return time of x to V_x^n.

Proof. This is verbatim Theorem 2 in [D-W]. The only additional observation needed here is that if A is essential then there is a cover V which is ε-inscribed in A (see [D-W] for the definition) with respect to every invariant measure. In other words, we need to cover the union δA of the boundaries of the elements of A by an open set V of uniformly small invariant measure. Note that the functions µ ↦ µ(V) are u.s.c. for all sets V and they obviously decrease to zero along any decreasing sequence of open sets V shrinking to δA. But such convergence must be uniform (see 2.1.1), so a required V can be found. □


7.1.11 Lemma. h^New(X^s|µ^s, 2ε) ≥ h^New(X|µ, ε).

Proof. We need to show that, for µ and µ^s ergodic,

(∀ F^s ⊂ X^s, µ^s(F^s) > σ)(∃ F ⊂ X, µ(F) > σ)(∀ x ∈ F)(∃ x^s ∈ F^s)  H(n, δ|x^s, F^s, 2ε) ≥ H(n, δ|x, F, ε) − γ,

where γ does not depend on n. As in the proof of Lemma 7.1.3, for such F^s we let F be the projection of F^s onto X. Now, let x ∈ F and let E be a (d_n, δ)-separated set in F intersected with B_x^{(n,ε)}. The set E lifts to a (d^s_n, δ)-separated set E^1 (of at least the same cardinality) contained in F^s intersected with B_x^{(n,ε)} × T. The product B^s of B_x^{(n,ε)} with some arc of length ε contains a subset E^s of E^1 of cardinality at least ε·#E. But since the rotation is an isometry, B^s is contained in the (n, 2ε)-ball around any member x^s of E^s (the multiplication by 2 is forced by the first coordinate). Thus

H(n, δ|x^s, F^s, 2ε) ≥ H(n, δ|x, F, ε) + log ε. □

7.2 Proof of Theorem 7.0.1

Proof. (1) Let H^ref be a reference entropy structure obtained for some zero dimensional principal extension (X', T') of (X, T). We need to verify that the lift of H^fun is uniformly equivalent to H^ref on P_{T'}(X'). Here is how: the lifted functions h^fun(·, F_k) obviously coincide with the functions h^fun(·, F'_k), where F'_k is obtained by lifting F_k to X'. Since (X', T') is a principal extension, the limit of the lifted functions h^fun(·, F_k) coincides with the entropy function h^ref on P_{T'}(X'). Further, each h^ref_k = h(·, A'_k) can be regarded as h^fun(·, F''_k), where F''_k consists of the characteristic functions of the elements of the clopen partition A'_k, and these functions are continuous. Obviously, the functions h^ref_k also converge to h^ref. The result now follows directly from Lemma 7.1.2.

At this point the existence and uniqueness of the entropy structure as an equivalence class is granted. By previous remarks, from now on it suffices to test the other candidates against a particular choice of the reference entropy structure.

(2) We now verify H^Leb. Let (X', T') be the standard principal extension of (X^s, T^s) in which the essential partitions A^s_k lift to clopen partitions A'_k (up to invariant measure zero). Then (h(µ', A'_k)) is a reference entropy structure for (X, T). Because we have h(µ × λ, A^s_k) = h(µ', A'_k) for the lift µ' of µ × λ, it is obvious that, for µ ∈ P_T(X), h^Leb(µ, A^s_k) lies between the maximum and the minimum of h^ref_k(µ') = h(µ', A'_k) on the fiber {µ' projecting to µ}. The entropy structure H^fun lifted to P_{T'}(X') consists of functions constant on such fibers, and since it is uniformly equivalent to the reference entropy structure, it is also uniformly equivalent to H^Leb.

(3a) We continue with the investigation of H^Bow. Let µ^s be an ergodic preimage in the standard product X^s of an ergodic µ, and let µ' be the unique lift of µ^s to the standard principal extension X'. By Lemma 7.1.3, we can perform the computation


of h^Bow(µ, ε) on µ^s in X^s. Similarly, the elements of the reference entropy structure h_k(µ') = h(µ', A'_k) = h(µ^s, A^s_k) can be computed on X^s. For the remainder of part (3a) and in part (3b) of this proof, X, µ, A will stand for X^s, µ^s, and A^s_k, respectively. We need to remember that A is an essential partition. Obviously, it suffices to consider only ergodic measures µ.

In order to show that H^ref ≥_uni H^Bow we fix ε = ε_k > 0 and then we seek an appropriate partition A (= A^s_{k'}). Namely, we simply choose A with diameter smaller than ε. By S-M-B, for σ < 1, γ > 0, and then sufficiently large n_0, the set F of points x ∈ X for which µ(A_x^n) ≥ e^{−n(h(µ,A)+γ)} for every n ≥ n_0 has measure larger than σ. Let E be a (d_n, ε)-separated subset of F. Then each point of E is in a different cell of A^n, so #E ≤ e^{n(h(µ,A)+γ)}. Therefore h(ε|F) ≤ h(µ, A) + γ, and hence h^Bow(µ, ε|σ) ≤ h(µ, A) (because γ is arbitrary). Now use 6.4.2 to remove σ.

(3b) For the reversed domination, we fix an essential partition A in X and a γ > 0, and we need to find an appropriate ε. Let V be an open set covering the boundaries of A, such that µ(V) ≤ δ for all invariant measures µ (see the proof of Lemma 7.1.10), where δ is such that 5δ log(#A) + δ + η(2δ) + η(3δ) < γ, with η(t) = −t log t − (1 − t) log(1 − t). We choose ε to be the minimal distance between the disjoint closed sets A \ V, A ∈ A.

Now we fix some ergodic measure µ. Let F be any set with µ(F) > σ ≥ 1 − δ. Denote C = F \ V, and notice that µ(C) ≥ 1 − 2δ. For every point x ∈ X consider the sequence x* = (x*_n)_n whose n-th coordinate is the label of the member A ∈ A containing T^n(x), with an additional marker (star) if T^n(x) falls in the complement of C. Ignoring the marked coordinates for an x ∈ C we obtain the name of x (call it x_C) for the induced transformation on C, denoted (C, µ_C, T_C), and the partition A_C (A restricted to C). Throwing away a small subset of C, the sequences x* corresponding to points x in the remaining set C' have, for sufficiently large n, the frequency of stars in [0, n) not exceeding 3δ, and they provide a variety of at least e^{n(1−3δ)(h_{µ_C}(A_C,T_C)−δ)} different initial blocks x_C[0, n(1 − 3δ)] in the corresponding sequences x_C (we have used the ergodic theorem for T and S-M-B for the induced map). By a standard application of Stirling's formula, the cardinality of all possible configurations of stars along [0, n) is approximately e^{nη(3δ)}. This easily implies that there exists at least one configuration of stars which appears in C' together with at least e^{n(1−3δ)(h_{µ_C}(A_C,T_C)−δ)−nη(3δ)} different blocks along the unmarked coordinates. So, in C' (hence in F) we can find at least that many points x separated by the partition A^n, with the separation


by A happening at times when none of them visits V. The set of such points is (d_n, ε)-separated. We have proved that

h(ε|F) ≥ (1 − 3δ)(h_{µ_C}(A_C, T_C) − δ) − η(3δ).

Now, by Lemma 7.1.1 we easily obtain

h_{µ_C}(A_C, T_C) ≥ [h_µ(A, T) − h_µ({C, X \ C}, T) − (1 − µ(C)) log(#A)] / µ(C) ≥ h(µ, A) − η(2δ) − 2δ log(#A).

So, by the choice of δ, h(ε|F) ≥ h(µ, A) − γ. Taking the infimum over the sets F and applying 6.4.2, we obtain that h^Bow(µ, ε) ≥ h(µ, A) − γ, i.e., that H^Bow uniformly dominates the reference entropy structure.

(4a) This and the next argument concern H^Kat_σ. Let F be a set of measure σ (or larger), and let E be a maximal (d_n, ε)-separated set in F. Then E is also (d_n, ε)-spanning in F, hence the union of the (n, ε)-balls around the points of E contains F. This easily implies that h^Kat(µ, ε|σ) ≤ h^Bow(µ, ε|σ) = h^Bow(µ, ε), in particular H^Bow ≥_uni H^Kat_σ.

(4b) Let A^s be an essential partition of X^s lifting to a clopen partition A' in the standard principal extension, so that h^ref(µ') = h(µ^s, A^s) is an element of the reference entropy structure H^ref. Fix γ > 0, let V be the cover of X^s satisfying the assertion of Lemma 7.1.10(1), and let ε = Leb(V)/2. Fix an ergodic µ^s. Let n_0 be so large that for each n ≥ n_0 the measure of the set F_n of points for which (−1/n) log µ^s(V_x^n) > h(µ^s, A^s) − γ is larger than 1 − σ/2. For every x in F_n, µ^s(V_x^n) < e^{−n·h(µ^s,A^s)+nγ}. For n > n_0 consider a union of (n, ε)-balls in X^s whose measure exceeds σ. Clearly, the balls contained in the complement of F_n have joint measure not exceeding σ/2. A ball containing a point x ∈ F_n is entirely contained in V_x^n, so its measure is at most e^{−n·h(µ^s,A^s)+nγ}. Thus, there must be at least (σ/2)·e^{n·h(µ^s,A^s)−nγ} balls in this union. Taking the logarithm, dividing by n and passing with n to infinity, we arrive at h^Kat(µ^s, ε|σ) ≥ h(µ^s, A^s) − γ. By Lemma 7.1.4, this proves that H^Kat_σ uniformly dominates H^ref.

(5a) In this argument concerning H^Rom we will refer to h^Rom(·, U). We will use h^Rom(·, ε) in the next one. Fix a cover U of X. By Fact 6.6.2(2) we know that h^Rom(µ, U) = h^Rom(µ', U'), where µ' is any lift of µ and U' is the lift of U. Choose a clopen partition A' with diam(A') smaller than Leb(U') in (X', T'). Then A'^n < U'^n for each n, hence h(µ', A') ≥ h^Rom(µ', U') = h^Rom(µ, U). We have proved that on P_{T'}(X') the reference entropy structure uniformly dominates the lift of H^Rom.

(5b) Let F = {f_1, f_2, ..., f_k} be a family of continuous functions on X. Then h^fun(·, F) is an element of the entropy structure H^fun. For given γ we can find


γ' such that |s − t| < γ' ⟹ |η(s) − η(t)| < γ/k (η(t) was defined in part (3b) of this proof). If two functions f and f̄ differ (uniformly) by less than γ', then H_{µ×λ}(A_f | A_{f̄}) < γ/k for all probability measures µ. For any n we can write

A_{F^n} = ⋁_{0≤i<n, 1≤j≤k} A_{f_j∘T^i}.

If F̄^n is a family of functions f̄_{i,j}, each a perturbation of the corresponding function f_j∘T^i by less than γ', then, using the elementary fact that H_µ(⋁_i A_i | ⋁_i B_i) ≤ Σ_i H_µ(A_i | B_i), we get H_{µ×λ}(A_{F^n} | A_{F̄^n}) ≤ kn·γ/k = nγ.

Let ε satisfy d(x, y) < ε ⟹ |f(x) − f(y)| < γ' for every f ∈ F. For given n, let A be any partition of X with diam_{d_n}(A) < ε. Note that on any element of this partition, any function in F^n varies by less than γ'. Let B denote the partition of [0, 1] into intervals of length γ'. Now we can find a family F̄^n of simple functions, each a perturbation of a function from F^n by less than γ', such that the corresponding partition A_{F̄^n} of X × [0, 1] is inscribed in A × B. This implies that

h^fun(µ, F) = h_{µ×λ}(A_F, T × Id) = (1/n) h_{µ×λ}(A_{F^n}, T^n × Id) ≤ (1/n) h_{µ×λ}(A_{F̄^n}, T^n × Id) + γ ≤ (1/n) h_{µ×λ}(A × B, T^n × Id) + γ = (1/n) h_µ(A, T^n) + γ.

We have shown that h^fun(µ, F) ≤ (1/n) h^+_µ(d_n, ε, T^n) + γ (the notion introduced a few lines above Definition 6.6.3), and since n is arbitrary, h^fun(µ, F) ≤ h^Rom(µ, ε) + γ. We have proved that H^Rom ≥_uni H^fun.

(6a) By Lemma 7.1.5, Definitions 6.7.1 and 6.3.1, and by S-M-B, we have h^BK(µ, ε) = h^BK(µ × λ, ε) ≤ h^Leb(µ, A^s) whenever diam(A^s) < ε. This implies that the entropy structure H^Leb uniformly dominates H^BK.

(6b) By Lemma 7.1.10(1) and Definition 6.7.2, for every ergodic µ^s we have h^BK(µ^s, V) ≥ h(µ^s, A^s) − γ for an appropriately chosen cover V. This inequality extends, by integration over the ergodic decomposition, to all invariant measures µ^s on the standard product. Applied to µ^s = µ × λ, along with 6.7.3, this implies that

h^BK(µ × λ, ε) ≥ h^Leb(µ, A^s) − γ,


whenever ε < Leb(V)/2. Further, by Lemma 7.1.5, the left hand side can be replaced by h^BK(µ, ε). We have proved that H^BK uniformly dominates the entropy structure H^Leb.

(7a) In this argument we refer to h^OW(·, U). Fix a cover U. Because x returns to U_x^n at exactly the same times as any lift x' of x returns to U'^n_{x'} in (X', T'), we have h^OW(µ, U) = h^OW(µ', U'). If A' is a clopen partition of X' inscribed in U', then the return times to A'^n_{x'} are larger. Combined with Theorem 1 in [O-W], this implies that H^ref ≥_uni H^OW.

(7b) Fix an essential partition A^s in X^s, so that h(µ^s, A^s) is an element of the reference entropy structure. By Lemma 7.1.10(2), h^OW(µ^s, V^s) ≥ h(µ^s, A^s) − γ for an appropriately chosen cover V^s of X^s. Then, applying 6.8.3 and Proposition 7.1.6, we deduce that

h^OW(µ, ε²/8) ≥ h(µ^s, A^s) − γ,

where ε is half of the Lebesgue number of the cover V^s. This implies that H^OW ≥_uni H^ref.

(8a) In [N] we find the following inequality (Theorem 1(1.1) there): h^New(X|µ, ε) ≥ h(µ) − h(µ, A) whenever diam(A) < ε, i.e., h^New(X|µ, ε) ≥ h(µ) − h^+_µ(d, ε, T). Now, it is not hard to see that n·h^New(X|µ, ε) equals Newhouse's local entropy evaluated for the power T^n and the metric d_n (and the same ε). Thus

h^New(X|µ, ε) ≥ h(µ) − (1/n) h^+_µ(d_n, ε, T^n).

Passing to the limit over n we conclude that h(µ) − h^New(X|µ, ε) ≤ h^Rom(µ, ε). We apply this inequality to ergodic measures, and then, since h^Rom(µ, ε) is supharmonic, it passes, via integration over the ergodic decomposition, to all invariant measures. We have shown that H^Rom ≥_uni H^New.

(8b) In order to prove that H^New ≥_uni H^ref we need to show that for every essential partition A^s of X^s and every γ > 0 there exists an ε > 0 with h(µ) − h^New(X|µ, ε) ≥ h(µ^s, A^s) − γ, for every µ and every lift µ^s of µ. By Lemma 7.1.11 (and since h(µ) = h(µ^s)), it suffices to find ε with h(µ^s) − h^New(X^s|µ^s, 2ε) ≥ h(µ^s, A^s) − γ. But the functions on both sides are harmonic, so it suffices to show the above inequality for all ergodic measures µ^s. We have reduced the problem to showing that in any system (X, T) admitting an essential partition A, for every γ there exists ε with h(µ) − h^New(X|µ, ε) ≥ h(µ, A) − γ for all ergodic measures µ.

Note that we are now in the setting of Lemma 7.1.10. Let ε = (1/2)Leb(V), where V is the cover of Lemma 7.1.10(1) applied to γ/2. Fix some δ < ε and let B be a partition of diameter smaller than δ. For given σ < 1 there


exists an n_0 for which the set F (depending on µ) of points x such that, for every n ≥ n_0, both log µ(V_x^n) < −n·h(µ, A) + n·γ/2 and log µ(B_x^n) > −n·h(µ, B) − n·γ/2, has measure larger than σ (we use the standard S-M-B for the second part). Let E be a (d_n, δ)-separated set within B_x^{(n,ε)} ∩ F. Then every element of B^n contains at most one representative of E, and if it does, it is contained entirely in V_x^n (use the fact that δ < ε, so that δ + ε < Leb(V)). Therefore the cardinality of E is at most the ratio between the measure of V_x^n and the minimal measure of an element of B^n containing a point of E (hence of F). The logarithm of this fraction is less than n·h(µ, B) − n·h(µ, A) + nγ (if n > n_0). Dividing by n and passing with n to infinity we obtain h(δ|F, ε) ≤ h(µ, B) − h(µ, A) + γ. Letting δ → 0 and at the same time letting B become arbitrarily fine, we get h(X|F, ε) ≤ h(µ) − h(µ, A) + γ. We now apply the infimum over F and the limit over σ, and, according to Definition 6.10.1 of local entropy, we obtain the desired inequality h(µ) − h^New(X|µ, ε) ≥ h(µ, A) − γ. This concludes the entire proof. □

8. Failing candidates

In this section we point out entropy notions which in general do not serve as entropy structures. Nonetheless, each of them provides some partial information about the characteristics, and hence can be considered useful. Notice that for decreasing sequences we define the transfinite sequence u_α directly using the component functions θ_k (without subtracting them from h). For candidate sequences (increasing or decreasing) we will simplify the notation of u_α; for instance u^Mis_α stands for u^{T^Mis}_α.

8.0.1 Theorem. Let H denote an entropy structure of (X, T), and let T = h − H denote the sequence of tails. Then

(1)  H^par ≥_uni H, and T ≤_uni T^Mis.

In particular,

(2)  u^par_α ≤ u^H_α ≤ u^Mis_α.

The converse dominations and inequalities need not hold in general, except one:

(3)  u^Mis_1 = u^H_1 (= h*).


Proof. We begin with the easy part, concerning H^par. Let (A_k) be an arbitrary refining sequence of partitions of X, and let ε_k = diam(A_k). Directly by Definition 6.6.3, we have h(·, A_k) ≥ h^Rom(·, ε_k); thus H^par uniformly dominates the entropy structure H^Rom. The first inequality in (2) now follows directly from the above, the definition of u_α, and the fact that H and H^par have the same limit.

The remaining part of the proof, concerning T^Mis, especially the last (exceptional) equality, is very complicated. Yet we provide it in full detail because it is needed for the Tail Variational Principle 5.1.4. We now suspend the proof until after the auxiliary notions and lemmas.

8.1 Auxiliary notions: topological fiber entropy and ε-envelope

Let (Y, S) be a topological extension of (X, T) via a factor map π.

8.1.1 Definition. Let U be a cover of Y. For x ∈ X and µ ∈ P_T(X) set
(a) H(n, U|x) := log #{minimal subfamily of U^n covering π^{-1}(x)};
(b) h(U|x) := lim_n (1/n) H(n, U|x);
(c) h(U|µ) := ∫ h(U|x) dµ;
(d) h(Y|µ) := sup_U h(U|µ).

The last is called the topological fiber entropy function with respect to the extension. It is easy to see that H(n, U|x) is a subadditive process, thus the limit in (b) exists µ-almost everywhere and equals h(U|µ) for every ergodic µ. Clearly, (c) defines a harmonic function. Because the supremum in (d) is realized as a monotone limit along a sequence of covers, h(Y|µ) is also a harmonic function of the invariant measure µ. We will be using the following fact:

8.1.2 Conditional (Inner) Variational Principle. (Theorem 2.1 in [L-W] or Theorem 4 in [D-S1]) If h(µ) < ∞ then

h(Y|µ) = h^π_ext(µ) − h(µ). □

Comparing Definitions 8.1.1 and 6.9.1 we immediately see that if A is a clopen partition of X then

(8.1.3)  h(X|µ_A) = h^Mis(X|µ, A),

where µ_A is the projection of µ onto the subshift factor generated by A.

Let (X', T') be a zero dimensional principal extension of (X, T), and let (A'_k) be a refining sequence of clopen partitions. Denoting by µ'_k the projection of µ' onto the factor generated by the partition A'_k, we now set

T^fib := (h(X'|µ'_k))_{k∈N}.

This is a decreasing sequence defined on P_{T'}(X'), and it will help us compare the sequence of tails T^ref of the reference entropy structure with the (lifted) candidate notion T^Mis. Another auxiliary sequence follows:


8.1.4 Definition. Denoting by θ_k := h − h^ref_k the k-th element of T^ref, let

θ_k^{ε_k}(µ') := sup{θ_k(ν') : dist(µ', ν') < ε_k},  and  T^ε := (θ_k^{ε_k})_{k∈N},

where ε_k → 0 and dist is a metric on P_{T'}(X'). Again, this decreasing sequence defined on P_{T'}(X') will be used for the same purposes as T^fib.

8.2 More technical lemmas

Our first two lemmas treat the candidate T^Mis in the spirit of Section 7.1.

8.2.1 Lemma. h^Mis(X|µ, ε) = h^Mis(X^s|µ^s, ε).

Proof. Because the rotation is an isometry, every (n, ε)-ball B^s in X^s is a product of an (n, ε)-ball B in X with an arc of length 2ε in T. Moreover, every (d_n, δ)-separated set E in B can be lifted to a (d^s_n, δ)-separated set E × F in B^s, where F is a subset of the 2ε-arc of cardinality approximately 2ε/δ. Conversely, every maximal (d^s_n, δ)-separated set in B^s, intersected with a horizontal strip X × [a, b] of width |a, b| = δ and projected onto X, produces a (d_n, δ)-separated set in B. The cardinality of the maximal such set is not less than #E·δ/(2ε), and if it were larger, the preceding method would produce a larger (d^s_n, δ)-separated set in B^s. This easily implies the assertion. □

Let (X', T') be the standard zero dimensional principal extension obtained via a sequence of essential partitions A^s_k in (X^s, T^s). As before, let A'_k denote the corresponding clopen partition of X', and for µ^s ∈ P_{T^s}(X^s) let µ' denote the (unique) corresponding measure on X'. Let V^s and V' be an open cover of X^s and its preimage in X', respectively. Then the following hold.

8.2.2 Lemma.

(1)  h^Mis(X^s|µ^s, A^s_k) = h^Mis(X'|µ', A'_k),

(2)  h^Mis(X^s|µ^s, V^s) ≤ h^Mis(X'|µ', V').

Proof. We will fix the parameter k and thus skip it in the notation. We also skip all the superscripts s, remembering that A is an essential partition of X. The proofs of the inequalities "≤" are easy: due to the integration in Definition 6.9.1 it suffices to consider points x ∈ X with a unique lift x'. We choose the metric d' on X' not smaller than d on images. If now E is a (d_n, δ)-separated set in A_x^n (or in V_x^n), then we can select from the preimage of E a (d'_n, δ)-separated set E' of the same cardinality as E, contained in A'^n_{x'} (or in V'^n_{x'}), respectively.

The proof of the converse inequality in (1) is harder. By 8.1.3,

h^Mis(X'|µ', A') = h(X'|ξ'),


where ξ' is the projection of µ' onto the subshift generated by the clopen partition A'. The second interpretation, along with the Inner Variational Principle 8.1.2, allows us to write

h^Mis(X'|µ', A') = sup_{ν'} h(ν') − h(ξ'),

with the supremum ranging over all measures ν' projecting to ξ'. Unfortunately, the left hand side of (1) does not admit such an interpretation, hence we cannot apply an argument on the measure-theoretic level. Nonetheless, notice that the above displayed formula is valid for ANY (not necessarily zero dimensional) extension of (X, T) in which µ has a unique preimage µ', the extension is a measure-theoretic isomorphism for µ' and µ, and which identifies (up to µ) the essential partition A with a clopen partition A'. That means that if we replace (X', T') by any such extension, we do not change h^Mis(X'|µ', A').

In the following argument (X', T') will denote the extension constructed as follows. We view A as a finite set of labels (an alphabet) and to each x we associate its A-name x̄ = (A_n)_{n∈Z} ∈ A^Z defined by the rule T^n x ∈ A_n. Then we let X' be the closure of the set

Ω := {(x, x̄) : x ∈ X} ⊂ X × A^Z,

with the product action of T with the shift on A^Z, and with the metric d' obtained as the maximum of the distances on both projections (use any standard metric for the subshift). Notice that the zero coordinate (in the second component) provides a clopen partition A' of X' of the same cardinality as A. Except for points whose trajectories visit the boundaries of the elements of the partition A, every point x ∈ X has a unique lift (x, x̄) in X', so the projection onto the first axis is a measure-theoretic isomorphism for every invariant measure, sending the elements of A' onto the closures of the elements of A.

Now, fix an x ∈ X which never visits the boundaries of A and let (x, x̄) be its lift. By our convention, let A'^n_{(x,x̄)} be the element of the partition A'^n which contains (x, x̄). Let E' be a (d'_n, δ)-separated set in A'^n_{(x,x̄)}. If (y, b), (z, c) ∈ E', then either d_n(y, z) > δ or b and c differ at a position between −m and n + m − 1, where m depends on δ (and not on n). Obviously, b and c do not differ at positions between 0 and n − 1 (there they coincide with x̄). The elements (y, b) ∈ E' can be classified by the configurations of values of b at positions between −m and −1 and between n and n + m − 1. There are at most (#A)^{2m} such classes, so one of them contains at least #E'/(#A)^{2m} elements (y, b); within one class the sequences b are not sufficiently separated, and so the corresponding elements y form a (d_n, δ)-separated set E in X. Since each b satisfies b[0, n) = x̄[0, n) = (A_0, A_1, ..., A_{n−1}), the set E is contained in ∩_{i=0}^{n−1} T^{-i} Ā_i (where Ā denotes closure). We need to get rid of the closure marks (and, unfortunately, the intersection of closures may be larger than the closure of the intersection). But every point (y, b) of X' can be approximated by points (y', b') ∈ Ω, where b' coincides with b at least on [0, n). So we can modify the set E so that it remains (d_n, δ)-separated, contained in ∩_{i=0}^{n−1} T^{-i} Ā_i, and, additionally, contained in Ω. Now, by the definition of Ω, E is contained in the same intersection without the closure marks, i.e., in A_x^n.


We have proved that H(n, δ|x, A) ≥ H(n, δ|(x, x̄), A') − 2m log(#A) for almost every x. Dividing by n, passing with n to infinity, integrating, and then passing with δ to zero, we get

h^Mis(X|µ, A) ≥ h^Mis(X'|µ', A'). □

8.2.3 Question. Does the "missing inequality" in Lemma 8.2.2(2), i.e., h^Mis(X^s|µ^s, V^s) ≥ h^Mis(X'|µ', V'), hold?

We continue by investigating the auxiliary sequences.

8.2.4 Lemma. T^ref ≤_uni T^fib ≤_uni T^Mis ≤_uni T^ε.

Proof. The Inner Variational Principle 8.1.2 allows us to write

h(X'|µ'_k) = sup_{ν'} h(ν') − h(µ'_k) = sup_{ν'} h(ν') − h(µ', A'_k),

with the supremum ranging over measures ν' ∈ P_{T'}(X') such that ν'_k = µ'_k. In particular, h(X'|µ'_k) ≥ h(µ') − h(µ', A'_k). This proves the first uniform yielding in the lemma.

Lemma 8.2.1 states that h^Mis(X|µ, ε_k) = h^Mis(X^s|µ^s, ε_k). For given k let k' be such that A^s_{k'} has diameter smaller than ε_k. Then, obviously, h^Mis(X^s|µ^s, ε_k) ≥ h^Mis(X^s|µ^s, A^s_{k'}). Further, by Lemma 8.2.2(1) and formula 8.1.3,

h^Mis(X^s|µ^s, A^s_{k'}) = h^Mis(X'|µ', A'_{k'}) = h(X'|µ'_{k'})

in the standard principal extension (X', T'). This proves the central uniform yielding.

We will prove the last yielding by showing that T^Mis uniformly yields to (T^fib)^ε (which, by Lemma 8.2.4, uniformly yields to T^{2ε}, and the latter is obviously uniformly equivalent to T^ε). We failed to find a less intricate argument for this inequality (or any other leading to u^Mis_1 ≤ u^H_1). Some ideas below are similar to those used in the proof of Theorem 1 in [D-W]. We need to show that for each k and γ > 0 there exists k' such that

h^Mis(X|µ, ε_{k'}) ≤ h(X'|ν'_k) + γ


for some ν' within ε_k distance from some lift µ' of µ. By 8.1.3, we can replace h(X'|ν'_k) by h^Mis(X'|ν', A'_k). On the other hand, by Lemma 8.2.1, the inequalities 6.9.3, and Lemma 8.2.2(2), we have

h^Mis(X|µ, ε_{k'}) = h^Mis(X^s|µ^s, ε_{k'}) ≤ h^Mis(X^s|µ^s, V^s) ≤ h^Mis(X'|µ', V'),

for any cover V' of X' and sufficiently small ε_{k'} (i.e., for sufficiently large k'). Thus, it suffices to find a cover V^s of X^s lifting to V' in X' (and a measure ν' near µ') such that

(8.2.5)  h^Mis(X'|µ', V') ≤ h^Mis(X'|ν', A'_k) + γ.

Clearly, since the considered functions are harmonic and the metric on the space of measures is convex, it is enough to consider ergodic measures µ' only. The characteristic functions of the elements of the partitions A'_k are continuous and separate points, so we can use them in the definition of the metric dist on P_{T'}(X'). In particular, there are l ≥ k and δ > 0 such that if two measures agree on the elements of A' = A'_l up to δ, then they are less than ε_k apart. Because A' is finer than A'_k, it suffices to prove the inequality 8.2.5 for A'. We can also choose δ smaller than γ/(2 log(#A')).

Recall that all partitions in the sequence A'_k are "made" from essential partitions A^s_k of X^s. Denote A^s = A^s_l. Define V^s to be the cover by the elements A ∪ U, where A ∈ A^s and U is an open set covering the boundaries of all sets A, satisfying µ^s(U) < δ for all invariant measures on (X^s, T^s). The cover V' is now specified as the lift of V^s. We can skip the apostrophes and indices in 8.2.5, remembering that (X, T) is zero dimensional, A is a clopen partition, and that V is obtained by adding to each element of A a small (in the sense of measure) open set V.

We continue by seeking an appropriate measure ν near an ergodic µ. Let B be another clopen partition (hence a cover) so fine that

h^Mis(X|µ, V) ≤ h^Mis(B|µ, V) + γ/2.

We will complete the proof by constructing a measure ν near µ such that

(8.2.6)  h^Mis(B|µ, V) ≤ h^Mis(B|ν, A) + γ/2.

Let x be a point at which the convergence of (1/n) H(n, B|x, V) to h(B|x, V) holds (see Definition 6.9.1(b)), and which visits V and all sets A ∈ A with frequencies corresponding to the µ-measures of these sets. We need to estimate the number N(n, B|V_x^n) of elements B of B^n intersecting an element V_x^n of V^n containing x. We have that every cell of V^n containing x, and hence also their union V_x^n, is contained in

∩_{i∈[0,n)\J_n} T^{-i} A_{T^i x},

where J_n ⊂ [0, n) comprises the times i for which T^i x ∈ V. Each B ∈ B^n intersecting V_x^n is contained in some A^n_B ∈ A^n. Then

A^n_B = A^n_y = ∩_{i∈[0,n)} T^{-i} A_{T^i y},


for any y ∈ B, and A_{T^i y} = A_{T^i x} whenever i ∉ J_n. This implies that there is a variety of at most (#A)^{#J_n} different sets A^n_B. Let B_n be one of the elements B for which the number of sets B intersecting (contained in) A^n_B is maximal. We have derived the following estimate:

N(n, B|V_x^n) ≤ (#A)^{#J_n} N(n, B|A^n_{B_n}),

or

H(n, B|x, V) ≤ H(n, B|y_n, A) + #J_n·log(#A),

where y_n is a point chosen from B_n. Now, for a not necessarily invariant measure define

(8.2.7)  H(n, B|µ, A) := ∫ H(n, B|x, A) dµ,

and observe that this is a subadditive and harmonic process on measures. Clearly H(n, B|x, A) = H(n, B|δ_x, A), so, by Lemma 3.2.1, we have, for every m < n,

(1/n) H(n, B|x, V) ≤ (1/m) H(m, B|ν_n, A) + (2m/n) log(#B) + (1/n)·#J_n·log(#A),

where ν_n = (1/n) Σ_{i=0}^{n} δ_{T^i y_n}. Passing with n to infinity, we get

h^Mis(B|µ, V) ≤ (1/m) H(m, B|ν, A) + δ·log(#A),

where ν is an accumulation point of the sequence of measures ν_n (since the elements of the partitions A and B are clopen, the function H(m, B|·, A) is continuous on points and hence also on measures). Finally, passing with m to infinity and using the dominated convergence theorem (the function (1/m) H(m, B|ν, A) never exceeds log #B), we arrive at the desired inequality 8.2.6. Clearly ν is an invariant measure. The mass assigned by ν to an element A ∈ A is the limit of the frequencies at which A appears as A_{T^i y_n} in the representation of A^n_{B_n}, so it differs from µ(A) by at most δ. This implies that ν is ε_k-close to µ. The proof is now completed. □

8.3 Proof of Theorem 8.0.1, continuation

Proof for H^Mis. The latter uniform yielding in Theorem 8.0.1(1) is simply part of Lemma 8.2.4. The second inequality in (2) follows directly from this yielding. The same lemma implies that

lim_k θ̃_k ≤ u^H_1 ≤ u^Mis_1 ≤ u^ε_1 = lim_k θ̃_k^{ε_k}.

On the other hand, note that the double sequence θ̃_k^{ε_n} is decreasing in both indices k and n, so the diagonal θ̃_k^{ε_k} converges to the iterated limit. But for fixed k it follows almost directly from the definition of the u.s.c. envelope that lim_n θ̃_k^{ε_n} = θ̃_k, so the diagonal converges to u^H_1 and all the above functions u_1 are equal, which completes the proof of (3). □

It remains to show that the discussed candidates may indeed fail. The following simple example does the job.


8.3.1 Example. Let (X, T) be a disjoint union of countably many copies (X_n, S) of a strictly ergodic 0-1-subshift with entropy 1, with diam(X_n) → 0, accumulating at a fixpoint x_0. (A similar example was considered by Misiurewicz [M2] to show that the entropy function need not be u.s.c.) Denote by µ_n the measure supported by X_n and let µ_0 = δ_{x_0}. Clearly, µ_n → µ_0. We can easily arrange a refining sequence of clopen partitions A_k so that h(µ_n, A_k) = 1 for 0 < n ≤ k, and 0 otherwise (including n = 0). This defines the entropy structure of (X, T). The entropy function equals 1 at each µ_n and zero at µ_0.

The tails θ_k = h − h_k attain 0 at µ_n with small n and at µ_0, and 1 only at µ_n with n > k. Taking the limit of the u.s.c. envelopes of these tails we obtain u^H_1 = 1_{µ_0}. The functions u^H_1 + θ_k equal 1 at µ_0; in fact they coincide with θ̃_k, hence the function u^H_2 is the same as u^H_1. In particular α^H_0 = 1.

We now examine T^Mis. Viewing the clopen partitions as covers and using 8.1.3 we can identify T^Mis with T^fib. For each A_k the fixpoint belongs to an open set along with some entire subsets X_n, hence the corresponding measures µ_n project by π_k to the same (point mass) measure as µ_0. By the Inner Variational Principle 8.1.2 we obtain:

h(X|π_k(µ_0)) ≥ h(µ_n) − h(π_k(µ_0)) = 1.

In fact, the component functions h(X|π_k(·)) are identical with θ̃_k. Such functions are u.s.c. and decrease to the characteristic function of {µ_0}. Therefore u^Mis_1 = 1_{µ_0} (which we expected, by Theorem 8.0.1(3)). For the evaluation of u^Mis_2 observe that the functions u^Mis_1 + h(X|π_k(·)) have the following form: 0 at µ_n with 0 < n ≤ k, 1 at µ_n with n > k, and 2 at µ_0. Such functions are also u.s.c. but they decrease to u^Mis_2 = 2·1_{µ_0}. It is now easy to see that u^Mis_α = α·1_{µ_0} for any integer α, leading to the (wrong) order of accumulation ℵ_0.

We test H^par using the same example. Consider in X the partition C into two sets: C_0 consisting of all points with 0 at the zero coordinate (regardless of X_n) and C_1 defined analogously, with 1. We include the fixpoint x_0 in C_1. It is obvious that h(µ, C) = h(µ) for each µ. Choosing (C_k) to be any refining sequence of partitions with C_1 = C we obtain the sequence H^par with h_k = h for each k. This implies that u^par_1 ≡ 0 ≠ u^H_1.

8.3.2 Question. (S. Newhouse) Does there always exist a sequence of partitions such that H^par is an entropy structure?

Remark. Since in our example C divides X into one open set and one closed set which is the closure of its own interior, there is little hope for a purely topological restriction on the partitions making H^par necessarily an entropy structure.

Remark. Partitions with the so-called predictable boundary property would do. This notion is due to E. Lindenstrauss, who conjectures that such partitions exist in any finite entropy system. A set ∆ is predictable if for every ε there is an open neighborhood U of ∆ such that the closed subshift generated by the {U, X \ U}-names has topological entropy smaller than ε.
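The combinatorics of Example 8.3.1 can be checked mechanically. The sketch below is an illustration only: representing each measure µ_n by its index, truncating to finitely many measures and levels, and approximating the limsup at the accumulation point µ_0 by the far tail of the index sequence are our own modelling choices, not part of the paper.

```python
N, K = 200, 50                      # truncation: measures mu_0..mu_N, levels k = 1..K

h = [0.0] + [1.0] * N               # entropy function: h(mu_0)=0, h(mu_n)=1 for n>=1
hk = {k: [0.0] + [1.0 if n <= k else 0.0 for n in range(1, N + 1)] for k in range(1, K + 1)}

def usc_env(f):
    """u.s.c. envelope when mu_0 is the only accumulation point:
    raise the value at index 0 so that it covers limsup_n f(mu_n)."""
    g = list(f)
    g[0] = max(f[0], max(f[N // 2:]))   # far tail approximates the limsup
    return g

def dec_limit(fs):
    """pointwise limit of a family of functions decreasing in k."""
    return [min(f[n] for f in fs) for n in range(N + 1)]

theta = {k: [h[n] - hk[k][n] for n in range(N + 1)] for k in range(1, K + 1)}

u1_H = dec_limit([usc_env(theta[k]) for k in range(1, K + 1)])
print(u1_H[:5])                      # [1.0, 0.0, 0.0, 0.0, 0.0]  (indicator of {mu_0})

# For T^Mis the k-th component equals the u.s.c. envelope of theta_k (Example 8.3.1),
# so the second level of the transfinite sequence gains another unit at mu_0:
tMis = {k: usc_env(theta[k]) for k in range(1, K + 1)}
u1_Mis = dec_limit([usc_env(tMis[k]) for k in range(1, K + 1)])
u2_Mis = dec_limit([usc_env([u1_Mis[n] + tMis[k][n] for n in range(N + 1)]) for k in range(1, K + 1)])
print(u2_Mis[:5])                    # [2.0, 0.0, 0.0, 0.0, 0.0]  (2 * indicator of {mu_0})
# entries with n > K in these arrays are artifacts of the finite truncation
```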


9. Tail entropy and asymptotic h-expansiveness

We will now prove our Tail Variational Principle (5.1.4). For an open cover V, by V̄ we will mean the closed cover consisting of the closures of the elements of V.

9.0.1 Lemma (Small Variational Principle). For any open covers V and Ṽ such that V̄ < Ṽ there holds

h(X|V) ≤ sup_µ h^Mis(X|µ, Ṽ) ≤ h(X|Ṽ).

Proof. The latter inequality has already been observed in 6.9.4. Comparing Definitions 3.3.5 and 6.9.1 we can write

h(X|V) = sup_U limsup_n (1/n) sup_x H(n, U|x, V).

Fix a cover U for which the first supremum above is attained up to some small δ. For n ∈ N let x_n be a point which nearly realizes the latter supremum. Then, for n large,

h(X|V) − δ < (1/n) H(n, U|x_n, V).

The formula 6.9.1(a) can also be applied to closed covers, and the right hand side above will not drop if we replace V by V̄. As in the proof of Lemma 8.2.4, observe that the process H(n, U|µ, V̄) defined by 8.2.7 is subadditive and harmonic on measures, so, by Lemma 3.2.1, we have, for every m < n,

(1/n) H(n, U|x_n, V̄) ≤ (1/m) H(m, U|µ_n, V̄) + (2m/n) log(#U),

where µ_n = (1/n) Σ_{i=0}^{n} δ_{T^i x_n}. Because the elements of V̄ are closed, the function H(m, U|·, V̄) is u.s.c. on points, and hence also on measures. Thus, passing with n to infinity, we get

h(X|V) − δ ≤ (1/m) H(m, U|µ, V̄) ≤ (1/m) H(m, U|µ, Ṽ),

where µ is an accumulation point of the sequence of measures µ_n. Clearly µ is an invariant measure. Finally, passing with m to infinity and using the dominated convergence theorem, we get

h(X|V) − δ ≤ h(U|µ, Ṽ).

We can now apply the supremum over the covers U and over all measures on the right hand side, and then ignore δ. □

Proof of Theorem 5.1.4. By Theorem 8.0.1(3), h* ≡ u^Mis_1 = inf_k of the u.s.c. envelopes of the functions h^Mis(X|·, V_k). By an elementary argument on u.s.c. functions (see Proposition 2.4 in [B-D]) one obtains (the envelope can be dropped under the supremum)

sup_µ h*(µ) = inf_k sup_µ h^Mis(X|µ, V_k).


For each index k let k' be such that diam(V_{k'}) < Leb(V_k). Then V̄_{k'} < V_k. Clearly, the sequence h(X|V_k) decreases in k; therefore, by Lemma 9.0.1,

sup_µ h*(µ) = inf_k h(X|V_k) = h*(X, T). □

We can now put a well-known and often used property, called asymptotic h-expansiveness and defined by Misiurewicz as h* = 0, into a new perspective. It appears as a very special (trivial) case of the entropy structure. The statement below can also be found in [B-D], but we are now able to provide a purely "entropy structural" proof (ridding it of the quotation of [Le]).

9.0.2 Theorem. The following conditions are equivalent:
(1) H converges to h uniformly on P_T(X);
(2) the constant sequence (h) is an entropy structure;
(3) H uniformly dominates all sequences of functions increasing to h;
(4) E_H ≡ h < ∞;
(5) h* ≡ 0;
(6) α^H_0 = 0 (zero order of accumulation of entropy);
(7) h*(X, T) = 0 (asymptotic h-expansiveness);
(8) (X, T) admits a principal symbolic extension.
In particular, h is then u.s.c.

Proof. It is immediate to see that each of these conditions requires that (X, T) has finite topological entropy. The equivalence between (1), (2), and (3) is obvious. Consider an entropy structure H = (h_k) with u.s.c. differences (for example H^fun). If (4) holds then, by Lemma 2.1.6, T = (h − h_k) is a sequence of u.s.c. functions converging nonincreasingly to zero. By 2.1.1, such convergence is uniform, so (1) holds. If (1) is true then every h − h_k is a uniform limit of the u.s.c. functions h_{k'} − h_k, hence is u.s.c., so h is a superenvelope, and, obviously, it is the minimal one, so (4) holds. The equivalence between (2), (5), (6), and (7) follows directly from the definitions of h* = u^H_1 and α^H_0, and from the Tail Variational Principle 5.1.4. The equivalence between (4) and (8) follows from Theorem 5.1.1. Upper semicontinuity of h follows from (1) applied to any entropy structure consisting of u.s.c. functions (H^fun, for example). □

Notice that the last condition, upper semicontinuity of h, is essentially weaker than asymptotic h-expansiveness; even a constant finite entropy function may occur in systems with infinite h_sex (see Example 1 in [D] for a more explicit exposition). We also remark that (2) indicates that the entropy structure theory makes no distinction between expansive, h-expansive, and asymptotically h-expansive systems.

We cannot help but restate a remarkable result concerning smooth dynamics, due to Buzzi [B].

9.0.3 Fact. Every C^∞ map on a compact Riemannian manifold is asymptotically h-expansive. □


Remark. Newhouse [N] proved that his notion of local entropy provides an upper bound for the defect of h. Then, using Yomdin [Y], he provided an estimate of local entropy which vanishes in C∞ systems, implying upper semicontinuity of h ([N], Theorem 4.1). Eight years later Buzzi argued in a similar way, replacing Newhouse's estimate by a different parameter. As we see from the definition (a fact which somehow escaped the author's attention), Buzzi's parameter is precisely h∗, so, while he only claims upper semicontinuity of h, he actually proves asymptotic h-expansiveness of C∞ maps. As our current investigation of hNew and the formula 5.1.5 reveal, Newhouse's upper bound for the defect of h also equals h∗, and hence his C∞ result is equivalent to asymptotic h-expansiveness.

10. Remarks on noninvertible continuous maps

Many of the results of this paper apply almost unchanged to continuous maps. Before we provide evidence for such a statement, we want to point out that in this theory there is one place where invertibility cannot be removed without a drastic loss of generality. Namely, whenever we consider symbolic extensions, we must have in mind subshifts on two-sided sequences (hence homeomorphisms). An attempt to build an extension in the form of a one-sided subshift may fail badly, because without any assumptions on the system (X, T), it may contain an invertible subsystem (or factor) with positive entropy, and then the theorem below applies:

10.0.1 Theorem. Let T : X → X be a homeomorphism, and suppose (X, T) has a one-sided symbolic extension (Y, S) (Y ⊂ Λ^N). Then htop(X) = 0.

Proof. This is a consequence of a much more general statement: Suppose (Y, ν, S) is a measure-preserving transformation admitting a one-sided generator A. Then any invertible factor (X, µ, T) of (Y, ν, S) has entropy zero. To see this, note that in such a case any sigma-algebra in Y lifted from X is contained in the tail sigma-algebra ⋂_{n=1}^∞ ⋁_{i=n}^∞ S^{−i}(Â) (here Â denotes the sigma-algebra generated by the partition A). It is known, however, that the tail sigma-algebra has entropy zero (see e.g. Remark 5.21 in [P]). (A sketch of the inclusion is displayed after Definition 10.0.2 below.) □

Nonetheless, invertibility of the symbolic extension is practically the only "invertibility restriction" in the theory of entropy structures. The system (X, T) itself need not be invertible in order to allow us to define the entropy structure H so that it supports all the characteristics and invariants listed in the introduction, including the existence of an (invertible) symbolic extension whose extension entropy function coincides with a preset bounded affine superenvelope of H. The easiest way to make the passage is via natural extensions.

10.0.2 Definition. Let T : X → X be a continuous map. The natural extension of (X, T) is defined as (X̄, T̄), where

    X̄ := {x̄ = (x_n)_{n∈Z} : x_{n+1} = T x_n for each n ∈ Z} ⊂ X^Z,    T̄((x_n)_n) = (x_{n+1})_n.
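To spell out the inclusion claimed in the proof of Theorem 10.0.1 (a sketch; the notation B for the lifted sigma-algebra is introduced only here): let φ : Y → X be the factor map and B = φ^{−1}(B_X), where B_X is the Borel sigma-algebra of X. Since T is invertible, B_X = T^{−n}(B_X) for every n, hence, modulo ν,

    B = φ^{−1}(T^{−n} B_X) = S^{−n} φ^{−1}(B_X) ⊆ S^{−n}( ⋁_{i=0}^∞ S^{−i}(Â) ) = ⋁_{i=n}^∞ S^{−i}(Â),

where the inclusion uses that A is a one-sided generator, so that ⋁_{i≥0} S^{−i}(Â) is, modulo ν, the full sigma-algebra of Y. Intersecting over n places B inside the tail sigma-algebra.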


It is obvious that (X̄, T̄) is an action of a homeomorphism on a compact metric space, and that it factors onto (X, T) via the projection onto coordinate zero: π0(x̄) := x0. If T is a homeomorphism, then π0 provides a topological conjugacy. Also note that, in any case, the conjugate map µ̄ ↦ µ = π0(µ̄) is an affine homeomorphism between PT̄(X̄) and PT(X) preserving the entropy function. This follows, roughly speaking, from the fact that both the invariant measures and their entropies depend upon the behavior of the forward orbits, which is identical in both systems. We skip the well-known details; the basic intertwining identity behind this claim is recalled below.

10.0.3 Definition. For a (noninvertible) system (X, T) we define the entropy structure H by hk = h̄k ◦ π0^{−1}, where H̄ = (h̄k) is an entropy structure of the natural extension (X̄, T̄).

By definition, such an entropy structure supports those entropy invariants of the system which do not change under the passage to the natural extension. Let us verify the only nontrivial one – the characterization of the extension entropy functions for (two-sided) symbolic extensions:

Proof of Theorem 5.1.1, noninvertible case. By the application of π0 we can identify PT̄(X̄) with PT(X), and hence H̄ with H. Let E be an affine superenvelope of H. Then there exists a (two-sided) symbolic extension (Y, S) of (X̄, T̄) with the extension entropy function equal to E. But clearly, (Y, S) is also a symbolic extension of (X, T), and so (X, T) has a symbolic extension with the extension entropy function E. Conversely, let (Y, S) be a (one- or two-sided) symbolic extension of (X, T). It is not hard to see that then the two-sided subshift (Ȳ, S̄) obtained as the natural extension of (Y, S) (or (Y, S) itself, if it is two-sided) is an extension of (X̄, T̄). Therefore, the extension entropy function for the factor map from (Ȳ, S̄) to (X̄, T̄) is an affine superenvelope of H̄. But (Ȳ, S̄) has the same entropy function as (Y, S), so the extension entropy function for the map between (Y, S) and (X, T) is the same; in particular, it is an affine superenvelope of H. □

The last thing that might be of concern is whether we can define an entropy structure on a noninvertible system directly, without referring to the natural extension. For example, can we define the reference entropy structure? The answer is certainly positive in the presence of essential partitions. In such a case the entropy structure can be defined directly as the reference entropy structure. Otherwise there is serious trouble, as the Lindenstrauss–Weiss theory applies to group actions only, so we are deprived of a basic tool at the very start of the construction. This is why the application of natural extensions seems unavoidable. However, just like in the invertible case, once we learn which candidate notions are good, the reference to special extensions will no longer be needed. So, it suffices to indicate at least one of our candidate notions which has two properties: (1) the definition does not require invertibility, and (2) the component functions hk remain unchanged when passing to the natural extension.
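Before turning to concrete candidates, let us record the elementary identity behind the repeated claim that "forward-orbit" quantities are unaffected by the passage to the natural extension (a sketch of a well-known fact): since x_{n+1} = T x_n along every x̄ ∈ X̄, we have π0 ◦ T̄ = T ◦ π0, and hence

    π0(T̄^n x̄) = T^n(π0 x̄) = T^n x_0 for every x̄ ∈ X̄ and every n ≥ 0.

Consequently, any quantity computed from forward orbits only (values of lifted functions f ◦ π0 along orbits, forward Bowen balls, empirical measures along forward orbits) takes the same values at x̄ and at x0, which is what makes properties (1) and (2) above easy to check for coordinate-zero candidates. For illustration, if (X, T) is the one-sided full shift on Λ^N, then X̄ is naturally identified with the two-sided full shift on Λ^Z.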


Define Hfun = (hfun(·, Fk))k∈N, where Fk are finite families of functions such that AFk refine in X × [0, 1] (the existence of such functions does not involve the action of T at all).

10.0.4 Theorem. Hfun is an entropy structure for (X, T).

Proof. On X̄ define F̄k by f̄(x̄) := f(x0) for f ∈ Fk. It is now easy to verify that for each n ∈ N the partitions (π0 × id)^{−1}(A^n_{Fk}) and A^n_{F̄k} of X̄ × [0, 1] coincide. Because π0 preserves invariant measures, we obtain hfun(µ, Fk) = hfun(µ̄, F̄k) (the identity behind this step is spelled out at the end of this section). In particular, the functions hfun(·, F̄k) converge to the lifted entropy function (which equals the entropy function on PT̄(X̄)), therefore the sequence H̄fun = (hfun(·, F̄k))k∈N satisfies the condition of Lemma 7.1.2, and hence is an entropy structure for (X̄, T̄). At the same time, via π0^{−1}, it transports to Hfun, and hence the latter is an entropy structure on (X, T), as defined in Definition 10.0.3. □

Two other successful (for homeomorphisms) candidates are easily seen to pass also in the noninvertible case. We decided to give up the notions where the verification is not immediate.

(10.0.5) HRom (as well as the original definition of h−(µ, U)) never uses invertibility. If we consider on X̄ the covers Ūk lifted from X, the corresponding entropies will stay the same. Although such covers do not refine in X̄, it is not hard to see that the covers Ū′k = T̄^k(Ū2k) do. Now, it is a standard exercise to see that the limit defining h−(·, Ūk) is insensitive to this modification.

(10.0.6) The verification of HBK is also successful: one has to examine the cover version (hBK(·, Uk))k∈N and apply an argument like in the case of HRom.

There are some technical problems with the remaining candidates. For instance, HLeb requires essential partitions in the standard product, which depends on [Li-W]. Our uniform dominations for HσKat rely on essential partitions in the standard product, too. Newhouse defines local entropy without invertibility, but it is not obvious whether this notion passes to the natural extension unchanged. Also, in part (8b) of the proof of Theorem 7.0.1 we use the standard product. Actually, part (8a) uses no invertibility (and refers to HRom, which is OK). Perhaps part (8b) could be proved differently, using Hfun, but at this moment we leave this with no answer. Lack of essential partitions is the only obstacle for the above three candidates. So, at least in systems where essential partitions are granted, they pass as entropy structures. The situation is a bit worse for the remaining two candidates: we have used invertibility directly in the proof of 6.4.2, so HBow is uncertain, and in the estimates for HOW invertibility was strongly used in Proposition 7.1.6.

Finally, we point out that the Tail Variational Principle relies on Lemma 8.2.4, where the standard principal extension strongly interferes. It is hard to believe that it might fail in any case; however, without going into further elaborate calculations, we can claim it for noninvertible maps only in the presence of essential partitions.
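The identity invoked in the proof of Theorem 10.0.4 is the following standard fact (a sketch; λ denotes the Lebesgue measure on [0, 1]): the map π0 × id sends µ̄ × λ to µ × λ, hence, for every finite partition P of X × [0, 1],

    H_{µ̄×λ}((π0 × id)^{−1}(P)) = H_{µ×λ}(P).

If, as we assume here for illustration, hfun(µ, Fk) is obtained as the limit of the normalized entropies (1/n) H_{µ×λ}(A^n_{Fk}), then the displayed identity applied to P = A^n_{Fk}, whose (π0 × id)-preimage is A^n_{F̄k}, yields hfun(µ̄, F̄k) = hfun(µ, Fk).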


References

[B] J. Buzzi, Intrinsic ergodicity of smooth interval maps, Israel J. Math. 100 (1997), 125–161.
[B-D] M. Boyle and T. Downarowicz, The entropy theory of symbolic extensions, Inventiones Math. 156 (2004), 119–161.
[B-F-F] M. Boyle, D. Fiebig and U. Fiebig, Residual entropy, conditional entropy and subshift covers, Forum Math. 14 (2002), 713–757.
[B-K] M. Brin and A. Katok, On local entropy, Geometric dynamics (Rio de Janeiro 1981), Springer Lec. Notes in Math., vol. 1007, Springer-Verlag, Berlin, 1983, pp. 30–38.
[D-G-S] M. Denker, C. Grillenberger and K. Sigmund, Ergodic Theory on Compact Spaces, Springer Lec. Notes in Math., vol. 527, Springer-Verlag, 1976.
[D] T. Downarowicz, Entropy of a symbolic extension of a totally disconnected dynamical system, Ergodic Th. and Dyn. Sys. 21 (2001), 1051–1070.
[D-F] T. Downarowicz and B. Frej, Topological and measure-theoretic entropy of a Markov operator, Ergodic Th. and Dyn. Sys. (to appear).
[D-N] T. Downarowicz and S. Newhouse, Symbolic extensions and smooth dynamical systems (preprint).
[D-S1] T. Downarowicz and J. Serafin, Fiber entropy and conditional variational principles in compact non-metrizable spaces, Fund. Math. 172 (2002), 217–247.
[D-S2] T. Downarowicz and J. Serafin, Possible entropy functions, Israel J. Math. 172 (2002), 217–247.
[D-W] T. Downarowicz and B. Weiss, Entropy theorems along times when x visits a set, Illinois J. Math. 48 (2004), 59–69.
[G-L-W] E. Ghys, R. Langevin and P. G. Walczak, Entropie mesurée et partitions de l'unité, C. R. Acad. Sci. Paris, Sér. I 303 (1986), 251–254.
[Gm] T. N. T. Goodman, Relating topological and measure entropy, Bull. London Math. Soc. 3 (1971), 176–180.
[Gw] L. W. Goodwyn, Topological entropy bounds measure-theoretic entropy, Proc. Amer. Math. Soc. 23 (1969), 679–688.
[K] A. Katok, Lyapunov exponents, entropy and periodic orbits for diffeomorphisms, Publ. Math. I.H.E.S. 51 (1980), 137–173.
[Kr] U. Krengel, Ergodic Theorems, de Gruyter Studies in Mathematics, Berlin, New York, 1985.
[Le] F. Ledrappier, A variational principle for the topological conditional entropy, Springer Lec. Notes in Math., vol. 729, Springer-Verlag, 1979, pp. 78–88.
[L-W] F. Ledrappier and P. Walters, A relativised variational principle for continuous transformations, J. London Math. Soc. 16 (1977), 568–576.
[Li] E. Lindenstrauss, Mean dimension, small entropy factors and an imbedding theorem, Publ. Math. I.H.E.S. 89 (1999), 227–262.
[Li-W] E. Lindenstrauss and B. Weiss, Mean topological dimension, Israel J. Math. 115 (2000), 1–24.
[M1] M. Misiurewicz, A short proof of the variational principle for a Z^n_+ action on a compact space, Astérisque 40 (1976), 147–158.
[M2] M. Misiurewicz, Topological conditional entropy, Studia Math. 55 (1976), 175–200.
[N] S. Newhouse, Continuity properties of entropy, Annals of Math. 129 (1989), 215–235; correction in 131 (1990), 409–410.
[O-W] D. S. Ornstein and B. Weiss, Entropy and data compression schemes, IEEE Trans. Inform. Theory 39 (1993), 78–83.
[P] W. Parry, Entropy and Generators in Ergodic Theory, W. A. Benjamin, Inc., New York, 1969.
[R] P. P. Romagnoli, A local variational principle for the topological entropy (preprint).
[W] P. Walters, An Introduction to Ergodic Theory, Springer-Verlag, Berlin, 1982.
[Y] Y. Yomdin, Volume growth and entropy, Israel J. Math. 57 (1987), 285–301.

Institute of Mathematics, Technical University, Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland.
E-mail address: [email protected]
