F B M S: O F S   W  R T  C

I S T

Presented in Partial Fulfillment of the Requirements for the Degree Bachelor of Arts in the Department of Mathematics and Computer Science at The College of Wooster

by Jaymie Strecker

The College of Wooster
2004

Advised by:
Jon Breitenbucher
Dale Brown

© Copyright by Jaymie Strecker
April 21, 2004

Abstract

Fractals are shapes in which parts of the shape resemble the whole shape in some way. Brownian motion, a type of random walk, is a fractal. Fractional Brownian motion, a biased random walk in which the walker favors certain directions at each step, is also a fractal. Used to model a wide range of phenomena, from river levels and landscape topography to computer network traffic and stock market indicators, fractional Brownian motion is easy to find in natural processes but not as easy to simulate. This paper introduces a new simulation algorithm called fracture-stretch and compares its effectiveness to existing simulation algorithms.


Acknowledgments

Jon Breitenbucher and Dale Brown—for feedback, for support, and for helping me figure out the really difficult concepts. Mom and Dad—for support and patience. Ben—for being my target audience for parts of this paper. Andy—for feedback, for support, and for leaving notes in my carrel. Melisa, Emily, and Vicky—for watching me practice my presentations. Linda Eddy—for teaching me how to write.


Contents

Abstract v
Acknowledgments vii
Contents ix
List of Figures xi
List of Tables xiii

1 Fractals 1
  1.1 Definition 1
  1.2 Applications 7
  1.3 Measurement 7
    1.3.1 Integer Dimensions 9
    1.3.2 Fractal Dimensions 11
    1.3.3 Similarity Dimension 12
    1.3.4 Box Counting Dimension 14
    1.3.5 Hausdorff Dimension 18

2 Brownian Motion 21
  2.1 History 21
  2.2 Definition 22
  2.3 Fractal Properties 25
    2.3.1 A Simple Random Walk 25
    2.3.2 A Random Walk with Normally Distributed Step Lengths 29
  2.4 Wiener’s Function 32
  2.5 Continuity and Non-Differentiability 33

3 Fractional Brownian Motion 43
  3.1 Definition 43
  3.2 Hurst’s R/S Analysis 44
  3.3 Applications 47
  3.4 The Hurst Exponent 48
  3.5 The Fractal Dimension 51

4 A Survey of Fractional Brownian Motion Simulation Algorithms 57
  4.1 Hurst’s Card Simulation 58
  4.2 Discrete Approximation of Mandelbrot’s and Van Ness’s Formula 59
  4.3 Random Midpoint Displacement 60
  4.4 Successive Random Addition 63
  4.5 Fast Fourier Transform Filtering 65

5 The Fracture-Stretch Algorithm 67
  5.1 Description 68
  5.2 Analysis 69
    5.2.1 Correctness 69
    5.2.2 Computation Time 71
    5.2.3 Features 72

6 Implementation of the Fracture-Stretch Algorithm with the SimFBM Application 75
  6.1 The SimFBM Application 75
    6.1.1 Graphical User Interface 75
    6.1.2 Major Classes and Methods 76
    6.1.3 Implementation Details 79
  6.2 Performance of the Fracture-Stretch Algorithm 79
    6.2.1 Data Analysis Methods 79
    6.2.2 Results 80
    6.2.3 Conclusions 84

References 87

List of Figures

1.1 Generation of the triadic Cantor set [Feder (1988, page 62)]. 2
1.2 Generation of the triadic Koch curve [Addison (1997, page 17)]. 3
1.3 Generation of the Sierpinski triangle [Weisstein (2004)]. 3
1.4 Fractal forest [Mandelbrot (1977, page 57)]. 4
1.5 Fractal generated as the limit set of a group of mappings to the complex plane [Mandelbrot (1977, page 178)]. 5
1.6 The Mandelbrot set [Mandelbrot (1977, page 188)]. 6
1.7 Successive magnifications of the curve y = x³. 6
1.8 Randomized Sierpinski triangle [McGuire (1991, pages 21-22)]. 8
1.9 A coastline with fractal properties [Addison (1997, page 29)]. 9
1.10 Objects with a topological dimension of one [Addison (1997, page 11)]. 10
1.11 Two-dimensional hyperspheres (discs) covering (a) a line segment and (b) a bent line segment intersect only in pairs, while three-dimensional hyperspheres (spheres) covering (c) a surface intersect in groups of three [Addison (1997, page 12)]. 11
1.12 Measuring the similarity dimension of a line segment, a square, and a cube [Addison (1997, page 15)]. 13
1.13 In the box counting method, the covering boxes may be laid out in a grid over the object [Addison (1997, page 32)]. 15
1.14 Measuring the box counting dimension of a line segment [Addison (1997, page 31)]. 16
1.15 Estimating the box counting dimension [Addison (1997, page 33)]. 17
1.16 Calculating the box counting dimension of the triadic Koch curve at its first step of generation. 17
1.17 Measuring the Hausdorff dimension of a smooth surface [Addison (1997, page 35)]. 19
2.1 A Brownian motion path recorded by Perrin in 1909 [Mandelbrot (1977, page 13)]. 22
2.2 A plot of the Brownian motion function [Addison (1997, page 66)]. 23
2.3 A pair of Brownian motion traces and their corresponding two-dimensional path [Addison (1997, page 75)]. 24
2.4 A Brownian motion trace viewed at three different scales [Addison (1997, page 59)]. 26
2.5 All possible paths for a simple random walk of three steps. 27
2.6 A binomial distribution with n = 20 and p = 0.5 [Wackerly et al. (2002, page 100)]. 28
2.7 A normal distribution with expected value µ [Wackerly et al. (2002, page 171)]. 30
2.8 Yk over the interval Ik for some points of T. 38
2.9 For |ta − tb| ≤ 1/2^n, (a) |X(ta) − X(k/2^n)|, (b) |X(k/2^n) − X((k + 1)/2^n)|, and (c) |X((k + 1)/2^n) − X(tb)| are all less than or equal to (d) Yn∗. 39
3.1 The fractal noise at top consists of the increments of the regular Brownian motion at bottom [Feder (1988, page 165)]. 45
3.2 Fractional Brownian motion traces with (a) anti-persistence, H = 0.2; (b) neutrality, H = 0.5; and (c) persistence, H = 0.8 [Addison (1997, page 66)]. 50
3.3 Measuring the box counting dimension of a fractional Brownian motion trace [Addison (1997, page 68)]. 53
3.4 Pairs of fractional Brownian motion traces and their corresponding two-dimensional paths [Addison (1997, page 75)]. 55
4.1 Simulation using random midpoint displacement over increasing levels of recursion. For this sample, H = 0.8 [Voss (1985, page 833)]. 63
4.2 Simulation using successive random addition over increasing levels of recursion. For this sample, H = 0.8 [Voss (1985, page 835)]. 64
6.1 The SimFBM graphical user interface. 76
6.2 The SimFBM graphical user interface after the user has run a simulation. 77
6.3 Class diagram for SimFBM. 78
6.4 Simulations using fracture-stretch with 100 steps and intended H = 0.8. 81
6.5 The log-log plot of (a) increment size, (b) increment variance, or (c) R/S against time length, along with the best-fit line, for a sample time series with 1000 steps and intended H = 0.8. 82

List of Tables

3.1 Values of H for natural processes, as calculated by Hurst et al. [Feder (1988, page 153)]. 48
6.1 Average values for the mean of increment values and for H values estimated from increment size, increment variance, increment correlation, and R/S analysis. For the fracture-stretch method, averages are calculated for time series with 100 to 9000 steps; for the sums of random increments, for time series with 100 to 5000 steps. 83

CHAPTER 1

Fractals

1.1 Definition

Fractals, while easy to understand through pictures, are slippery to define in words. Even Benoit Mandelbrot, the mathematician who pioneered fractal research with his book Les Objets Fractals and coined the term fractal, hesitates to define the term [Mandelbrot (1977, page 361)]. In the style of McGuire’s An Eye for Fractals [McGuire (1991)], this paper introduces the idea of fractals informally with pictures, then follows with a definition. One of the simplest examples of a fractal is the triadic Cantor set (Figure 1.1). To generate the triadic Cantor set, begin with the line segment [0, 1] and remove the middle third from it, leaving the line segments [0, 1/3] and [2/3, 1]. From each of these, again remove the middle third. The result of this process after infinite repetitions is what we call the triadic Cantor set. The triadic Cantor set includes all points x ∈ [0, 1] whose ternary expansion,

x = c1/3 + c2/3² + c3/3³ + · · · ,

has ci = 0 or ci = 2 (not ci = 1) for all i [Weisstein (2004)].
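The middle-third removal is easy to carry out to any finite depth. The following sketch (an illustration for this chapter, not part of the thesis's SimFBM code) computes the intervals that survive each round of removal:

```python
def cantor_intervals(depth):
    """Return the intervals of [0, 1] remaining after `depth` rounds of
    removing the open middle third from every interval."""
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        refined = []
        for a, b in intervals:
            third = (b - a) / 3.0
            refined.append((a, a + third))   # keep the left third
            refined.append((b - third, b))   # keep the right third
        intervals = refined
    return intervals

# After n rounds, 2**n intervals of total length (2/3)**n remain.
print(len(cantor_intervals(4)))  # → 16
```

Points of the true Cantor set are exactly those that survive every round; the shrinking total length (2/3)ⁿ already hints at why ordinary length is the wrong measure for this set.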

Figure 1.1: Generation of the triadic Cantor set [Feder (1988, page 62)].

Another classic example of a fractal is the triadic Koch curve (Figure 1.2). As with generation of the triadic Cantor set, begin with the line segment [0, 1], but this time replace the middle third with two angled line segments, giving the original line segment a triangular indentation. The object that we call the triadic Koch curve is created after infinite repetitions of this process. A third classic example of a fractal is the Sierpinski triangle, shown in Figure 1.3. The process of generating the Sierpinski triangle begins with an equilateral triangle and involves repeatedly removing smaller triangles from the original object. Figures 1.4, 1.5, and 1.6 illustrate other examples of fractals. Complex as these objects are, all of these fractals are generated by a surprisingly simple recursive process. In each fractal, magnifications of certain parts of the fractal look exactly or almost exactly like the original fractal. Note that, in the strictest sense of the word, none of the objects illustrated on this page are actually fractals, since these illustrations only depict finite levels of recursion. These are imitations of fractals, recursed only to


Figure 1.2: Generation of the triadic Koch curve [Addison (1997, page 17)].

Figure 1.3: Generation of the Sierpinski triangle [Weisstein (2004)].

the limits of the resolution of this page, but we imagine that they possess detail at an infinitely small scale [Addison (1997, page 8)]. Our observations about what these examples of fractals have in common lead us to a definition of fractals. Mandelbrot warily defines a fractal as follows:


Figure 1.4: Fractal forest [Mandelbrot (1977, page 57)].

Definition 1.1 (fractal). A fractal is “a shape made of parts similar to the whole in some way” [Addison (1997, page 8)]. The cause for Mandelbrot’s cautious imprecision is that he does not want to pin down the term fractal in such a way that some objects that look like fractals are excluded from his definition [Mandelbrot (1977, page 361)]. In place of a more specific definition, Mandelbrot lists two qualities that are frequently associated with fractals: invariance under displacement and invariance under scaling. For an object to be invariant under displacement, different regions of the object must look similar to each other. For an object to be invariant under scaling, magnifications of parts of the object at different levels of scaling must look similar to the whole object [Mandelbrot (1977, page 18)]. Fractal geometry’s properties of invariance under displacement and invariance under scaling may at first run counter to our mathematical intuition. From our experience with Euclidean geometry (the geometry of regular shapes like circles and


Figure 1.5: Fractal generated as the limit set of a group of mappings to the complex plane [Mandelbrot (1977, page 178)].

triangles [Voss (1985, page 806)]), we know that if we magnify a differentiable point on a curve as in Figure 1.7, then the magnified section of the curve appears straighter the more we zoom in. With fractal objects, successive magnifications yield more detail, not less. In fact, fractal geometry has been so counterintuitive to some people that, in the nineteenth century, the mathematicians who created the first fractal objects deemed them “pathological” and a “gallery of monsters” (F. J. Dyson, quoted in [Mandelbrot (1977, page 3)]).


Figure 1.6: The Mandelbrot set [Mandelbrot (1977, page 188)].

Figure 1.7: Successive magnifications of the curve y = x3 .

1.2 Applications

The existence of non-Euclidean geometries has led some mathematicians to wonder about the limitations of Euclidean geometry, particularly with regard to the natural world. As Mandelbrot eloquently argues, “Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line.” In describing nature, fractal geometry often succeeds where Euclidean geometry fails [Mandelbrot (1977, page 1)]. The motion of particles, the shape of coastlines, the likelihood of errors in electronic transmissions, the clustering of galaxies, the boundaries of clouds, the distribution of Moon craters, and the topology of blood vessels are just a few of the natural phenomena with which fractals have been connected [Mandelbrot (1977)]. If the fractals presented in Figures 1.1-1.6 seem unnatural, it may only be because they are generated deterministically. With the addition of an element of randomness, the Sierpinski triangle bears a striking resemblance to mountainous terrain (Figure 1.8).

1.3 Measurement

If we are studying a particular fractal object, one question that arises is: how do we measure it? In other words, how much space does the object occupy? As we shall see shortly, Euclidean methods of measurement such as length and area prove inadequate for measuring fractals. As an example, Peters [Peters (1991, page 57)] describes an attempt to measure the length of a coastline, such as the one pictured in Figure 1.9, that has fractal


Figure 1.8: Randomized Sierpinski triangle [McGuire (1991, pages 21-22)].

properties. We might estimate the length of the coastline by placing meter sticks end to end along the entire coastline. However, repeating this procedure with 10-centimeter sticks, we could capture more of the detail along the coastline, and so we would arrive at a very different length estimate. Measured at 10-centimeter resolution, the coastline is longer than it is when measured at meter resolution. Peters states, “The smaller the ruler, the longer the coastline. The length of the coastline is dependent on the size of the ruler!” Although the fractal nature of actual coastlines disappears at some finitely small resolution, making Peters’s statement true only for finitely small rulers, theoretical fractal coastlines like the one just described have infinite length. Since they are merely boundaries and have no width, their area is zero. Clearly, some method other than length (a measurement in one dimension) or area (a measurement in


Figure 1.9: A coastline with fractal properties [Addison (1997, page 29)].

two dimensions) is needed to give meaningful measurements of fractal objects. For this, we turn to measurements that lie in non-integer dimensions.

1.3.1 Integer Dimensions

When we speak in everyday language about the dimension of an object, for example a two-dimensional photograph or a three-dimensional puzzle, we are typically referring to its Euclidean dimension. Definition 1.2 (Euclidean dimension). The Euclidean dimension DE of an object is


“the number of co-ordinates required to specify the object” [Addison (1997, page 11)]. Another commonly used type of dimension is the topological dimension, which is related to the shape of an object and remains the same for the object regardless of deformations involving “such motions as stretching, shrinking, bending, but without tearing and without identifying any distinct points” [Baker (1997, page 62)]. For example, in Figure 1.10, the topological dimension of each object is one. The topological dimension of an object is determined by the way that discs, spheres, or, more generally, hyperspheres cover the object, as Figure 1.11 illustrates. Definition 1.3 (Topological dimension). “The covering of an object by [DE -dimensional hyperspheres] of small radius requires intersections between a minimum of DT + 1 groups of elements”, where DT is the topological dimension [Addison (1997, page 12)]. For both the Euclidean dimension and the topological dimension, their definitions imply that they are always integers [Addison (1997, page 11)].

Figure 1.10: Objects with a topological dimension of one [Addison (1997, page 11)].


Figure 1.11: Two-dimensional hyperspheres (discs) covering (a) a line segment and (b) a bent line segment intersect only in pairs, while three-dimensional hyperspheres (spheres) covering (c) a surface intersect in groups of three [Addison (1997, page 12)].

1.3.2 Fractal Dimensions

Some other measures of dimension, which we collect under the term fractal dimension, are not limited to integers. Among objects classified as fractals, the fractal dimension of the object “is usually (but not always) a non-integer dimension greater than its topological dimension.” Heuristically, we can think of the fractal dimension as “a measure of the ‘crinklyness’ or ‘degree of convolution’” of an object [Addison (1997, pages 8, 69)]. Many types of fractal dimension exist, including the similarity dimension, the


divider dimension, the Hausdorff dimension, the box counting dimension, the correlation dimension, the information dimension, the pointwise dimension, the Lyapunov dimension, and others. As the names suggest, different dimension measures emphasize different aspects of the fractal object and may yield different results. What all of these dimension measures have in common is that they probe the fractal object to discover how much of it there is. How much space the object occupies is closely tied to the way that the object expresses its invariance under scaling [Addison (1997)]. Among the abundance of measures of fractal dimension, three of the most basic are the similarity dimension, the box counting dimension, and the Hausdorff dimension.

1.3.3 Similarity Dimension

All measures of dimension relate to the way that the object being measured scales, but the similarity dimension does so most literally. As a result, the similarity dimension is perhaps the easiest way to introduce the idea of measuring fractal dimensions. However, this measure of dimension only applies to exactly self-similar objects like the Sierpinski triangle (Figure 1.3), not to random fractals like the randomized Sierpinski triangle (Figure 1.8). Imagine that we are trying to measure how much space is filled by a line segment, a square, or a cube—in other words, the length of the line segment, the area of the square, or the volume of the cube. To do so, divide the object being measured into N equal-sized parts, each of which is a miniature copy of the object. Let ε be the side length of each scaled-down copy, as Figure 1.12 illustrates.


Figure 1.12: Measuring the similarity dimension of a line segment, a square, and a cube [Addison (1997, page 15)].

For the line segment, the square, and the cube, each miniature copy occupies ε times the length, ε² times the area, or ε³ times the volume, respectively, of the original object. We will refer to the amount of space that an object occupies as its hypervolume, abbreviated V∗. Thus, the hypervolume of a line segment is its length (V∗ = Nε); a square, its area (V∗ = Nε²); and a cube, its volume (V∗ = Nε³). To generalize the measurement of hypervolume to objects with other dimensions, let the exponent of ε be the variable DS. Then the hypervolume is

Nε^DS = V∗.

Solving for DS, we have


Definition 1.4 (similarity dimension, DS).

DS = (log(N) − log(V∗)) / log(1/ε).

Although DS , the similarity dimension, is an integer for line segments, squares, and cubes, it can also take on non-integer values. To demonstrate, we calculate DS for the triadic Cantor set (Figure 1.1). Generation of the triadic Cantor set begins with a line segment of length one, from which the middle third is removed, leaving two smaller line segments. Thus, N = 2, ε = 1/3, and DS = log(2)/ log(1/(1/3)) = log(2)/ log(3) ≈ 0.6309 [Addison (1997, pages 14-16)].
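For exactly self-similar objects of unit hypervolume, log(V∗) = 0 and the calculation reduces to log(N)/log(1/ε), as in the Cantor set computation above. A small sketch of that reduced formula (an illustration; the (N, ε) pairs are taken from the constructions in this chapter):

```python
from math import log

def similarity_dimension(n_copies, scale):
    """DS = log(N) / log(1/eps) for an object made of N copies, each scaled
    down by the factor eps (taking log V* = 0, as for a unit object)."""
    return log(n_copies) / log(1.0 / scale)

print(round(similarity_dimension(2, 1/3), 4))  # triadic Cantor set → 0.6309
print(round(similarity_dimension(4, 1/3), 4))  # triadic Koch curve → 1.2619
print(round(similarity_dimension(8, 1/2), 4))  # cube (8 copies at 1/2) → 3.0
```

The cube comes out to an integer, as expected for a Euclidean object, while the Cantor set and Koch curve land strictly between integer dimensions.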

1.3.4 Box Counting Dimension

Measuring the box counting dimension of an object is very much like measuring the similarity dimension, but instead of looking at how self-similar parts of the object itself scale, we look at how boxes covering the object scale. Since we consider the self-similarity of the covering boxes and not of the object itself, the box counting dimension extends the basic idea of the similarity dimension to include fractal objects whose self-similarity is not exact, such as the randomized Sierpinski triangle in Figure 1.8. An advantage of the box counting dimension is that its calculation lends itself to computerization, as the covering boxes may be laid out in a grid over the object (Figure 1.13). Imagine again that we are trying to measure how much space a line segment occupies, but now we do so by covering the line segment with N boxes of side


Figure 1.13: In the box counting method, the covering boxes may be laid out in a grid over the object [Addison (1997, page 32)].

length δ, as Figure 1.14 shows. To cover the line segment requires at a minimum V∗/δ one-dimensional boxes. Note that for a line segment, covering the object with boxes is identical to dividing the object into self-similar parts. The hypervolume of the line segment is Nδ. Now, generalizing this hypervolume measurement by letting the exponent of δ be the variable DB, we have

Nδ^DB = V∗.

Solving for DB, we have

Definition 1.5 (box counting dimension, DB).

DB = (log(N) − log(V∗)) / log(1/δ).

Usually when we are calculating DB, the box counting dimension, V∗ is unknown


Figure 1.14: Measuring the box counting dimension of a line segment [Addison (1997, page 31)].

to us. In order to determine the value of DB, we rewrite the equation above in slope-intercept form: log(N) = DB log(1/δ) + log(V∗). Thus, DB is the slope of the plot of log(N) against log(1/δ). Given a sufficient number of data points for log(N) versus log(1/δ), we can estimate DB without knowing V∗, as Figure 1.15 demonstrates [Addison (1997, pages 31-33)]. As an example, we calculate the similarity dimension and the box counting dimension of the triadic Koch curve. Generation of the triadic Koch curve begins with a line segment of length one, from which 4 miniature copies, each 1/3 the size of the original, are created. Thus, DS = log(4)/log(1/(1/3)) = log(4)/log(3) ≈ 1.2618 [Addison (1997, page 17)]. Using the box counting method, at the nth step in generating the triadic Koch curve, we can cover the curve with 4^n boxes of side length (1/3)^n; Figure 1.16 illustrates this for n = 1. Using the box counting dimension formula from Definition 1.5 in slope-intercept form with N = 4^n and


Figure 1.15: Estimating the box counting dimension [Addison (1997, page 33)].

δ = (1/3)^n,

log(4^n) = DB log(1/(1/3)^n) + log(V∗).

Since this equation holds true for all n, we can infer that DB = log(4)/log(3).

Figure 1.16: Calculating the box counting dimension of the triadic Koch curve at its first step of generation.
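The slope-estimation procedure just described is straightforward to automate. The sketch below (illustrative; the box counts come from the Koch construction in the text) recovers DB as the least-squares slope of log(N) against log(1/δ):

```python
from math import log

def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# At step n of the Koch construction, N = 4**n boxes of side (1/3)**n
# cover the curve, so log(1/delta) = n log 3 and log(N) = n log 4.
steps = range(1, 8)
log_inv_delta = [n * log(3) for n in steps]
log_n_boxes = [n * log(4) for n in steps]
d_b = slope(log_inv_delta, log_n_boxes)
print(round(d_b, 4))  # → 1.2619, i.e. log(4)/log(3)
```

Because these data points are exactly collinear, the fitted slope matches the exact value; for a measured object (such as the coastline of Figure 1.9), the same fit gives an estimate of DB without knowing V∗.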


1.3.5 Hausdorff Dimension

Of all of the measures of dimension, the Hausdorff dimension is perhaps the most authoritative, since Mandelbrot once defined a fractal as an object whose Hausdorff dimension exceeds its topological dimension. However, the Hausdorff dimension is not especially useful in practice, as it is difficult to compute. Fortunately, the box counting dimension is often a passable alternative to the Hausdorff dimension. Imagine once more that we are trying to measure the hypervolume of an object by covering the object with N boxes, each of side length δ. Let Vm∗ be the hypervolume that we measure using that set of boxes. The hypervolume measured when using boxes of dimension DE is

Nδ^DE = Vm∗.

Suppose that the object in question is the smooth surface shown in Figure 1.17, and we pick DE = 1. This is equivalent to trying to measure the surface by covering it with N line segments, so Vm∗ → ∞ as δ → 0. Suppose now that we pick DE = 3. In other words, we are trying to measure the surface by covering it with N cubes, so Vm∗ → 0 as δ → 0. Only if DE = 2 does Vm∗ assume a value other than 0 or ∞. In general, if the value selected for DE is too small, Vm∗ → ∞ as δ → 0. If DE is too large, Vm∗ → 0 as δ → 0. At the critical value DE = DH, where DH is called the Hausdorff dimension, limδ→0 Vm∗ switches from ∞ to 0 [Addison (1997, pages 33-36)].
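The switch between ∞ and 0 is easy to see numerically. As an illustration (a flat unit square stands in here for the smooth surface of Figure 1.17, an assumption that keeps the box counts exact), cover the square with N = (1/δ)² boxes and evaluate Nδ^DE as δ shrinks:

```python
def measured_hypervolume(d_e, delta):
    """V*_m = N * delta**d_e for a unit square covered by N = (1/delta)**2
    boxes of side delta (delta chosen so that 1/delta is an integer)."""
    n_boxes = round(1.0 / delta) ** 2
    return n_boxes * delta ** d_e

for delta in (0.1, 0.01, 0.001):
    print(delta,
          measured_hypervolume(1, delta),  # DE too small: grows without bound
          measured_hypervolume(2, delta),  # DE = DH = 2: stays near 1
          measured_hypervolume(3, delta))  # DE too large: shrinks toward 0
```

Only at DE = 2, the Hausdorff dimension of a smooth surface, does the measured hypervolume settle on a finite nonzero value.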


Figure 1.17: Measuring the Hausdorff dimension of a smooth surface [Addison (1997, page 35)].

CHAPTER 2

Brownian Motion

2.1 History

In the nineteenth century, Scottish botanist Robert Brown was observing pollen grains in water under a microscope when he noticed something peculiar. Instead of remaining stationary or drifting slowly, as he expected such non-living particles to do, the pollen grains traveled rapidly and erratically through the water, along paths like the one shown in Figure 2.1. As it turns out, the cause of the pollen grains’ erratic motion, which we now call Brownian motion, was bombardment by other, smaller particles in the water. Though invisible to Brown, the smaller particles struck the pollen grains with enough force to knock them along zig-zagging paths through the water [Addison (1997, page 54)] [Falconer (1985, page 119)]. Initially unaware of Brown’s observations, Albert Einstein predicted this erratic motion of particles in 1905. Einstein discovered the existence of this kind of motion not by observation but through mathematics, using the principles of mechanics [Kaye (1989, page 182)] [Falconer (1985, page 119)] [Mandelbrot (1977, page 408)]. In 1923, Wiener developed the probabilistic model for Brownian motion discussed in Section 2.4. In 1953, Taylor showed that, with probability 1, the Hausdorff


Figure 2.1: A Brownian motion path recorded by Perrin in 1909 [Mandelbrot (1977, page 13)].

dimension of Brownian motion paths is 2; we demonstrate this in Section 3.5 [Falconer (1985, page 119)].

2.2 Definition

Mathematicians define Brownian motion in the following way: Definition 2.1 (Brownian motion, B(t)). Brownian motion is a function B(t), defined for equally-spaced time steps ∆t, such that all increments ∆B(t) are independent, isotropic, and random [Mandelbrot (1977, page 238)]. Independent means that the value of the current increment does not affect the value of the next. Isotropic means that the increments are equally likely to occur in all directions. Random means that the values of the increments are unpredictable. Since the time steps ∆t may be arbitrarily small, the Brownian motion function boils down to a one-dimensional random walk in which the step size may be arbitrarily small [Feder (1988, page 164)]. Figure 2.2 gives an example of Brownian motion.


Figure 2.2: A plot of the Brownian motion function [Addison (1997, page 66)].

Brownian motion has many mathematically interesting features. The Brownian motion function is continuous but nowhere differentiable, as discussed in Section 2.5. In fact, Brownian motion was “the first example of a physical process characterized by non-differentiability” [Méhauté (1990, page 81)]. Brownian motion falls under several categories of stochastic processes. It is a Markov process—a process in which “the future is determined by the present and is independent of the past” [Kaye (1989, page 149)]. Brownian motion’s stationary, independent increments make it a Lévy process [Pitman (2003, page 1)]. Brownian motion is also a martingale—a process in which the expected value of B(ti+1) given B(t1), B(t2), B(t3), . . . , B(ti) is equal to B(ti) [Weisstein (2004)].


the relationship between paths and traces, note that a two-dimensional path may be created by plotting together two traces, one for each coordinate of the path. Conversely, a trace may be created from a path by plotting the motion in just one dimension of the path. Figure 2.3 illustrates the relationship between traces and paths [Addison (1997, pages 54, 63)].

Figure 2.3: A pair of Brownian motion traces and their corresponding twodimensional path [Addison (1997, page 75)].

Although an actual plot of the Brownian motion function would have detail at infinitely small scales, it is easy to approximate the Brownian motion function: simply use a sum of random increments. More formally,

B(ti) = ε1 + ε2 + · · · + εi,

where the εj are identically distributed random variables [Addison (1997, page 58)].
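This running-sum approximation is easy to sketch in code. The function below is a minimal illustration of mine, not part of the thesis; the choice of standard normal increments is an assumption (any identical distribution works, and the normal case is the one developed in Section 2.3.2):

```python
import random

def brownian_approximation(n, seed=0):
    """Approximate a Brownian trace as a running sum of identically
    distributed random increments: B(t_i) = eps_1 + eps_2 + ... + eps_i."""
    rng = random.Random(seed)
    b = [0.0]
    for _ in range(n):
        b.append(b[-1] + rng.gauss(0.0, 1.0))
    return b

trace = brownian_approximation(1000)
```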

2.3 Fractal Properties

In a caption for the erratic particle path illustrated in Figure 2.1, Perrin writes:

This plate gives only a weak idea of the prodigious entanglement of the real trajectory. If indeed this particle’s positions were marked down 100 times more frequently, each interval would be replaced by a polygon smaller than the whole drawing but just as complicated, and so on.

It is clear from this description that, on some levels of scaling, particle paths are self-similar [Mandelbrot (1977, pages 12-13)]. It is important to note that physical and mathematical Brownian motions differ in that the self-similarity of physical paths ends at some finitely small scale, whereas the self-similarity of theoretical paths persists at infinitely small scales.

The Brownian motion trace viewed at three different scales in Figure 2.4 suggests that the Brownian motion function is also a fractal. At each level of magnification, the plot of the function has approximately the same jaggedness. In fact, the function is statistically self-similar.

Definition 2.2 (statistical self-similarity). A function is statistically self-similar if the statistical properties of the function scale with the length of time for which the function is observed [Addison (1997, pages 58-60)].

2.3.1 A Simple Random Walk

To show that a plot of the Brownian motion function is self-similar, we begin by demonstrating self-similarity for a simplified model, following the approach that Feder [Feder (1988, pages 164-170)] uses for the more complicated model that we explore next.

Figure 2.4: A Brownian motion trace viewed at three different scales [Addison (1997, page 59)].

Consider a simple random walk in which all steps are the same size and each step may be either up or down. At each step, the probability of moving up is equal to the probability of moving down. Such a random walk is illustrated in Figure 2.5, which shows all possible paths for a random walk of three steps, with the probability of landing at each possible position marked for each time step. The dotted line shows one possible path for the random walk. In this random walk, any two paths that start at the same point and have the same number of up-steps and down-steps must end at the same point.


Figure 2.5: All possible paths for a simple random walk of three steps.

If we let the random variable U represent the number of up-steps in our random walk, then we can model U with a binomial distribution. A binomial distribution for a random variable Y describes the likelihood of each possible outcome of an experiment, where the experiment consists of n independent trials and each trial has the same probability p of success. For example, the experiment might be to toss a coin 20 times, with the probability of tossing heads 1/2 for each trial. The
graph in Figure 2.6 shows the probability of each possible number y of successes. The probability that Y = y is

p(y) = C(n, y) p^y q^(n−y),

where C(n, y) is the binomial coefficient and q = 1 − p. The expected value of Y is E(Y) = np and the variance is V(Y) = np(1 − p); these follow directly from p(y) [Wackerly et al. (2002, pages 97-106)].

Figure 2.6: A binomial distribution with n = 20 and p = 0.5 [Wackerly et al. (2002, page 100)].


For our simple random walk, if u is the number of up-steps and n is the total number of steps, then the probability distribution of u is

p(u) = C(n, u) (1/2)^n.

We have let p = 1/2 since the increments of Brownian motion are isotropic. The expected value of U is

E(U) = (1/2) n,

or half of the total number of steps. Hence, half of the steps are expected to be up and half down; the net change in height for the random walk is expected to be 0. The variance of U is

V(U) = (1/4) n.

Thus, the variance of U is proportional to n, and the standard deviation of U is proportional to n^(1/2). Since the variance of the simple random walk scales with the number of steps taken, the simple random walk is statistically self-similar.
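These moments are easy to check by simulation. The sketch below is mine (function names and trial counts are arbitrary): since the net displacement is D = 2U − n, we expect E(D) = 0 and V(D) = 4V(U) = n, so quadrupling the number of steps should roughly double the standard deviation.

```python
import random

def walk_displacement_stats(n, trials=4000, seed=1):
    """Monte Carlo estimate of the mean and variance of the net
    displacement after n unit up/down steps with p = 1/2."""
    rng = random.Random(seed)
    totals = [sum(rng.choice((-1, 1)) for _ in range(n)) for _ in range(trials)]
    mean = sum(totals) / trials
    var = sum((x - mean) ** 2 for x in totals) / trials
    return mean, var

# E(D) = 0 and V(D) = n for both walk lengths.
m1, v1 = walk_displacement_stats(100)
m2, v2 = walk_displacement_stats(400)
```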

2.3.2 A Random Walk with Normally Distributed Step Lengths

Now that we have demonstrated the self-similarity of a simple random walk, we will show self-similarity for a more realistic model of Brownian motion. Feder [Feder (1988, pages 164-169)] describes a random walk in which the step length has a normal distribution and proves in the following manner that this random walk is self-similar. A normally distributed random variable has the bell-shaped distribution illustrated in Figure 2.7. If a random variable Y has a normal distribution, its probability
density function is

f(y) = (1/(σ√(2π))) exp(−(y − µ)²/(2σ²)).

Y has expected value E(Y) = µ and variance V(Y) = σ²; these follow from f(y) [Wackerly et al. (2002, pages 170-173)].

Figure 2.7: A normal distribution with expected value µ [Wackerly et al. (2002, page 171)].

For our random walk with normally distributed steps, if we let the random variable ε represent the length of each step, including its positive or negative direction, then we can construct the following probability density function for ε:

p(ε, τ) = (1/√(4πDτ)) exp(−ε²/(4Dτ)).


The variable τ represents the length of each time step, and the constant D is known as the diffusion coefficient. From p(ε, τ), we can infer that

E(ε, τ) = 0 and V(ε, τ) = 2Dτ.

If, instead of observing the value of ε at every time step, we only record the value of ε at every bth time step, the probability density function of ε changes. We can obtain the new probability density function of ε by substituting bτ for τ:

p(ε, bτ) = (1/√(4πDbτ)) exp(−ε²/(4Dbτ)).

Thus, the new variance of ε is V(ε, bτ) = 2Dbτ.

Given that a plot of this random walk for ∆t time steps has vertical range ∆B, we can predict the vertical range of a plot for b∆t time steps. On average, this vertical range will be b1/2 ∆B. In the first two random walk plots in Figure 2.4, the number of time steps for each plot is close to four times the number of time steps in the plot below it, and the vertical range is about twice the vertical range of the plot below it. Because the vertical range of this random walk does not scale directly with the horizontal range, but rather with the square root of the horizontal range, the term self-affine more accurately describes this random walk than does the
term self-similar. Self-similarity is a special case of self-affinity in which the vertical range and horizontal range are directly proportional [Feder (1988, pages 167-169)] [Addison (1997, pages 59-60)].

2.4 Wiener’s Function

Instead of thinking about Brownian motion in terms of increments ε in the position of the particle, Wiener considers the random function B(t), which describes the position itself. To obtain the probability density function, expected value, and variance of B(t) − B(t0), we simply replace ε with B(t) − B(t0) and τ with |t − t0| in the equations derived in Section 2.3.2. The resulting probability density function, expected value, and variance are

p(B(t) − B(t0)) = (1/√(4πD|t − t0|)) exp(−(B(t) − B(t0))²/(4D|t − t0|)),

E(B(t) − B(t0)) = 0, and

V(B(t) − B(t0)) = 2D|t − t0|.

Wiener showed that the function B(t) − B(t0) defined by

B(t) − B(t0) ~ ε|t − t0|^H, H = 1/2,

where ε is a normally distributed random variable with zero expected value and unit variance, has a probability density function, an expected value, and a variance that coincide with the above expressions. This function for B(t) − B(t0) provides
an algorithm to determine the value of B(t) given B(t0): choose a random number from the distribution of ε, multiply it by |t − t0|^(1/2), and add the result to B(t0) [Feder (1988, page 169)].
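This recipe is short enough to render directly in code. The sketch below is my rendering, not the thesis’s; folding the diffusion coefficient into the step so that the increment variance matches V(B(t) − B(t0)) = 2D|t − t0| is my own scaling choice, and all names are mine.

```python
import math
import random

rng = random.Random(7)

def next_value(b0, t0, t, diffusion=1.0):
    """One application of the rule: draw eps from N(0, 1) and set
    B(t) = B(t0) + eps * sqrt(2 * D * |t - t0|), so that the increment
    has expected value 0 and variance 2 * D * |t - t0|."""
    eps = rng.gauss(0.0, 1.0)
    return b0 + eps * math.sqrt(2.0 * diffusion * abs(t - t0))

# Chain the rule to tabulate B at irregularly spaced times.
times = [0.0, 0.5, 0.75, 2.0]
values = [0.0]
for t_prev, t_next in zip(times, times[1:]):
    values.append(next_value(values[-1], t_prev, t_next))
```

Because the rule only needs the previous value, the time steps need not be equally spaced; this locality is exactly what fractional Brownian motion will lack.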

2.5 Continuity and Non-Differentiability

Breiman [Breiman (1968)] defines Brownian motion as a stochastic process (random process) over a probability space. Leading up to a definition of Brownian motion, we define these terms.

Definition 2.3 (probability space). A probability space consists of a triple (Ω, F, P) where

i) Ω is a space of points ω, called the sample space and sample points.
ii) F is a collection of subsets of Ω called the events.
iii) P(·) is a probability measure on F, referred to as the probability [Breiman (1968, page 19)].

Definition 2.4 (stochastic process). A stochastic process or continuous parameter process is a collection {Xt(ω)} of random variables on (Ω, F, P) where t ranges over an interval I ⊂ R. Whenever convenient the notation {X(t, ω)} or simply {X(t)} will be used [Breiman (1968, page 248)].

Definition 2.5 (Brownian motion as a stochastic process). Brownian motion is a stochastic process on [0, ∞) such that X(0) ≡ 0 and the joint distribution of X(tn), . . . , X(t0),

tn > tn−1 > · · · > t0 ≥ 0,
is specified by the requirement that X(tk) − X(tk−1), k = 1, · · · , n, be independent, normally distributed random variables with

E(X(tk) − X(tk−1)) = (tk − tk−1)µ,
σ²(X(tk) − X(tk−1)) = (tk − tk−1)σ²

[Breiman (1968, page 250)].

For all Brownian motion, Breiman assumes that the following continuity property is true:

Definition 2.6 (continuity of Brownian motion). If a Brownian motion X(t) is continuous, then

lim_{∆t→0} P(|X(t + ∆t) − X(t)| ≥ δ)/∆t = 0 for all δ > 0.

We must show that this assumption does not contradict any of the other qualities of Brownian motion found in Definition 2.1. En route to a proof of continuity, we propose the following:

Lemma 2.7. Let T0 be any finite collection of points, 0 = t0 < t1 < . . . < tn = τ; then

P(max_{t∈T0} X(t) > x) ≤ 2P(X(τ) > x) and

P(max_{t∈T0} |X(t)| > x) ≤ 2P(|X(τ)| > x).


Proof. Let j* be the index of the first time step where the Brownian motion rises above x; thus, X(tj*) > x. The probability that the maximum value of the Brownian motion exceeds x is the same as the probability that at least one value exceeds x, so

P(max_{t∈T0} X(t) > x) = Σ_{j=1}^{n} P(j* = j).

Since P(X(tn) − X(tj) > 0) + P(X(tn) − X(tj) < 0) = 1,

P(max_{t∈T0} X(t) > x) = Σ_{j=1}^{n} P(j* = j)P(X(tn) − X(tj) > 0) + Σ_{j=1}^{n} P(j* = j)P(X(tn) − X(tj) < 0).

Because Brownian motion has independent, isotropic increments, P(X(tn) − X(tj) > 0) = P(X(tn) − X(tj) < 0), so

P(max_{t∈T0} X(t) > x) = 2 Σ_{j=1}^{n} P(j* = j)P(X(tn) − X(tj) > 0).

Because the events j* = j and X(tn) − X(tj) > 0 are independent,

P(max_{t∈T0} X(t) > x) = 2 Σ_{j=1}^{n} P(j* = j, X(tn) − X(tj) > 0).

When j* = j, the event X(tn) > X(tj) implies X(tn) > x, since X(tj) > x. Hence,

P(max_{t∈T0} X(t) > x) ≤ 2 Σ_{j=1}^{n} P(j* = j, X(tn) > x).

Not every Brownian motion has a time step t j∗ where the Brownian motion exceeds
x. Using this fact and substituting τ for tn, we have

P(max_{t∈T0} X(t) > x) ≤ 2P(X(τ) > x).

This proves the first part of Lemma 2.7. For the second part, since {max_{t∈T0} |X(t)| > x} = {max_{t∈T0} X(t) > x} ∪ {min_{t∈T0} X(t) < −x},

P(max_{t∈T0} |X(t)| > x) = P(max_{t∈T0} X(t) > x) + P(min_{t∈T0} X(t) < −x).

Again because Brownian motion has independent, isotropic increments,

P(max_{t∈T0} |X(t)| > x) = 2P(max_{t∈T0} X(t) > x).

By the first part of Lemma 2.7,

P(max_{t∈T0} |X(t)| > x) ≤ 4P(X(τ) > x).

Finally, since P(|X(τ)| > x) = 2P(X(τ) > x),

P(max_{t∈T0} |X(t)| > x) ≤ 2P(|X(τ)| > x)

[Breiman (1968, page 258)].

Theorem 2.8 (continuity of Brownian motion). For any Brownian motion X(t) there is a dense set T in [0, ∞) such that X(t) is uniformly continuous on T ∩ [0, a] [for all a and] for almost every ω.

Uniformly continuous means that, for any ε > 0, there exists δ > 0 such that,
for all ∆t, |∆t| < δ implies that |f(t + ∆t) − f(t)| < ε [Baker (1997, page 109)]. A dense set in [0, ∞) is one whose closure equals [0, ∞); the closure of the set, in turn, is the smallest closed set (union of closed intervals, where closed intervals include (−∞, a] and [a, ∞)) that contains this set [Baker (1997, pages 37-38, 40)]. For example, the closure of A = (0, 1) ∪ (1, ∞) is [0, ∞), which makes A dense in [0, ∞). Hence, in Theorem 2.8, the dense set T includes all points in [0, ∞), with the possible exception of some singleton points; this theorem guarantees the continuity of X(t) everywhere except at those points.

Theorem 2.8 only applies if Brownian motion is continuous in probability, which Breiman assumes it to be. For a stochastic process {X(t)}, t ∈ I, to be continuous in probability, it must be true for every t ∈ I that whenever tn → t, then X(tn) →P X(t) [Breiman (1968, page 255)]. Convergence in probability (Xn →P X) means that for every ε > 0, P(|Xn − X| > ε) → 0 as n → ∞ [Breiman (1968, page 33)].

Proof. We show that Theorem 2.8 is true for a = 1, with the understanding that we could easily prove this for other values of a by modifying the set Tn that we are about to define. Let

Tn = {k/2^n, where k = 0, 1, . . . , 2^n}.

For example,

T1 = {0, 1/2, 1}
T2 = {0, 1/4, 1/2, 3/4, 1}
T3 = {0, 1/8, 1/4, 3/8, 1/2, 5/8, 3/4, 7/8, 1}.

Let

T = ∪_{n=1}^{∞} Tn.


Also, let

Un = sup_{tj, tk ∈ T, |tj − tk| ≤ 1/2^n} |X(tj) − X(tk)|.

The supremum (sup) is the least upper bound. By the definition of continuity (X(t) → X(t0) as t → t0), Un must converge to 0 as n → ∞ in order for X(t) to be continuous on T ∩ [0, a]. Because {tj, tk ∈ T : |tj − tk| ≤ 1/2^n} ⊂ {tj, tk ∈ T : |tj − tk| ≤ 1/2^m} for m < n, the least upper bound of |X(tj) − X(tk)| for |tj − tk| ≤ 1/2^n cannot exceed the least upper bound for |tj − tk| ≤ 1/2^m. Hence, the sequence {Un} monotonically decreases as n increases. Since {Un} is non-increasing, it converges to 0 if it converges in probability to 0; that is,

lim_{n→∞} P(Un > δ) = 0.

To verify that {Un} converges in probability to 0, we define a sequence {Yn*} where Un ≤ 3Yn* for all n and show that {Yn*} converges in probability to 0. Let

Ik = [k/2^n, (k + 1)/2^n]

and

Yk = sup_{t ∈ Ik ∩ T} |X(t) − X(k/2^n)| for k = 0, 1, . . . , 2^n − 1.

Figure 2.8 illustrates Yk over the interval Ik. Let

Yn* = max_k Yk.

Figure 2.8: Yk over the interval Ik for some points of T.

Suppose that |ta − tb| ≤ 1/2^n. Then |X(ta) − X(k/2^n)|, |X(k/2^n) − X((k + 1)/2^n)|, and |X((k + 1)/2^n) − X(tb)| are all ≤ Yn*, as Figure 2.9 illustrates. By the triangle inequality,

Un ≤ 3Yn*.

Figure 2.9: For |ta − tb| ≤ 1/2^n, (a) |X(ta) − X(k/2^n)|, (b) |X(k/2^n) − X((k + 1)/2^n)|, and (c) |X((k + 1)/2^n) − X(tb)| are all less than or equal to (d) Yn*.

Now it remains to show that

lim_{n→∞} P(Yn* > δ) = 0.

For some fixed n, the probability that the maximum value of Yk is greater than δ is the same as the probability that Yk > δ for at least one k. Thus,

P(max_k Yk > δ) = P(∪_k {Yk > δ}).

Since it is possible that Yk > δ for more than one k and since k ranges from 0 to 2^n − 1,

P(max_k Yk > δ) ≤ Σ_{k=0}^{2^n − 1} P(Yk > δ).

Because Yk has the same distribution as Y0 for all k,

P(max_k Yk > δ) ≤ 2^n P(Y0 > δ).

We conclude the proof by showing that P(Y0 > δ) → 0 as n → ∞. Using previous definitions of variables, we can rewrite Y0 as

Y0 = sup_{t ∈ I0 ∩ T} |X(t) − X(0)|
   = sup_{t ∈ [0, 1/2^n] ∩ T} |X(t)|
   = lim_{N→∞} max_{t ∈ [0, 1/2^n] ∩ TN} |X(t)|.


Furthermore, by Lemma 2.7,

P(max_{t ∈ I0 ∩ TN} |X(t)| > δ) ≤ 2P(|X(1/2^n)| > δ) for N ≥ n.

(For the above equation, recall that 1/2^n = τ.) For arbitrarily large N, this inequality holds, so

P(Y0 > δ) ≤ 2P(|X(1/2^n)| > δ)

[Breiman (1968, pages 257-259)].

Theorem 2.9 (non-differentiability of Brownian motion). Almost every Brownian motion path is nowhere differentiable.

This is a surprising result, given that almost every Brownian motion path is continuous. For a proof of this theorem, see Breiman [Breiman (1968, pages 261-262)].

Corollary 2.10 (infinite variation of Brownian motion). Almost every sample path of X(t) has infinite variation on every finite interval.

In other words, a plot of the Brownian motion function can have an infinite vertical range within a finite length of time [Breiman (1968, page 262)].

CHAPTER 3

Fractional Brownian Motion

3.1 Definition

If we think of Brownian motion as a type of random walk, then it is easy to extend the idea of Brownian motion to encompass semi-random walks. Peters describes a biased random walk—a walk in which the steps are not independent, but instead each step depends on all previous steps [Peters (1991, page 61)]. Fractional Brownian motion, a generalization of regular Brownian motion, is a type of biased random walk. Mandelbrot has extended the concept of Brownian motion by designating the following function:

Definition 3.1 (fractional Brownian motion). Fractional Brownian motion is a function BH(t) such that, for 0 < H < 1,

BH(t) − BH(t0) ~ ε|t − t0|^H,

E(BH(t) − BH(t0)) = 0,

V(BH(t) − BH(t0)) = 2Dτ(|t − t0|/τ)^(2H), and

C(BH(0) − BH(−t), BH(t) − BH(0)) = 2^(2H−1) − 1.

In the first three expressions, Mandelbrot has substituted BH(t) for B(t) in Wiener’s expressions for regular Brownian motion (discussed in Section 2.4). When H = 1/2, these expressions simplify to Wiener’s. The last expression, the correlation of increments, follows directly from the first three expressions when certain assumptions are made about the probability density function of ε [Feder (1988, pages 170-171)].

Definition 3.2 (fractal noise). The increments BH(t) − BH(t0) of fractional Brownian motion are called fractal noise [Feder (1988, page 174)]. For example, Figure 3.1 depicts a regular Brownian motion together with its fractal noise.
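The variance scaling in Definition 3.1 pins down the covariance of the fractal noise, which is enough to sample BH(t) exactly: build the noise covariance matrix, take its Cholesky factor, and multiply by independent normals. The sketch below is my own aside, not one of the algorithms this paper surveys; the autocovariance formula is the standard consequence of V(BH(t) − BH(t0)) ∝ |t − t0|^(2H) for unit-variance noise, and the O(n³) cost is exactly why cheaper simulation algorithms are needed.

```python
import math
import random

def fgn_cov(k, H):
    """Autocovariance at lag k of unit-variance fractional Gaussian noise,
    derived from the variance scaling |t - t0|**(2 * H)."""
    return 0.5 * (abs(k + 1) ** (2 * H) - 2 * abs(k) ** (2 * H) + abs(k - 1) ** (2 * H))

def cholesky(a):
    """Cholesky factor L (lower triangular, A = L L^T) of a symmetric
    positive-definite matrix given as a list of rows."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                l[i][j] = math.sqrt(a[i][i] - s)
            else:
                l[i][j] = (a[i][j] - s) / l[j][j]
    return l

def fbm_trace(n, H, seed=0):
    """Sample B_H at n equally spaced times: correlate i.i.d. normals
    with the Cholesky factor of the noise covariance, then take partial sums."""
    rng = random.Random(seed)
    cov = [[fgn_cov(i - j, H) for j in range(n)] for i in range(n)]
    l = cholesky(cov)
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    noise = [sum(l[i][k] * z[k] for k in range(i + 1)) for i in range(n)]
    trace = [0.0]
    for x in noise:
        trace.append(trace[-1] + x)
    return trace

trace = fbm_trace(64, 0.8)
```

When H = 1/2, fgn_cov(k, 0.5) vanishes for every k ≠ 0, the covariance matrix is the identity, and the method reduces to the running sum of independent increments.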

3.2 Hurst’s R/S Analysis

When statisticians study natural processes such as temperatures or rainfall, they frequently model these processes with a random walk like Brownian motion. Such processes possess so many degrees of freedom, or independent factors that influence the processes, that patterns or other symptoms of determinism may seem unlikely to emerge over time. In fact, the “fuzzy” central limit theorem states that “data which are influenced by many small and unrelated random effects are approximately normally distributed”, and statisticians often associate the normal distribution with random variables [Weisstein (2004)]. However, the work of hydrologist H. E. Hurst demonstrates that, instead of displaying complete randomness, many natural processes appear to follow a biased random walk like fractional Brownian motion [Peters (1991, page 61)].

Figure 3.1: The fractal noise at top consists of the increments of the regular Brownian motion at bottom [Feder (1988, page 165)].

During the first half of the twentieth century, Hurst studied the problem of constructing an optimally-sized dam along the Nile River. Hurst wished to design a dam whose reservoir neither overflowed nor emptied, even as the river’s depth
fluctuated from year to year. To design an optimal dam, he needed to be able to predict the range, or difference between the maximum and the minimum, of the river levels over a span of many years [Peters (1991, page 62)].

Hurst gathered data about river levels in previous years. To these data he applied a statistical method of his own invention, now termed rescaled range analysis or R/S analysis. First, he isolated a time series—a set of data spanning some period of time—to analyze. For this time series, he calculated R (the range), S (the standard deviation), and R/S (the rescaled range). Rescaling the range proportionally to the standard deviation gave Hurst a meaningful way to compare two time series that originated from different sources, such as different rivers’ levels or even different types of processes [Peters (1991, page 63)].

Hurst hypothesized that a record of the Nile River’s height over time would resemble regular Brownian motion; that is, fluctuations in the river level would be independent, isotropic, and random. If such were the case, then for a time series of T years, the range and the standard deviation would both be proportional to T^(1/2), as Wiener’s work predicts (Section 2.4). Hence, Hurst analyzed the river level data using the following equation:

Definition 3.3 (R/S analysis). R/S = (aT)^H, where a is a constant.

In this equation, Hurst expected that H = 1/2. Rewriting this equation as

log(R/S) = H log(T) + C,
where C is a constant, Hurst could plot points along this line to estimate H; this is reminiscent of the box counting dimension estimate discussed in Section 1.3.4 [Peters (1991, pages 63, 70)].
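The log-linear form of Definition 3.3 translates into a short numerical procedure: compute R/S over windows of several lengths T and take the least-squares slope of log(R/S) against log(T). The sketch below is mine (function names and window sizes are arbitrary choices, not Hurst’s), applied to white noise, for which the estimated H should come out near 1/2.

```python
import math
import random

def rescaled_range(series):
    """R/S for one window: range R of the mean-adjusted cumulative sum,
    divided by the standard deviation S of the window."""
    n = len(series)
    mean = sum(series) / n
    cum, dev = 0.0, []
    for x in series:
        cum += x - mean
        dev.append(cum)
    r = max(dev) - min(dev)
    s = math.sqrt(sum((x - mean) ** 2 for x in series) / n)
    return r / s

def hurst_exponent(increments, sizes=(8, 16, 32, 64, 128)):
    """Estimate H as the least-squares slope of log(R/S) against log(T),
    averaging R/S over non-overlapping windows of each length T."""
    xs, ys = [], []
    for t in sizes:
        windows = [increments[i:i + t] for i in range(0, len(increments) - t + 1, t)]
        rs = sum(rescaled_range(w) for w in windows) / len(windows)
        xs.append(math.log(t))
        ys.append(math.log(rs))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

rng = random.Random(42)
white_noise = [rng.gauss(0.0, 1.0) for _ in range(4096)]
h = hurst_exponent(white_noise)  # independent increments, so H should be near 1/2
```

For short windows the estimate is known to be biased slightly above 1/2, which foreshadows Feder’s caution at the end of Chapter 4 about reading too much into H estimated from limited data.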

3.3 Applications

Contrary to Hurst’s expectations, H for the Nile River levels was not 1/2, but 0.9. When Hurst applied rescaled range analysis to other rivers’ levels and even to other natural processes, he consistently found values of H greater than 1/2 [Peters (1991, page 63)]. Rescaled range analysis revealed H > 1/2 for processes including lake levels, rainfall, temperature, sunspot counts, and tree rings (Table 3.1) [Feder (1988, page 153)].

Empirical evidence that H ≠ 1/2 for time series of many natural processes suggests that fractional Brownian motion can model these processes better than can regular Brownian motion. Researchers have applied fractional Brownian motion to a wide range of problems, such as particle diffusion, landscape topography, DNA sequences, bacterial colonies, electrochemical deposition, and stock market indicators [Addison (1997, page 54)]. In particular, computer science applications of fractional Brownian motion include modeling network traffic and generating graphical landscapes [Çaglar (2000, page 1)] [Mandelbrot (1977, page 263)].

Phenomenon                      Range of N (years)  Mean H  Std. Dev. of H  Range of H
River discharges                10-100              0.72    0.091           0.50-0.94
Roda Gauge                      80-1,080            0.77    0.055           0.58-0.86
River and lake levels           44-176              0.71    0.082           0.59-0.85
Rainfall                        24-211              0.71    0.088           0.46-0.91
Varves:
  Lake Sake                     50-2,000            0.69    0.064           0.56-0.87
  Moen and Tamiskaming          50-1,200            0.77    0.094           0.50-0.95
  Cointos and Haileybury        50-650              0.77    0.098           0.51-0.91
Temperatures                    29-60               0.68    0.087           0.46-0.92
Pressures                       29-96               0.63    0.070           0.51-0.76
Sunspot numbers                 38-150              0.75    0.056           0.65-0.85
Tree-rings and spruce index     50-900              0.79    0.076           0.56-0.94
Totals and means of sections:
  Water statistics                                  0.72    0.08            0.46-0.94
  Varves                                            0.74    0.09            0.50-0.95
  Meteorology and trees                             0.72    0.08            0.46-0.94
Grand totals and means          10-20,000           0.726   0.082           0.46-0.95

Table 3.1: Values of H for natural processes, as calculated by Hurst et al. [Feder (1988, page 153)].

Fractal statistics from natural processes often exhibit two characteristics that Mandelbrot calls the Noah Effect and the Joseph Effect. These terms allude to two stories in the book of Genesis: the forty-day flood in Noah’s time and Joseph’s prediction of seven bountiful years followed by seven years of famine. The Noah Effect refers to the tendency for natural time series to have wide variation, reminiscent of the unbounded variation of Brownian motion proved in Corollary 2.10. The Joseph Effect describes the tendency for natural time series to follow biased random walks [Mandelbrot (1977, page 248)].

3.4 The Hurst Exponent

The exponent H in the rescaled range analysis equation and the subscript H in the fractional Brownian motion function BH (t) are exactly equivalent. H is called
the Hurst exponent. As we have seen, when H = 1/2, both the R/S analysis definition (Definition 3.3) and the fractional Brownian motion definition (Definition 3.1) reduce to the case for regular Brownian motion, a random walk. Otherwise, the fractional Brownian motion is a biased random walk [Peters (1991, page 64)].

To understand how the value of H affects the bias of a semi-random walk, consider the scaling properties of fractional Brownian motion as compared to regular Brownian motion. In Section 2.3.2, we showed that, given a Brownian motion plot with ∆t time steps and vertical range ∆B, we expect a plot with b∆t time steps to have vertical range b^(1/2)∆B. Given a fractional Brownian motion plot with the same dimensions, we expect a plot with b∆t time steps to have vertical range b^H ∆B. For example, if H > 1/2, then the vertical range of BH(t) is greater than the vertical range of B1/2(t) [Addison (1997, page 65)].

From the point of view of a particle undergoing fractional Brownian motion, what determines the vertical range of the motion is how often the particle changes direction. If the particle reverses its direction frequently, then it does not travel as far as a particle that rarely doubles back on its path. When H > 1/2, so that the particle tends to persist in the same direction, the fractional Brownian motion is called persistent. When H < 1/2, so that the particle path tends to turn back on itself, the fractional Brownian motion is called anti-persistent. Fractional Brownian motion traces with persistence, anti-persistence, and (for regular Brownian motion) neutrality are illustrated in Figure 3.2 [Addison (1997, pages 65-66)].

To see precisely how often a particle tends to switch directions, we can examine the correlation between future increments BH(t) − BH(0) and past increments BH(0) − BH(−t).
A positive correlation indicates that increments tend to be in the same direction; a negative correlation, that increments tend to be in opposite directions;
and a zero correlation, that increments are independent. The greater the absolute value of the correlation, the more closely related future and past increments are. When H > 1/2, the correlation is positive; when H < 1/2, the correlation is negative; and when H = 1/2, the correlation is 0. Interestingly, C(BH(0) − BH(−t), BH(t) − BH(0)) depends only on H and not at all on t. Therefore, an increasing or decreasing trend in the past, no matter how long ago, influences future increments [Feder (1988, pages 170-171)].

With persistence and anti-persistence, fractional Brownian motion has infinite dependence on the past. Because of persistence and anti-persistence, together known as long-range dependence, fractional Brownian motion does not separate neatly into a periodic trend component and a random noise component. Consequently, fractional Brownian motion requires much more computational effort to simulate than does regular Brownian motion. With regular Brownian motion, to interpolate the expected value of the function at some time step between times t0 and t, we need only know the value of the function at t0 and t, whereas with fractional Brownian motion, we must consider the value of the function at times long before or after t0 and t [Mandelbrot (1977, pages 251-252)].
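The sign pattern just described follows immediately from the last expression in Definition 3.1; this tiny sketch of mine just evaluates 2^(2H−1) − 1 at representative values of H:

```python
def increment_correlation(H):
    """C(B_H(0) - B_H(-t), B_H(t) - B_H(0)) = 2**(2H - 1) - 1,
    which depends on H but not on t."""
    return 2 ** (2 * H - 1) - 1

anti = increment_correlation(0.2)        # negative: anti-persistent
neutral = increment_correlation(0.5)     # zero: regular Brownian motion
persistent = increment_correlation(0.8)  # positive: persistent
```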

3.5 The Fractal Dimension

Since the Hurst exponent measures the scaling properties of fractional Brownian motion, we might guess that the Hurst exponent closely relates to the fractal dimension, which also measures scaling properties. In fact, these measures do share a linear relationship:


Theorem 3.4 (relationship between DB and H). DB = 2 − H.

Proof. Recall that to calculate the box counting dimension DB of a fractal object, we cover the object with N boxes of side length δ and then compute DB using Definition 1.5. For a fractional Brownian motion trace, suppose that we isolate a time series of T time steps that spans 1 unit of time (Figure 3.3). During each time step of length 1/T, the average vertical range of the function is (1/T)^H due to the scaling properties described in Section 3.4. In order to cover the plot of the function during a single time step, a rectangle of width 1/T and height (1/T)^H is required. The area of this rectangle is (1/T)^(H+1), so the number of squares with side length 1/T needed to cover it is (1/T)^(H−1). For all T of the time steps, the total number of squares needed to cover the plot of the function is (1/T)^(H−2). If we let N = (1/T)^(H−2) and δ = 1/T, then the box counting dimension is

DB = log((1/T)^(H−2))/log(T) = 2 − H.

The time series spans 1 unit of time, so V* = 1^DB = 1 [Addison (1997, pages 65-69)].

This relationship between the fractal dimension and the Hurst exponent aligns perfectly with the notion of fractal dimension as a measure of the roughness of an object. As H increases and the fractional Brownian motion has more persistence, the plot of the function becomes smoother and DB decreases accordingly. Conversely,
as H decreases and the fractional Brownian motion is more anti-persistent, the plot of the function becomes more jagged and DB increases [Addison (1997, page 69)].

In Section 2.2, we described the path function, a parametric function in which each coordinate’s function is a Brownian motion trace. Similarly, we can construct fractional Brownian motion paths from traces, as Figure 3.4 illustrates. To calculate the box counting dimension of a fractional Brownian motion path with two coordinates, we examine a section of the path that results from T time steps spanning 1 unit of time. During each time step of length 1/T, each of the two traces has range (1/T)^H, so we can cover the path during a single time step with a square of side length (1/T)^H. For all T time steps, the path requires T such squares to cover it. Letting N = T and δ = (1/T)^H, we have

DB = log(T)/log(T^H) = 1/H.

Since H can fall between 0 and 1, 1/H can assume values greater than 2. However, the fractal dimension of the path cannot exceed its Euclidean dimension DE, so we modify the box counting dimension to be

DB = min(1/H, DE).

Thus, the fractal dimension of regular Brownian motion and anti-persistent fractional Brownian motion paths is 2, and the fractal dimension of persistent paths lies between 1 and 2 [Addison (1997, pages 74-75)].
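Theorem 3.4 can also be checked numerically. The sketch below is my own (grid sizes and names are arbitrary): it box-counts a simulated regular Brownian trace, for which the fitted slope should sit near DB = 2 − H = 1.5, though with some sampling noise.

```python
import math
import random

def box_count(trace, m):
    """Cover the graph of a trace sampled on [0, 1] with boxes of side
    1/m and count the boxes the graph passes through, column by column."""
    n = len(trace) - 1
    delta = 1.0 / m
    per_column = n // m
    boxes = 0
    for c in range(m):
        column = trace[c * per_column:(c + 1) * per_column + 1]
        boxes += int((max(column) - min(column)) / delta) + 1
    return boxes

# A regular Brownian trace (H = 1/2) with 4096 steps on [0, 1].
rng = random.Random(3)
n = 4096
trace = [0.0]
for _ in range(n):
    trace.append(trace[-1] + rng.gauss(0.0, 1.0 / math.sqrt(n)))

# Fit D_B as the slope of log N(delta) against log(1/delta) = log(m).
sizes = [16, 32, 64, 128, 256]
xs = [math.log(m) for m in sizes]
ys = [math.log(box_count(trace, m)) for m in sizes]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
```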


Figure 3.2: Fractional Brownian motion traces with (a) anti-persistence, H = 0.2; (b) neutrality, H = 0.5; and (c) persistence, H = 0.8 [Addison (1997, page 66)].


Figure 3.3: Measuring the box counting dimension of a fractional Brownian motion trace [Addison (1997, page 68)].


Figure 3.4: Pairs of fractional Brownian motion traces and their corresponding two-dimensional paths [Addison (1997, page 75)].

CHAPTER 4

A Survey of Fractional Brownian Motion Simulation Algorithms

Empirical evidence shows that the behavior of many natural processes resembles fractional Brownian motion. In order to model these processes, we need an algorithm to generate fractional Brownian motion. However, the long-range dependence inherent in fractional Brownian motion defies simple, localized interpolation of values of the function (Section 3.4). To effectively simulate fractional Brownian motion, an algorithm must create time series that appear to account for long-range dependence, but must require few enough computations to run within a reasonable time frame. In his comparison of fractional Brownian motion simulation algorithms, Voss [Voss (1985, page 820)] judges various algorithms using these criteria:

Considerations will be given to sample statistics (mathematically, how well does it approximate [fractional Brownian motion]?), visual features (does it look natural?) and computation characteristics (how does computation time vary with sample size? can one magnify or extend the sample beyond its boundaries?).


Similarly, as we survey several classic methods of simulating fractional Brownian motion, we compare the accuracy, computation time, and features of each.

4.1 Hurst's Card Simulation

Many fractional Brownian motion simulation algorithms require computations that are far too difficult and time-consuming to perform by hand. However, Hurst was able to simulate time series that mirrored his observations of natural processes using only a special deck of cards.

Hurst numbered a deck of 52 cards with the values ±1, ±3, ±5, and ±7. The distribution of cards approximated a normal distribution. Using the values on the cards to represent random increments, Hurst could generate a random walk by repeatedly shuffling the deck, cutting a card, and using the value of the card as the next increment in the random walk.

To simulate a biased random walk, Hurst dealt himself a biased hand of cards in the following way. First, he cut a card from the deck and, after noting its value, replaced it. Then, he dealt two hands of 26 cards. He biased one of these hands according to the card that he had previously cut. For example, if the value of the cut card was +3, then he moved the 3 highest cards from the other hand to the biased hand and removed the 3 lowest cards from the biased hand. Finally, after adding a joker to the biased hand, Hurst repeatedly shuffled and cut cards as he had done for the regular random walk, starting over with a new biased hand when he cut the joker.

Because Hurst used hands of cards that had a positive or negative bias, the semi-random walk that resulted from his simulation exhibited persistence. Generally, Hurst found a positive correlation between adjacent increments. In fact, the Hurst exponent he calculated from his card simulation was close to recurring Hurst exponents that he had found for data from river heights and other natural phenomena (Table 3.1).

Clearly, Hurst's method has shortcomings. In his biased random walk, trends only continue until the joker is drawn, which occurs on average every 27 steps. Since the amount of bias in one hand is independent of the amount in any other hand, no correlations exist across different hands of cards. Thus, if one calculates the Hurst exponent of the biased random walk using R/S analysis with a step size τ ≥ 27, then H approaches 1/2 as the value of τ increases. Based on these results, Feder [Feder (1988, pages 157-160)] wisely observes, "We find it, with this background, very difficult to assess the significance of Hurst exponents H ≠ 1/2 estimated from a limited set of observations".
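Hurst's card experiment is straightforward to reproduce in software. The sketch below is a hypothetical Python reconstruction, not Hurst's procedure verbatim: the text above gives the card values but not their frequencies, so the counts of 13, 8, 4, and 1 copies per value (for each sign) are an assumption chosen to approximate a normal distribution, and the handling of a negatively valued cut card mirrors the positive case by assumption.

```python
import random

# Deck composition is an assumption: the text gives the card values
# (+/-1, +/-3, +/-5, +/-7) but not their frequencies; 13, 8, 4, and 1
# copies per sign roughly follow a normal shape and sum to 52 cards.
DECK = [1] * 13 + [3] * 8 + [5] * 4 + [7] * 1
DECK = DECK + [-v for v in DECK]

JOKER = None  # sentinel card that ends a biased hand


def biased_hand(rng):
    """Deal a biased hand of 26 cards plus a joker, as Hurst did."""
    bias = rng.choice(DECK)          # cut a card, note its value, replace it
    deck = DECK[:]
    rng.shuffle(deck)
    hand, other = deck[:26], deck[26:]
    n = abs(bias)
    if bias > 0:                     # move the n highest cards in, n lowest out
        hand = sorted(hand)[n:] + sorted(other, reverse=True)[:n]
    else:                            # mirror image for a negative cut (assumed)
        hand = sorted(hand, reverse=True)[n:] + sorted(other)[:n]
    return hand + [JOKER]


def biased_walk(steps, rng=None):
    """Simulate Hurst's biased random walk for the given number of steps."""
    rng = rng or random.Random()
    walk, position = [0], 0
    hand = biased_hand(rng)
    for _ in range(steps):
        card = rng.choice(hand)      # shuffle the hand and cut a card
        while card is JOKER:         # joker: start over with a new biased hand
            hand = biased_hand(rng)
            card = rng.choice(hand)
        position += card
        walk.append(position)
    return walk
```

Cutting a card from the 27-card biased hand draws the joker with probability 1/27, which matches the average trend length of 27 steps noted above.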

4.2 Discrete Approximation of Mandelbrot's and Van Ness's Formula

Mandelbrot and Van Ness define fractional Brownian motion in the following way:

$$B_H(t) - B_H(0) = \frac{1}{\Gamma(H + \frac{1}{2})} \int_{-\infty}^{t} K(t - t') \, dB(t'),$$

where

$$K(t - t') = \begin{cases} (t - t')^{H - 1/2} & \text{if } 0 \le t' \le t \\ (t - t')^{H - 1/2} - (-t')^{H - 1/2} & \text{if } t' < 0. \end{cases}$$


As Feder explains, "this definition states that the value of the random function at time t depends on all previous increments dB(t') at time t' < t of an ordinary [normally distributed] random process with average zero and unit variance." This formula fulfills the scaling relationships for increment size and variance found in the definition of fractional Brownian motion (Definition 3.1).

A discrete approximation arises directly from this formula by replacing the integral with a finite sum. To discretize the integral, split each integer time step t into n intervals. To make the discretized sum finite, rather than considering time steps infinitely into the past, consider only a limited number M of time steps that occurred previous to the tth time step. The discrete approximation that results is

$$B_H(t) - B_H(t - 1) = \frac{n^{-1/2}}{\Gamma(H + \frac{1}{2})} \sum_{i = n(t - M)}^{nt} K\!\left(t - \frac{i}{n}\right) \varepsilon_i.$$

In the above equation, the ε_i are normally distributed random variables with mean zero and variance one. To generate N time steps of fractional Brownian motion, this method requires O(NnM) computations. For the best results, M must be much larger than n [Feder (1988, pages 172-174)] [Addison (1997, pages 70-71)].
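The discrete sum above translates directly into code. The following is an illustrative sketch, not Feder's or Addison's implementation; the function names and the default choices of n and M are ours.

```python
import math
import random


def kernel(t, t_prime, h):
    """K(t - t') from the Mandelbrot-Van Ness definition."""
    if 0 <= t_prime <= t:
        return (t - t_prime) ** (h - 0.5)
    return (t - t_prime) ** (h - 0.5) - (-t_prime) ** (h - 0.5)


def fbm_discrete(n_steps, h, n=8, m=64, rng=None):
    """Approximate fBm by discretizing the Mandelbrot-Van Ness integral.

    n -- subintervals per unit time step
    m -- how many past time steps the finite sum remembers (m >> n is best)
    """
    rng = rng or random.Random()
    # one normally distributed epsilon per subinterval, including the "past"
    eps = [rng.gauss(0.0, 1.0) for _ in range(n * (n_steps + m) + 1)]
    coeff = n ** -0.5 / math.gamma(h + 0.5)
    path = [0.0]
    for t in range(1, n_steps + 1):
        incr = 0.0
        for i in range(n * (t - m), n * t + 1):
            if i == n * t:        # K(0) term: zero or undefined; skip it
                continue
            incr += kernel(t, i / n, h) * eps[i + n * m]
        path.append(path[-1] + coeff * incr)
    return path
```

The O(NnM) cost is visible in the nested loops: each of the N output steps sums over roughly nM kernel-weighted increments.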

4.3 Random Midpoint Displacement

A more efficient simulation algorithm, sometimes used in computer graphics because of its speed, is random midpoint displacement. The idea of random midpoint displacement is to interpolate the midpoint between two known values BH (t0 ) and BH (t) and then to randomly perturb the function value at that midpoint.


Consider a fractional Brownian motion where, for simplicity, B_H(0) = 0. Let the variance for Δt = ±1 be σ². Then, for any time t,

$$V(B_H(t) - B_H(0)) = t^{2H} \sigma^2$$

is the variance in the time span between B_H(0) and B_H(t). Knowing that the variance of the increments B_H(±1) − B_H(0) is σ², assign the values of B_H(±1) to be random numbers from a normal distribution with mean zero and variance σ². Next, interpolate the values of B_H(±1/2) from the known values of B_H(0) and B_H(±1). The interpolated values are

$$B_H\left(\pm\frac{1}{2}\right) = \frac{1}{2}\left(B_H(0) + B_H(\pm 1)\right),$$

or the averages of B_H(0) and B_H(±1). Now, add a random perturbation to each of the midpoints; this yields

$$B_H\left(\pm\frac{1}{2}\right) = \frac{1}{2}\left[B_H(0) + B_H(\pm 1)\right] + \Delta_1,$$

where Δ_1 is a random variable. The variance of Δ_1 is smaller than σ²; more precisely,

$$V(\Delta_1) = \frac{\sigma^2}{2^{2H}} - \frac{\sigma^2}{4} = \frac{\sigma^2}{2^{2H}}\left[1 - 2^{2H-2}\right].$$

The term σ²/2^{2H} represents the variance that the fractional Brownian motion should have over a time span Δt = 1/2. The term σ²/4 represents the variance that is already


present as a result of interpolating between B_H(0) and the random value B_H(±1); this is the variance of (1/2)[B_H(0) + B_H(±1)].

Recursively, the algorithm continues to interpolate and perturb midpoints, using random variables with smaller variance for the perturbation as the time difference between points decreases. When the distance from each midpoint to its surrounding points is 1/2ⁿ, following the pattern begun for V(Δ_1), the variance of the random variable Δ_n is

$$V(\Delta_n) = \frac{\sigma^2}{(2^n)^{2H}}\left[1 - 2^{2H-2}\right].$$

The simulations that result from random midpoint displacement (Figure 4.1) are fractals. Furthermore, the algorithm is more efficient than the discretized approximation of Mandelbrot's and Van Ness's formula; simulation of N points requires only Θ(N) computations. Unfortunately, this method is unable to generate true long-term dependence, since, at each level of recursion in the algorithm, the midpoint calculations are all independent of each other. Also, in fractional Brownian motion, all time spans of length Δt have the same variance, whereas in random midpoint displacement simulations, this criterion can fail. For example,

$$V\left(B_H\left(\frac{1}{2}\right) - B_H\left(-\frac{1}{2}\right)\right) = 2\,V\left(B_H\left(\frac{1}{2}\right) - B_H(0)\right) = 2\,\frac{\sigma^2}{2^{2H}}.$$

For this time step of length 1, the variance should be σ², but this only occurs if H = 1/2 [Voss (1985, pages 824-826)].
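The recursion described above can be sketched as follows. This is an illustrative Python implementation, not Voss's original code; the function name and parameterization are ours.

```python
import math
import random


def midpoint_displacement(levels, h, sigma=1.0, rng=None):
    """Simulate fBm on [0, 1] by random midpoint displacement.

    Returns 2**levels + 1 equally spaced samples, with B_H(0) = 0.
    """
    rng = rng or random.Random()
    n = 2 ** levels
    path = [0.0] * (n + 1)
    path[n] = rng.gauss(0.0, sigma)   # B_H(1): normal with variance sigma^2
    step = n
    for level in range(1, levels + 1):
        # V(Delta_n) = sigma^2 / (2**n)**(2H) * (1 - 2**(2H - 2))
        std = sigma * math.sqrt((1.0 - 2 ** (2 * h - 2)) / 2 ** (2 * h * level))
        half = step // 2
        for i in range(half, n, step):
            # interpolate the midpoint, then perturb it
            path[i] = 0.5 * (path[i - half] + path[i + half]) + rng.gauss(0.0, std)
        step = half
    return path
```

Each pass through the outer loop fills in the midpoints of the previous level, so level k touches 2^(k-1) points and the total work over all levels is Θ(N).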


Figure 4.1: Simulation using random midpoint displacement over increasing levels of recursion. For this sample, H = 0.8 [Voss (1985, page 833)].

4.4 Successive Random Addition

Based on random midpoint displacement, an algorithm called successive random addition remedies many of the random midpoint displacement method's flaws while maintaining a runtime of Θ(N) for N time steps. Like random midpoint displacement, successive random addition interpolates midpoints and adds a random displacement at each midpoint. The difference is that successive random addition also adds a random displacement to all of the other points that exist at each step. Figure 4.2 shows a sample simulation.

Figure 4.2: Simulation using successive random addition over increasing levels of recursion. For this sample, H = 0.8 [Voss (1985, page 835)].

Motivating his successive random addition algorithm, Voss writes:

    With midpoint displacement, once determined, the value at a point remains fixed. At each stage only half of the points are determined more accurately. If one imagines the process of magnifying an actual object, as the spatial resolution increases all points are determined more accurately.

Along with improved accuracy, successive random addition is more flexible than random midpoint displacement about the way it recursively subdivides time intervals. Instead of always interpolating the midpoint between endpoints B_H(t₀) and B_H(t), successive random addition allows the interpolation of points at any ratio r between endpoints. According to Voss,

    The free choice of a value for r is an important addition for filmmaking. The addition of irregularities to a surface can be accomplished continuously as the resolution is slowly increased from frame to frame.


At the (n + 1)th recursion into the algorithm, first interpolate the N_{n+1} = N_n/r new points from the existing points, then add a random perturbation Δ_n to every point, new and old. The variance of the random variable Δ_n is

$$V(\Delta_n) \propto (r^n)^{2H}$$

[Voss (1985, pages 826-827)].
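Successive random addition can be sketched with the fixed ratio r = 1/2, a simplification on our part (Voss allows any r). The function name is ours; this is an illustration of the idea, not Voss's code.

```python
import random


def successive_random_addition(levels, h, sigma=1.0, rng=None):
    """Simulate fBm by successive random addition with ratio r = 1/2.

    Unlike midpoint displacement, every point -- new and old -- receives
    a random perturbation at every level of recursion.
    """
    rng = rng or random.Random()
    path = [0.0, rng.gauss(0.0, sigma)]         # the two endpoints to start
    for level in range(1, levels + 1):
        # interpolate a midpoint between every pair of neighbors
        refined = []
        for a, b in zip(path, path[1:]):
            refined += [a, 0.5 * (a + b)]
        refined.append(path[-1])
        # V(Delta_n) proportional to (r**n)**(2H), so std scales as (r**n)**H
        std = sigma * (0.5 ** level) ** h
        path = [x + rng.gauss(0.0, std) for x in refined]
    return path
```

The perturbation in the last line is applied to every element of the refined list, which is exactly the feature that distinguishes this method from midpoint displacement.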

4.5 Fast Fourier Transform Filtering

Thus far, we have thought of fractal noise (Definition 3.2) as a function of time; call this function y(t). Alternatively, it is possible to think of fractional Gaussian noise as a sum of periodic (wave-like) components of various frequencies, much like audible noise. In this case, the spectrum function Y(f, T) gives information about the periodic component for each frequency f over the time interval 0 < t < T. The more powerful the periodic component with frequency f during 0 < t < T, the higher the value of Y(f, T). For regular Brownian motion increments, sometimes called white noise, all periodic components are equally powerful, so the spectrum function is virtually flat.

The Fourier transform and inverse Fourier transform translate between functions of time like y(t) and spectrum functions like Y(f, T). The function Y(f, T) is a Fourier transform of y(t):

$$Y(f, T) = \int_0^T y(t)\, e^{2\pi i f t}\, dt.$$


Conversely, y(t) is an inverse Fourier transform of Y(f, T):

$$y(t) = \int_{-\infty}^{\infty} Y(f, T)\, e^{-2\pi i f t}\, df.$$

Using the Fourier transform and inverse Fourier transform, it is possible to convert white noise to other fractal noise. For the details of this algorithm, see Voss [Voss (1985, pages 822-824)] or Turcotte [Turcotte (1997, pages 148-149)]. With a Fast Fourier Transform, simulation of N time steps requires Θ(N log N) computations.
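A minimal sketch of the spectral filtering idea follows, assuming the standard result (used by Voss) that fractional Brownian motion has spectral density S(f) ∝ 1/f^β with β = 2H + 1. For clarity it uses a naive O(N²) discrete Fourier transform rather than a true FFT, and the folding of mirrored high frequencies is our simplification.

```python
import cmath
import math
import random


def dft(x, sign):
    """Naive discrete Fourier transform; a real implementation would use an FFT."""
    n = len(x)
    return [sum(x[t] * cmath.exp(sign * 2j * math.pi * f * t / n)
                for t in range(n))
            for f in range(n)]


def spectral_fbm(n, h, rng=None):
    """Filter white noise in the frequency domain to approximate fBm.

    Assumes S(f) ~ 1/f**beta with beta = 2H + 1, so each white-noise
    Fourier coefficient is scaled by f**(-beta/2).
    """
    rng = rng or random.Random()
    white = [rng.gauss(0.0, 1.0) for _ in range(n)]
    spectrum = dft(white, +1)
    beta = 2 * h + 1
    for f in range(1, n):
        freq = min(f, n - f)          # fold the mirrored high frequencies
        spectrum[f] *= freq ** (-beta / 2)
    spectrum[0] = 0.0                 # drop the mean (zero-frequency) component
    return [z.real / n for z in dft(spectrum, -1)]
```

Because the scaling factor is the same at f and n − f, the filtered spectrum keeps the conjugate symmetry of a real signal, so the inverse transform is real up to rounding.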

CHAPTER 5

The Fracture-Stretch Algorithm

The fracture-stretch algorithm provides a new way to generate time series that approximate fractional Brownian motion. To simulate a fractional Brownian motion with N time steps, this algorithm requires Θ(N log₂ N) computations. The fracture-stretch algorithm modifies a regular Brownian motion to make it resemble a fractional Brownian motion with a specified Hurst exponent. A simulated fractional Brownian motion created by the algorithm can easily be magnified to a higher resolution or extended to include time steps beyond its boundaries.

The fracture-stretch algorithm draws from several of the fractional Brownian motion simulation methods surveyed in the previous chapter. Like random midpoint displacement and successive random addition, fracture-stretch begins creating the fractional Brownian motion on a large scale, then recurses into smaller portions of the fractional Brownian motion. Unlike these algorithms, fracture-stretch begins by operating on a regular Brownian motion instead of a straight line segment; in this respect, the algorithm resembles the Fast Fourier Transform simulation.


5.1 Description

Given a desired Hurst exponent H and a regular Brownian motion time series B(t) spanning time t₀ to t_N, fracture-stretch returns an approximate fractional Brownian motion time series B_H(t). The algorithm is recursive, operating first on the whole time series, then on smaller time spans. For each time span, the algorithm chooses a time step to be the fracture point and, from the fracture point onward, adds an adjustment to the time series at each step. This stretches or shrinks the vertical range of the time series to be consistent with a fractional Brownian motion with the desired H.

Let t_start = t₀ and t_end = t_N. Let t_min and t_max be the time steps at which the minimum and maximum values of B(t) occur in the time span t_start to t_end. The value stretch that is added to the latter part of the time series is calculated as the difference between the expected vertical range of B_H(t) and the actual vertical range of B(t) over the time span. In Section 3.4, we observed that if a fractional Brownian motion has vertical range ΔB over time span Δt, then we expect the vertical range over time span bΔt to be b^H ΔB. Thus, over t_start to t_end, we expect the vertical range of B(t) to be

$$R = B(t_{\max}) - B(t_{\min}) = b^{1/2}$$

and the vertical range of B_H(t) to be

$$R_H = B_H(t'_{\max}) - B_H(t'_{\min}) = b^H,$$

where t'_min and t'_max are the time steps where the minimum and maximum of B_H(t)


occur, possibly different from t_min and t_max. For simplicity, we have assumed that the increments B(Δt) come from a random distribution with standard deviation 1, making ΔB = 1. Knowing the value of R, define stretch to be

$$stretch = \begin{cases} R^{2H} - R & \text{if } t_{\min} < t_{\max} \\ -(R^{2H} - R) & \text{if } t_{\min} > t_{\max}. \end{cases}$$

Randomly select a time step t_fracture to be the fracture point, where t_min < t_fracture ≤ t_max. At each time step from t_fracture to t_N, add stretch to B(t). Adding stretch across this time span guarantees that the vertical distance between B_H(t_min) and B_H(t_max) is R_H. Note that t_fracture ≠ t_min because adding stretch beginning at t_min does not stretch or shrink the vertical range over the time span t_start to t_end as desired.

The algorithm now reaches the recursive step and must choose two new time spans t'_start to t'_end. First, let t'_start = t_start and t'_end = t_fracture − 1; then, t'_start = t_fracture and t'_end = t_end. Note that over both of the new time spans, the time series has not yet been modified from the initial regular Brownian motion. The base case of the recursion occurs when t_end ≤ t_start.
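The description above can be sketched in code. The following is our interpretation of the algorithm, not the SimFBM implementation; in particular, the choice of fracture point when t_min > t_max and the guards that force the recursion to make progress are our assumptions.

```python
import random


def brownian(n, rng=None):
    """Regular Brownian motion with unit-variance normal increments."""
    rng = rng or random.Random()
    path = [0.0]
    for _ in range(n):
        path.append(path[-1] + rng.gauss(0.0, 1.0))
    return path


def fracture_stretch(b, h, rng=None):
    """Apply fracture-stretch to a regular Brownian path b; returns a copy."""
    rng = rng or random.Random()
    b = list(b)
    n = len(b) - 1

    def recurse(start, end):
        if end <= start:                                  # base case
            return
        span = b[start:end + 1]
        t_min = start + min(range(len(span)), key=span.__getitem__)
        t_max = start + max(range(len(span)), key=span.__getitem__)
        if t_min == t_max:                                # flat span; nothing to do
            return
        r = b[t_max] - b[t_min]                           # actual range R
        stretch = r ** (2 * h) - r                        # expected range is R**(2H)
        if t_min > t_max:
            stretch = -stretch
        # fracture point: between the extremes, never t_min, and past the
        # start of the span so the recursion always makes progress (assumed)
        lo, hi = sorted((t_min, t_max))
        choices = [t for t in range(max(lo, start + 1), hi + 1) if t != t_min]
        if not choices:
            return
        t_frac = rng.choice(choices)
        for t in range(t_frac, n + 1):                    # stretch the tail
            b[t] += stretch
        recurse(start, t_frac - 1)
        recurse(t_frac, end)

    recurse(0, n)
    return b
```

A typical run would pair the two functions, e.g. `fracture_stretch(brownian(1000), 0.8)`.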

5.2 Analysis

5.2.1 Correctness

The correctness of the fracture-stretch algorithm is determined by how well its outputs match the qualities in the definition of fractional Brownian motion (Definition 3.1). The fractional Brownian motion approximation that results from this algorithm inherits many of its statistical properties, including some of its fractal nature, from the initial regular Brownian motion. However, it is up to the algorithm to modify the initial regular Brownian motion so that the resulting output has the desired value of H.

First, we establish that, for any time span t_start to t_end, E(stretch) = 0. Because regular Brownian motion has isotropic increments, the two cases t_min < t_max and t_min > t_max are equally likely. For any given value of stretch, the value −stretch is equally likely to occur, so the positive and negative values of stretch cancel each other out in the calculation of the expected value.

Now, we examine how well the outputs of the fracture-stretch algorithm fulfill each quality in the fractional Brownian motion definition (Definition 3.1).

Increment size. The vertical distance between B_H(t₀) and B_H(t) is

$$R_H = B_H(t) - B_H(t_0) = B(t) - B(t_0) + \sum_i stretch_i,$$

where Σ_i stretch_i is the sum of all stretch values added to the time series within the time span t₀ to t. Since the expected value of stretch is 0, on average B_H(t) − B_H(t₀) = B(t) − B(t₀). However, the amount of stretch over each time span t_start to t_end is chosen specifically to make the vertical distance between t_min and t_max proportional to |t_min − t_max|^H. Whether the addition of stretch successfully scales the vertical range of points other than t_min and t_max, and whether additions of stretch over subsets of a time span adversely affect the


vertical range across the whole time span, remains to be discovered through testing.

Mean of increments. The expected value of B_H(t) − B_H(t₀) is

$$E(B_H(t) - B_H(t_0)) = E\left(B(t) - B(t_0) + \sum_i stretch_i\right) = E(B(t) - B(t_0)) = 0.$$

The expected value of increments in the output is consistent with that of a fractional Brownian motion.

Increment variance. The variance of B_H(t) − B_H(t₀) is

$$V(B_H(t) - B_H(t_0)) = V\left(B(t) - B(t_0) + \sum_i stretch_i\right).$$

Whether the variance of the increments in the output scales with |t − t₀|^{2H} also remains to be tested.

Increment correlation. Whether the correlation of the increments in the output corresponds to fractional Brownian motion remains to be tested.

5.2.2 Computation Time

During a single stage of recursion over the time span t_start to t_end, the computation time of the fracture-stretch algorithm is linear in t_end − t_start + 1, the number of time steps. The bottlenecks are the search for t_min and t_max and the addition of stretch to values of the time series.

After operating on t_start to t_end, the algorithm recurses into two time spans, each having on average (t_end − t_start + 1)/2 time steps. The total work done at this level of recursion is also linear in t_end − t_start + 1. The smallest time spans on which the algorithm operates have 2 time steps. In all, the algorithm visits on average log₂(t_end − t_start + 1) levels of recursion.

When the fracture-stretch algorithm simulates N time steps, the algorithm visits Θ(log₂ N) levels of recursion, with Θ(N) operations at each level. Therefore, the average-case computation time is Θ(N log₂ N). In the worst case, the computation time is Θ(N²).

5.2.3 Features

It is easy to extend a time series output from this algorithm to include additional time steps. To supplement B_H(t), t₀ to t_N, with additional time steps t_{N+1} to t_M, create a regular Brownian motion B(t) over the time span t_{N+1} to t_M and apply fracture-stretch to it, producing B_H(t) over the same time span. Then, to each point after t_N, add −B_H(t_{N+1}) + B_H(t_N) + ε, where ε is a random variable with mean 0 and variance 1. This causes the increment between B_H(t_N) and B_H(t_{N+1}) to have the same statistical properties as a regular Brownian motion increment. Lastly, use t_min and t_max over the time span t₀ to t_M to calculate the value of stretch and add this to the latter part of the time series, using t_{N+1} as t_fracture. Unlike the normal order of operations in the fracture-stretch algorithm, here the stretch calculation considers the whole time series after smaller portions have already been modified. Thus, the vertical range R_H from B_H(t_min) to B_H(t_max) is exactly, not just approximately, R^{2H}.

In a similar fashion, it is possible to increase the resolution of a fractional Brownian motion simulated by the fracture-stretch algorithm. To place a step t_{1/2} between steps t₀ and t₁, begin by letting B_H(t_{1/2}) = B_H(t₀) + ε, where ε is a random variable with mean 0 and variance 1/2. Then, calculate stretch over the time span t₀ to t₁ (which in this case always happens to be 0) and add stretch to the time series beginning at t_fracture = t_{1/2}.

CHAPTER 6

Implementation of the Fracture-Stretch Algorithm with the SimFBM Application

6.1 The SimFBM Application

The SimFBM application simulates fractional Brownian motion using the fracture-stretch algorithm and regular Brownian motion using sums of random increments. SimFBM enables the user to create and analyze fractional Brownian motions and their associated fractal noises with statistics and graphs.

6.1.1 Graphical User Interface

Through the graphical user interface, illustrated in Figure 6.1, the user can enter settings about the simulated fractional Brownian motion and about the way that the simulation will execute. Under "FBM Options", the user can choose for the simulation algorithm either "Sum of Random Increments" or "Fracture-Stretch". The user can also specify the number of time steps to be simulated and, for fractional Brownian motion simulations, the Hurst exponent.

Figure 6.1: The SimFBM graphical user interface.

With the group of "Simulation Options", the user can control aspects of the simulation output unrelated to the fractional Brownian motion. SimFBM allows the user to run multiple simulations at once; the user can specify the number of simulations. Several output options can be toggled on and off, including whether to plot the fractional Brownian motion and noise after each iteration of the algorithm or after each simulation, whether to display and plot statistics for each simulation, and whether to record the statistics for each simulation in a user-specified log file.

When the user presses the "Run" button, the application simulates a fractional Brownian motion according to the options input by the user, as shown in Figure 6.2. SimFBM displays the plots and the collection of statistics in separate windows. If the user has opted to write the statistics to a log file, the application appends the statistics to the specified file, which can be opened in a spreadsheet application.

6.1.2 Major Classes and Methods

Figure 6.3 shows a diagram of the major classes of SimFBM.

TimeSeries. The TimeSeries class provides an abstraction of a time series with equally-spaced time steps. A TimeSeries object stores as data members the number of steps in the time series and the value of the time series at each step. In the TimeSeries class, the MinMax method (used in the fracture-stretch algorithm to find the minimum and maximum values of the time series over a given range) and other methods calculate statistics over the time series. The statistical methods are discussed further in Section 6.2.1.

Noise2 and Motion. The Noise2 class, named as such to avoid conflicts with preexisting classes, represents a fractal noise time series. The Motion class represents a fractional Brownian motion time series. Associated with each Motion object is a Noise2 object; the value of the noise time series at a given time step corresponds to the increment in the motion time series between the same time step and the time step before it.

Simulation. The Simulation class combines the data and methods necessary to run a simulation of fractional Brownian motion. Data members include H, the Hurst exponent of the motion; motion, the fractional Brownian motion time series to be created; and noise, the noise associated with motion. A key component of the Simulation class is the virtual method Run, in which a child class supplies its own simulation algorithm.

RandomSim and FractureStretchSim. Inheriting from Simulation, these two classes represent simulations of regular and fractional Brownian motion. RandomSim simulates a regular Brownian motion time series by summing random increments. FractureStretchSim simulates a fractional Brownian motion by using the fracture-stretch algorithm, which is implemented in the recursive FractureStretch method.

Figure 6.2: The SimFBM graphical user interface after the user has run a simulation.

Figure 6.3: Class Diagram for SimFBM.

6.1.3 Implementation Details

SimFBM is implemented in C++. The primary reason for choosing this language is to allow the application to utilize the Matpack C++ Numerics and Graphics Library [Gammel (2003)]. Available under the GNU Public License, Matpack provides random number generators, statistics functions (including R/S analysis), and X Windows GUI components used in SimFBM. To plot time series and statistics, SimFBM uses a freeware application called Gnuplot [Gnuplot].

6.2 Performance of the Fracture-Stretch Algorithm

6.2.1 Data Analysis Methods

To evaluate the performance of the fracture-stretch algorithm, SimFBM calculates statistics for individual simulations based on the qualities in the definition of fractional Brownian motion (Definition 3.1). Mean of Increments. The mean of the increment values BH (t + ∆t) − BH (t) should be 0. SimFBM calculates the mean of the increments for all t in the generated time series using ∆t = 1.

Increment Size and Variance. The size of the increments should be proportional to |Δt|^H, and the variance should be proportional to |Δt|^{2H}. In a plot of the log of the mean increment size, calculated for time steps of length |Δt|, against the log of |Δt|, a best-fit line through the plotted points should have slope H. Similarly, in a plot of the log of the increment variance, calculated for time steps of length |Δt|, against the log of |Δt|, a best-fit line should have slope 2H. SimFBM uses the slope of the best-fit lines in these plots to estimate H and compare it to the intended H.

Increment Correlation. For a time series B_H(t) spanning t₀ to t_N, SimFBM calculates the correlation of the increments {B_H(t_{N/2}) − B_H(t_{N/2−1}), ..., B_H(t₁) − B_H(t₀)} with the increments {B_H(t_{N/2+1}) − B_H(t_{N/2}), ..., B_H(t_N) − B_H(t_{N−1})}. For the correlation C of the increments, the equation C = 2^{2H−1} − 1 should hold. Solving for H, we find that H = (1 + log₂(C + 1))/2. SimFBM substitutes the calculated correlation into this equation to estimate H.

R/S Analysis. R/S analysis (Definition 3.3) provides another way to estimate H for the generated time series. As with the increment size and variance, SimFBM plots log(R/S) against log(Δt) for several values of Δt and estimates H to be the slope of the best-fit line through the plotted points.
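The log-log regression underlying the increment-variance estimate can be sketched as follows. This is an illustrative implementation, not SimFBM's code; the function names and the choice of lags are ours.

```python
import math
import random


def brownian_path(n, rng=None):
    """Regular Brownian motion, for checking that the estimate recovers H = 1/2."""
    rng = rng or random.Random()
    path = [0.0]
    for _ in range(n):
        path.append(path[-1] + rng.gauss(0.0, 1.0))
    return path


def estimate_h_from_variance(path, lags=(1, 2, 4, 8, 16)):
    """Estimate H from the scaling of increment variance with lag.

    For fBm, V(B_H(t + dt) - B_H(t)) ~ dt**(2H), so the best-fit slope of
    log-variance against log-lag is 2H.
    """
    xs, ys = [], []
    for lag in lags:
        incr = [path[i + lag] - path[i] for i in range(len(path) - lag)]
        mean = sum(incr) / len(incr)
        var = sum((v - mean) ** 2 for v in incr) / len(incr)
        xs.append(math.log(lag))
        ys.append(math.log(var))
    # least-squares slope of the best-fit line through (log lag, log var)
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope / 2.0
```

Applied to a regular Brownian motion, the estimate should come out near 0.5, which is the sanity check used throughout this section.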

6.2.2 Results

Visual inspection of fractional Brownian motion simulations generated by SimFBM (Figure 6.4) suggests that these time series have fractal properties. Log-log plots of increment size, increment variance, and R/S against time length, such as the plots depicted in Figure 6.5, confirm the fractal nature of the time series, since these plots consistently show a linear relationship between the plotted variables.

Figure 6.4: Simulations using fracture-stretch with 100 steps and intended H = 0.8.


Figure 6.5: The log-log plot of (a) increment size, (b) increment variance, or (c) R/S against time length, along with the best-fit line, for a sample time series with 1000 steps and intended H = 0.8.

Table 6.1 gives the value of several statistics averaged over many fracture-stretch simulations, each simulation having between 100 and 9000 time steps. The mean increment value is approximately 0 for all H. For increment variance, increment correlation, and R/S analysis, the estimated H does not deviate far from 0.5, the value expected for a random time series. In comparison, the last row of Table 6.1 lists the average statistic values for regular Brownian motion simulated by sums of random increments.

When increment size is used to estimate H, the estimated value deviates significantly from 0.5. The estimated H decreases as the intended H goes from 0.9 to 0.6, although the estimated H remains close to 0.8185 (±0.0595). When the intended H is 0.5, the estimated H corresponds almost exactly to it. For intended H less than 0.5, the estimated H is greater than 0.5. In fact, for intended H values of 0.3, 0.2, and 0.1, the estimated H is almost the same as for intended H = 0.6.

Intended H   Mean of Incr.   Est. H (Size)   Est. H (Var.)   Est. H (Corr.)   Est. H (R/S)

Fracture-Stretch
0.9          0.012           0.878           0.551           0.499            0.558
0.8          -0.115          0.858           0.565           0.505            0.572
0.7          -0.0255         0.822           0.586           0.499            0.585
0.6          -0.016          0.759           0.598           0.501            0.608
0.5          0.001           0.499           0.476           0.498            0.550
0.4          -0.008          0.680           0.511           0.495            0.517
0.3          0.019           0.752           0.563           0.498            0.573
0.2          -0.018          0.758           0.580           0.498            0.593
0.1          0.015           0.755           0.580           0.500            0.597

Sums of Random Increments
0.5          0.001           0.524           0.498           0.503            0.558

Table 6.1: Average values for the mean of increment values and for H values estimated from increment size, increment variance, increment correlation, and R/S analysis. For the fracture-stretch method, averages are calculated for time series with 100 to 9000 steps; for the sums of random increments, for time series with 100 to 5000 steps.


6.2.3 Conclusions

The fracture-stretch algorithm creates fractal time series with zero means, as expected for a fractional Brownian motion. In the created time series, the increment size is consistent with that of a persistent fractional Brownian motion, unless the intended H is 0.5. However, that persistent fractional Brownian motion does not necessarily have the H that the algorithm intended to produce. The greatest difference between the intended H and the value of H estimated from the size of increments in the time series occurs when the algorithm intends to produce an anti-persistent fractional Brownian motion. Instead, the generated time series exhibits persistence. The problem with the algorithm lies in the calculation of stretch. For intended H > 0.5, the addition of stretch to the time series guarantees that the minimum B(tmin ) and maximum B(tmax ) of the regular Brownian motion time series within the current time span become separated by a vertical distance of RH , the range that corresponds to the intended H. For intended H < 0.5, the value of stretch is chosen so as to shrink the distance between B(tmin ) and B(tmax ). However, when the distance between the minimum and maximum is shrunk rather than stretched, the minimum and maximum before shrinking may not remain so after shrinking. As it turns out, adding stretch to try to shrink time series actually expands them vertically. The increment variance, increment correlation, and R/S analysis of the time series generated by the fracture-stretch algorithm are close to those of a regular Brownian motion. Thus, having increment sizes consistent with a fractional Brownian motion does not guarantee a consistent increment variance, increment correlation, or R/S analysis result “for free”.


Fracture-stretch does not rival any of the leading fractional Brownian motion simulation algorithms in execution time or accuracy. However, fracture-stretch does generate time series that resemble persistent fractional Brownian motion in some respects, and it does so in a way that is relatively easy to understand. With future modifications, some of the algorithm’s flaws could be remedied, making the algorithm more useful for simulation than it currently is.

References

Paul S. Addison. Fractals and Chaos: An Illustrated Course. Institute of Physics Publishing, Bristol, 1997.

Crump W. Baker. Introduction to Topology. Krieger Publishing Company, Malabar, Florida, 1997.

Leo Breiman. Probability. Addison-Wesley Publishing Company, Reading, Massachusetts, 1968.

M. Çağlar. Simulation of fractional Brownian motion with micropulses. Advances in Performance Analysis, 3:43–49, 2000.

Michel Dekking, Jacques Lévy Véhel, Evelyne Lutton, and Claude Tricot. Fractals: Theory and Applications in Engineering. Springer, London, 1999.

K. J. Falconer. The Geometry of Fractal Sets. Cambridge University Press, Cambridge, 1985.

Jens Feder. Fractals. Plenum Press, New York, 1988.

Berndt M. Gammel. Matpack C++ Numerics and Graphics Library. Available at http://www.matpack.de, 2003.

Gnuplot. Gnuplot central. Available at http://www.gnuplot.info.

Brian H. Kaye. A Random Walk Through Fractal Dimensions. VCH Publishers, New York, 1989.

Benoit B. Mandelbrot. The Fractal Geometry of Nature. W. H. Freeman and Company, New York, 1977.

Benoit B. Mandelbrot. Multifractals and 1/f Noise: Wild Self-Affinity in Physics. Springer, New York, 1999.

Michael McGuire. An Eye for Fractals. Addison-Wesley Publishing Company, Redwood City, California, 1991.

Alain Le Méhauté. Fractal Geometries: Theory and Applications. CRC Press, Inc., Boca Raton, 1990. Translated by Jack Howlett.

Edgar E. Peters. Chaos and Order in the Capital Markets. John Wiley and Sons, Inc., New York, 1991.

James W. Pitman. Basic properties of Brownian motion. Lecture notes. Available at http://stat-www.berkeley.edu/users/pitman/s205s03/lecture15.pdf, 2003.

Donald L. Turcotte. Fractals and Chaos in Geology and Geophysics. Cambridge University Press, second edition, 1997.

Richard F. Voss. Random fractal forgeries. In Rae A. Earnshaw, editor, Fundamental Algorithms for Computer Graphics. Springer-Verlag, Berlin, 1985.

Dennis D. Wackerly, William Mendenhall, and Richard L. Scheaffer. Mathematical Statistics with Applications. Duxbury, United States, sixth edition, 2002.

Eric Weisstein. MathWorld. Available at http://mathworld.wolfram.com, 2004.
