TEL-AVIV UNIVERSITY RAYMOND AND BEVERLY SACKLER FACULTY OF EXACT SCIENCES

Dynamic structures of neuronal networks

Thesis submitted in partial fulfillment of the requirements for the degree of M.Sc. at Tel-Aviv University, The Department of Physics

by

Uri Barkan

The research work for this thesis has been carried out at Tel-Aviv University under the supervision of Prof. David Horn

February 2005

Acknowledgments

First, I would like to express my sincere gratitude to my supervisor, Prof. David Horn, for his constant guidance and patience. It is hard for me to describe how much I learned from him, academically and otherwise. His moral support enabled me to do all the work described here, and for that I am grateful and owe him so much. Second, I would like to thank Prof. Eshel Ben-Jacob and his staff members, Itay Baruchi and Nadav Raichman, for sharing the data and for fruitful discussions. I’d also like to thank Roy Varshavsky from David’s group for his collaboration and good advice, and for being available for my harassments. Liat Segal, Vered Kunik and Zach Solan helped a lot by providing a great social environment, and their ability to tolerate me is a miracle worth a thesis by itself. All my friends gave me a shoulder in my efforts over the last two years, but I’d like to mention especially Jonathan Rubin and Asi Cohen for their good ideas and for inspirational talks. And last but not least, I’d like to thank my wife Yemima, without whom all of it would not be worth it.


Abstract

In vitro neuronal networks display Synchronized Bursting Events (SBEs), with a characteristic temporal width of 100-500 ms and a frequency of once every few seconds. These events can be registered over a period of many hours. Applying SVD (or PCA) to the PSTHs, i.e. the vectors of neuronal activities per burst, has demonstrated characteristic changes that take place over time scales of hours. This was done by simple clustering applied to the data in the reduced dimensions of the first few principal components. Here we extend this investigation in two directions. The elements of our analysis are the raster plots of all bursts, i.e. each burst is represented on a spatio-temporal template. After applying SVD as a dimensional-reduction tool, we investigate the results using the quantum clustering method to reveal underlying structures. The data we analyze come from eight experiments carried out in the laboratory of Prof. E. Ben-Jacob. The experiments consist of registering the electrical activity of in vitro neuronal networks that are derived from cortical regions of rats and are allowed to self-assemble into an active neuronal network for about a week, which is when the synchronized activity is observed. We have analyzed all eight experiments to first select the bursts; on average we have 2000 bursts in each experiment. All bursts were fit into a spatio-temporal template determined by the burst with the longest time span, such that all peaks are set at the same time and zeros are added as prefix and suffix of each temporal sequence defining a burst. The quantum clustering method assigns a potential function to all data points. Data points that fall within different valleys of the potential are assigned to different clusters. To assure good separability of the different clusters we have selected points within the bottom half of each valley, regarding all others as outliers. Then we have measured the Pearson correlations between their original raster plots.
The results demonstrate that correlations within the clusters are significantly higher than correlations between bursts that belong to different clusters, i.e. the clustering selection is biologically meaningful. Whereas clustering of the PSTHs corresponds to some temporal ordering of the bursts during the many hours of their recording, this is not true for the clusters of the spatio-temporal raster plots. Bursts corresponding to different spatio-temporal clusters occur all throughout the experiment. Thus these clusters exhibit different classes of spatio-temporal behavior that are characteristic of the network. For example, the average activity of a specific neuron within the bursts of the different clusters can change drastically. Clearly the profile is strongly cluster dependent. Moreover, we observe different inter-neuron relations in different clusters.


Contents

1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Preprocessing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
  2.1 The data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
  2.2 Removal of erroneous points . . . . . . . . . . . . . . . . . . . . 5
  2.3 Bursts detection . . . . . . . . . . . . . . . . . . . . . . . . . 6
  2.4 Building the spatio-temporal and spatial matrices . . . . . . . . . 7
3 Methods - SVD and quantum clustering . . . . . . . . . . . . . . . . . . 9
  3.1 SVD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
  3.2 Quantum clustering . . . . . . . . . . . . . . . . . . . . . . . . 9
4 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
  4.1 Spatial presentation . . . . . . . . . . . . . . . . . . . . . . . 11
    4.1.1 Demonstration of the analysis . . . . . . . . . . . . . . . . . 11
    4.1.2 Temporal distribution . . . . . . . . . . . . . . . . . . . . . 13
    4.1.3 Results summary . . . . . . . . . . . . . . . . . . . . . . . . 13
  4.2 Spatio-temporal presentation . . . . . . . . . . . . . . . . . . . 13
    4.2.1 Demonstration of the analysis . . . . . . . . . . . . . . . . . 13
    4.2.2 Temporal Distribution . . . . . . . . . . . . . . . . . . . . . 15
    4.2.3 Correlation between spatio-temporal clusters . . . . . . . . . 17
    4.2.4 Results Summary . . . . . . . . . . . . . . . . . . . . . . . . 17
    4.2.5 Characteristics of different clusters . . . . . . . . . . . . . 19
    4.2.6 Relations between spatial and spatio-temporal data . . . . . . 20
    4.2.7 Sequential clusters in 231000 . . . . . . . . . . . . . . . . . 21
    4.2.8 Correlations in the SVD space . . . . . . . . . . . . . . . . . 23
5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

1 Background

Production of in-vitro cell cultures was performed in Prof. Eshel Ben-Jacob’s laboratory, in collaboration with the laboratory of Dr. Morris Benveniste from the Tel Aviv University School of Medicine. The process was described in detail in [5]. Two-dimensional cell cultures were composed of both neurons and glial cells from the cortex of one-day-old rats. The neurons’ electrical activity was extracellularly recorded by a square array of electrodes placed under the network, covering about 1 mm2. The networks’ sizes were on the order of 1 cm2, and the number of neurons recorded ranged from 12 to 60, out of the millions of neurons that made up the network. The sampling rate was 12000 Hz, and each experiment lasted from dozens of minutes to a few hours. Previous work (see [6]) showed that the networks exhibited periodic synchronized bursting: after periods of sporadic firing by a low number of neurons came a short time interval (of about 100-500 ms) of a synchronized bursting event, in which most of the neurons fired a few times. The breaks between the bursts were usually on the order of a few seconds. In this work we wish to explore these bursts and present new properties of the networks.


2 Preprocessing

This chapter describes the preprocessing algorithm we used in order to organize the data, so that we could later run the algorithms and tools we wanted to use: SVD and quantum clustering.

2.1 The data

After being recorded in Eshel Ben-Jacob’s laboratory, the data were sorted so that each recorded spike is associated with a specific neuron with a high level of certainty (see [3]). The sampling rate of the data, in all experiments, was 12000 Hz. The data we got from the laboratory consisted of a long vector of all the neurons’ spike times, along with a matrix of indices that divided the data in the vector among the different neurons: for each neuron, the matrix contained the indices in the vector that belong to it. This way we had a list of spike times for each neuron.
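The layout described above can be unpacked into per-neuron spike lists. A minimal sketch, with hypothetical variable names (the actual storage format of the laboratory files may differ):

```python
import numpy as np

# Hypothetical illustration of the data layout: one long vector of spike
# times, plus an index structure mapping each neuron to its entries.
def spikes_per_neuron(spike_times, index_map):
    """Return a per-neuron array of spike times."""
    spike_times = np.asarray(spike_times)
    return {neuron: spike_times[idx] for neuron, idx in index_map.items()}

# toy example: two neurons whose spikes are interleaved in one vector
times = [10, 25, 40, 55, 70, 85]          # spike times (samples at 12000 Hz)
index_map = {1: [0, 2, 4], 2: [1, 3, 5]}  # vector indices per neuron
per_neuron = spikes_per_neuron(times, index_map)
```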

2.2 Removal of erroneous points

Once in a long while (not more than once or twice per experiment), there was a problem in the recording: for a few milliseconds the recorded times jumped to high values, and then returned to their correct values. Such points were removed from the data: if there was a gap of more than 10 hours between two consecutive recordings, the later one was deleted. An example is presented in figure 1.

[Figure 1 panels: (a) before removing erroneous points; (b) after removing erroneous points. Axes: spike number vs. time.]

Figure 1: These are the recordings of neuron number 1 from the experiment dated 270800. On the left, a sharp leap in time can be seen. On the right, the time progresses gradually, after seven points were taken out.
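The cleaning rule of section 2.2 can be sketched as follows; a minimal illustration assuming spike times measured in samples at 12000 Hz (so 10 hours = 432,000,000 samples), not necessarily the original implementation:

```python
import numpy as np

GAP = 10 * 3600 * 12000  # a 10-hour gap, expressed in samples at 12000 Hz

def remove_erroneous_points(times):
    """Drop recordings that jump away from the last trusted time by > 10 h.

    Comparing against the last *kept* point lets a short run of corrupted
    high values be removed while the subsequent correct values survive.
    """
    times = np.asarray(times)
    keep = [0]
    for i in range(1, len(times)):
        if abs(int(times[i]) - int(times[keep[-1]])) <= GAP:
            keep.append(i)
    return times[keep]
```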

[Figure 2 panels: (a) Raster plot of a time segment (neuron vs. time); (b) Transferring to a smaller time resolution and summing the activity over the whole network (activity vs. time [ms]).]

Figure 2: Bursts detection. Six bursts can be seen in figure (a), all revealed in figure (b). Data are taken from experiment dated 231000.

2.3 Bursts detection

In order to reveal the bursts hiding in the data, we first made a raster plot of the spikes (see figure 2(a)). We treated the data as binary: for each time point at which a spike was recorded, we placed "one" in the appropriate place in the raster plot. Due to computational and memory limitations, we divided the time axis into blocks of 83 seconds. The raster plots we got were very sparse, because the time resolution was too high, so we decided to sum over every 120 adjacent time points. Since the original sampling frequency was 12000 Hz, the new resolution was 10 ms, and this will be the resolution throughout our work, unless explicitly mentioned otherwise. Then we counted the number of spikes in the whole network for each 10 ms time bin, and set a threshold: every time the count was larger than a certain number, we declared a burst (see figure 2(b)). The number depended on the network’s size, and usually ranged between one third and one fifth of the total number of neurons recorded. In order to find the burst’s limits, we set a threshold, for each experiment, at about one fifth of the activity at the burst’s peak. We scanned the activity level graph (such as the one in figure 2(b)) 300 ms backwards, and looked for the earliest time where the activity was greater than or equal to the threshold. Then we scanned it 1000 ms forwards, and looked for the latest time where the activity was greater than or equal to the threshold.
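The detection steps above (10 ms binning, a network-wide spike count, a peak threshold, and a scan for burst limits at about one fifth of the peak) can be sketched as follows; the thresholds here are illustrative, and a full version would merge consecutive super-threshold bins into a single burst:

```python
import numpy as np

# Illustrative sketch of the burst detection above.  The raster is already
# binned at 10 ms, so 300 ms = 30 bins and 1000 ms = 100 bins.
def detect_bursts(binary_raster, peak_thresh, limit_frac=0.2,
                  back=30, fwd=100):
    activity = binary_raster.sum(axis=0)      # network spike count per bin
    bursts = []
    for t in np.where(activity > peak_thresh)[0]:
        limit = limit_frac * activity[t]      # ~one fifth of the peak
        lo = t
        for s in range(max(0, t - back), t):  # earliest bin >= limit
            if activity[s] >= limit:
                lo = s
                break
        hi = t
        for s in range(min(len(activity) - 1, t + fwd), t, -1):
            if activity[s] >= limit:          # latest bin >= limit
                hi = s
                break
        bursts.append((lo, hi))
    return bursts
```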


[Figure 3 panels: left, spatio-temporal (neuron # vs. time [ms]); right, spatial (neuron # vs. spikes #), obtained by summation over time.]

Figure 3: Left: a spatio-temporal representation of a burst from 231000. Right: a spatial representation of the same burst.

2.4 Building the spatio-temporal and spatial matrices

We wanted to form two representations of the data. The first is the spatio-temporal representation: an instance is composed of the raster plot of a burst, where the number of spikes each neuron fired is counted in each time bin. The matrix of all bursts is constructed by reshaping each raster plot into one long vector, concatenating the neurons' rows one after the other:

            <---- activity ---->
    n1    [  A      B      C   ]
    n2    [  D      E      F   ]

                    |
                    v

    [  A   B   C  |  D   E   F  ]
       <--- n1 -->   <--- n2 -->

The second representation is the spatial representation: an instance is a PST histogram of the raster plot, i.e. a total count of the spikes of each neuron in each burst. An example of both representations can be seen in figure 3. In the following we apply SVD to the data (see section 3.1, page 9). SVD can be applied only to matrices composed of vectors of constant length (rectangular matrices), and its results are meaningful only if the matching entries of the different vectors have the same meaning. Therefore we had to standardize the bursts. We did it in the following way: first, we found the longest burst. Then we padded all the other bursts with zeros, so that all vectors would have the same length. Finally, we aligned all the bursts to the mean burst of the experiment, so that the bursts' peaks would all be located at the same position within the template. An examination of 300 bursts from 270800 showed that this way of standardization maximized the cross correlation between each burst and the mean burst.
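The standardization just described (zero-padding to a common length and aligning all peaks to the same position) might be sketched as follows; the per-burst peak positions are assumed to be precomputed:

```python
import numpy as np

# Sketch of the standardization: pad every burst raster with zeros to a
# common template and shift it so that all peaks share the same position.
# `peak_bins` (the peak bin of each burst) is assumed to be precomputed.
def standardize(bursts, peak_bins):
    n_neurons = bursts[0].shape[0]
    longest = max(b.shape[1] for b in bursts)
    target_peak = max(peak_bins)          # common peak position
    width = longest + target_peak         # wide enough for every shift
    out = np.zeros((len(bursts), n_neurons, width))
    for k, (b, p) in enumerate(zip(bursts, peak_bins)):
        start = target_peak - p           # shift so the peaks line up
        out[k, :, start:start + b.shape[1]] = b
    return out.reshape(len(bursts), -1)   # one long row vector per burst
```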


3 Methods - SVD and quantum clustering

Our goal was to find groups of bursts in the spatial and spatio-temporal data, which are of very high dimensionality: the dimension of the spatial data is on the order of dozens, that of the spatio-temporal data is in the hundreds. Both for computational and visualization reasons, we decided to use SVD as a dimension-reduction method. For the clustering we used the quantum clustering method. We give here only a short description of the methods.

3.1 SVD

SVD (Singular Value Decomposition) [4] is a well-known algorithm for "diagonalizing" rectangular matrices, in the sense that the diagonalizing matrices are guaranteed to be orthonormal. Let X denote an m × n matrix of real-valued data. The singular value decomposition of X is

X = U S V^T    (1)

where S is the diagonal matrix, and U and V are the orthonormal matrices. Figure 4 is a graphical depiction of SVD, where the dimensions of U and V are given. SVD can be used for a variety of purposes: noise reduction, information retrieval, compression, and pattern detection. We used it as a method for dimensional reduction: consider a column vector in X. Clearly, it can be reconstructed from a linear combination of the column vectors of U. However, since the eigenvalues in S are proportional to the variance along the axis that the matching vector in U defines, one can look at the first dimensions of U and get a reliable representation of the data in fewer dimensions. We chose to work with the first three dimensions. We computed the projection of each column vector in X on the first three dimensions of U. Afterwards, we normalized the projections to the unit sphere, so that the squares of the three projections sum to one.
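The reduction and normalization described above can be written compactly with a standard SVD routine. A sketch using NumPy (not necessarily the original tooling):

```python
import numpy as np

# Project each data vector (column of X) on the first k left singular
# vectors, then normalize each projection to the unit sphere, as above.
def reduce_to_sphere(X, k=3):
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    proj = U[:, :k].T @ X                        # (k x n) coordinates
    proj = proj / np.linalg.norm(proj, axis=0)   # unit-length columns
    return proj
```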

3.2 Quantum clustering

Quantum clustering is an unsupervised clustering method (see [2]). Let {x_i} denote a set of N data points of dimension d. One can associate a Gaussian with each of the data points in a Euclidean space, and sum over all of them:

\psi(x) = \sum_i e^{-(x - x_i)^2 / 2\sigma^2}    (2)

Figure 4: A graphical depiction of SVD. The figure is taken from [4].

Now, one can solve the Schrödinger equation and compute a "potential" for each point in data-space:

V(x) = E - \frac{d}{2} + \frac{1}{2\sigma^2 \psi(x)} \sum_i (x - x_i)^2 e^{-(x - x_i)^2 / 2\sigma^2}    (3)

where E stands for the initial energy of the system, and its value is set so that min(V) = 0. Regions with a dense distribution of points have lower potential values, so valleys of the potential are created, and one can check, for each data point, to which valley it belongs (e.g., by gradient descent). A cluster is made up of all the points that fall in the same valley. The valleys' sizes are determined by the parameter σ.
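Equations 2-3 and the gradient-descent step can be sketched as follows; this is a naive illustration with a numerical gradient, not the original implementation:

```python
import numpy as np

# Sketch of quantum clustering (equations 2-3): a Parzen-window psi, the
# potential V (up to the constant E), and a naive numerical gradient
# descent that moves a point into its valley.
def potential(x, data, sigma):
    d2 = ((x - data) ** 2).sum(axis=1)        # squared distances to all x_i
    g = np.exp(-d2 / (2 * sigma ** 2))
    # V(x) - E = -d/2 + (1 / (2 sigma^2 psi)) * sum_i d2_i g_i
    return -data.shape[1] / 2 + (d2 * g).sum() / (2 * sigma ** 2 * g.sum())

def descend(x, data, sigma, steps=100, lr=0.1, eps=1e-4):
    x = np.asarray(x, float)
    for _ in range(steps):
        grad = np.array([(potential(x + eps * e, data, sigma)
                          - potential(x - eps * e, data, sigma)) / (2 * eps)
                         for e in np.eye(len(x))])
        x = x - lr * grad
    return x
```

Points that descend into the same valley form one cluster; keeping only the points in the bottom half of each valley gives the cluster "cores" used later.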


4 Results

In this section we review the results of the methods we applied to eight experiments conducted on seven dates:

• 130300
• 270800
• 231000
• 271100 a
• 271100 b
• 091002
• 191103
• 131004

Experiment 271100 was especially long. In its first part there are more than 25000 bursts, and in its second there are more than 40000 bursts. For computational reasons, we sampled every 10th burst from the first part, and every 20th burst from the second. All experiments will be labeled by their dates.

4.1 Spatial presentation

4.1.1 Demonstration of the analysis

We show an example of SVD and quantum clustering applied to the spatial representation of experiment 231000. In figure 5 the eigenvalues of S are ordered by their magnitude. Clearly, the first eigenvalues contain most of the variance. We looked at all the possible pairs among the first 20 dimensions. Looking at the projection of the data points on the first three dimensions, the naked eye can discern roughly three clusters (figure 6). In order to find these clusters we applied quantum clustering to the normalized three main components of U. First, we plotted the potential of the data points, in which three valleys can be seen (figure 7(a)). The potential is displayed in the plane spanned by the second and the third principal components of SVD, as we found these two dimensions to be the ones that play an important role in the clustering process in all experiments. Then we omitted the half of the data points whose potential values were the largest, in order to focus on the cores of the clusters (figure 7(b)). In figure 8 it is possible to see that the clusters found in the SVD picture (figure 6) match those extracted by the quantum clustering method.

[Figure 5 plot: 231000, spatial picture - SVD eigenvalues, ordered by magnitude.]

Figure 5: Most of the information is located in the first dimensions of the matrix U.

[Figure 6 plot: 231000, SVD picture of the spatial representation; dimensions 1-3.]

Figure 6: SVD picture of the spatial data. The three clusters were manually circled.

[Figure 7 plots: 231000 - quantum clustering potential of the spatial representation. (a) Potential before clustering. (b) Clustered data in the potential space, colored by clusters 1-3 and outliers.]

Figure 7: Potential of the data points in the spatial representation.

All the data points that were not associated with a cluster's core were regarded as outliers.

4.1.2 Temporal distribution

Previous work [1] indicated that there is a correlation between the position of a data point on the unit sphere of the normalized U (as in figure 6) and the time of the burst within the experiment. Such correlations were found in some of the files examined (for an example see figure 9). However, not all experiments exhibited such behavior.

4.1.3 Results summary

Table 1 summarizes the findings from the different experiments. While temporal distribution was found in all experiments checked in previous work [1], we found that it is not generally true: in two experiments we found the data to be composed of one large cluster, and in another two experiments we found no temporal distribution, in spite of the clusters found.

4.2 Spatio-temporal presentation

4.2.1 Demonstration of the analysis

We now show an example of the SVD and quantum clustering methods applied to the spatio-temporal data collected on 270800. The eigenvalues of S are presented in figure 10. As in the spatial representation, the eigenvalues of the first dimensions were significantly larger than the others. Here too we looked at all the possible pairs among the first 20 dimensions, and once again, we found clusters only in the first three.

[Figure 8 plot: 231000 - Quantum Clustering on SVD, σ = 0.5, 100 steps; clusters 1-3 and outliers shown in dimensions 1-3.]

Figure 8: SVD picture of the spatial data after clustering. Compare to figure 6.

[Figure 9 plot: temporal distribution of the spatial clusters; cluster index vs. burst number; outliers marked.]

Figure 9: Temporal distribution of the spatial data: different clusters occupy different time segments along the experiment, as was previously shown in [1].

Experiment date | Bursts detected | Clusters found | Temporal distinction
130300          | 2748            | 3              | Yes
270800          | 6100            | 3              | Yes
231000          | 1507            | 3              | Yes
271100-1        | 2580            | 3              | No
271100-2        | 2042            | 1              | No
091002          | 2695            | 2              | Yes
191103          | 2086            | 1              | No
131004          | 1634            | 2              | No

Table 1: Summary of findings about the spatial representation of all experiments. Clusters were not found in all experiments, and in two experiments where they were, no temporal distribution was seen. Hence temporal distribution is not a general characteristic of all networks.

                  Spatio-temporal cluster
Spatial cluster |    1    |    2    | Outliers
1               |    2    |   628   |   344
2               |   258   |    0    |   116
Outliers        |   168   |   292   |   887

Table 2: In 091002, a correspondence was found between the spatial clusters and the spatio-temporal clusters: cluster 1 from the spatio-temporal presentation matches cluster 2 from the spatial presentation, and vice versa. This is the only experiment in which such a correspondence was seen.

The projection of the data points on these three dimensions is plotted in figure 11, this time before and after the clustering side by side (figures 11(a) and 11(b) respectively). Three groups can be observed. Two potential plots (with and without the cluster tags, as explained in 4.1.1, page 13) are presented in figure 12.

4.2.2 Temporal Distribution

Unlike in the spatial representation, in the spatio-temporal representation usually no correlation was found between the cluster a given burst is associated with and its time of occurrence along the experiment, with the exception of 091002, where the two spatio-temporal clusters appeared at separate times. Table 2 shows that the two classifications (i.e. spatial and spatio-temporal) were very much alike.

[Figure 10 plot: 231000 Spatio-Temporal - SVD - Eigenvalues.]

Figure 10: Spatio-temporal representation - the eigenvalues in S, zoomed on the first 50 dimensions.

[Figure 11 plots: 231000 - SVD picture of the spatio-temporal representation, and 231000 - Quantum Clustering on SVD, σ = 0.5, 100 steps. (a) SVD picture of the spatio-temporal data; the three clusters were manually circled. (b) SVD picture, clustered data; compare to 11(a).]

Figure 11: The spatiotemporal data presented in the space spanned by the three first dimensions of the SVD.

[Figure 12 plots: 231000 - quantum clustering potential of the spatiotemporal representation. (a) Potential before clustering. (b) Clustered data in the potential space.]

Figure 12: Potential of the data points in the spatiotemporal representation.

4.2.3 Correlation between spatio-temporal clusters

Pearson's correlation¹ r between vectors X and Y, with means \bar{X} and \bar{Y} respectively, is defined as follows:

r = \frac{\sum (X - \bar{X})(Y - \bar{Y})}{\sqrt{\sum (X - \bar{X})^2} \, \sqrt{\sum (Y - \bar{Y})^2}}    (4)
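Equation 4 can be computed directly; for our data, X and Y would be the flattened raster-plot vectors of two bursts:

```python
import numpy as np

# Pearson's correlation of equation 4, computed directly.
def pearson(x, y):
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    return (x * y).sum() / np.sqrt((x ** 2).sum() * (y ** 2).sum())
```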

The denominator in equation 4 ensures that r does not go beyond -1 or +1. The correlation reflects the degree of linear relation between the two variables: r > 0 means there is a positive linear relation, while r < 0 means there is a negative linear relation. r = 1 and r = -1 mean a perfect positive or negative linear relation, respectively.

From each cluster of the spatio-temporal data we randomly picked 100 bursts. If a cluster had fewer than 100 bursts, we picked them all. Then we measured Pearson's correlation between each pair of bursts. Results are presented in figure 13. In most cases, correlations between bursts that belonged to the same cluster were significantly higher than correlations between bursts that belonged to different clusters (for full results see table 3).

¹ For a short introduction to some of Pearson's correlation properties, see http://davidmlane.com/hyperstat/A34739.html.

4.2.4 Results Summary

Table 3 summarizes the findings from the different experiments. Apart from one experiment (191103), all experiments were divided into at least two clusters, and in all experiments we also found structures in the Pearson correlation matrix.

[Figure 13 plot: 231000 - Pearson Correlation; burst number vs. burst number; each block is annotated with its mean correlation and standard deviation.]

Figure 13: From each cluster we picked 100 bursts. This figure presents the Pearson correlation for each pair of bursts. The bursts are ordered by clusters: the diagonal squares are associated with the bursts that belong to the same cluster. The bottom-right square is associated with the outliers. The off-diagonal squares represent the Pearson correlation measured between bursts that belong to different clusters. The big number in each box is the mean Pearson correlation of this box. The small number under it is the standard deviation.

Experiment date | Clusters found | Structures in Pearson's correlation
130300          | 3              | Yes
270800          | 3              | Yes
231000          | 3              | Yes
271100-1        | 3              | Yes
271100-2        | 2              | Yes
091002          | 2              | Yes
191103          | 1              | –
131004          | 2              | Yes

Table 3: Summary of findings about the spatio-temporal representation of all experiments. Every time clusters were found, structures in the Pearson correlation matrix appeared.

[Figure 14 panels: (a) 270800 Cluster 1; (b) 270800 Cluster 2; (c) 270800 Cluster 3; (d) 270800 Outliers. Axes: neuron vs. time [ms]; color scale -0.25 to 0.25.]

Figure 14: The mean raster plot of each cluster and of the outliers, after the mean burst of the whole experiment was subtracted from all bursts.

4.2.5 Characteristics of different clusters

After extracting the different clusters, one may ask how the differences between the clusters are expressed in the original space (as opposed to the SVD space), i.e. what the physical differences are between bursts that belong to different clusters. We measured the mean raster plot of each cluster. In order to make the differences between the clusters easier to see, we subtracted from each burst the mean burst of the whole experiment, and plotted the mean residual burst of each cluster. Large differences can be seen between the plots in figure 14. On the single-neuron level, two main differences are exemplified in figure 15: in figure 15(a), we plot the average activity of a specific neuron in each cluster, and show that the profile clearly changes; in figure 15(b), the profiles of two neurons in two different clusters are plotted, showing synchrony in one cluster, and out-of-phase behavior in the other.
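The per-cluster comparison above can be sketched as: subtract the experiment-wide mean burst from every burst, then average the residuals within each cluster (array shapes here are hypothetical):

```python
import numpy as np

# Mean residual raster per cluster: subtract the global mean burst from
# every burst, then average inside each cluster (cf. figure 14).
def cluster_profiles(rasters, labels):
    """rasters: (bursts x neurons x time) array; labels: cluster per burst."""
    mean_burst = rasters.mean(axis=0)
    return {c: (rasters[labels == c] - mean_burst).mean(axis=0)
            for c in np.unique(labels)}
```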

[Figure 15 panels: (a) 231000, Neuron #9 - activity level vs. time [ms] in clusters 1-3; (b) 270800, clusters 2 and 3 - activity levels of neuron #4 and neuron #6 vs. time [ms].]

Figure 15: Characteristics of different clusters. (a) Activity characteristics of a particular neuron in the SBEs belonging to the three different clusters display clearly different profiles. (b) Profiles of two neurons show different relative phases depending on the clusters.

4.2.6 Relations between spatial and spatio-temporal data

From what has been demonstrated in sections 4.1.2 (page 13) and 4.2.2 (page 15), it is clear that the two representations yield different divisions of the data, since on the time axis the bursts are divided differently between the clusters. In order to better understand this difference, we picked a cluster from the spatial representation, and applied SVD and quantum clustering all over again to its spatio-temporal bursts. We compared two classifications: the original spatio-temporal classification of this cluster (the results from the general run), and the new spatio-temporal classification (the results from the re-run). We plotted the data twice in the normalized new SVD space (i.e. after the re-run): first tagged by the original classification, and then tagged by the new classification. For each classification we also measured the Pearson correlation matrix (see 4.2.3, page 17, for an explanation of Pearson's correlation) to decide which classification yields better results. In most cases, the decision was inconclusive: either the two runs created similar divisions into clusters, or the Pearson correlation matrices were equivalent, or there was a small advantage to the new classification. However, in other cases it was clear that the re-run presented the data in a new light: sometimes it succeeded in capturing new clusters and yielded a finer division, and sometimes it split one homogeneous cluster into two identical clusters. Two such examples are shown in figures 16 and 17: figure 16 shows an example of a re-run that yields a better division, while figure 17 shows how re-running might unjustifiably split a cluster into two.

[Figure 16 panels: (a) 270800, spatial cluster 2, original classification; (b) 270800, cluster 2 - Pearson correlation of the original classification; (c) 270800, spatial cluster 2, new classification; (d) 270800, cluster 2 - Pearson correlation of the new classification.]

Figure 16: In figure (a), cluster 2 of the original classification is clearly composed of two clusters. Re-running the data through SVD and quantum clustering extracts those two clusters, as shown in figure (c). The Pearson correlation matrices (figures (b) and (d)) show an advantage for the new classification.

4.2.7 Sequential clusters in 231000

As shown in 4.2.2 (page 15), no correlation was found between the cluster a given burst is assigned to, and the burst’s time along the experiment. Interestingly, in one experiment, 231000, correlations between the clusters’ times were discovered. In this experiment, 495 bursts were associated with cluster 1 (the largest cluster), 140 bursts belonged to cluster 2, and 117 bursts were assigned to cluster 3. None of these 117 bursts occurred after a burst from the large cluster 1, while

21

[Scatter plot: "231000, spatial cluster 1, original classification"; SVD dimensions 1–3; legend: Cluster 1, Cluster 2, Cluster 3.]

(a) The original clustering

[Matrix plot: "231000, cluster 1 − Pearson Correlation of original classification".]

(b) Pearson's correlation of the original clustering

[Scatter plot: "231000, spatial cluster 1, new classification"; SVD dimensions 1–3; legend: Cluster 1, Cluster 2, Cluster 3.]

(c) The new clustering

[Matrix plot: "2310003 − Pearson Correlation".]

(d) Pearson's correlation of the new clustering

Figure 17: Figure (a): cluster 1, found by quantum clustering after applying SVD, is seemingly composed of two smaller clusters (circled manually). The re-run splits it into two clusters (figure (c)), but the Pearson correlation matrices (figures (b) and (d)) do not really justify this split. Hence re-running the processing does not necessarily guarantee better results.


96 came after bursts from cluster 2, only 2 came after bursts from cluster 3, and 19 came after outlier bursts. Looking at the first 90 bursts of cluster 3, the picture is even more striking: 83 followed bursts from cluster 2 and only 7 took place after outlier bursts; none were consecutive to bursts from cluster 1 or cluster 3. We looked one step further back, at which bursts came before those 83 cluster-2 bursts (out of the first 90) that were followed by bursts from cluster 3. 52 of them (63%) followed bursts from cluster 1 (compared to 48% among all cluster-2 bursts), only 1 and 2 bursts came after bursts from clusters 2 and 3 respectively, and only 28 (34%, compared to 44% among all cluster-2 bursts) came after outlier bursts. To conclude, out of the first 90 bursts of cluster 3, 52 (58%) were of the sequential type cluster 1 → cluster 2 → cluster 3.
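The statistics above amount to counting first-order transitions between consecutive burst labels. A minimal sketch of such a count (the label sequence here is illustrative, not the experimental data; outliers are marked with -1):

```python
from collections import Counter

def transition_counts(labels):
    """Count how often a burst with label b immediately follows a burst
    with label a, for every ordered pair (a, b)."""
    return Counter(zip(labels[:-1], labels[1:]))

# Illustrative label sequence: the pattern 1 -> 2 -> 3 repeated,
# with one outlier (-1) in between.
labels = [1, 2, 3, -1, 1, 2, 3, 1, 2, 3]
counts = transition_counts(labels)
print(counts[(1, 2)])  # cluster-2 bursts that followed a cluster-1 burst
print(counts[(2, 3)])  # cluster-3 bursts that followed a cluster-2 burst
```

Dividing such counts by the total number of bursts in the preceding cluster gives the percentages quoted above.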

4.2.8 Correlations in the SVD space

In all experiments, SVD proved to be a useful tool for dimensionality reduction. But after truncating the higher-order dimensions, one may ask what information, if any, was lost. Are important properties of the data lost in the process, or does SVD mainly remove noise in these examples, thus making clustering much easier? We measured Pearson's correlation (for details see 4.2.3, page 17) of the normalized three dimensions of the matrix U. Two main properties were discovered. First, the data in the truncated space (the data projected on U) emphasize the clusters: the differences between the on-diagonal and the off-diagonal correlations were significantly larger in the truncated space than in the original space, so it is easier to distinguish between clusters there. For an example, see figure 18. Nevertheless, this does not necessarily mean that the lost information is all noise. In some of the experiments the correlations between different clusters (i.e. the off-diagonal boxes in the Pearson matrix) were not equal to one another. High correlations between two clusters may indicate a similarity between them, whereas low correlations suggest that the clusters are significantly different from one another. Figure 18 also shows an experiment in which this information about the similarity between clusters was lost in the SVD space.
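The measurement described above can be sketched as follows. The data here are synthetic and the function names are ours, not part of the original analysis pipeline; the sketch only illustrates comparing within-cluster and between-cluster mean Pearson correlations before and after SVD truncation:

```python
import numpy as np

def truncated_svd(X, k=3):
    """Coordinates of each row of X (bursts x features) in the space
    spanned by the first k singular vectors."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * s[:k]

def mean_block_correlation(X, labels, a, b):
    """Mean Pearson correlation between rows of X belonging to clusters
    a and b (a == b gives an on-diagonal block of the Pearson matrix)."""
    C = np.corrcoef(X)  # full bursts-by-bursts correlation matrix
    ia, ib = np.where(labels == a)[0], np.where(labels == b)[0]
    return C[np.ix_(ia, ib)].mean()

# Synthetic data: two noisy cluster templates, 20 "bursts" each.
rng = np.random.default_rng(0)
templates = rng.normal(size=(2, 50))
X = np.vstack([templates[i] + 0.3 * rng.normal(size=(20, 50))
               for i in (0, 1)])
labels = np.repeat([0, 1], 20)

for space in (X, truncated_svd(X, k=3)):
    on = mean_block_correlation(space, labels, 0, 0)
    off = mean_block_correlation(space, labels, 0, 1)
    print(on - off)  # on/off-diagonal contrast in this space
```

The printed contrast is the quantity compared above between the original and the truncated space.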


[Matrix plot: "091002 − Pearson Correlation"; mean Pearson correlations between clusters, measured in the original burst space (values between roughly 0.07 and 0.28).]

(a) Pearson's correlation in bursts space

[Matrix plot: "091002 − Pearson Correlation of the data in SVD space"; mean Pearson correlations between clusters after the SVD truncation (values between roughly −0.65 and 0.91).]

(b) Pearson's correlation in SVD space

Figure 18: SVD truncation makes it possible to distinguish between bursts, as shown by the Pearson correlation measured in SVD space (figure (b)) compared to the original space (figure (a)): the differences between on-diagonal and off-diagonal correlations are significantly larger in the truncated space. Nevertheless, in the original space one can see the high correlation between clusters 1 and 3 and the low correlation between cluster 2 and the other clusters, whereas in SVD space clusters 1 and 2 are highly correlated and cluster 3 is the exceptional one. While SVD emphasizes the clusters, the information about the degree of similarity between different clusters is lost.


5 Discussion

Previous work [1] showed a correlation between the spatial cluster a given burst was associated with and its time of occurrence in the experiment. We have redone the analysis on more experiments. In four out of eight experiments we discovered such a correlation. In two experiments all the bursts were concentrated in one cluster to begin with, and in another two experiments no temporal correlation was found. Our check rules out the notion that the observed correlation [1] is generally true. In some cases we did not find groups in the spatial representation, and in others we did, but the groups were spread uniformly along the experiment. The spatio-temporal representation revealed new features of the data. First, the classifications of the spatial and the spatio-temporal representations were different, as the temporal distributions along the experiment proved. Experiment 091002 is an exception, as the two representations yielded similar classifications. Second, Pearson's correlation showed real structures in the data. The similarity between bursts belonging to the same cluster was larger than the similarity between bursts from different clusters. This means that the clusters we found were biologically meaningful. Since all clusters were usually spread along the whole time axis, we conclude that the firing pattern in the network is not a property that changes slowly during the experiment. We could not find any statistical property that would predict to which cluster a given burst should be assigned, except for experiment 231000, where a specific sequence of clusters appeared relatively many times. The differences between the spatial and the spatio-temporal classifications imply that they do not hold the same information. While the spatial representation regards only the dominant neurons in the burst, the spatio-temporal representation also takes into consideration the pattern of firing within the burst. Apparently, the two are not linearly correlated.

This may imply that the network's activity is not determined by one simple mechanism. We showed two examples of characteristics that were significantly different in different clusters: either a specific neuron's profile within the burst changes, or two neurons keep their profiles but exchange places in time. We also found that re-running SVD and quantum clustering on a specific spatial cluster does not necessarily reveal new features and structures in the data. In some cases, re-running may unjustifiably split a cluster into two smaller clusters. Re-running the algorithms on a spatio-temporal cluster was found to be totally ineffective: only rarely were new sub-structures in the data found, and most of the time the Pearson correlation matrices before and after the re-run looked very much the same. In general, SVD and quantum clustering were found to be useful tools for handling this kind of data. SVD enabled us to reduce dimensions, and the dimensional reduction also emphasized the clusters, though the amount of similarity between clusters was lost. The first three dimensions were found to be the most important ones. Quantum clustering captured the clusters correctly. The potential plots showed valleys in the correct places, and choosing from each valley the half of the points with the lowest potential helped us to focus on the clusters' cores.
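The potential referred to here is the one of the quantum clustering method [2]: a Parzen-window sum of Gaussians ψ is associated with the data, and the potential V is defined (up to an additive constant) through the Schrödinger equation that ψ solves. A minimal sketch, with our own variable names and synthetic data; the per-valley selection of the lowest-potential half is simplified here to a single global median cut:

```python
import numpy as np

def quantum_potential(points, sigma):
    """Quantum-clustering potential evaluated at the data points.
    V(x) = (1 / (2 sigma^2 psi(x))) * sum_i |x - x_i|^2 exp(-|x - x_i|^2 / 2 sigma^2),
    up to an additive constant; shifted so the deepest valley is at 0."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # squared distances
    w = np.exp(-d2 / (2 * sigma ** 2))
    psi = w.sum(axis=1)                       # Parzen-window estimator
    V = (d2 * w).sum(axis=1) / (2 * sigma ** 2 * psi)
    return V - V.min()

# Synthetic 2-d data: two well-separated blobs.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.1, (30, 2)),
                 rng.normal(3, 0.1, (30, 2))])
V = quantum_potential(pts, sigma=0.5)
core = pts[V <= np.median(V)]  # crude stand-in for "bottom half of each valley"
```

Points deep inside a blob sit near the bottom of a valley of V and survive the cut, which is the sense in which the selection focuses on the clusters' cores.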


References

[1] Anat Elhalal and David Horn. In-vitro neuronal networks: evidence for synaptic plasticity. Neurocomputing, proceedings of CNS04, 2004.

[2] David Horn and Assaf Gottlieb. Algorithm for data clustering in pattern recognition problems based on quantum mechanics. Physical Review Letters, 88, January 2002.

[3] Eyal Hulata, Ronen Segev, Yoash Shapira, Morris Benveniste, and Eshel Ben-Jacob. Detection and sorting of neural spikes using wavelet packets. Physical Review Letters, 85:4637–4640, November 2000.

[4] Daniel P. Berrar, Werner Dubitzky, and Martin Granzow. A Practical Approach to Microarray Data Analysis, chapter 5, pages 91–109. Kluwer Academic Publishers, 2003.

[5] Ronen Segev. Self-Wiring of Neural Networks. PhD thesis, School of Physics and Astronomy, The Raymond and Beverly Sackler Faculty of Exact Sciences, Tel-Aviv University, 2000.

[6] Ronen Segev and Eshel Ben-Jacob. Spontaneous synchronized bursting in 2D neural networks. Physica A, 302:64–69, 2001.

