Breaking the Synaptic Dogma: Evolving a Neuro-inspired Developmental Network

Gul Muhammad Khan, Julian F. Miller, and David M. Halliday

Electronics Department, University of York, York, YO10 5DD, UK
[email protected], {jfm,dh20}@ohm.york.ac.uk
http://www.york.ac.uk

X. Li et al. (Eds.): SEAL 2008, LNCS 5361, pp. 11–20, 2008. © Springer-Verlag Berlin Heidelberg 2008

Abstract. The majority of artificial neural networks are static and lifeless: they do not change themselves within a learning environment. In these models, learning is seen as the process of obtaining the strengths of connections between neurons (i.e. weights). We refer to this as the 'synaptic dogma'. This is in marked contrast with biological networks, which have time-dependent morphology and in which practically all neural aspects can change or be shaped by mutual interactions and by interactions with an external environment. Inspired by this and by many aspects of neuroscience, we have designed a new kind of neural network. In this model, neurons are represented by seven evolved programs that model particular components and aspects of biological neurons (dendrites, soma, axons, synapses, electrical and developmental behaviour). Each network begins as a small, randomly generated network of neurons. When the seven programs are run, the neurons, dendrites, axons and synapses can increase or decrease in number and change in interaction with an external environment. Our aim is to show that it is possible to evolve programs that allow a network to learn through experience (i.e. encode the ability to learn). We report on our continuing investigations in the context of learning how to play checkers.

1 Introduction

Artificial Neural Networks (ANNs), though inspired by the brain, have largely ignored many aspects of biological neural systems. Originally, there were good reasons for this: simple models were required that could be executed on relatively slow computers. However, the computational power of modern computers has made more complex neuro-inspired approaches much more feasible. At the same time, our understanding of neuroscience has increased considerably. In our view, two of the most important aspects we should consider incorporating are neural development and time-dependent morphology. Marcus argues convincingly for the importance of development: "mechanisms that build brains are just extensions of those that build the body" [1]. It is also becoming more apparent that the sub-processes of neurons are highly time-dependent, so that many structures are in a constant state of being re-built and changed [2]. Indeed, memory itself is not a static process, and the location and mechanisms responsible for


remembered information is in constant (though largely gradual) change. The act of remembering is a process of reconstructing and changing the original structure that was associated with the original event [3]. Various studies have shown that "dendritic trees enhance computational power" [4]. Neurons communicate through synapses, which are not merely points of connection between neurons [5]: they can change the strength and shape of the signal over various time scales. We have taken the view that the time-dependent, environmentally sensitive variation of morphology and of many other processes in real neurons is very important, and that models are required which incorporate these features. In our model a neuron consists of a soma, dendrites, and axons with branches, together with dynamic synapses and synaptic communication. Neurite branches can grow, shrink, self-prune, or produce new branches, arriving at a network whose structure and complexity are related to properties of the learning problem [6]. Our aim is to find a set of computational functions that allow the above characteristics and, in so doing, to build a neural network that is capable of learning through experience. Thus we aim to find a way of giving the network an ability to learn, so that by repeated experience it can improve its capability to solve a range of problems. Such a network would be very different from conventional ANN models, as it would be self-training, constantly adjusting itself over time in response to external environmental signals. From our studies of neuroscience, we have identified seven essential computational functions that need to be included in a model of a neuron and its communication mechanisms [7]. From this we decided what kind of data these functions should work with and how they should interact; however, we cannot design the functions themselves, so we turned to an automatic method of program design, namely Genetic Programming [8], to help us with this problem.
In particular, we have used a well-established and efficient form of Genetic Programming called Cartesian Genetic Programming (CGP), in which programs are represented by directed acyclic graphs [9]. In CGP the genotype is a fixed-length list of integers which encode the functions of the nodes and the connections of a directed graph. In order to evaluate the effectiveness of this approach we previously applied it to a classic AI problem called wumpus world [10]. We found that the agents improved with experience and exhibited a range of intelligent behaviours. In this paper we turn our attention to a much more serious problem, one that has been studied since the beginnings of AI: computer checkers. The approach taken in AI to obtaining programs that play such games at a high level was originally developed by Shannon [11]. He developed the idea of using a game tree of a certain depth and advocated using a board evaluation function. This function allocates a numerical score according to how good a board position is for a player. Once this is obtained, the minimax method can be used [12]: it works by propagating scores up the game tree to determine the optimum move for either player. Since we are interested in how a single computational agent can learn to play checkers in its lifetime, we have not used a board evaluation function


or minimax. Instead, our checkers-playing agents start from a small random network at the beginning of each game and, through the running of their neural developmental programs, build a network that makes their moves. In other words, our agents grow up playing checkers. The plan of the paper is as follows. Section 2 describes the CGP Computational Network (CGPCN) and the aspects of neuroscience it incorporates. Section 3 describes the experimental setup and presents our results and findings, and section 4 gives our concluding remarks and observations.
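As background for the CGP representation used throughout, the sketch below shows how a fixed-length integer genotype can be decoded and evaluated as a feed-forward graph. The function set, node arity and index-wrapping rule are illustrative assumptions, not the primitives of the CGPCN itself.

```python
# Minimal sketch of Cartesian Genetic Programming (CGP) evaluation.
# Each node is encoded by three integers: a function index and two
# connection indices into earlier values (inputs or prior nodes).
# The final n_outputs genes select which values become the outputs.
# The function set and the index-wrapping rule are assumptions.

FUNCS = [
    lambda a, b: a + b,      # 0: add
    lambda a, b: a - b,      # 1: subtract
    lambda a, b: a * b,      # 2: multiply
    lambda a, b: max(a, b),  # 3: max
]

def eval_cgp(genotype, inputs, n_nodes, n_outputs):
    values = list(inputs)
    for i in range(n_nodes):
        f, a, b = genotype[3 * i : 3 * i + 3]
        # wrap indices so every connection points at an earlier value,
        # which keeps the decoded graph acyclic
        x = values[a % len(values)]
        y = values[b % len(values)]
        values.append(FUNCS[f % len(FUNCS)](x, y))
    out_genes = genotype[3 * n_nodes : 3 * n_nodes + n_outputs]
    return [values[g % len(values)] for g in out_genes]
```

For example, `eval_cgp([0, 0, 1, 2], [3, 4], n_nodes=1, n_outputs=1)` decodes a single add node fed by the two inputs and returns `[7]`. Mutation in the 1+λ strategy then amounts to replacing a small fraction of these integers with random valid values.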

2 The CGP Computational Network (CGPCN)

This section describes in detail the structure of the CGPCN, along with the rules and the evolutionary strategy used to run the system. In the CGPCN, neurons are placed randomly in a two-dimensional spatial grid so that they are only aware of their spatial neighbours (as shown in figure 1). Each neuron is initially allocated a random number of dendrites, dendrite branches, one axon and a random number of axon branches. Neurons receive information through dendrite branches and transfer information through axon branches to neighbouring neurons. The dynamics of the network also change, since branches may grow or shrink and move from one CGPCN grid point to another. They can produce new branches and can disappear, and neurons may die or produce new neurons. Axon branches transfer information only to dendrite branches in their proximity. Electrical potential is used for internal processing of neurons and for communication between neurons; we represent it as an integer.

Health, Resistance, Weight and Statefactor

Four variables are incorporated into the CGPCN, representing either fundamental properties of the neurons (health, resistance, weight) or an aid to computational efficiency (statefactor). The values of these variables are adjusted by the CGP programs. The health variable governs replication and/or death of dendrites and connections. The resistance variable controls growth and/or shrinkage of dendrites and axons. The weight is used in calculating the potentials in the network. Each soma has only two variables: health and weight. The statefactor is used to reduce the computational burden by keeping some of the neurons and branches inactive for a number of cycles. Only when the statefactor is zero are neurons and branches considered active and their corresponding programs run. The value of the statefactor is affected indirectly by the CGP programs.
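The per-component state and the statefactor gating can be sketched as follows. The variable names follow the text; the countdown rule for inactive components is a simplification, since in the CGPCN the statefactor is adjusted indirectly by the evolved programs.

```python
from dataclasses import dataclass

# Illustrative sketch of the per-branch state described in the text.
# Variable names follow the paper; the decrement rule is an assumption.

@dataclass
class Branch:
    health: int       # governs replication / death
    resistance: int   # governs growth / shrinkage
    weight: int       # used when computing potentials
    statefactor: int  # activity gate: program runs only when zero

    def is_active(self) -> bool:
        return self.statefactor == 0

    def step(self) -> bool:
        """One network cycle: an inactive branch just counts down;
        an active branch would run its evolved CGP program here."""
        if self.is_active():
            return True   # caller runs the branch's evolved program
        self.statefactor -= 1
        return False
```

A branch created with `statefactor=2` therefore sits out two cycles before its program is run again, which is the intended computational saving.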
Inputs, Outputs and Information Processing in the Network

The external inputs (encoding a simulated potential) are applied to the CGPCN and processed by the axo-synapse program, AS. They are distributed in the network in a similar way to the axon branches of neurons. When AS is executed,



Fig. 1. On the top left a grid is shown containing a single neuron. The rest of the figure gives an exploded view of the neuron. The neuron consists of seven evolved computational functions. Three are electrical and process a simulated potential in the dendrite (D), soma (S) and axo-synapse branch (AS). Three more are developmental in nature and are responsible for the life cycle of neural components (shown in grey): they decide whether dendrite branches (DBL), soma (SL) and axo-synaptic branches (ASL) should die, change, or replicate. The remaining evolved computational function (WP) adjusts synaptic and dendritic weights and is used to decide the transfer of potential from a firing neuron (dashed line emanating from soma) to a neighbouring neuron.

it modifies the potentials of neighbouring active dendrite branches. We obtain output from the CGPCN via dendrite branches. These branches are updated by the AS programs of neurons; after five cycles the potentials produced are averaged, and this value (Fig. 1) is used as the external output. Information processing in the network starts by selecting the list of active neurons in the network and processing them in a random sequence. Each neuron takes the signals from its dendrites by running the electrical dendrite branch program. The signals from the dendrites are averaged and applied to the soma program along with the soma potential. The soma program is executed to obtain the final value of the soma potential, which decides whether the neuron should fire an action potential or not. If the soma fires, an action potential is transferred to other neurons through the axo-synaptic branches. The same process is repeated in all neurons.

CGP Model of Neuron

In our model, neural functionality is divided into three major categories: electrical processing, life cycle and weight processing. These categories are described in detail below.
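A schematic of this processing cycle is given below, with the evolved D and S programs stood in for by simple averaging and a fixed firing threshold. Both stand-ins are assumptions: in the CGPCN these computations are performed by the evolved chromosomes, and the refractory reset is likewise a crude placeholder.

```python
import random

# Schematic of one CGPCN processing cycle: active neurons are visited
# in random order; each runs a dendrite program (here: averaging),
# then a soma program (here: integrate and threshold), and fires if
# the soma potential exceeds the threshold. All numeric rules are
# placeholders for the evolved programs.

THRESHOLD = 100

def run_cycle(neurons, rng=None):
    """neurons: list of dicts with 'dendrite_potentials' (list of int)
    and 'soma_potential' (int). Returns indices of neurons that fired."""
    if rng is None:
        rng = random.Random(0)
    fired = []
    order = list(range(len(neurons)))
    rng.shuffle(order)                      # process neurons in random order
    for i in order:
        n = neurons[i]
        if not n["dendrite_potentials"]:
            continue
        # D-program stand-in: average the dendrite branch potentials
        avg = sum(n["dendrite_potentials"]) // len(n["dendrite_potentials"])
        # S-program stand-in: integrate with the current soma potential
        n["soma_potential"] = (n["soma_potential"] + avg) // 2
        if n["soma_potential"] >= THRESHOLD:
            fired.append(i)                 # AS would now update neighbours
            n["soma_potential"] = 0         # crude refractory reset
    return fired
```

In the real system, each firing neuron would then run its AS programs to push potential into neighbouring dendrite branches, and the life-cycle programs would run for any component whose statefactor is zero.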


Electrical Processing

The electrical processing part is responsible for signal processing inside neurons and for communication between neurons. It consists of the dendrite branch, soma, and axo-synaptic branch electrical chromosomes. The dendrite program D handles the interaction of the dendrite branches belonging to a dendrite. It takes the active dendrite branch potentials and the soma potential as input and updates their values. The statefactor is decreased if the change in potential is large, and vice versa. If any branch is active (i.e. has a statefactor of zero), its life cycle program (DBL) is run; otherwise processing continues with the other dendrites. The soma program S determines the final value of the soma potential after receiving signals from all the dendrites. The processed potential of the soma is then compared with the threshold potential of the soma, and a decision is made whether to fire an action potential or not. If it fires, it is kept inactive (refractory period) for a few cycles by changing its statefactor, the soma life cycle chromosome (SL) is run, and the firing potential is sent to the other neurons by running the AS programs in the axon branches. AS updates neighbouring dendrite branch potentials and the axo-synaptic potential. The statefactor of the axo-synaptic branch is also updated. If the axo-synaptic branch is active, its life cycle program (ASL) is executed. After this, the weight processing program (WP) is run, which updates the weights of neighbouring branches (those sharing the same grid square). The processed axo-synaptic potential is then assigned to the dendrite branch with the largest updated weight.

Life Cycle of Neuron

This part is responsible for the replication or death of neurons and neurite branches, and for the growth and migration of neurite branches. It consists of three life cycle chromosomes responsible for the development of the neuron and its neurites. The two branch chromosomes update the resistance and health of the branch.
A change in the resistance of a neurite branch is used to decide whether it will grow, shrink, or stay at its current location. The updated value of a neurite branch's health decides whether it produces offspring, dies, or persists with the updated health value. If the updated health is above a certain threshold the branch is allowed to produce offspring, and if it is below a certain threshold the branch is removed from the neurite. Producing offspring results in a new branch at the same CGPCN grid point, connected to the same neurite (axon or dendrite). The soma life cycle chromosome produces updated values of the health and weight of the soma as output. The updated value of the soma's health decides whether the soma should produce offspring, die, or continue as it is. If the updated health is above a certain threshold the soma is allowed to produce offspring, and if it is below a certain threshold it is removed from the network along with its neurites. If it produces offspring, a new neuron with a random number of neurites is introduced into the network at a different, random location.
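The health-threshold rule above can be sketched as follows. The numeric thresholds are placeholders: in the CGPCN the outcome follows from values produced by the evolved life-cycle chromosomes (DBL, ASL, SL), not from fixed constants.

```python
# Sketch of the three-way life-cycle decision described in the text.
# Threshold values are illustrative assumptions.

GROW_THRESHOLD = 80   # health above this: produce an offspring branch
DIE_THRESHOLD = 20    # health below this: the branch is removed

def branch_fate(health: int) -> str:
    """Map an updated health value to one of the three outcomes."""
    if health > GROW_THRESHOLD:
        return "replicate"   # new branch at the same grid point
    if health < DIE_THRESHOLD:
        return "die"         # branch removed from the neurite
    return "continue"        # branch persists with updated health
```

The same pattern applies at the soma level, except that replication there introduces a whole new neuron (with random neurites) at a different random grid location, and death removes the soma together with all of its neurites.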

3 Experimental Setup

The experiment is organized such that an agent is provided with a CGPCN as its computational network and is allowed to play five games against a minimax-based checkers program (MCP). The initial population consists of five agents (evolutionary strategy 1+λ, with λ set to 4), each starting with a small, randomly generated initial network and a randomly generated genotype. The agents each play five games of checkers against the MCP. In every game the agent starts with the network developed in the previous game (except the first game, in which it begins with a random network) and is allowed to continue developing it during the five-game series. The genotype corresponding to the agent with the highest average fitness at the end of the five games is selected as the parent of the new population, and four offspring are created by mutating the parent. Any learning behaviour acquired by an agent is obtained through the interaction and repeated running of the seven chromosomes within the game scenario. The MCP always plays the first move. The updated board is then applied to an agent's CGPCN. The potentials representing the state of the board are applied to the CGPCN using the axo-synapse (AS) chromosome. Input is in the form of board values: an array of 32 elements, each representing a playable board square. Each of the 32 inputs takes one of five values, I, depending on what is on that square of the board: if empty, I = 0; if the agent's king, I = the maximum value M = 2^32 − 1; if the agent's piece, I = (3/4)M; if an opposing piece, I = (1/2)M; and if an opposing king, I = (1/4)M. The board inputs are applied in pairs to all sixteen locations in the 4x4 CGPCN grid (i.e. two virtual axo-synapse branches in every grid square). The CGPCN is then run for five cycles. During this process it updates the potentials of the dendrite branches acting as the outputs of the network.
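The input encoding just described is concrete enough to transcribe directly; only the square labels (`"empty"`, `"king"`, and so on) are invented names for the five cases.

```python
# Transcription of the board-input encoding described above: each of
# the 32 playable squares maps to one of five potentials, and the
# potentials are applied in pairs to the 16 squares of the 4x4 grid.

M = 2**32 - 1  # maximum input value

def encode_square(contents: str) -> int:
    """contents is one of 'empty', 'king', 'piece', 'opp_piece',
    'opp_king' (own pieces seen from the agent's point of view)."""
    return {
        "empty":     0,
        "king":      M,
        "piece":     (3 * M) // 4,
        "opp_piece": M // 2,
        "opp_king":  M // 4,
    }[contents]

def encode_board(squares):
    """squares: 32 playable-square labels -> 16 pairs of potentials,
    one pair (two virtual axo-synapse branches) per grid square."""
    assert len(squares) == 32
    potentials = [encode_square(s) for s in squares]
    return [(potentials[2 * i], potentials[2 * i + 1]) for i in range(16)]
```

Integer division stands in for the fractions of M, matching the paper's use of integer potentials throughout the network.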
Output takes two forms: one output is used to select the piece to move, and the second to decide where that piece should move. Each piece on the board has an output dendrite branch in the CGPCN. All pieces are assigned a unique ID representing the CGPCN grid square where their branch is located. Each of these branches has a potential, which is updated during CGPCN processing. The potentials determine which piece moves: the piece with the highest potential is the one moved; however, if any pieces are in a position to jump, the one of those with the highest potential moves instead. If a piece is a king and can jump then, according to the rules of checkers, this takes priority; if two pieces are kings and each could jump, the king with the higher potential makes the jumping move. In addition, there are five output dendrite branches distributed at random locations in the CGPCN grid. The average value of these branch potentials determines the direction of movement for the piece. Whenever a piece is removed, its dendrite branch is removed from the CGPCN grid. The game is stopped if either the CGPCN of an agent or that of its opponent dies (i.e. all its neurons or neurites die), or if all of its or its opponent's pieces are taken,


or if the agent or its opponent cannot move any more, or if the allotted number of moves for the game has been taken.

CGP Computational Network (CGPCN) Setup

The CGPCN is arranged as follows for this experiment. Each player's CGPCN has neurons and branches located in a 4x4 grid. The initial number of neurons is 5. The maximum number of dendrites is 5. The maximum number of dendrite and axon branches is 200. The maximum branch statefactor is 7. The maximum soma statefactor is 3. The mutation rate is 5%. The maximum number of nodes per chromosome is 200. The maximum number of moves is 20 for each player.

Fitness Calculation

Both the agent and the software opponent are allowed to play a limited number of moves, and the agent's fitness is computed at the end of this period using the following equation:

Fitness = A + 200 NK + 100 NM − 200 NOK − 100 NOM + NMOV,

where NK and NM are the numbers of kings and men of the current player, NOK and NOM are the numbers of kings and men of the opposing player, and NMOV is the total number of moves played. A is 1000 for a win and zero for a draw. To avoid spending too much computational time assessing the abilities of poor game-playing agents, we impose a maximum number of moves. If this number of moves is reached before either agent wins the game, then A = 0 and the number and type of the remaining pieces decide the fitness value of the agent.
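The fitness calculation is a direct transcription of the equation above:

```python
# Fitness = A + 200*NK + 100*NM - 200*NOK - 100*NOM + NMOV,
# exactly as defined in the text.

def fitness(a, n_k, n_m, n_ok, n_om, n_mov):
    """a: 1000 for a win, 0 for a draw or move-limit cut-off;
    n_k, n_m: own kings and men; n_ok, n_om: opponent kings and men;
    n_mov: total number of moves played."""
    return a + 200 * n_k + 100 * n_m - 200 * n_ok - 100 * n_om + n_mov
```

For example, an agent that wins with 2 kings and 5 men against 1 king and 4 men after 18 moves scores 1000 + 400 + 500 − 200 − 400 + 18 = 1318.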

3.1 Results and Analysis

From the fitness graph shown in figure 2 (left), it is difficult to assess whether any learning has taken place, because the MCP plays at a much higher level and the evolved agent starts game 1 with a random network, which the evolved programs then develop over the five-game series. It is, in general, quite difficult to learn from a highly skilled opponent.

Fig. 2. Fitness of the CGPCN playing checkers against the MCP over the course of evolution (left), and average fitness variation of a highly evolved agent against a less evolved agent (right)


We tested whether the agents' level of play improved with evolution. We played well-evolved agents against less evolved ones and found that the well-evolved agent almost always beats the less evolved one; some games end in a draw, but in those cases the well-evolved agent finishes with more kings and pieces than the less evolved agent. Figure 2 (right) shows the variation in fitness of a well-evolved agent (from the 2000th generation), playing white, against less evolved agents from various generations (140 games). From the graph it is clear that the highly evolved agent always beats the less evolved ones. However, the well-evolved player appears to play better against players from later generations than against those from earlier ones. The reasons for this are as follows. The MCP updates its database after every game and plays a completely different game even against the same opponent. This means it is difficult for an agent to maintain a previously attained fitness in a subsequent game: to achieve a higher fitness, the agent has to play a slightly different game from its earlier ones. When an evolved player plays against close ancestral relatives, they tend to play in a similar way, making it easy for the well-evolved agent to beat them, as is evident from figure 2 (right); it finds it more difficult to win when its opponents are distant ancestral relatives, since they play in a very different way. In a further experiment we studied the performance of evolved agents over a sequence of 500 games. One agent began each game with a random network, while the other (more evolved) agent was allowed to retain its network and develop it over all the games. A well-evolved agent from the 2000th generation was played against an agent from the 50th generation.
We set the rules of the game such that both agents are allowed to play 20 moves each. The 50th-generation agent always begins each game with the same initial random network on which it was trained, whereas the 2000th-generation agent continues with the network it had at the end of the previous game. The 2000th-generation agent plays white and the 50th-generation agent plays black. The genotypes of the agents are kept fixed throughout this experiment; only the architecture develops (the numbers of neurons and neurites change) and the shape changes (neurite branches continue to shrink and grow) during the course of a game. At the end of each game we calculated the fitness of both agents and plotted them against each other. Figure 3 (left) shows the fitness variations of the two agents calculated at the end of every game, and Figure 3 (right) the fitness averaged over five consecutive games. The fitness of the 2000th-generation agent, shown as the solid line, is always above zero, showing clearly that its performance is better throughout the 500 games while it develops its network, though not that it continues to improve. Figure 4 (left) shows the accumulated fitness of the well-evolved agent over the 500 games. From these graphs it is evident that although the network changes during every game, it sustains its ability to attain higher fitness than the less evolved agent, demonstrating that it does not forget how to play checkers well. Figure 3 (left) also shows peaks at various stages; these are the cases in which the highly evolved agent beat its opponent within 20 moves. This is very interesting since, while being evolved, the agent was


Fig. 3. Fitness variation (left) of a highly evolved agent against a less evolved agent, and average fitness over every five games (right)

Fig. 4. Graph showing the accumulated fitness (left), and network statistics (right)

never able to beat its opponent within 20 moves, as it was trained against a highly skilled checkers program (the MCP); but during the development stage, when it is allowed to play more than five games, it is able to do so. As it continues to develop and play without evolution, its ability to beat the opponent within 20 moves appears to increase, as is evident from the average fitness staying above the x-axis in Figure 3 (right). Figure 4 (right) shows the variation in the numbers of neurons and neurites of the well-evolved agent (2000th generation) over all the games. From the figure it is evident that initially the network changes considerably, at some point reducing to a minimal structure, and then stabilizes to a structure with a fixed number of neurons and neurites. This is quite interesting: the network is still allowed to develop, yet it stabilizes itself, while its branches continue to move around and the weights of its neurites continue to be updated. The network is not trained to find a small network, but when it plays games continuously it keeps changing until it finds a minimal, suitable structure that can play well. This is evident from the accumulated fitness graph in figure 4 (left). From a close analysis we found that every time a new game starts, the network, although updated, repeats its initial moves; these cause it to make two double jumps, taking four of the opponent's pieces at the start. This is interesting behaviour, as the opponent always starts with the same initial structure and will repeat the


same moves if the developed (well-evolved) agent does too. A number of games were studied, starting from game 100 (when the network appears to stabilize); the agent repeats its first 8 moves almost every time, and this causes it to take two double jumps over the opponent, giving it an extra advantage. The developed agent does not know when one game ends and another begins, yet it makes the same initial moves with a different network, forcing the opponent to repeat the same mistakes and lose the game. This suggests that the agent responds to changes in board positions and is able to make the same moves with a different network. It demonstrates that stable behaviour can be obtained even while the CGPCN is changing.

4 Conclusion

We have described a neuro-inspired developmental approach to constructing a new kind of computational neural architecture which has the potential to learn through experience. We found that the neural structures controlling the agents grow and change in response to their behaviour, their interactions with each other and with the environment, and allow them to learn and exhibit intelligent behaviour. We used a technique called CGP to encode and evolve seven computational functions inspired by the biological neuron. The eventual aim is to see whether it is possible to evolve a network that can learn by experience.

References

1. Marcus, G.: The Birth of the Mind. Basic Books (2004)
2. Smythies, J.: The Dynamic Neuron. Bradford Books (2002)
3. Rose, S.: The Making of Memory: From Molecules to Mind. Vintage (2003)
4. Koch, C., Segev, I.: The role of single neurons in information processing. Nature Neuroscience Supplement 3, 1171–1177 (2000)
5. Kandel, E.R., Schwartz, J.H., Jessell, T.M.: Principles of Neural Science, 4th edn. McGraw-Hill, New York (2000)
6. Bestman, J., Santos Da Silva, J., Cline, H.: Dendrite development. In: Dendrites. Oxford University Press, Oxford (2008)
7. Khan, G.: Evolution of Neuro-inspired Developmental Programs Capable of Learning. PhD thesis, Department of Electronics, University of York (2008)
8. Koza, J.: Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, Cambridge (1992)
9. Miller, J.F., Thomson, P.: Cartesian genetic programming. In: Poli, R., Banzhaf, W., Langdon, W.B., Miller, J., Nordin, P., Fogarty, T.C. (eds.) EuroGP 2000. LNCS, vol. 1802, pp. 121–132. Springer, Heidelberg (2000)
10. Khan, G., Miller, J., Halliday, D.: Coevolution of intelligent agents using cartesian genetic programming. In: Proc. GECCO, pp. 269–276 (2007)
11. Shannon, C.: Programming a computer for playing chess. Phil. Mag. 41, 256–275 (1950)
12. Dimand, R.W., Dimand, M.A.: A History of Game Theory: From the Beginnings to 1945, vol. 1. Routledge (1996)
