Process Theory for Supervisory Control of Stochastic Systems with Data

Jasen Markovski

Abstract— We propose a process theory for supervisory control of stochastic nondeterministic plants with data-based observations. The Markovian process theory with data relies on the notion of Markovian partial bisimulation to capture controllability of stochastic nondeterministic systems. It presents a theoretical basis for a model-based systems engineering framework built on state-of-the-art tools: we employ Supremica for supervisor synthesis and MRMC for stochastic model checking and performance evaluation. We present the process theory and discuss the implementation of the framework.

I. INTRODUCTION

The complexity of high-tech systems is constantly increasing due to higher demands for quality, performance, safety, and ease of use, bringing new challenges to control software development. Software engineering techniques that traditionally rely on iterative (re)coding-testing loops are not entirely adequate for this challenge, as control software constantly evolves due to design changes in the control requirements. This issue gave rise to supervisory control theory, which investigates automated synthesis of supervisory control software based on discrete-event models of the uncontrolled system and the control requirements [1], [2]. Supervisory controllers coordinate high-level system behavior: they observe the discrete behavior of the system by receiving sensor signals from ongoing activities, decide which future activities are allowed, and send back control signals to the hardware actuators. The model of the uncontrolled system is referred to as the plant, whereas the model of the supervisory controller is referred to as the supervisor. Under the assumption that a supervisory controller reacts sufficiently fast on machine input, one can model this supervisory control feedback loop as a pair of synchronizing processes, which form the supervised plant [1], [2]. Traditionally, the activities of the machine are modeled as discrete events, and the plant is modeled by the traces that these events form. It is typically given as a set of synchronizing processes whose joint recognized language corresponds to the observed traces. The events are split into controllable events, which model interaction with the actuators of the machine, and uncontrollable events, which model observation of sensors.
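The feedback loop described above can be sketched as two synchronizing event processes. The following is a minimal illustration; the machines, event names, and dictionary representation are assumptions made for the sketch, not part of the paper's formalism.

```python
# Hypothetical sketch: a plant and a supervisor as synchronizing event processes.
# State machines are dicts: state -> {event: next_state}; names are illustrative.
CONTROLLABLE = {"start"}              # actuator commands
UNCONTROLLABLE = {"done", "error"}    # sensor observations

plant = {
    "idle": {"start": "busy"},
    "busy": {"done": "idle", "error": "down"},
    "down": {},
}

# The supervisor may refuse 'start' in some states but never blocks sensor events.
supervisor = {
    "s0": {"start": "s1", "done": "s0", "error": "s0"},
    "s1": {"done": "s0", "error": "s0"},  # no 'start': disabled while busy
}

def supervised_step(ps, ss, event):
    """One step of the feedback loop: both components must agree on the event."""
    if event in plant[ps] and event in supervisor[ss]:
        return plant[ps][event], supervisor[ss][event]
    return None  # event disabled (only acceptable for controllable events)

# 'start' is enabled initially, then refused until 'done' is observed.
assert supervised_step("idle", "s0", "start") == ("busy", "s1")
assert supervised_step("busy", "s1", "start") is None
```

Here the supervisor restricts only the controllable event, which is exactly the discipline formalized in the remainder of this section.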
The supervisor thus has the option to disable controllable events, by not synchronizing with them, in case they lead to unsafe states identified by the control requirements, but it must always enable available uncontrollable events by synchronizing with them. The existence of a supervisor under these restrictions is ensured by a set of controllability conditions.

To increase modeling convenience and expressiveness, several extensions of the theory followed, with respect to both functionality and quantitative behavior. State-based control requirements increase the observational power of the supervisor, allowing it to directly observe the state of the plant [3], [4]. Further extensions deal with parameterized systems comprising variables or data elements, aiming to improve supervisory control theory on two fronts: more concise specifications due to parameterization of systems [5], [6], and greater expressiveness and modeling convenience [7], [8]. The extensions range over the most prominent models of discrete-event systems, like finite-state machines [5], labeled transition systems [7], and automata [8]. In addition, depending on the model at hand, the notion of language controllability [1], [2] for deterministic systems is extended to controllability of parameterized languages [5], [7], whereas the notion of state controllability [9], [10], aimed at nondeterministic systems, is extended with data for extended finite automata [6], [11]. On another front, several extensions introduced quantitative aspects, like stochastic delays [12], [13], in order to ascertain that performance or reliability requirements are met as well. The problem of optimality has also been tackled in the field of performance evaluation, employing the widespread class of Markov decision processes [14]; there, the control problem is to schedule the control actions such that some performance measure is optimized. Stochastic-game variants of the problem [15], which specify the control strategy using stochastic extensions of temporal logics, are also emerging in the formal methods community [16], [17], [18]. In this paper, we propose a Markovian process theory with data, in which controllability of stochastic nondeterministic systems with data is captured by means of a behavioral relation relying on (Markovian) partial bisimulation [19], [20], [21].

J. Markovski is with the Department of Mechanical Engineering, Eindhoven University of Technology, The Netherlands, [email protected]. Supported by Dutch NWO project ProThOS, no. 600.065.120.11N124.
The theory presents a sound and complete basis for a model-based systems engineering framework that relies on state-of-the-art tools. We employ Supremica [22] for supervisory controller synthesis, whereas we rely on the stochastic model checker MRMC [23] for validation of performance and reliability requirements. Our framework extends and subsumes the prior model-based synthesis frameworks of [4], [21], which deal with separate proposals for state-based supervision and performance evaluation, respectively.

II. MARKOVIAN PROCESS ALGEBRA WITH DATA

We develop a Markovian process algebra with data, suitable for modeling supervisory control loops comprising nondeterministic parameterized stochastic plants and supervisors with data-based observations. To this end, the process

algebra encompasses: successful termination, which models the final or marked states needed for nonblocking supervision [1], [2]; action prefixes with data assignment, which model discrete-event behavior and cater for data dynamics; Markovian delays, which model stochastic behavior and allow for a syntactic treatment [21] due to their memoryless property [14], [24]; guarded commands, which condition termination, labeled transitions, and stochastic transitions on data assignments and are employed to model supervision; sequential composition, which supports the definition of iteration; iteration, which specifies finite recursive processes; and parallel composition with synchronization in the vein of [1], [2], which models the coupling of the supervisor and the plant in the supervisory control feedback loop. The set of data elements is given by D, where only finite integer and enumerated types are currently supported by the synthesis tool [22]. The set of data variables is denoted by V, and by E we denote data expressions involving standard arithmetical operations [22], evaluated with respect to e : E → D. The guarded commands are given as Boolean formulas, where the logical operators are given by {¬, ∧, ∨, ⇒}, denoting negation, conjunction, disjunction, and implication, respectively, and the atomic propositions are formed by the predicates from the set {<, ≤, =, ≠, ≥, >} between data variables and data elements. We use B to denote the obtained Boolean expressions, evaluated with respect to a given valuation v : B → {ff, tt}, where ff denotes the logical value false and tt denotes true. We update variables by a partial variable update function f : V ⇀ D, which is coupled with the action transitions that are labeled by actions in the set A. The set of process terms P is induced by p, given by

  p ::= 0 | 1 | a_f.p | λ.p | φ :→ p | p + p | p · p | p* | p_A‖_B p,

where a ∈ A, f : V ⇀ E, λ ∈ R with λ > 0, φ ∈ B, and A, B ⊆ A.
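An illustrative encoding of this grammar as an abstract syntax tree may look as follows. The class names and representations (e.g., guards as Python predicates, updates as dictionaries of constants) are assumptions made for the sketch, not the paper's implementation.

```python
# Illustrative AST for p ::= 0 | 1 | a_f.p | lambda.p | phi :-> p
#                          | p + p | p . p | p* | p_A||_B p
from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, Union

@dataclass(frozen=True)
class Deadlock:                 # 0: no actions, no termination
    pass

@dataclass(frozen=True)
class Skip:                     # 1: successful termination option
    pass

@dataclass(frozen=True)
class ActPrefix:                # a_f.p: action a with partial update f
    action: str
    update: Dict[str, int]      # f : V -> E, here with constant expressions
    cont: "Proc"

@dataclass(frozen=True)
class MarkovPrefix:             # lambda.p with lambda > 0
    rate: float
    cont: "Proc"

@dataclass(frozen=True)
class Guard:                    # phi :-> p
    cond: Callable[[Dict[str, int]], bool]
    cont: "Proc"

@dataclass(frozen=True)
class Choice:                   # p + q
    left: "Proc"
    right: "Proc"

@dataclass(frozen=True)
class Seq:                      # p . q
    first: "Proc"
    second: "Proc"

@dataclass(frozen=True)
class Star:                     # p*
    body: "Proc"

@dataclass(frozen=True)
class Par:                      # p_A||_B q: synchronize on A ∩ B
    left: "Proc"
    A: FrozenSet[str]
    right: "Proc"
    B: FrozenSet[str]

Proc = Union[Deadlock, Skip, ActPrefix, MarkovPrefix, Guard, Choice, Seq, Star, Par]

# A small term: the guard x = 0 enables action 'a' that sets x := 1.
term = Guard(lambda v: v["x"] == 0, ActPrefix("a", {"x": 1}, Skip()))
assert term.cond({"x": 0}) and isinstance(term.cont.cont, Skip)
```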
Each process p ∈ P is coupled with a global variable assignment environment that is used to evaluate the guards and keeps track of updated variables, notation ⟨p, (σ1, σ2)⟩ ∈ P × Σ for Σ = (V → E) × 2^V. By σ1 : V → E we denote the assignment of the variables, used to consistently evaluate the guards, whereas σ2 ⊆ V keeps track of the updated variables, which is needed for correct synchronization. We write σ = (σ1, σ2) for σ ∈ Σ when the components of the environment are not explicitly required. The initial assignment provides the initial values of all variables that the process comprises. The process theory has two constants: 0 denotes deadlock, which cannot execute any action, and 1 denotes the option to successfully terminate. The action-prefixed process a_f.p executes the action a, while assigning values to the variables that need to be updated according to f, and continues behaving as p. The Markovian prefix λ.p enables a time delay that is distributed according to the negative exponential distribution with rate λ, for a positive λ ∈ R, and after expiration of the delay it continues behaving as p. The guarded command φ :→ p specifies a guard φ ∈ B that guards a process p ∈ P. If the guard evaluates successfully, the process continues behaving as p; otherwise, it deadlocks. The sequential composition p · q executes an action of the first process or, if the first process successfully terminates, continues to behave as the second. The iteration p* specifies recursive behavior and unfolds with respect to the sequential composition. The alternative composition p + q makes a nondeterministic choice between action prefixes and possibly stochastic delays, and a Markovian choice [14] between stochastic delays; by executing an action or a stochastic delay it continues to behave as the remainder of p or q. The parallel composition p_A‖_B q synchronizes labeled transitions on the actions in A ∩ B and interleaves those in (A \ B) ∪ (B \ A). Markovian delays are always interleaved. Typical use of the parallel composition involves the alphabets of the processes p and q, given by A_p and A_q, respectively. In that case we simply write p ‖ q for p_{A_p}‖_{A_q} q, when clear from the context. We give semantics in terms of Interactive Markov chains [24] coupled with a variable assignment, in order to provide for compositional reasoning with stochastic systems and to enable data elements. The states of the Interactive Markov chains with data are labeled by the process terms themselves, and the dynamics of a process is given by: a successful termination option predicate ↓ ⊆ P × Σ, which plays the role of the final or marked states needed for nonblocking supervision [1], [2]; an action transition relation −→ ⊆ (P × Σ) × A × (P × Σ), which models discrete-event behavior; and a Markovian delay transition relation ↦ ⊆ (P × Σ) × R × (P × Σ), which models stochastic behavior. We employ infix notation and write ⟨p, σ⟩↓, ⟨p, σ⟩ −a→ ⟨p', σ'⟩, and ⟨p, σ⟩ ↦λ ⟨p', σ'⟩. To present the updating of the data variables concisely, we write D(f) for the domain of a function f, and f|_C for the restriction of f to the domain C ⊆ D(f). Also, we introduce the notation f{f1}...{fn}, where f : A → B and fi : A ⇀ B for 1 ≤ i ≤ n are partial functions with mutually disjoint domains.
For every x ∈ A, we have f{f1}...{fn}(x) = fj(x) if there exists j with 1 ≤ j ≤ n such that x ∈ D(fj), and f{f1}...{fn}(x) = f(x) otherwise. We define ↓, −→, and ↦ using structural operational semantics, given by the operational rules in Fig. 1, where the premise of a rule gives the conditions for the resulting transition to be enabled. The variable assignments are evaluated for the action-labeled transitions and are kept in the global data environment. The underlying behavioral relation that we employ is an extension of the Markovian partial bisimulation [25] that is able to handle the data assignments. We need some preliminary notions. Given a relation R, we write R⁻¹ ≜ {(q, p) | (p, q) ∈ R}. We require that R is reflexive and transitive. Then R⁻¹ and R ∩ R⁻¹ are also reflexive and transitive. Moreover, R ∩ R⁻¹ is symmetric, making it an equivalence. We employ this equivalence to ensure that the accumulated exit rates of equivalent states towards the same equivalence classes coincide in a given valuation, as in the standard definition of Markovian bisimulation [24]. For p, p' ∈ P, σ ∈ Σ, and C ⊆ P, we define rate(p, p', σ) ≜ Σ{λ | ⟨p, σ⟩ ↦λ ⟨p', σ⟩} and rate(p, C, σ) ≜ Σ_{p'∈C} rate(p, p', σ).

Fig. 1 comprises the following operational rules, grouped per operator (symmetric variants share a number pair):

(1) ⟨1, σ⟩↓.
(2) ⟨a_f.p, (σ1, σ2)⟩ −a→ ⟨p, (σ1{X ↦ e(f(X)) | X ∈ D(f)}, D(f))⟩.
(3) If λ > 0, then ⟨λ.p, σ⟩ ↦λ ⟨p, σ⟩.
(4, 5) If ⟨p, σ⟩↓, then ⟨p + q, σ⟩↓ (and symmetrically for q).
(6, 7) If ⟨p, σ⟩ −a→ ⟨p', σ'⟩, then ⟨p + q, σ⟩ −a→ ⟨p', σ'⟩ (and symmetrically for q).
(8, 9) If ⟨p, σ⟩ ↦λ ⟨p', σ'⟩, then ⟨p + q, σ⟩ ↦λ ⟨p', σ'⟩ (and symmetrically for q).
(10) If ⟨p, σ⟩↓ and ⟨q, σ⟩↓, then ⟨p · q, σ⟩↓.
(11) If ⟨p, σ⟩↓ and ⟨q, σ⟩ −a→ ⟨q', σ'⟩, then ⟨p · q, σ⟩ −a→ ⟨q', σ'⟩.
(12) If ⟨p, σ⟩ −a→ ⟨p', σ'⟩, then ⟨p · q, σ⟩ −a→ ⟨p' · q, σ'⟩.
(13) If ⟨p, σ⟩↓ and ⟨q, σ⟩ ↦λ ⟨q', σ'⟩, then ⟨p · q, σ⟩ ↦λ ⟨q', σ'⟩.
(14) If ⟨p, σ⟩ ↦λ ⟨p', σ'⟩, then ⟨p · q, σ⟩ ↦λ ⟨p' · q, σ'⟩.
(15) ⟨p*, σ⟩↓.
(16) If ⟨p, σ⟩ −a→ ⟨p', σ'⟩, then ⟨p*, σ⟩ −a→ ⟨p' · p*, σ'⟩.
(17) If ⟨p, σ⟩ ↦λ ⟨p', σ'⟩, then ⟨p*, σ⟩ ↦λ ⟨p' · p*, σ'⟩.
(18, 19) If ⟨p, σ⟩ −a→ ⟨p', σ'⟩ and a ∈ A \ B, then ⟨p_A‖_B q, σ⟩ −a→ ⟨p'_A‖_B q, σ'⟩ (and symmetrically for q with a ∈ B \ A).
(20, 21) If ⟨p, σ⟩ ↦λ ⟨p', σ'⟩, then ⟨p_A‖_B q, σ⟩ ↦λ ⟨p'_A‖_B q, σ'⟩ (and symmetrically for q).
(22) If ⟨p, σ⟩ −a→ ⟨p', (σ1', σ2')⟩, ⟨q, σ⟩ −a→ ⟨q', (σ1'', σ2'')⟩, a ∈ A ∩ B, and σ1'|_{σ2'∩σ2''} = σ1''|_{σ2'∩σ2''}, then ⟨p_A‖_B q, σ⟩ −a→ ⟨p'_A‖_B q', (σ1'{σ1''|_{σ2''\σ2'}}, σ2' ∪ σ2'')⟩.
(23) If ⟨p, σ⟩↓ and ⟨q, σ⟩↓, then ⟨p_A‖_B q, σ⟩↓.
(24) If ⟨p, σ⟩↓ and v(φ) = tt, then ⟨φ :→ p, σ⟩↓.
(25) If ⟨p, σ⟩ −a→ ⟨p', σ'⟩ and v(φ) = tt, then ⟨φ :→ p, σ⟩ −a→ ⟨p', σ'⟩.
(26) If ⟨p, σ⟩ ↦λ ⟨p', σ'⟩ and v(φ) = tt, then ⟨φ :→ p, σ⟩ ↦λ ⟨p', σ'⟩.

Fig. 1. Structural operational semantics that induces Interactive Markov chains with data.

Now, we say that a
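As a concrete reading of the auxiliary definitions, the function-override notation f{f1}...{fn} and the accumulated rates can be prototyped as follows. This is a sketch under assumed representations: partial updates as dictionaries and the Markovian transition relation as an explicit list.

```python
def override(f, *updates):
    """f{f1}...{fn}: return fj(x) if x lies in the domain of some fj, else f(x).
    The domains of the partial updates are assumed mutually disjoint."""
    def g(x):
        for fj in updates:
            if x in fj:
                return fj[x]
        return f(x)
    return g

def rate(p, p_target, sigma, mtrans):
    """Accumulated rate from <p, sigma> to <p_target, sigma>; mtrans is a list
    of Markovian transitions (source, rate, target, environment)."""
    return sum(l for (q, l, q2, s) in mtrans
               if q == p and q2 == p_target and s == sigma)

def rate_to_class(p, C, sigma, mtrans):
    """rate(p, C, sigma): sum of rate(p, p', sigma) over p' in C."""
    return sum(rate(p, q, sigma, mtrans) for q in C)

# Toy chain: two rate-2.0 transitions from 'p' into the class {'q1', 'q2'}.
mtrans = [("p", 2.0, "q1", "s"), ("p", 2.0, "q2", "s"), ("q1", 1.0, "p", "s")]
assert rate_to_class("p", {"q1", "q2"}, "s", mtrans) == 4.0

f = override(lambda x: 0, {"x": 5})
assert (f("x"), f("y")) == (5, 0)
```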
reflexive and transitive relation R ⊆ P × P is a Markovian partial bisimulation for the bisimulation action set B ⊆ A if, for all (p, q) ∈ R and σ ∈ Σ, it holds that:
1) ⟨p, σ⟩↓ if and only if ⟨q, σ⟩↓;
2) if ⟨p, σ⟩ −a→ ⟨p', σ'⟩ for a ∈ A, then there exists q' ∈ P such that ⟨q, σ⟩ −a→ ⟨q', σ'⟩ and (p', q') ∈ R;
3) if ⟨q, σ⟩ −b→ ⟨q', σ'⟩ for b ∈ B, then there exists p' ∈ P such that ⟨p, σ⟩ −b→ ⟨p', σ'⟩ and (p', q') ∈ R;
4) rate(p, C, σ) = rate(q, C, σ) for all C ∈ P/(R ∩ R⁻¹).
If R is a Markovian partial bisimulation such that (p, q) ∈ R, then p is partially bisimilar to q with respect to B, and we write p ≤_B q. It is not difficult to show that Markovian partial bisimilarity is a preorder on the process terms in P and a precongruence for the introduced operators, following [25]. To relate to language-based notions of controllability and partial observability, we define a trace transition relation ⟨p, σ⟩ −t→* ⟨p', σ'⟩ for t = x1...xn ∈ (A ∪ R)*, where for n = 0 we have the empty trace t = ε with ⟨p, σ⟩ −ε→* ⟨p, σ⟩, whereas ⟨p, σ⟩ ⇢x1 ⟨p1, σ1⟩ ⇢x2 ⟨p2, σ2⟩ ⇢x3 ... ⇢xn ⟨p', σ'⟩ for n > 0 and some p1, ..., pn−1 ∈ P, σ1, ..., σn−1 ∈ Σ, and x1, ..., xn ∈ A ∪ R, where ⇢x denotes −x→ if x ∈ A, and ↦x if x ∈ R. The prefix-closed language generated by ⟨p, σ⟩ is L(⟨p, σ⟩) = {t ∈ (A ∪ R)* | ⟨p, σ⟩ −t→* ⟨p', σ'⟩ for some ⟨p', σ'⟩}.

III. SUPERVISORY CONTROLLER SYNTHESIS

We split the set of actions A into controllable actions C and uncontrollable actions U such that C ∩ U = ∅ and C ∪ U = A. Recall that marked states are modeled by adding a successful termination option. We note, however, that in the process theory successful termination plays the additional role of enabling the sequential composition of processes, which is not present in the automata theory of [1], [2]. We can specify the plant by any process p ∈ P.
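Since the controllability condition used below is phrased in terms of this preorder, it may help to see the four defining conditions checked naively on a small finite state space. The following is a sketch only; the representations (sets of pairs, adjacency dictionaries) are assumptions made for illustration.

```python
# Naive check that a candidate relation R is a Markovian partial bisimulation
# for bisimulation action set B, over an explicitly given finite system.
def is_partial_bisim(R, B, states, term, atrans, mrate):
    """R: set of state pairs; term: terminating states; atrans: state ->
    list of (action, target); mrate: (state, target) -> accumulated rate."""
    R = set(R)
    if any((s, s) not in R for s in states):                      # reflexivity
        return False
    if any((x, w) not in R for (x, y) in R for (z, w) in R if y == z):
        return False                                              # transitivity
    eq = {(p, q) for (p, q) in R if (q, p) in R}                  # R ∩ R^-1
    classes, seen = [], set()
    for s in states:                                              # P/(R ∩ R^-1)
        if s not in seen:
            c = frozenset(t for t in states if (s, t) in eq)
            classes.append(c)
            seen |= c
    def rate(p, C):
        return sum(mrate.get((p, t), 0.0) for t in C)
    for (p, q) in R:
        if (p in term) != (q in term):                            # condition 1
            return False
        for (a, p2) in atrans.get(p, []):                         # condition 2
            if not any(b == a and (p2, q2) in R for (b, q2) in atrans.get(q, [])):
                return False
        for (b, q2) in atrans.get(q, []):                         # condition 3
            if b in B and not any(c == b and (p2, q2) in R
                                  for (c, p2) in atrans.get(p, [])):
                return False
        if any(rate(p, C) != rate(q, C) for C in classes):        # condition 4
            return False
    return True

# 'x' is simulated by 'y'; with B = ∅, conditions 2-3 reduce to plain simulation.
atrans = {"x": [("a", "x")], "y": [("a", "y"), ("c", "y")]}
R = {("x", "x"), ("y", "y"), ("x", "y")}
assert is_partial_bisim(R, set(), {"x", "y"}, set(), atrans, {})
assert not is_partial_bisim(R, {"c"}, {"x", "y"}, set(), atrans, {})
```

The second assertion fails condition 3: once "c" enters the bisimulation action set, "y" offers a "c"-transition that "x" cannot match.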
We require, however, that the supervisor is a deterministic process [20], which sends feedback to the plant in terms of synchronizing controllable events. The supervisor should not alter the state of the plant in any other way, i.e., it comprises no variable assignments, nor stochastic delays [25]. It relies on data observation to make supervision decisions, in the vein of [6]. To summarize, the supervisor observes the state of the plant, identified by the values of the shared variables, and synchronizes on controllable events, while always enabling uncontrollable events. Consequently, the supervisor need not keep a history of events, so it can be defined as an iterative process

  s = (Σ_{c∈C} φ_c :→ c.1 + Σ_{u∈U} u.1 + ψ :→ 1)*,

for φ_c, ψ ∈ B. This supervisor employs data assignment observation to identify the state of the plant and sends back feedback regarding controllable events by synchronizing on self-loops, as specified by Σ_{c∈C} φ_c :→ c.1. Moreover, it always enables the uncontrollable events, as specified by Σ_{u∈U} u.1. It can potentially disable undesired termination options in states identified by ψ ∈ B. The guards φ_c for c ∈ C and ψ depict the supervision actions [6]. Now, we can specify the supervised plant as p ‖ s. To ensure that no uncontrollable events are disabled by the supervisor, we employ partial bisimilarity and require that p ‖ s ≤_U p. This relation subsumes the definitions of standard language-based controllability [1], [2], state controllability with data [6], and controllability of Markovian languages [13]. We consider data-based control requirements, stated in terms of Boolean expressions ranging over the shared variables, possibly specifying which events should be enabled or disabled. For a setting with event-based control requirements, we refer the interested reader to [20], whereas for state-based control requirements we refer to [25]. The data-based control requirements, denoted by the set S, are induced by

  r ::= −a→ ⇒ φ | φ ⇒ −a↛ | φ,

for a ∈ A and φ ∈ B.
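For a deterministic supervisor, the condition p ‖ s ≤_U p specializes to the check that no reachable state of the composition loses an uncontrollable transition that the plant offers. A minimal sketch of that check, with illustrative machines and event names, could read:

```python
# Sketch: verify that a deterministic supervisor never blocks an uncontrollable
# event of the plant in any reachable composed state. Names are illustrative.
def never_disables_uncontrollable(plant, sup, U, p0, s0):
    """plant, sup: state -> {event: next_state}; U: uncontrollable events."""
    frontier, seen = [(p0, s0)], {(p0, s0)}
    while frontier:
        p, s = frontier.pop()
        for u in plant[p]:
            if u in U and u not in sup[s]:
                return False              # a sensor event would be blocked
        for e, p2 in plant[p].items():
            if e in sup[s]:
                nxt = (p2, sup[s][e])
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return True

plant = {"idle": {"start": "busy"},
         "busy": {"done": "idle", "error": "down"},
         "down": {}}
good = {"s0": {"start": "s1", "done": "s0", "error": "s0"},
        "s1": {"done": "s0", "error": "s0"}}
bad = {"s0": {"start": "s1", "done": "s0", "error": "s0"},
       "s1": {"done": "s0"}}              # forgets to enable 'error' while busy
U = {"done", "error"}
assert never_disables_uncontrollable(plant, good, U, "idle", "s0")
assert not never_disables_uncontrollable(plant, bad, U, "idle", "s0")
```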
A given control requirement r ∈ S is satisfied with respect to the process p ∈ P in the assignment environment σ ∈ Σ, notation ⟨p, σ⟩ |= r, according to the operational rules depicted in Fig. 2:

(27) If ⟨p, σ⟩ |= ¬φ ⇒ −a↛, then ⟨p, σ⟩ |= −a→ ⇒ φ.
(28) If v(φ) = ff, then ⟨p, σ⟩ |= φ ⇒ −a↛.
(29) If {⟨p', σ'⟩ | ⟨p, σ⟩ −a→ ⟨p', σ'⟩} = ∅, then ⟨p, σ⟩ |= φ ⇒ −a↛.
(30) If v(φ) = tt, then ⟨p, σ⟩ |= φ.

Fig. 2. Satisfiability of state-based control requirements.

To ensure that the requirements are satisfied in every reachable state, we extend |= to |=*, where ⟨p, σ⟩ |=* r if
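A minimal sketch of the satisfaction rules of Fig. 2, together with a reachability closure in the spirit of |=*, is given below. The requirement encoding, the state representation, and the enabled-action map are assumptions made for the sketch.

```python
# Requirement forms: ("enabled_implies", a, phi), ("implies_disabled", phi, a),
# and ("invariant", phi); phi is a predicate on the data valuation sigma.
def sat(state, sigma, enabled, req):
    """enabled(state, sigma) -> set of enabled action labels."""
    kind = req[0]
    if kind == "invariant":                        # rule 30: v(phi) = tt
        return req[1](sigma)
    if kind == "implies_disabled":                 # rules 28 and 29
        _, phi, a = req
        return (not phi(sigma)) or a not in enabled(state, sigma)
    if kind == "enabled_implies":                  # rule 27, via its contrapositive
        _, a, phi = req
        return a not in enabled(state, sigma) or phi(sigma)
    raise ValueError(kind)

def sat_star(init, succ, enabled, req):
    """Reachability closure: the requirement must hold in every reachable
    <state, sigma> pair; succ(state, sigma) -> iterable of successor pairs."""
    frontier, seen = [init], {init}
    while frontier:
        state, sigma = frontier.pop()
        if not sat(state, sigma, enabled, req):
            return False
        for nxt in succ(state, sigma):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True

# 'open' may only be enabled while the hypothetical variable door reads 0;
# sigma is a hashable tuple of (variable, value) pairs.
enabled = lambda p, sigma: {"open"} if dict(sigma)["door"] == 0 else set()
succ = lambda p, sigma: []                         # a single-state toy system
req = ("enabled_implies", "open", lambda s: dict(s)["door"] == 0)
assert sat_star(("p", (("door", 0),)), succ, enabled, req)
```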
⟨p', σ'⟩ |= r for every p' ∈ P such that ⟨p, σ⟩ −t→* ⟨p', σ'⟩ for σ, σ' ∈ Σ and t ∈ L(⟨p, σ⟩). We ensure that the supervised plant respects the data-based control requirements, given by C ⊂ S, by requiring that, for the initial variable assignment σ0, it holds that ⟨p ‖ s, σ0⟩ |=* ∧_{r∈C} r. In addition, a nonblocking supervisor must ensure that every state of the supervised plant can reach a state that has a successful termination option, i.e., for every ⟨p', σ'⟩ ∈ P × Σ and t ∈ L(⟨p ‖ s, σ0⟩) such that ⟨p ‖ s, σ0⟩ −t→* ⟨p', σ'⟩, there exist ⟨p'', σ''⟩ ∈ P × Σ and t' ∈ L(⟨p', σ'⟩) such that ⟨p', σ'⟩ −t'→* ⟨p'', σ''⟩ and ⟨p'', σ''⟩↓ holds. We opt for Supremica [22] as a synthesis tool because it provides great modeling convenience, a range of options with respect to specifying plants with data, and optimized synthesis procedures [11]. The tool, however, does not support direct specification of the data-based control requirements of Fig. 2. Instead, we employ control requirements comprising self-loops that are guarded by a propositional formula related to the original control requirement, similar to the approach of [21]. Once a supervisor is synthesized, we derive the stochastic supervised plant. The performance of the supervised plant is defined only if the action transitions do not introduce real nondeterministic choices [24]. After elimination of the labeled transitions, the result is a labeled Markov chain [23] with state labels that are carried over from the original specification in Supremica. To specify the performance requirements, we employ Continuous Stochastic Logic [26], which is completely supported by MRMC for continuous-time Markov chains and partially supported for Markov decision processes.

IV. CONCLUDING REMARKS

We presented a Markovian process theory with data that can specify supervisory control loops involving stochastic plants and data-based observations.
The theory serves as the basis of a model-based systems engineering framework that couples supervisor synthesis and performance evaluation. It can be employed to derive a continuous-time Markov chain from a supervised stochastic plant, which is subsequently fed to a Markovian model checker.

REFERENCES

[1] P. J. Ramadge and W. M. Wonham, "Supervisory control of a class of discrete-event processes," SIAM Journal on Control and Optimization, vol. 25, no. 1, pp. 206–230, 1987.
[2] C. Cassandras and S. Lafortune, Introduction to Discrete Event Systems. Kluwer Academic Publishers, 2004.

[3] C. Ma and W. M. Wonham, Nonblocking Supervisory Control of State Tree Structures, ser. Lecture Notes in Control and Information Sciences. Springer, 2005, vol. 317.
[4] J. Markovski, D. A. van Beek, R. J. M. Theunissen, K. G. M. Jacobs, and J. E. Rooda, "A state-based framework for supervisory control synthesis and verification," in Proceedings of CDC 2010. IEEE, 2010, pp. 3481–3486.
[5] Y.-L. Chen and F. Lin, "Modeling of discrete event systems using finite state machines with parameters," in Proceedings of CCA 2000. IEEE, 2000, pp. 941–946.
[6] S. Miremadi, K. Åkesson, and B. Lennartson, "Extraction and representation of a supervisor using guards in extended finite automata," in Proceedings of WODES 2008. IEEE, 2008, pp. 193–199.
[7] C. de Oliveira, J. Cury, and C. Kaestner, "Synthesis of supervisors for parameterized and infinity non-regular discrete event systems," in Proceedings of DCDS 2007. IFAC, 2007, pp. 77–82.
[8] B. Gaudin and P. Deussen, "Supervisory control on concurrent discrete event systems with variables," in Proceedings of ACC 2007. IEEE, 2007, pp. 4274–4279.
[9] M. Fabian and B. Lennartson, "On non-deterministic supervisory control," in Proceedings of the 35th IEEE Conference on Decision and Control, vol. 2, 1996, pp. 2213–2218.
[10] C. Zhou, R. Kumar, and S. Jiang, "Control of nondeterministic discrete-event systems for bisimulation equivalence," IEEE Transactions on Automatic Control, vol. 51, no. 5, pp. 754–765, 2006.
[11] S. Miremadi, B. Lennartson, and K. Åkesson, "BDD-based supervisory control on extended finite automata," in Proceedings of CASE 2011. IEEE, 2011, pp. 25–31.
[12] R. Kumar and V. K. Garg, "Control of stochastic discrete event systems: Synthesis," in Proceedings of CDC 1998, vol. 3. IEEE, 1998, pp. 3299–3304.
[13] R. H. Kwong and L. Zhu, "Performance analysis and control of stochastic discrete event systems," in Feedback Control, Nonlinear Systems, and Complexity, ser. Lecture Notes in Control and Information Sciences. Springer, 1995, vol. 202, pp. 114–130.
[14] R. A. Howard, Dynamic Probabilistic Systems. John Wiley & Sons, 1971, vol. 1 & 2.
[15] K. Chatterjee, M. Jurdziński, and T. A. Henzinger, "Simple stochastic parity games," in Computer Science Logic, ser. Lecture Notes in Computer Science. Springer, 2003, vol. 2803, pp. 100–113.
[16] C. Baier, M. Größer, M. Leucker, B. Bollig, and F. Ciesinski, "Controller synthesis for probabilistic systems," in Proceedings of IFIP TCS 2004. Kluwer, 2004, pp. 493–506.
[17] T. Brázdil, V. Forejt, and A. Kučera, "Controller synthesis and verification for Markov decision processes with qualitative branching time objectives," in Automata, Languages and Programming, ser. Lecture Notes in Computer Science. Springer, 2008, vol. 5126, pp. 148–159.
[18] T. Chen, T. Han, and J. Lu, "On the Markovian randomized strategy of controller for Markov decision processes," in Fuzzy Systems and Knowledge Discovery, ser. Lecture Notes in Computer Science. Springer, 2006, vol. 4223, pp. 149–158.
[19] J. J. M. M. Rutten, "Coalgebra, concurrency, and control," Center for Mathematics and Computer Science, Amsterdam, The Netherlands, SEN Report R-9921, 1999.
[20] J. C. M. Baeten, D. A. van Beek, B. Luttik, J. Markovski, and J. E. Rooda, "A process-theoretic approach to supervisory control theory," in Proceedings of ACC 2011. IEEE, 2011, pp. 4496–4501.
[21] J. Markovski and M. Reniers, "Verifying performance of supervised plants," in Proceedings of ACSD 2012. IEEE, 2012, to appear. Online: http://google.sites.com/site/jasenmarkovski.
[22] K. Åkesson, M. Fabian, H. Flordal, and R. Malik, "Supremica - an integrated environment for verification, synthesis and simulation of discrete event systems," in Proceedings of WODES 2006. IEEE, 2006, pp. 384–385.
[23] J.-P. Katoen, M. Khattri, and I. S. Zapreev, "A Markov reward model checker," in Proceedings of QEST 2005. IEEE, 2005, pp. 243–244.
[24] H. Hermanns, Interactive Markov Chains: The Quest for Quantified Quality, ser. Lecture Notes in Computer Science. Springer, 2002, vol. 2428.
[25] J. Markovski, "Towards supervisory control of Interactive Markov chains: Controllability," in Proceedings of ACSD 2011. IEEE, 2011, pp. 108–117.
[26] C. Baier, B. Haverkort, H. Hermanns, and J.-P. Katoen, "Model-checking algorithms for continuous-time Markov chains," IEEE Transactions on Software Engineering, vol. 29, no. 6, pp. 524–541, 2003.
