Tutorial: Verification of Real-time Systems Based on Schedule-Preserved DEVS

Moon Ho Hwang
Dept. of Electrical and Computer Engineering, Wayne State University, Detroit, MI 48202, USA
[email protected]

Keywords: Discrete Event System Specification (DEVS), Real-time System, Timed Behavior, Qualitative and Quantitative Analysis, Behavior of Coupled DEVS, State Reduction

Abstract: Recently, a formal approach has been applied to the verification of system behavior with a class of DEVS called schedule-preserved DEVS (SP-DEVS). In this approach, the behavior of a system is considered as a set of sequences of time-stamped events, so quantitative analysis such as cycle-time and timed-performance evaluation is possible as well as qualitative analysis such as safety and fairness checking. In addition, the behavior of a SP-DEVS network can always be described by an atomic SP-DEVS; that is, SP-DEVS is closed under the coupling operation. Therefore qualitative and quantitative analysis of SP-DEVS networks is also always possible. This tutorial paper introduces recent research results associated with a system verification method based on the SP-DEVS formalism. For this purpose, the definition of timed languages and their operations are introduced first. We then define the behavior of atomic SP-DEVS and coupled SP-DEVS by using a general I/O system and the timed languages of SP-DEVS. A generation algorithm for the behavior of coupled SP-DEVS is also introduced, which is based on abstract simulation algorithms of SP-DEVS. Finally, in order to verify large systems, a state reduction method for SP-DEVS is introduced.

I. Introduction

Discrete Event System Specification (DEVS) is a promising formalism for modelling and analysis of discrete event systems, and it has especially been regarded as a powerful simulation model [19]. However, simulation is a way of generating one possible trace of a system, and sometimes even repeated simulation runs are not enough to verify the whole behavior of the target system. To verify the behavior of DEVS, [5], [7], [11] have used the concept of untimed behavior. In the untimed approach, however, quantitative properties of real-time systems cannot be identified.

To provide quantitative analysis functionality as well as qualitative analysis, a class of DEVS, called schedule-preserved DEVS (SP-DEVS), has been designed such that it has a finite set of states [9]. The main modification of the proposed DEVS is that an external state transition does not change the next schedule of the internal state transition. This restriction leads the DEVS to preserve its schedule, so that the remaining time to the next scheduled transition is invariant even if there are external inputs. In addition, finiteness of the state space is guaranteed for the coupled DEVS, so we can say that this class of DEVS is closed under the coupling operation. [8] introduced an algorithm for generating an atomic SP-DEVS model that is behaviorally equivalent to a given SP-DEVS network. This generation algorithm is based on the abstract simulation algorithms of atomic SP-DEVS and coupled SP-DEVS.

Achieving a more compact SP-DEVS model, in terms of the numbers of states and transitions, may be needed for analysis in modular and hierarchical model development. For this reason, state reducibility is an important property of a modelling formalism. This paper introduces a state reduction method consisting of a two-step process, in which the first step is compression, which eliminates states that do not generate output events, and the second step is clustering, which merges a group of equivalent states into one cluster.

This tutorial is organized as follows. First we introduce timed events, timed event sequences, timed languages and their operations in Section 2. Section 3 introduces qualitative analysis such as safety and fairness checking and quantitative analysis such as min/max processing times of event sequences using SP-DEVS. Section 4 defines the behavior of a SP-DEVS network, shows that SP-DEVS is closed under coupling, and introduces a generation algorithm for the behavior of coupled SP-DEVS. Section 5 introduces a two-step state reduction procedure. Finally, conclusions and further research directions are summarized in Section 6.

Fig. 1. Event Segments over A = {a, b, c}

II. Timed Language and Its Operations

A. Timed Language

For the definition of a timed event segment, we need an arbitrary event set A and a totally ordered time base T = R_0^{+,∞} (the set of non-negative real numbers with infinity). We consider a situation in which an event string ā ∈ A* occurs at time t ∈ T, where A* is the Kleene closure of A [6], [2]. The Kleene closure of an event set A is the set of all finite-length strings consisting of events in A; for example, if A = {a, b, c}, then A* = {ε, a, b, c, aa, ab, ac, bb, bc, cc, aaa, ...} where ε denotes the empty string, or the non-event. The timed event string of A is then written as (t, ā), denoting that the event string ā ∈ A* occurs at a certain time t ∈ T. A function of the form ω: T → A* is called an event trajectory. To restrict ω to an observation interval, we introduce a time range T_[ti,tf] ⊆ T. Thus a timed event segment, or timed word, has its own observation interval and we write it as ω_[ti,tf] or ω: [ti, tf] → A*, in which ti is the initial time and tf is the final time. For the boundary conditions, this paper uses '[' for a closed initial boundary and '(' for an open one, and, for the final time, ']' for a closed boundary and ')' for an open one. The timed empty word within [ti, tf], denoted by ε_[ti,tf], is the word with ω(t) = ε for all t ∈ [ti, tf]. Thus a timed word is a combination of timed empty words and timed non-empty words. Fig. 1(a) shows an example of a timed event segment whose observation interval is [t0, t3), namely

  ω(t) = b     if t = t1
         aab   if t = t2
         ε     otherwise

Finally, a timed language over A in [ti, tf] is a set of timed words over A in [ti, tf]. The universal language over A in [ti, tf] is the set of all possible timed words over A in [ti, tf], denoted by Ω_A[ti,tf]. We omit the time range of a timed word, e.g. Ω_A, when the time range is [0, ∞).

B. Operations over Timed Language

For each td ∈ T, we define a unary operator on segments, the translation operator TRANS_td, such that if ω_[ti,tf] = (t0, ā0)(t1, ā1) ∈ Ω_A then TRANS_td(ω_[ti,tf]) = ω_[ti+td,tf+td] = (t0 + td, ā0)(t1 + td, ā1). Fig. 1(b) shows a translation example of the segment of Fig. 1(a), in which the translation value is t = −t1.

A pair of segments ω1 ∈ Ω_A[t1,t2] and ω2 ∈ Ω_A[t3,t4] are said to be contiguous if their observation ranges are contiguous, i.e., t2 = t3. For contiguous segments ω1 and ω2 we define the concatenation operation ω1 · ω2: [t1, t4] → A* such that

  ω1 · ω2 (t) = ω1(t)            for t ∈ [t1, t2)
                ω1(t) · ω2(t)    for t = t2
                ω2(t)            for t ∈ (t3, t4]

where ω1(t) · ω2(t) is the concatenation of the event strings. If there is no confusion, we omit '·', so ω1ω2 is the same as ω1 · ω2. Fig. 1(c) shows a concatenation of two segments ω1 = ε_[t0,t1)(t1, b)ε_[t1,t2)(t2, a) and ω2 = (t2, ab)ε_[t2,t3). For simplicity, we sometimes write ω_[t0,t3) = (t1, b)(t2, a)(t2, ab) = (t1, b)(t2, aab) without the timed empty words. For concatenation of two non-contiguous segments such as ω1_[t1,t2] and ω2_[t3,t4] where t2 ≠ t3, we can apply the translation operator to ω2 to make them contiguous, for example TRANS_{t2−t3}(ω2) = ω2' = ω2'_[t2,t4−t3+t2], and then apply the concatenation operation between ω1 and ω2'.

All of the operations over timed segments can be extended to timed languages. Let L_A ⊆ Ω_A[t1,t2] and L_B ⊆ Ω_B[t2,t3]. The translation of L_A is TRANS_td(L_A) = {TRANS_td(ω) | ∀ω ∈ L_A}. The concatenation of L_A and L_B is L_A L_B = {ω1ω2 ∈ Ω_{A∪B}[t1,t3] | ∀ω1 ∈ L_A, ∀ω2 ∈ L_B}.
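To make the segment operations concrete, the following C++ sketch represents a timed word as a time-ordered list of (time, event-string) pairs restricted to an observation interval, and implements the translation and concatenation operators described above. The type names and the interval handling are illustrative assumptions made for the sketch, not part of the formalism.

```cpp
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// A timed word over an event set: a sequence of (time, event string) pairs
// restricted to an observation interval [begin, end).
struct TimedWord {
    double begin, end;                                // observation interval
    std::vector<std::pair<double, std::string>> evs;  // non-empty events only
};

// Translation operator TRANS_td: shift every time stamp (and the interval) by td.
TimedWord translate(TimedWord w, double td) {
    w.begin += td; w.end += td;
    for (auto& e : w.evs) e.first += td;
    return w;
}

// Concatenation of contiguous segments (w1.end == w2.begin). Events that fall
// on the common boundary are concatenated into one event string.
TimedWord concat(const TimedWord& w1, const TimedWord& w2) {
    TimedWord r{w1.begin, w2.end, w1.evs};
    for (const auto& e : w2.evs) {
        if (!r.evs.empty() && r.evs.back().first == e.first)
            r.evs.back().second += e.second;   // ω1(t)·ω2(t) at the boundary
        else
            r.evs.push_back(e);
    }
    return r;
}

int main() {
    // ω1 = (t1, b)(t2, a) on [t0, t2], ω2 = (t2, ab) on [t2, t3)   (cf. Fig. 1)
    TimedWord w1{0.0, 2.0, {{1.0, "b"}, {2.0, "a"}}};
    TimedWord w2{2.0, 3.0, {{2.0, "ab"}}};
    TimedWord w = concat(w1, w2);                 // yields (t1, b)(t2, aab)
    for (const auto& e : w.evs)
        std::cout << "(" << e.first << ", " << e.second << ") ";
    std::cout << "\n";
    return 0;
}
```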

III. Schedule-Preserved DEVS

A. Atomic SP-DEVS

An atomic SP-DEVS specifies the dynamic behavior.

Definition 1 (Atomic SP-DEVS): An atomic model of SP-DEVS is a 9-tuple, M = <X, Y, S, ta, δx, δτ, λ, S0, SF> where
• X is a finite set of input events.
• Y is a finite set of output events.
• S is a non-empty and finite set of states. S can be partitioned into two sets: the rescheduling states Sτ, whose states reschedule their remaining time, and the continued-scheduling states Sc, whose states preserve their remaining time. The relations between Sτ and Sc are Sτ ∩ Sc = ∅ and Sτ ∪ Sc = S.

TABLE I. DEVS vs SP-DEVS

        | DEVS                  | SP-DEVS
  X     | can be infinite       | finite
  Y     | can be infinite       | finite
  S     | can be infinite       | finite s.t. S = Sc ∪ Sτ and Sc ∩ Sτ = ∅
  ta    | ta: S → R_0^{+,∞}     | ta: Sτ → Q_0^{+,∞}
  δx    | δx: Q × X → S         | δx: S × X → Sc
  δτ    | δτ: S → S             | δτ: S → Sτ
  λ     | λ: S → Y^ε            | λ: S → Y^ε

• ta: S → Q_0^{+,∞} is the maximum sojourning time to the next scheduled state, where Q_0^{+,∞} denotes the set of non-negative rational numbers with infinity.
• δx: S × X → Sc is the external transition function.
• δτ: S → Sτ is the internal transition function.
• λ: S → Y^ε is the internal output function, where Y^ε means Y ∪ {ε}.
• S0 ⊆ Sτ is the set of initial states.
• SF ⊆ S is the set of acceptable states.¹

Table I compares the ordinary DEVS and SP-DEVS,² where the total state set Q is defined as

  Q = {(s, r) | s ∈ S, 0 ≤ r ≤ ta(s)}

SP-DEVS is a restricted version of DEVS with respect to the following features: (1) finiteness of X, Y, and S; (2) the range of ta, which is restricted from non-negative real numbers to non-negative rational numbers; (3) the external state transition, whose domain is S × X and which cannot change the schedule. Because some features of SP-DEVS are more restricted than those of DEVS, its modelling power is necessarily smaller than that of DEVS. However, SP-DEVS's advantage over DEVS is the finiteness of its state space, which is one of the most important features when we want to verify the system rather than simulate it. Thus the possibility of qualitative and quantitative analysis of SP-DEVS, and of generating behavior with finitely many states from SP-DEVS networks, is due to the finite state space, which is the compensation for the reduced modelling power.

B. Behavior of Atomic SP-DEVS

In SP-DEVS, δx, δτ and λ are partial functions that need not be defined for all values of their domains.

¹ The reason why we use the term acceptable states rather than final states is that a final state is usually used for finite-length languages. When considering infinite-length behavior such as liveness or fairness, we need acceptable states, which is compatible with the meaning of infinitely visitable states [1], [16].
² We omit S0 and SF in Table I because they are not essential when comparing the two formalisms.

Given M = <X, Y, S, ta, δx, δτ, λ, S0, SF>, the state transition function δ: Q × X^ε → Q is represented by

two state transition functions: for (s, r) ∈ Q and x ∈ X^ε,

  δ((s, r), x) = (s', r') =
      (δx(s, x), r)           if x ∈ X ∧ δx(s, x)⊥
      (δτ(s), ta(δτ(s)))      if x = ε ∧ r = 0 ∧ δτ(s)⊥        (1)
      (s, r)                  otherwise

where δx(s, x)⊥ and δτ(s)⊥ mean that δx(s, x) and δτ(s) are defined, respectively. Notice that the remaining time at the new state generated by δx is preserved in SP-DEVS and the new state is a member of Sc. This is the reason why this DEVS is named schedule-preserved. For describing the dynamics of SP-DEVS, we can use the general I/O system formalism [19].

Definition 2 (Behavior of Atomic SP-DEVS): Given a SP-DEVS M = <X, Y, S, ta, δx, δτ, λ, S0, SF>, its dynamics is defined as G = <T, XG, YG, Ω, Q, ∆, Λ> where
• The time base is T = R_0^{+,∞} (the set of non-negative real numbers with infinity).
• The input (output) segments are the input (output) event segments, that is, XG = X (YG = Y).
• The set of allowable input segments Ω is the set of all timed input sequences over X, that is, Ω = Ω_X.
• The state set Q is the total state set of M, that is, Q = {(s, r) | s ∈ S, 0 ≤ r ≤ ta(s)}.
• The total state trajectory ∆: Q × Ω_X → Q is such that for (s, r) ∈ Q, ω1, ω2 ∈ Ω_X, x ∈ X^ε and t ∈ T,

  ∆((s, r), ω1) = δ((s, r), x)             for ω1 = (t, x)       (2)
                  δ(∆((s, r), ω2), x)      for ω1 = ω2 (t, x)

• The total output function Λ: Q → Y^ε is such that for (s, r) ∈ Q,

  Λ(s, r) = λ(s)   for r = 0                                     (3)
            ε      otherwise

The following example helps to understand SP-DEVS and its behavior.

Example 1 (SP-DEVS Example): Let's consider the cross-road example shown in Fig. 2(a). In the system, there are two traffic lights, green (G) and walk (W). Initially, green is on and walk is off, and for safety they should never be on at the same time (though both may be off). Booting the system takes 1 second, and then, if there is no waiting pedestrian, G turns on and W turns off. If a pedestrian pushes a button to cross the road, the G light turns off within 30 seconds and then, 2 seconds later, W turns on, for safe operation. Similarly, 26 seconds after W turns on, W returns to off and 2 seconds later G comes back on. This cycle repeats. We can model the push-button event as an input event ?P and the change events of the lights as output events, for example !G:1 for turn-on of the green light, !W:0 for turn-off of the walk light, and so on.³

Fig. 2. Cross Road Example. Circles denote states (solid: s ∈ Sτ, dashed: s ∈ Sc), double circles indicate acceptable states, the number inside a state s is ta(s), and arcs are state transitions (solid: internal, dashed: external) with ? (!) indicating an input (output) event.

Fig. 2(b) illustrates the SP-DEVS M = <X, Y, S, ta, δx, δτ, λ, S0, SF> such that X = {?P}; Y = {!G:0, !G:1, !W:0, !W:1}; Sτ = {gb, wb, g, r, w, d}, Sc = {gr};

  ta(s) = 0.5 if s = gb,    0.5 if s = wb
          30  if s = g,     30  if s = gr
          2   if s = r,     26  if s = w
          2   if s = d

  λ(s)  = !G:1 if s = gb,   !W:0 if s = wb
          ε    if s = g,    !G:0 if s = gr
          !W:1 if s = r,    !W:0 if s = w
          !G:1 if s = d

  δτ(s) = wb if s = gb,     g if s = wb
          g  if s = g,      r if s = gr
          w  if s = r,      d if s = w
          g  if s = d

  δx(s, x) = gr if s = g ∧ x = ?P

S0 = {gb}; SF = {g}.

³ To distinguish event types, this paper uses '?' before an input event and '!' before an output event.

Fig. 3. Trajectories of the Cross Road System

Fig. 3 illustrates trajectories generated by the SP-DEVS model for this example. To check the evolution of its total state, we can follow Equation (2). For example, for s0 = gb and ω_[0,0.5) = ε_[0,0.5) ∈ Ω_X[0,0.5), we have

∆((gb, 0.5), ω_[0,0.5)) = (gb, 0). Let's consider the sequence whose last boundary includes 0.5, such as ω_[0,0.5] = ω_[0,0.5)(0.5, ε). Then the total state after reading this sequence is ∆((gb, 0.5), ω_[0,0.5]) = δ(∆((gb, 0.5), ω_[0,0.5)), ε) = δ((gb, 0), ε) = (δτ(gb), ta(δτ(gb))) = (wb, ta(wb)) = (wb, 0.5). In the same way, we can apply ω_[0,34] = (34, ?P) ∈ Ω_X[0,34] to the model. Since ω_[0,34] = (34, ?P) = ω_[0,34)(34, ?P), we get ∆((gb, 0.5), ω_[0,34]) = δ(∆((gb, 0.5), ω_[0,34)), ?P) = δ((g, 27), ?P) = (δx(g, ?P), 27) = (gr, 27). The middle of Fig. 3 shows the trajectory of the total state from (gb, 0.5) with ω = ε_[0,34)(34, ?P)ε_[34,∞) = (34, ?P). The top of Fig. 3 illustrates the output sequence generated by the model for ω = (34, ?P). Since we can obtain the total state at each time, the output is calculated by Equation (3). For example, Λ(gb, 0.5) = ε at time t = 0 and Λ(gb, 0) = λ(gb) = !G:1 at t = 0.5. Notice that λ(s) = ε is allowed for s ∈ S in Definition 1, e.g. Λ(g, 0) = λ(g) = ε at t = 31. The bottom of Fig. 3 shows an allowable input sequence ω_[0,92) = (34, ?P).

We can see that (34, ?P?P) and (34, ?P)(35, ?P) are other allowable input sequences, but applying them gives results identical to those of (34, ?P) in terms of total state trajectories as well as generated output sequences, because δx(gr, ?P) is not defined; that is, the model ignores the input ?P in the state gr. Such input sequences, which do not contribute to external state transitions, are not significant when we consider the language of SP-DEVS as a set of execution sequences that contribute to discrete state transitions rather than to the passage of the remaining time.

The following definitions of the associated languages of SP-DEVS are based on this concept.
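As an illustration of Equations (1)–(3), the following C++ sketch encodes the cross-road model of Example 1 as finite maps and steps its total state through the input sequence ω = (34, ?P). The string-valued encoding of states and events and the stepping loop are assumptions made for the sketch, not prescribed by the formalism.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <utility>

// Cross-road SP-DEVS of Example 1 (Fig. 2(b)), encoded as finite maps.
using State = std::string;

std::map<State, double>      ta   = {{"gb",0.5},{"wb",0.5},{"g",30},{"gr",30},
                                     {"r",2},{"w",26},{"d",2}};
std::map<State, State>       dtau = {{"gb","wb"},{"wb","g"},{"g","g"},{"gr","r"},
                                     {"r","w"},{"w","d"},{"d","g"}};
std::map<State, std::string> lam  = {{"gb","!G:1"},{"wb","!W:0"},{"g",""},
                                     {"gr","!G:0"},{"r","!W:1"},{"w","!W:0"},{"d","!G:1"}};
// delta_x is defined only for (g, ?P).
std::map<std::pair<State,std::string>, State> dx = {{{"g","?P"},"gr"}};

struct Total { State s; double r; };          // total state (s, r) of Eq. (1)

// External transition: the remaining time r is preserved (schedule preserving).
bool ext(Total& q, const std::string& x) {
    auto it = dx.find({q.s, x});
    if (it == dx.end()) return false;         // undefined: the input is ignored
    q.s = it->second;                         // r stays unchanged
    return true;
}

// Internal transition at r == 0: output lambda(s), then reschedule (Eq. (1), (3)).
void internal(Total& q, double t) {
    if (!lam[q.s].empty()) std::cout << "t=" << t << "  " << lam[q.s] << "\n";
    q.s = dtau[q.s];
    q.r = ta[q.s];
}

int main() {
    Total q{"gb", ta["gb"]};
    double t = 0.0, t_input = 34.0;
    while (t + q.r <= t_input) {              // advance to each internal event
        t += q.r; q.r = 0.0;
        internal(q, t);
    }
    q.r -= (t_input - t); t = t_input;        // let time elapse up to the input
    ext(q, "?P");                             // apply (34, ?P): (g, 27) -> (gr, 27)
    std::cout << "t=" << t << "  state=(" << q.s << ", " << q.r << ")\n";
    return 0;
}
```

Running the sketch reproduces the trajectory of Fig. 3: !G:1 at t = 0.5, !W:0 at t = 1, no output at t = 31, and total state (gr, 27) after the push button at t = 34.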

C. Timed Languages of SP-DEVS

Let us extend the state transition function of Equation (1) so that it accommodates the output. The extended state transition function δ: Q × (X ∪ Y^ε) → Q describes the dynamics of SP-DEVS as follows. For (s, r) ∈ Q and e ∈ X ∪ Y^ε, δ((s, r), e) = (s', r') where

  (s', r') = (δx(s, e), r)          for e ∈ X
             (δτ(s), ta(δτ(s)))     for e = λ(s) ∧ r = 0          (4)
             (s, r)                 otherwise

Then we restrict our interest in I/O sequences to active I/O event sequences, in which every single event contributes to a state transition. The execution trajectory δ̂: Q × Ω_{X∪Y} → Q is a partial function such that for q ∈ Q and ω = (t, e), where e ∈ X ∪ Y^ε and t ∈ T,

  δ̂(q, ω) := δ(q, e)     if δ(q, e) ≠ q                           (5)
              undefined   otherwise

Suppose δ̂(q, ω2) = q' for q ∈ Q and ω1, ω2 ∈ Ω_{X∪Y} such that ω1 = ω2 (t, e) with e ∈ X ∪ Y^ε and t ∈ T; then

  δ̂(q, ω1) := δ̂(δ̂(q, ω2), (t, e)) = δ̂(q', (t, e))                 (6)

Based on the execution sequence function, we define two languages as in [2]: one generated by M and the other marked by M. The language generated from q of M, denoted by L(M, q) ⊆ Ω_{X∪Y}, is

  L(M, q) = {ω | δ̂(q, ω) is defined}                              (7)

Therefore an element of the language generated from q is an active I/O sequence that contributes to a state transition path from q. Among the elements of a generated language, the trajectories that can reach acceptable states deserve special consideration. Thus the language marked from q of M, denoted by Lm(M, q) ⊆ L(M, q), is

  Lm(M, q) = {ω ∈ L(M, q) | δ̂(q, ω) ∈ QF}                         (8)

where QF = {(s, r) | s ∈ SF, r ∈ [0, ta(s)]}. We can omit the specific q in the two languages, writing L(M) and Lm(M), in the following contexts:

  L(M) := ∪_{s∈S0} L(M, (s, ta(s)))                                (9)

and

  Lm(M) := ∪_{s∈S0} Lm(M, (s, ta(s)))                              (10)

For example, the I/O sequence illustrated in Fig. 4 is an element of the language generated by the M shown in Fig. 2(b). As another example, the language marked by the M of Fig. 2(b) is Lm(M) = {ab^n | 0 ≤ n < ∞, a = (0.5, !G:1)(1, !W:0), b = (t1, ?P)(t2, !G:0)(t3, !W:1)(t4, !W:0)(t5, !G:1)} where t1 ≥ 1, 0 ≤ t2 − t1 ≤ 30, t3 − t2 = 2, t4 − t3 = 26, and t5 − t4 = 2.

Fig. 4. An Active I/O Sequence of the Cross-Road System

Fig. 5. Unsafe SP-DEVS

D. Qualitative Analysis

1) Safety Analysis: Before introducing a qualitative analysis method, let's define one more operation over a timed language. For ω1, ω2 ∈ Ω_A, ω1 is called a prefix of ω1ω2 ∈ Ω_A. Let L ⊆ Ω_A; then L̄ := {ω1 ∈ Ω_A | ∃ω2 ∈ Ω_A, ω1ω2 ∈ L}. We call L̄ the prefix closure of L, consisting of all the prefixes of all the words in L. For example, let L = {(t1, b)(t2, aab)} as in Fig. 1(a); then L̄ = {(t1, b), (t1, b)(t2, a), (t1, b)(t2, aa), (t1, b)(t2, aab)}.

It is clear that L̄m(M) ⊆ L(M) for any SP-DEVS M; the inclusion is strict when there are execution trajectories that cannot reach any acceptable state. Using this property, we can define the safety of SP-DEVS as follows.

Definition 3 (Safety): A SP-DEVS M is safe if L̄m(M) = L(M), while it is unsafe if L̄m(M) ⊂ L(M).

Let's consider the M shown in Fig. 2(b). Notice that every state reachable from the initial state gb can reach the acceptable state g. In other words, L̄m(M) = L(M) and this system is safe. However, if M is the SP-DEVS shown in Fig. 5, the states wd and d cannot reach g; thus L̄m(M) ⊂ L(M) and M is unsafe.

This safety property of a system is important because, if the system is safe, then all states on state transition paths from its initial states can reach an acceptable state.

Fig. 6. Kernel Directed Acyclic Graphs

If we regard a path that cannot reach an acceptable state as bad behavior, then bad behavior cannot occur in a safe system. The following theorem shows that there is a polynomial-time algorithm for identifying the safety of a given SP-DEVS M.

Theorem 1: Checking whether a SP-DEVS M is safe is decidable in polynomial time.
Proof: To check whether L̄m(M) = L(M), we can use the algorithm constructing the kernel directed acyclic graph (KDAG) [13], whose nodes are strongly connected components. A strongly connected component (SCC) B is a maximal subgraph such that every state node in B is reachable from every other node in B along a directed path entirely contained within B. The KDAG is a directed acyclic graph (DAG) because there is no cycle between SCCs. B is a leaf node of the KDAG iff there is no state transition from B to any other SCC. If B is a leaf node and there is no acceptable state in B, then M is unsafe, because from the states in B there is no way to reach any acceptable state. The computational complexity of constructing the KDAG is O(|S| + |E|), where |S| is the number of states and |E| is the number of state transitions [3], [15]. Thus the check is completed in polynomial time.

For example, Fig. 6(a) is the KDAG for the SP-DEVS shown in Fig. 2(b), while Fig. 6(b) is that for Fig. 5. In Fig. 6(a), the leaf SCC node B has an acceptable state g, so the model is safe. In contrast, B is no longer a leaf node in Fig. 6(b), and its leaf node u contains no acceptable state, so the model is unsafe.

2) Fairness Analysis: Let us consider a path that reaches an acceptable state as good behavior. Even if there is a good behavior ω from s0 ∈ S0, it might not be repeatable forever. For example, the SP-DEVS M1 shown in Fig. 7(a), which has been identified as a safe system, cannot repeat its state transitions once it reaches the state u. To capture systems whose good behavior never stops, we introduce the following notion of fairness.

Fig. 7. Unfair SP-DEVS and Fair SP-DEVS

Definition 4 (Fairness): A safe M is fair if ∀ω ∈ Lm(M), Lm(M, (sf, rf)) ≠ ∅ where (sf, rf) = δ̂((s0, r0), ω) and s0 ∈ S0.

The fairness of a system is another important property because it is independent of safety, and it guarantees endless repeatability of the good behavior of the system. The following theorem shows that there is an efficient algorithm for identifying whether a system is fair.

Theorem 2: Checking whether a SP-DEVS M is fair is decidable in polynomial time.
Proof: Given a SP-DEVS M, let G be its KDAG. An SCC B of G is acceptable if B contains an acceptable state s ∈ SF of M. M is fair iff every leaf node B of G contains at least one acceptable state and has either more than one state or one acceptable state with a self-loop state transition. This check is also based on the algorithm constructing the KDAG, whose complexity is O(|S| + |E|).

The SP-DEVS shown in Fig. 7(b) is fair by Definition 4, as is the M shown in Fig. 2(b). We can also see that fairness itself is not enough to impose the required behavior on the system. If we want to check whether the cross-road system performs the sequence (t1, ?P)(t2, !G:0)(t3, !W:1)(t4, !W:0)(t5, !G:1) forever, the quantitative analysis of the next section will give the answer.
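A minimal C++ sketch of the safety check of Theorem 1. Instead of building the KDAG explicitly, it uses the equivalent reachability formulation: the model is safe iff every state reachable from an initial state can still reach an acceptable state. The adjacency-list encoding of the (finite) transition graph is an assumption of the sketch; the fairness check of Theorem 2 would additionally need an SCC decomposition, which is not shown here.

```cpp
#include <iostream>
#include <queue>
#include <vector>

// Forward BFS over a transition graph given as adjacency lists.
static std::vector<bool> reachable(const std::vector<std::vector<int>>& adj,
                                   const std::vector<int>& sources) {
    std::vector<bool> seen(adj.size(), false);
    std::queue<int> q;
    for (int s : sources) { seen[s] = true; q.push(s); }
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (int v : adj[u])
            if (!seen[v]) { seen[v] = true; q.push(v); }
    }
    return seen;
}

// Safe iff every state reachable from S0 can reach a state in SF. O(|S|+|E|).
bool isSafe(const std::vector<std::vector<int>>& adj,
            const std::vector<int>& S0, const std::vector<int>& SF) {
    std::vector<std::vector<int>> radj(adj.size());   // reverse graph
    for (int u = 0; u < (int)adj.size(); ++u)
        for (int v : adj[u]) radj[v].push_back(u);
    std::vector<bool> fwd = reachable(adj, S0);       // reachable from S0
    std::vector<bool> bwd = reachable(radj, SF);      // co-reachable to SF
    for (int u = 0; u < (int)adj.size(); ++u)
        if (fwd[u] && !bwd[u]) return false;          // a reachable "dead" state
    return true;
}

int main() {
    // States of Fig. 2(b): 0=gb 1=wb 2=g 3=gr 4=r 5=w 6=d, with all transitions.
    std::vector<std::vector<int>> adj = {{1},{2},{2,3},{4},{5},{6},{2}};
    std::cout << (isSafe(adj, {0}, {2}) ? "safe" : "unsafe") << "\n";  // safe
    return 0;
}
```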

E. Quantitative Analysis

Measuring quantitative properties of a system is possible in many ways. This paper restricts its interest in quantitative analysis to the processing time of an event sequence.⁴

⁴ Another possible index for quantitative properties might be utilization.

1) Min/Max Processing Times of an Execution Sequence: Let a state trajectory be δ̂(q, ω) = q --ē0--> q1 --ē1--> q2 ... qn --ēn--> qn+1 where ēi = (ti, ei), ti ∈ T and ei ∈ X ∪ Y^ε for i = 0 to n and n > 0. Given δ̂(q, ω), δ̂(q, ω(i)) is the sub-execution sequence up to the i-th event, that is, δ̂(q, ω(i)) = q --ē0--> q1 --ē1--> q2 ... --ēi--> qi+1 where 0 ≤ i ≤ n.

The minimum processing time to the i-th event of δ̂(q, ω), denoted by MinT(δ̂(q, ω(i))), is the fastest processing time to its i-th event, defined as follows: MinT(δ̂(q, ω(0))) = 0 and

  MinT(δ̂(q, ω(1))) = 0        for e1 ∈ X                              (11)
                      ta(s1)   otherwise

For 1 < i ≤ n,

  MinT(δ̂(q, ω(i))) = MinT(δ̂(q, ω(i−1)))             for ei ∈ X
                      MinT(δ̂(q, ω(i−1)))             for ei ∈ Y^ε ∧ ej ∈ X, j = 0 to i−1   (12)
                      MinT(δ̂(q, ω(i−1))) + ta(si)    otherwise

Obviously, the minimum processing time of δ̂(q, ω) is defined as MinT(δ̂(q, ω)) := MinT(δ̂(q, ω(n))).

In contrast to the minimum processing time, which reflects the most optimistic case, there is also the pessimistic case. The maximum processing time to the i-th event of δ̂(q, ω), denoted by MaxT(δ̂(q, ω(i))), is the slowest processing time to its i-th event, defined as follows: MaxT(δ̂(q, ω(0))) = 0 and

  MaxT(δ̂(q, ω(1))) = ta(s1)                                            (13)

For 1 < i ≤ n,

  MaxT(δ̂(q, ω(i))) = MaxT(δ̂(q, ω(i−1)))             for ei ∈ X         (14)
                      MaxT(δ̂(q, ω(i−1))) + ta(si)    otherwise

Obviously, the pessimistic processing time of δ̂(q, ω) is defined as MaxT(δ̂(q, ω)) := MaxT(δ̂(q, ω(n))).

For the cross-road example shown in Fig. 2(b), let q = (g, r) where r ∈ [0, ta(g)] and ω = (t1, ?P)(t2, !G:0)(t3, !W:1)(t4, !W:0)(t5, !G:1). Then MinT(δ̂(q, ω)) = 0 + 2 + 26 + 2 = 30 and MaxT(δ̂(q, ω)) = 30 + 2 + 26 + 2 = 60.
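The recurrences (11)–(14) fold directly over an execution sequence. The following C++ sketch computes MinT and MaxT for the cross-road sequence above; the encoding of a step (event kind plus the sojourning time ta(si) of the state in which the event fires) is an illustrative assumption, and the middle case of Equation (12) is interpreted here as "an output that immediately follows an input contributes no delay in the optimistic case".

```cpp
#include <iostream>
#include <string>
#include <vector>

// One step of an execution sequence: the event kind and ta(s_i) of the state
// in which the event fires (0 is used for pure input events).
struct Step {
    bool isInput;        // true: e_i in X, false: e_i is an output in Y
    double ta;           // maximum sojourning time ta(s_i)
    std::string name;
};

// MinT per Eq. (11)-(12): inputs cost nothing; an output right after an input
// also costs nothing in the optimistic case; otherwise add ta(s_i).
double minT(const std::vector<Step>& seq) {
    double t = 0.0;
    for (size_t i = 0; i < seq.size(); ++i) {
        if (seq[i].isInput) continue;
        if (i > 0 && seq[i - 1].isInput) continue;   // interpreted case 2 of Eq. (12)
        t += seq[i].ta;
    }
    return t;
}

// MaxT per Eq. (13)-(14): every output event waits the full sojourning time.
double maxT(const std::vector<Step>& seq) {
    double t = 0.0;
    for (const Step& st : seq)
        if (!st.isInput) t += st.ta;
    return t;
}

int main() {
    // ω = (t1,?P)(t2,!G:0)(t3,!W:1)(t4,!W:0)(t5,!G:1) from state g of Fig. 2(b)
    std::vector<Step> w = {
        {true,  0,  "?P"},   {false, 30, "!G:0"}, {false, 2, "!W:1"},
        {false, 26, "!W:0"}, {false, 2,  "!G:1"}};
    std::cout << "MinT = " << minT(w) << ", MaxT = " << maxT(w) << "\n";  // 30, 60
    return 0;
}
```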

2) Min/Max Processing Times of a Set of Execution Sequences: Practically, we might be interested in the performance of sequences from one event es to another event ee. For example, in a typical manufacturing system, es and ee might be the input-raw-material event and the output-product event, respectively. Generally speaking, there is more than one execution sequence triggered by the starting event es and reaching the ending event ee. Given two events es and ee, the set of execution sequences from es to ee, denoted by Π(es, ee), is defined as

  Π(es, ee) = {δ̂(q, ω) | ∃ω ∈ L(M, q): i(ω) = es, l(ω) = ee}            (15)

where i(ω) and l(ω) are the first and the last event of ω, respectively. The minimum processing time from es to ee, denoted MinT(es, ee), is defined as

  MinT(es, ee) = ∞                                             for Π(es, ee) = ∅
                 Minimum_{δ̂(q,ω)∈Π(es,ee)} MinT(δ̂(q, ω))       otherwise         (16)

Theorem 3: Given an event pair es and ee of M, computing MinT(es, ee) is decidable in polynomial time.
Proof: Given a state s of M, searching the shortest paths to all other states is bounded by O(|E| log|S|) when using Dijkstra's algorithm with a priority-queue implementation [13]. Let Ss = {s | δ(q, es) = s}. Thus the complexity of this problem is no more than that of an all-pairs shortest path problem, bounded by O(|Ss||E| log|S|).

To calculate the pessimistic processing time, we need to consider three other cases in which the processing time is pessimistically infinite. First, Π(es, ¬ee) is the set of execution sequences that start from es but cannot generate ee. Second, Π^↻(es, ee) = {δ̂(q, ω) ∈ Π(es, ee) | ω has a loop}; because an execution in Π^↻(es, ee) has a loop, its pessimistic processing time to ee should be ∞. There is another kind of execution sequence that we should treat specially, namely Π^{¬∞}(es, ee) = {δ̂(q, ω) ∈ Π(es, ee) | ∄ω': δ̂(δ̂(s, ω), ω') s.t. i(sub(ω')) = es ∧ l(sub(ω')) = ee}, where sub(ω) is a substring of ω. That is, an element of Π^{¬∞}(es, ee) cannot repeat the execution from es to ee forever. Fig. 8(a), (b) and (c) show Π(es, ¬ee) ≠ ∅, Π^↻(es, ee) ≠ ∅ and Π^{¬∞}(es, ee) ≠ ∅, respectively, while Fig. 8(d) illustrates the case in which Π(es, ¬ee) = ∅, Π^{¬∞}(es, ee) = ∅, and Π^↻(es, ee) = ∅. Considering these cases, the pessimistic processing time of Π(es, ee), denoted MaxT(Π(es, ee)), is defined as

  MaxT(Π(es, ee)) = ∞                                           for Cond∞
                    Maximum_{δ̂(q,ω)∈Π(es,ee)} MaxT(δ̂(q, ω))     otherwise        (17)

where Cond∞ is the case Π(es, ¬ee) ≠ ∅ ∨ Π^↻(es, ee) ≠ ∅ ∨ Π^{¬∞}(es, ee) ≠ ∅.

Fig. 8. Categorization of Π(es, ee)

Theorem 4: Given an event pair es and ee of M, computing MaxT(es, ee) is decidable in polynomial time.
Proof: Let Ss = {s | δ(q, es) = s}. Since depth-first search requires time proportional to |S| + |E| [13], finding δ̂((s, ta(s)), ω) ∈ Π(es, ¬ee) and δ̂((s, ta(s)), ω) ∈ Π^↻(es, ee) for all s ∈ Ss is bounded by O(|Ss|(|S| + |E|)). If a cycle-free path ω from es to ee starting at s ∈ Ss is found, the path should be tested to see whether it is connected to a state from which es can be triggered again, leading to another path to ee. For this test, we need the transitive closure of M, which requires time proportional to |E| + |SK|² + |SK||EK|, where |SK| and |EK| are the numbers of states and transitions of the KDAG of M, respectively [13]. A sub-graph not screened out by the previous tests of Π(es, ¬ee), Π^↻(es, ee), and Π^{¬∞}(es, ee) is an acyclic graph whose longest path can be calculated by topological sorting in O(|S| + |E|) [13]. Since the complexity of each step is bounded by a polynomial, the whole process can be done in polynomial time.

An example of minimum and maximum processing times of a set of sequences will be illustrated in Section IV-E.

IV. SP-DEVS Network

A. Definition of SP-DEVS Network

Definition 5 (Structure of SP-DEVS Network): A SP-DEVS network is a 7-tuple, N = <X, Y, C, Zxx, Zyx, Zyy, select> where
• X (Y) is the finite set of input (output) events.
• C = {Mi | Mi = <Xi, Yi, Si, tai, δxi, δτi, λi, S0i, SFi>} is the finite set of sub-component SP-DEVS, which can be atomic SP-DEVS or coupled SP-DEVS models. It is assumed that all of the leaf nodes are atomic SP-DEVS models and that the hierarchical depth to all leaf nodes is finite.
• Zxx ⊆ X × ∪_{Mi∈C} Xi is the external input coupling relation.
• Zyx ⊆ ∪_{Mi∈C} Yi × ∪_{Mi∈C} Xi is the internal coupling relation.
• Zyy: ∪_{Mi∈C} Yi → Y is the external output coupling function.
• select: 2^C − ∅ → C is a tie-breaking function used to resolve the situation in which more than one sub-model has an identical remaining time.

Example 2 (Network for Cross-Road System): Fig. 9 shows a SP-DEVS network modelling the cross road introduced in Example 1. A coupled model is illustrated by interface events (indicated as triangles with port:value), sub-components (blocks) and coupling relations (directed arcs with order numbers). In this model, G stands for the green-light model and W represents the walk light. Some maximum sojourning times of states in Fig. 9 are described as ta(s), such as those of r and rg of G and of d and dw of W; here we consider them to be 1.

Fig. 9. SP-DEVS Network for Cross-Road System

B. Behavior of SP-DEVS Network

Definition 6 (Behavior of SP-DEVS Network): Given a SP-DEVS network N = <X, Y, C, Zxx, Zyx, Zyy, select>, the global behavior of N, called a coupled SP-DEVS model, is MN = <X, Y, S, ta, δx, δτ, λ, S0, SF> where
• X (Y) is the set of input (output) events defined in N.
• S = {(..., (si, ri), ...) | (si, ri) ∈ Qi, Mi ∈ C}.
• ta: S → Q_0^{+,∞}; ta(s) = ta((..., (si, ri), ...)) = Minimum_{Mi∈C} ri.
  Let the set of imminents be IMM(s) = {Mi | Mi ∈ C ∧ ri = ta(s)}. The imminents are the components with the minimum remaining time, and Mi* = select(IMM(s)).
• λ: S → Y^ε is defined by λ((..., (si, ri), ...)) = Zyy(λi*(si*)).
• δx: S × X → S is defined by δx((..., (si, ri), ...), x) = (..., (si', ri'), ...) where
    (si', ri') = (δxi(si, xi), ri)   if (x, xi) ∈ Zxx
                 (si, ri)            otherwise
• δτ: S → S is defined by δτ((..., (si, ri), ...)) = (..., (si', ri'), ...) where
    (si', ri') = (δτi*(si*), tai*(δτi*(si*)))   if Mi = Mi*
                 (δxi(si, xi), ri)               if (λi*(si*), xi) ∈ Zyx
                 (si, ri)                        otherwise
• S0 = {(..., (si, tai(si)), ...) | ∀Mi ∈ C, si ∈ S0i}.
• SF = {(..., (si, ri), ...) | ∀Mi ∈ C, (si, ri) ∈ QFi}.
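As an illustration of Definition 6, the following C++ sketch takes a global state of a small network, computes the network time advance ta(s) = min_i ri, picks an imminent component (a trivial select), and performs the internal transition with internal-coupling routing: components that receive the imminent's output change state but keep their remaining time. The table-driven component layout and the routing rule are assumptions made for the sketch, and the time bookkeeping (subtracting the elapsed minimum from every component) follows the abstract-simulator view rather than the literal total-state notation of Definition 6.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Table-driven sub-component (a simplified stand-in for an atomic SP-DEVS Mi).
struct Comp {
    std::map<std::string, double>      ta;    // maximum sojourning times
    std::map<std::string, std::string> dtau;  // internal transitions
    std::map<std::string, std::string> out;   // outputs ("" means epsilon)
    std::map<std::string, std::map<std::string, std::string>> dx;  // external transitions
    std::string s;                            // current discrete state si
    double r;                                 // remaining time ri
};

// One internal step of the network state: ta(s) = min_i ri, the selected
// imminent reschedules via its delta_tau, and every receiver of its output
// (here: all other components that define delta_x for it) keeps its ri.
void step(std::vector<Comp>& net) {
    int imm = 0;
    for (int i = 1; i < (int)net.size(); ++i)
        if (net[i].r < net[imm].r) imm = i;          // trivial select()
    double tn = net[imm].r;
    for (auto& c : net) c.r -= tn;                   // elapse the minimum remaining time
    std::string y = net[imm].out[net[imm].s];        // output of M_i*
    for (int i = 0; i < (int)net.size(); ++i) {
        if (i == imm) continue;
        auto row = net[i].dx.find(net[i].s);
        if (row != net[i].dx.end()) {
            auto tr = row->second.find(y);
            if (tr != row->second.end())
                net[i].s = tr->second;               // state changes, ri preserved
        }
    }
    net[imm].s = net[imm].dtau[net[imm].s];          // the imminent reschedules
    net[imm].r = net[imm].ta[net[imm].s];
}

int main() {
    // Two toy components: "ping" emits !p every 2 time units; the listener
    // toggles idle/busy on !p without changing its own schedule.
    Comp ping{{{"p0",2}}, {{"p0","p0"}}, {{"p0","!p"}}, {}, "p0", 2};
    Comp listen{{{"idle",5},{"busy",5}}, {{"idle","idle"},{"busy","busy"}},
                {{"idle",""},{"busy",""}},
                {{"idle",{{"!p","busy"}}},{"busy",{{"!p","idle"}}}}, "idle", 5};
    std::vector<Comp> net = {ping, listen};
    for (int k = 0; k < 3; ++k) {
        step(net);
        std::cout << "after step " << k + 1 << ": listener=" << net[1].s
                  << " (r=" << net[1].r << ")\n";
    }
    return 0;
}
```

The printed trace shows the schedule-preserving coupling: the listener changes its discrete state when it receives !p, but its remaining time only decreases with the passage of time, never being reset by the input.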

Unfortunately, the number of states in the behavior of a SP-DEVS network is infinite, because its states consist of the sub-models' total states. Fortunately, however, we can obtain a behaviorally equivalent model for every SP-DEVS network by applying time abstraction as follows.

Definition 7: Suppose that N = <X, Y, C, Zxx, Zyx, Zyy, select> is a SP-DEVS network and MN = <X, Y, S, ta, δx, δτ, λ, S0, SF> is its global behavior. Let (si, ri) ∈ Qi, Mi ∈ C. Then, for s = (..., (si, ri), ...), s' = (..., (si', ri'), ...) ∈ S, s is equivalent to s' in terms of their relative schedules, denoted by s ≅_t s', if ∀Mi ∈ C, si = si' and ri − r = ri' − r', where r = Minimum_{Mi∈C} ri and r' = Minimum_{Mi∈C} ri'.

MN^m = <X, Y, S^m, ta^m, λ^m, δx^m, δτ^m, S0^m, SF^m> is said to be the relative-schedule abstraction of MN if S^m = {s ∈ S | ∀s' ∈ S s.t. s ≅_t s', ta(s) ≥ ta(s')}; ta^m := ta|_{S^m → Q_0^{+,∞}}; λ^m = λ|_{S^m → Y^ε}; δx^m = δx|_{S^m × X^ε → Sc^m} where Sc^m = S^m ∩ Sc; δτ^m = δτ|_{S^m → Sτ^m} where Sτ^m = S^m ∩ Sτ; S0^m = S0 ∩ S^m; and SF^m = SF ∩ S^m.

Theorem 5: The behavior of a SP-DEVS network is equivalent to that of its time-abstracted atomic SP-DEVS model.
Proof: See [9].

C. Generating the Behavioral Model from SP-DEVS Networks

To obtain the time-abstracted atomic SP-DEVS from a SP-DEVS network, we use the concept of abstract simulation algorithms. Abstract simulation algorithms for atomic DEVS and coupled DEVS have been proposed in [17] and [18], in which the algorithms (or controls) are described separately and independently from the associated DEVS models (or data). Since SP-DEVS is a kind of DEVS, we might think that the abstract simulation algorithms of DEVS can be used for SP-DEVS; however, this is only partially true. Because the internal schedule of the next event is preserved under any external events, the algorithm that is called when an external event occurs must differ from the original. Thus this paper introduces only the differing part of each algorithm; the reader can refer to [18] for the remaining parts, such as initialization and internal scheduling.

1) Simulator of Atomic SP-DEVS: Algorithm 1 illustrates a class, called Simulator, for abstract simulation of atomic SP-DEVS.

Algorithm 1 Simulator of Atomic SP-DEVS
SP-DEVS-Simulator: public DEVS-Simulator
 1 when receive x-msg (x, t)
 2   if not (tl ≤ t ≤ tn) then
 3     ERROR: Bad Synchronization
 4   s ← δx(s, x);
 5 when receive s-msg (s', r', tl', t)
 6   s' ← s and r' ← tl + ta(s) − t;
 7   tl' ← tl;
 8 when receive r-msg (s', r', tl', t)
 9   s ← s' and r ← r';
10   tl ← tl' and tn ← t + r;
End SP-DEVS-Simulator

Theorem 6: Algorithm 1 simulates the behavior of an atomic SP-DEVS M = <X, Y, S, ta, δτ, δx, λ, S0, SF>.
Proof: See [8].
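The only place where the SP-DEVS simulator departs from the classic abstract DEVS simulator is the handling of an external event: the state changes, but the scheduled next-event time tn is left untouched. The C++ fragment below contrasts the two handlers; class and member names are illustrative assumptions, not the interface of any existing DEVS library.

```cpp
#include <functional>
#include <string>

// Minimal simulator skeleton: only the external-event handlers are shown.
struct Simulator {
    std::string s;      // current state
    double tl = 0.0;    // time of last event
    double tn = 0.0;    // time of next scheduled internal event
    std::function<std::string(const std::string&, const std::string&)> delta_x;
    std::function<double(const std::string&)> ta;

    // Classic DEVS: an external event resets the elapsed time, so the next
    // internal event is rescheduled relative to the new state.
    void onExternalClassicDEVS(const std::string& x, double t) {
        s  = delta_x(s, x);
        tl = t;
        tn = t + ta(s);          // the schedule is recomputed
    }

    // SP-DEVS (Algorithm 1, x-msg): the state moves into Sc but tl and tn
    // are preserved, so the pending internal schedule is unchanged.
    void onExternalSPDEVS(const std::string& x, double t) {
        // if (!(tl <= t && t <= tn)) report "Bad Synchronization";
        s = delta_x(s, x);       // tl, tn untouched: schedule preserving
        (void)t;
    }
};

int main() {
    Simulator sim;
    sim.delta_x = [](const std::string& s, const std::string& x) {
        return (s == "g" && x == "?P") ? std::string("gr") : s; };
    sim.ta = [](const std::string&) { return 30.0; };   // ta(g) = ta(gr) = 30
    sim.s = "g"; sim.tl = 31.0; sim.tn = 61.0;          // g (re)entered at t = 31
    sim.onExternalSPDEVS("?P", 34.0);                   // push the button at t = 34
    // sim.s == "gr" and sim.tn is still 61: !G:0 fires 27 s after the push,
    // whereas onExternalClassicDEVS would have moved it to 34 + 30 = 64.
    return 0;
}
```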

2) Coordinator of Coupled SP-DEVS: Algorithm 2 illustrates a class, called Coordinator, for abstract simulation of coupled SP-DEVS.

Algorithm 2 Coordinator of Coupled SP-DEVS
SP-DEVS-Coordinator: public DEVS-Coordinator
 1 when receive x-msg (x, t)
 2   if not (tl ≤ t ≤ tn) then
 3     ERROR: Bad Synchronization
 4   ∀Mi ∈ {Mj | Zxx(x, xMj)}
 5     send x-msg(xMi, t) to simulatori
 6 when receive s-msg ((..., (si', ri', tli'), ...), t)
 7   ∀Mi ∈ C
 8     send s-msg (si', ri', tli', t) to simulatori
 9 when receive r-msg ((..., (si', ri', tli'), ...))
10   ∀Mi ∈ C
11     send r-msg (si', ri', tli') to simulatori
12   tl ← Max_{Mi∈C} {tli} and tn ← Min_{Mi∈C} {tni};
End SP-DEVS-Coordinator

Theorem 7: Algorithm 2 simulates the behavior of a coupled SP-DEVS N = <X, Y, C, Zxx, Zyx, Zyy, select>.
Proof: See [8].

3) Generating Behavior of a SP-DEVS Network: Based on the abstract simulation algorithms of SP-DEVS, we can generate the behavioral model of a given SP-DEVS network. The model generated to be behaviorally equivalent to the SP-DEVS network is the time-abstracted atomic SP-DEVS defined in Definition 7. Algorithm 3 introduces a class generating the behavioral model of a SP-DEVS network. The main function is make_behavior and the two subsidiary functions are update_s and when_receive_y-msg. The following theorem states that the class SP-DEVS-Behavior-Generator builds the behavior model.

Algorithm 3 Generating Behavior of SP-DEVS Network
SP-DEVS-Behavior-Generator
 1 variables:
 2   N = <X, Y, C, Zxx, Zyx, Zyy, select>;
 3   child  // Coordinator of N
 4 make_behavior()
 5   M = <X, Y, S, ta, δτ, δx, λ, S0, SF>;
 6   B  // a set of (..., (si, ri, tli), ...)
 7   ∀Mi ∈ C, ∀si0 ∈ Si0
 8     send i-msg((..., (si0), ...), 0) to child;
 9   update_s(M, B, (..., (si0, tai(si0), 0), ...), 0)
10   add (..., (si0, tai(si0)), ...) to S0;
11   while (|B| > 0)
12     (..., (si, ri, tli), ...) ← B.pop_front();
13     send r-msg((..., (si, ri, tli), ...)) to child;
14     tn = child's tn;
15     if tn ≠ ∞ then
16       send *-msg (*, tn) to child;
17       update_s(M, B, (..., (si', ri', tli'), ...), tn);
18       δτ((..., (si, ri), ...)) ← (..., (si', ri'), ...);
19       λ((..., (si, ri), ...)) ← y;
20     ∀x ∈ X
21       send r-msg((..., (si, ri, tli), ...)) to child;
22       tl = child's tl;
23       send x-msg (x, tl) to child;
24       update_s(M, B, (..., (si', ri', tli'), ...), tl);
25       δx((..., (si, ri), ...), x) ← (..., (si', ri'), ...);
26   return M;
27 update_s(M, B, (..., (si, ri, tli), ...), t)
28   send s-msg((..., (si, ri, tli), ...), t) to child;
29   if (..., (si, ri), ...) ∉ S then
30     add (..., (si, ri), ...) to S;
31     ta((..., (si, ri), ...)) ← Min_{Mi∈C} {ri}
32     if ∀Mi ∈ C, si ∈ SFi then
33       add (..., (si, ri), ...) to SF;
34     add (..., (si, ri, tli), ...) to B;
35 when receive y-msg (yd, t)
36   y ← Zyy(yd);
End SP-DEVS-Behavior-Generator

Theorem 8: make_behavior of Algorithm 3 generates the time-abstracted SP-DEVS MN^m = <X, Y, S^m, ta^m, λ^m, δx^m, δτ^m, S0^m, SF^m> of the coupled SP-DEVS N = <X, Y, C, Zxx, Zyx, Zyy, select>.
Proof: See [8].

D. Experiments of Behavior Generation

To implement the environment for modelling, simulation (execution) and formal verification of SP-DEVS, we used the C++ programming language [14]. In particular, for basic data structures such as vector, list, ordered set, map, multi-map, hash and so on, we used the Standard Template Library (STL) [4], and we implemented the environment with Microsoft VC++.Net [12]. Our hardware platform was a Compaq Presario X1000 with a 1 GHz Intel Centrino CPU and 1 GByte of RAM. Using this machine, we tested two examples to see the performance of generating behavior in each case.

1) Cross-Road Traffic Light: Let's take a look at Fig. 9, in which two atomic SP-DEVS models, the green light (G) and the walk light (W), are connected to each other. To evaluate the numbers of states and transitions (edges) generated in the global behavior of the SP-DEVS network as the response times of W and G change, we increase the scanning speed of d of W and r of G. We speed up the scanning times by 10 times (0.1 time unit), 100 times (0.01), and 1,000 times (0.001), respectively.

Table II shows the experimental results of generating the behavior of the cross-road traffic system. In Tables II and III, the units of the computation time and of the memory (Mem.) required are seconds and KBytes, respectively.

TABLE II. GENERATING TRAFFIC-LIGHT BEHAVIOR

  1/ta(s) | ∏∑ c(si)  | |S|     | |E|     | Time
  1       | 7.963E+3  | 197     | 259     | 3.0e−5
  10      | 2.015E+5  | 1,817   | 2,419   | 4.9e−4
  100     | 2.015E+7  | 18,017  | 24,019  | 6.0e+0
  1,000   | 2.015E+9  | 180,017 | 240,019 | 1.0e+2

The time and memory required are proportional to the number of states |S| and the number of state transitions |E|. Fig. 10 shows graphs of |S| and |E| against 1/ta(s); they look like straight lines.

2) Machining Center: Let's consider a machining center which consists of an input buffer, a working table, an output buffer, and a controller. Between the input buffer and the working table, as well as between the working table and the output buffer, there are material flows in which time delays exist. Fig. 11 shows a SP-DEVS network for the machining center. As in the cross-road traffic system, we evaluate the performance of behavior generation while varying the time advance values of some states.

Fig. 10. |S| and |E| in Traffic-Light Behavior

TABLE III. GENERATING MACHINING CENTER BEHAVIOR

  1/ta(s) | ∏∑ c(si)       | |S|     | |E|     | Time | Mem.
  1       | 702,000        | 3,877   | 6,130   | 3    | 903.7
  5       | 40,625,712     | 33,053  | 48,606  | 26   | 7,294.8
  10      | 1,572,655,392  | 102,058 | 145,036 | 126  | 21,959.3
  20      | 33,578,883,072 | 348,518 | 482,346 | 917  | 73,539.8

Fig. 11. SP-DEVS Network for Machining Center

In Fig. 11, circles annotated with ta(s) denote the states whose ta(s) is varied in this test. Table III summarizes the results of the test, varying 1/ta(s) as 1, 5, 10 and 20. All conventions and units are the same as for the cross-road traffic system. In this experiment, we can see that the growth of |S| and |E| is more drastic than linear, as shown in Fig. 12.

E. Qualitative and Quantitative Analysis of SP-DEVS Networks

As mentioned before, qualitative and quantitative analysis of a SP-DEVS network can be performed exactly as for an atomic SP-DEVS model, as introduced in Sections III-D and III-E, once its behavior has been generated as one atomic SP-DEVS. For example, the traffic-light system N shown in Fig. 9 with ta(s) = 0.01 is safe and fair, because KDAG(MN^m) has no leaf node lacking an acceptable state or state transition, where MN^m is the behavior model of N. And we can compute the processing time from pushing the button to the green light coming on again: MinT(?P, !G:1) = 30.01 while MaxT(?P, !G:1) = 60.01.

Fig. 12. |S| and |E| in Machining Center Behavior

V. State Reduction of SP-DEVS

When trying to identify whether two given models are behaviorally equivalent, there is one big difference between the approach based on finite state automata (FSA) [6] and SP-DEVS. Every single state transition of an FSA is invoked by an external event, so all state transitions are observable. In SP-DEVS, however, there are internal state transitions which occur according to the time schedule. Moreover, when an internal state transition happens, there may be no output generated, so the transition is unobservable from outside. Since the behavior of SP-DEVS is defined as a set of observed event sequences, such unobservable internal events are obstructive. Thus we propose a two-step procedure for state reduction, as shown in Fig. 13. We define the behavioral equivalence of two states of a SP-DEVS as follows.⁵

⁵ As illustrated in Sections IV-B and IV-C, the behavior of coupled SP-DEVS can be described by an atomic SP-DEVS, so we restrict our attention to atomic SP-DEVS in this section.

Fig. 13. Two-Step Procedure of State Reduction

Definition 8: Given a SP-DEVS M = <X, Y, S, ta, δx, δτ, λ, S0, SF>, let s1, s2 ∈ S. Then s1 is said to be behaviorally equivalent to s2, denoted by s1 ≅_b s2, if L(M, (s1, ta(s1))) = L(M, (s2, ta(s2))) and Lm(M, (s1, ta(s1))) = Lm(M, (s2, ta(s2))).

A. Two-Step Reduction

1) Compression: An internal state transition that does not generate any output cannot be observed from outside, so it should be eliminated as long as the behavior of the SP-DEVS is preserved. Compression merges two states connected by an internal transition with ε output. Given M = <X, Y, S, ta, δx, δτ, λ, S0, SF>, M is proper if (1) ta(s) = 0 for s ∈ Sτ implies λ(s) ≠ ε, and (2) ta(s) = ∞ is used instead of an internally self-looped state s ∈ Sτ such that δτ(s) = s ∧ λ(s) = ε ∧ ∀x, δx(s, x) is undefined.

Definition 9 (Compression): Given a proper M = <X, Y, S, ta, δx, δτ, λ, S0, SF> and s ∈ S, δτ⁻¹: S → 2^S gives the internal source states of a state, i.e. δτ⁻¹(q) = {p ∈ S | δτ(p) = q}. Then Compression(M, s) is: ∀s⁻¹ ∈ δτ⁻¹(s) s.t. s⁻¹ ≠ s, (1) δτ(s⁻¹) := δτ(s); (2) ta(s⁻¹) := ta(s⁻¹) + ta(s); (3) λ(s⁻¹) := λ(s); (4) remove s from S (and from SF if s ∈ SF).

Definition 10 (Compressibility): Let M = <X, Y, S, ta, δx, δτ, λ, S0, SF> and, for s ∈ S, αx(s) = {x | ∃x ∈ X, δx(s, x)⊥}. Then s ∈ S with s ∉ S0 and ta(s) > 0 is said to be compressible if ∀sx ∈ R(s), ∀sx⁻¹ ∈ δτ⁻¹(sx): (1) αx(sx⁻¹) = αx(sx); (2) λ(sx⁻¹) = ε; (3) sx⁻¹ ∈ SF ⇔ sx ∈ SF.

Thus the following is an algorithm that compresses all compressible states of a given SP-DEVS M = <X, Y, S, ta, δx, δτ, λ, S0, SF>.

Algorithm 4 Compression
SP-DEVS Compression(M)
 1 ∀s ∈ Sτ − S0, ∀sx ∈ R(s), ∀sx⁻¹ ∈ δτ⁻¹(sx),
 2   if (αx(sx⁻¹) = αx(sx) ∧ λ(sx⁻¹) = ε ∧
 3       sx⁻¹ ∈ SF ⇔ sx ∈ SF)
 4     Compression(M, sx);
 5 return M;

Theorem 9: Suppose that M is a SP-DEVS and Mc is obtained by Compression(M, sx) for all sx ∈ R(s). Then L(M) = L(Mc) and Lm(M) = Lm(Mc) if s is compressible.
Proof: See [10].
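A minimal sketch of the compression step of Definition 9 on a table-driven model: each internal predecessor of the compressed state inherits its internal successor, output and accumulated sojourning time, and the state itself is removed. The compressibility test of Definition 10 is assumed to have been done by the caller, and the data layout is an assumption of the sketch.

```cpp
#include <map>
#include <set>
#include <string>

// Table-driven view of an atomic SP-DEVS (only what compression touches).
struct Model {
    std::map<std::string, std::string> dtau;   // internal transition
    std::map<std::string, double>      ta;     // maximum sojourning time
    std::map<std::string, std::string> out;    // output, "" for epsilon
    std::set<std::string>              SF;     // acceptable states
};

// Compression(M, s) of Definition 9: redirect every internal predecessor of s
// past s, add up the sojourning times, copy s's output, then remove s.
void compress(Model& m, const std::string& s) {
    const std::string succ = m.dtau.at(s);
    const double      dta  = m.ta.at(s);
    const std::string o    = m.out.at(s);
    for (auto& tr : m.dtau) {
        if (tr.first == s || tr.second != s) continue;  // only predecessors of s
        tr.second = succ;              // (1) delta_tau(s^-1) := delta_tau(s)
        m.ta[tr.first] += dta;         // (2) ta(s^-1) := ta(s^-1) + ta(s)
        m.out[tr.first] = o;           // (3) lambda(s^-1) := lambda(s)
    }
    m.dtau.erase(s); m.ta.erase(s); m.out.erase(s);     // (4) remove s
    m.SF.erase(s);                                      //     (and from SF)
}

int main() {
    // Toy chain a --(2, eps)--> b --(3, !y)--> c : b is compressible, so after
    // compress(m, "b") we are left with a --(5, !y)--> c.
    Model m{{{"a","b"},{"b","c"},{"c","c"}},
            {{"a",2},{"b",3},{"c",1}},
            {{"a",""},{"b","!y"},{"c",""}}, {"c"}};
    compress(m, "b");
    return 0;
}
```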

Notice that after Compression(M) there are no more compressible states, and the procedure terminates in polynomial time [10]. We call the result of compression of a SP-DEVS a compressed (or non-compressible) SP-DEVS.

2) Clustering: A set of states whose behaviors are equivalent to each other is said to be a cluster; the states in a cluster are merged into one state after clustering. Before introducing the clustering algorithm, called Find_CLS, we first define the small procedures and functions used in Find_CLS.

The procedure Make_CLS(f, S) of Algorithm 5 categorizes the states s ∈ S into a set of clusters according to the value of f(s): it stores the pair (f(s), s) into HM, whose data type is a hash multimap that groups values by key [4]. After inserting all (f(s), s) into HM, each cluster of states with an identical value of f is extracted from HM and stored in the cluster set C. Similarly, the procedure Make_CLS2(c⁻¹, c) categorizes s ∈ c⁻¹ according to the cluster containing δ(s, x) (line 3 of Make_CLS2), where x is an event invoking a transition from c⁻¹ to c (line 2).⁶

⁶ We could consider all x ∈ αx(s) ∪ {ε}, but considering only x ∈ {x ∈ X^ε | ∃s ∈ c⁻¹ s.t. mc(δ(s, x)) = c} is much more efficient.

Before taking a closer look at the mc function, let's see some characteristics of a cluster. Since a cluster is a set of states, boolean operations between two clusters c and c' are defined as follows: intersection c ∩ c' := {s | s ∈ c and s ∈ c'} and union c ∪ c' := {s | s ∈ c or s ∈ c'}. If C is a set of clusters over S, then it is proper if, for c, c' ∈ C, c ∩ c' = ∅, and it is total if ∪_{c∈C} c = S. Given a proper and total C over S, the mapping-cluster function mc: S → C is defined such that, for s ∈ S, mc(s) = c if s ∈ c. Therefore, Make_CLS2 splits the cluster c⁻¹ into several clusters consisting of states s with identical mc(δ(s, x)).

The procedure Split_CLS of Algorithm 5 provides the splitting functionality; it takes two sets of proper clusters, C and C', as its arguments. As we can see in Algorithm 5, Split_CLS splits each cluster in C such that if the intersection between c ∈ C and c' ∈ C' is a proper subset of c, then c is divided into c ∩ c' and c − c', and these become elements of C and CN. Here, CN contains the set of newly generated clusters and is the return value of Split_CLS.

Algorithm 5 Finding a set of clusters
ClusterSet Make_CLS(f, S)
 1 hash_multimap HM; ClusterSet C;
 2 ∀s ∈ S, insert (f(s), s) to HM;
 3 ∀(f(s), s) ∈ HM, insert s to C;
 4 return C;
ClusterSet Make_CLS2(c', c)
 1 hash_multimap<{s}, s> HM; ClusterSet C;
 2 ∀x ∈ {x ∈ X^ε | ∃s' ∈ c' s.t. mc(δ(s', x)) = c},
 3   ∀s' ∈ c', insert (mc(δ(s', x)), s') to HM;
 4 ∀({s}, s) ∈ HM, insert s to C;
 5 return C;
ClusterSet Split_CLS(C', ↑C)
 1 ∀c' ∈ C', ∀c ∈ C,
 2   if c ∩ c' ⊂ c,
 3     remove c from C;
 4     add c ∩ c', c − c' to C and CN;
 5 return CN;
ClusterSet Find_CLS(M)
 1 C = {S};
 2 Split_CLS(C, Make_CLS(α, S));
 3 if (|C| = |S|) return C;
 4 Split_CLS(C, Make_CLS(ta, S));
 5 if (|C| = |S|) return C;
 6 Split_CLS(C, Make_CLS(IsSF, S));
 7 if (|C| = |S|) return C;
 8 QC ← C;  // QC: a queue of clusters
 9 while (|C| < |S| and QC ≠ ∅)
10   c ← QC.pop_front();
11   δ⁻¹(c) ← {c' | mc(s') = c', s' ∈ δ⁻¹(s), ∀s ∈ c};
12   C' ← ∅;
13   ∀c' ∈ δ⁻¹(c), C' += Make_CLS2(c', c);
14   QC += Split_CLS(C', C);
15 return C;

Now we are ready to look at the main procedure of clustering, named Find_CLS. Initially, the cluster set C contains only the single cluster S (line 1 of Find_CLS in Algorithm 5). The clusters in C are then partitioned according to their active events α, their maximum sojourning times ta, and a boolean function IsSF: S → {0, 1} indicating whether a state is an acceptable state, i.e., IsSF(s) = 1 if s ∈ SF and 0 otherwise. If |C| = |S| after calling Split_CLS at line 3, 5 or 7, then no more splitting is needed, so C, which is then S itself (as singletons), is returned. Otherwise (the case |C| < |S|), we construct a queue of clusters QC by pushing back all clusters of C (line 8); here QC and C are proper and total. While the number of clusters is less than that of all states (|C| < |S|) and there are clusters still to be tested (QC ≠ ∅), we perform the statements from line 10 to 14. First we pick a cluster c from the front of the cluster queue QC (line 10) and construct the inverse cluster set of c, defined by δ⁻¹(c) = {c' | mc(s') = c', s' ∈ δ⁻¹(s), ∀s ∈ c} (line 11). For each cluster c⁻¹ in δ⁻¹(c), we split c⁻¹ using Make_CLS2(c⁻¹, c) and accumulate the result in C', which gathers all newly split clusters caused by c (lines 12 and 13). Finally, we split C by C', and the newly generated clusters returned by Split_CLS(C', C) are appended to QC (line 14). This loop continues while |C| < |S| and QC ≠ ∅, and finally the algorithm returns C.

Theorem 10: Let M = <X, Y, S, ta, δx, δτ, λ, S0, SF> be a compressed SP-DEVS model. Then s1 ≅_b s2 if s1, s2 ∈ c ∈ C of Find_CLS(M).
Proof: Initially, all states are clustered according to their active events α, their lifetime ta, and their membership in the acceptable state set IsSF, as in lines 1 to 7. If during this phase the states are clustered individually, that is, |C| = |S|, then no more testing is needed and the procedure stops; in this case there are no equivalent states. Otherwise, the initial QC is set to C such that for s1, s2 ∈ c ∈ C: α(s1) = α(s2), ta(s1) = ta(s2) and s1 ∈ SF ⇔ s2 ∈ SF. For each cluster c taken from QC, δ⁻¹(c) gathers the set of clusters whose elements transit to a state in c. As we can see at line 13 of Find_CLS, each c' ∈ δ⁻¹(c) is partitioned according to whether its mapping clusters are identical (see Make_CLS2). Therefore C' at line 14 consists of clusters whose member states transit to the same cluster. This C' is used to split C by calling Split_CLS, and its return value, the set of newly generated clusters, is pushed to the back of QC. The test stops when |C| = |S| or QC = ∅; the algorithm must terminate because either |C| becomes equal to |S| as the splitting of lines 13 and 14 continues, or no more clusters are generated, that is, QC = ∅. By the test of lines 9 to 14, we achieve C such that for s1, s2 ∈ c ∈ C and e ∈ α(s1) (= α(s2)), mc(δ(s1, e)) = mc(δ(s2, e)). It is also true that mc(δ̂((s1, ta(s1)), ω)) = mc(δ̂((s2, ta(s2)), ω)) for ω ∈ Ω_{X∪Y} because ta(s1) = ta(s2). Moreover, since s1 ∈ SF ⇔ s2 ∈ SF for s1, s2 ∈ c ∈ C, we can say that L(M, (s1, ta(s1))) = L(M, (s2, ta(s2))) and Lm(M, (s1, ta(s1))) = Lm(M, (s2, ta(s2))).

Since we can find the clusters of a SP-DEVS M by Find_CLS, the clustering procedure based on Definition 11 below is straightforward; it is omitted here and can easily be designed in O(|C|), where |C| is the number of clusters.

back all clusters in C (at line 8). Here QC and C is proper and total. While the number of clusters is less than that of whole states (|C| < |S|) and there are clusters which should be tested (QC 6= ∅), we perform statements from line 10 to 14. At first we pick c a cluster from the front of cluster queue QC (line 10). And construction of the inverse clusters set of c which is defined by δ −1 (c) = {c0 |mc(s0 ) = c0 , s0 ∈ δ −1 (s), ∀s ∈ c} is performed (line 11). For each cluster c−1 in δ −1 (c), we split c−1 using Make CLS2(c−1 , c) and attach the result back to C 0 which is gathering all newly-split clusters caused by c (lines 12 and 13). After all, we split C by C 0 and the newly-generated clusters returned by Split CLS(C 0 , C) is attached to QC (line 14). This loop continues until |C| < |S| and QC 6= ∅ and finally the algorithm returns C. Theorem 10: Let M =< X, Y, S, ta, δx , δτ , λ, S0 , SF > and be a compressed SP-DEVS model. Then s1 ∼ =b s2 if s1 , s2 ∈ c ∈ C of Find CLS(M ). Proof: Initially, all states are clustered according to their active events α, their life time ta, and existence in the acceptance state set IsSF as we can see lines 1 to 7. During this procedure, if the states are clustered as individual, that is, |C| = |S| then no more testing needed so this procedure stops. In this case there is no equivalent states. Otherwise, the initial QC is set by C such that s1 , s2 ∈ c ∈ C, α(s1 ) = α(s2 ), ta(s1 ) = ta(s2 ) and s1 ∈ SF ⇔ s2 ∈ SF . For each cluster c from C, δ −1 (c) gathers a set of clusters whose elements are transited to a state in c. As we can see at line 13 of Find CLS, for each c0 ∈ δ −1 (c) is partitioned according to their mapping cluster is identical (see Make CLS2). Therefore C 0 at line 14 consists of clusters whose element states are transited to the same cluster. This C 0 is used for splitting C by calling Split CLS and its return value that is newly generated cluster is pushed back of QC . This test stops while |C| < |S| or QC 6= ∅. This algorithm should be stopped because |C| becomes identical to |S| as splitting of line 13 and 14 is continued or nor more cluster is generated, that is, QC = ∅. By testing from line 9 to 14, we achieve C such that for s1 , s2 ∈ c ∈ C and e ∈ α(s1 )(= α(s2 )), mc(δ(s1 , e)) = mc(δ(s2 , e)). It is also true that ˆ 1 , ta(s1 )), ω)) = mc(δ((s ˆ 2 , ta(s2 )), ω)) for ω ∈ mc(δ((s ΩX∪Y because ta(s1 ) = ta(s2 ). Moreover, since s1 ∈ SF ⇔ s2 ∈ SF for s1 , s2 ∈ c ∈ C, we can say that L(M, (s1 , ta(s1 ))) = L(M, (s2 , ta(s2 ))) and Lm (M, (s1 , ta(s1 ))) = Lm (M, (s2 , ta(s2 ))). Since we can find the clusters from a SP-DEVS M by Find CLS, the procedure for clustering based on

Definition 11 (Clustered SP-DEVS): Suppose that the SP-DEVS M = <X, Y, S, ta, δx, δτ, λ, S0, SF> is a compressed SP-DEVS. Then M^m = <X, Y, S^m, ta^m, λ^m, δx^m, δτ^m, S0^m, SF^m> is clustered from M if C = Find_CLS(M) and S^m = C; ta^m = ta|_{S^m → Q_0^{+,∞}}; λ^m = λ|_{S^m → Y^ε}; δx^m = δx|_{S^m × X → S^m}; δτ^m = δτ|_{S^m → S^m}; S0^m = S^m ∩ S0; and SF^m = S^m ∩ SF.

3) Experimental Results: This section shows the experimental results of the two-step reduction applied to the traffic-light system and the machining center introduced in Section IV-D. As a performance index, we use the reduction ratio, i.e., the percentage of states or transitions eliminated relative to the original numbers. In other words, the reduction ratio of states in each step is (1 − |S^o|/|S^i|) · 100, where |S^o| (|S^i|) is the number of states after (before) the operation. A similar definition is used for the reduction ratio of transitions in each step. The other performance index used in this section is the computation time of each step, in seconds.

Table IV summarizes the reduction performance for the traffic light system. We collected the performance indices by varying the scanning times of G and W as was done for the generation of the global behavior in Section IV-D. This example shows identical results for each value of 1/ta(s): the number of states (transitions) remaining after compression is 12 (14), and after clustering it is 7 (8). Thus, as 1/ta(s) increases, the reduction ratios of states and transitions in the compression step increase, while those of the clustering step stay constant. As a result, the state (transition) reduction ratio of the two-step procedure is at least 96.45% (96.91%).

TABLE IV. PERFORMANCE OF MINIMIZING TRAFFIC LIGHT SYSTEM

  1/ta(s) |   | Origin No. | Compr. No. | Compr. ratio | Compr. Time | Clust. No. | Clust. ratio | Clust. Time | Total ratio | Total Time
  1       | S | 197        | 12         | 93.91        | 1.00e−5     | 7          | 41.67        | 1.00e−6     | 96.45       | 1.10e−5
          | E | 259        | 14         | 94.59        |             | 8          | 42.86        |             | 96.91       |
  10      | S | 1,817      | 12         | 99.34        | 1.32e−4     | 7          | 41.67        | 1.00e−6     | 99.61       | 1.33e−4
          | E | 2,419      | 14         | 99.42        |             | 8          | 42.86        |             | 99.67       |
  100     | S | 18,017     | 12         | 99.93        | 2.00e+0     | 7          | 41.67        | 1.00e−6     | 99.96       | 2.00e+0
          | E | 24,019     | 14         | 99.94        |             | 8          | 42.86        |             | 99.97       |
  1,000   | S | 180,017    | 12         | 99.99        | 9.00e+1     | 7          | 41.67        | 1.00e−6     | 99.99       | 9.00e+1
          | E | 240,019    | 14         | 99.99        |             | 8          | 42.86        |             | 99.99       |

The performance indices for the machining center are summarized in Table V. The reduction ratio of compression ranges from about 71% to 74%, while that of clustering is quite low, ranging from about 0.38% to 0.64% for both states and transitions; as a result, the total reduction ratio ranges from about 72% to 74%. Here we can see that the computation time of clustering grows even though its reduction ratio remains almost constant and quite low.

TABLE V. PERFORMANCE OF MINIMIZING MACHINING CENTER

  1/ta(s) |   | Origin No. | Compr. No. | Compr. ratio | Compr. Time | Clust. No. | Clust. ratio | Clust. Time | Total ratio | Total Time
  1       | S | 3,877      | 1,099      | 71.65        | 3.90e−4     | 1,092      | 0.64         | 5.30e−4     | 71.84       | 9.20e−4
          | E | 6,130      | 1,689      | 72.45        |             | 1,681      | 0.47         |             | 72.56       |
  5       | S | 33,053     | 8,959      | 72.90        | 5.00e+0     | 8,912      | 0.52         | 2.70e+1     | 73.04       | 3.20e+1
          | E | 48,606     | 12,991     | 73.27        |             | 12,943     | 0.37         |             | 73.37       |
  10      | S | 102,058    | 26,732     | 73.81        | 8.00e+1     | 26,590     | 0.53         | 2.57e+2     | 73.95       | 3.37e+2
          | E | 145,036    | 37,686     | 74.02        |             | 37,543     | 0.38         |             | 74.11       |
  20      | S | 348,518    | 92,318     | 73.51        | 1.46e+3     | 91,803     | 0.56         | 2.25e+3     | 73.66       | 3.71e+3
          | E | 482,356    | 128,720    | 73.31        |             | 128,182    | 0.42         |             | 73.43       |

VI. Conclusion and Further Research

This paper introduced a class of DEVS, called SP-DEVS, which has more restricted modelling power than the ordinary DEVS but, in return, has advantages in analysis. Since SP-DEVS has a finite state space, its qualitative properties, such as safety and fairness, are decidable, as are quantitative properties such as the min/max processing time of event sequences. In addition to atomic SP-DEVS, analysis of coupled SP-DEVS is also possible because SP-DEVS is closed under the coupling operation. Moreover, we can reduce the numbers of states and transitions of a SP-DEVS while keeping its behavior equivalent to that of the given model. The following are considered as further research directions.

• Model Checking: We may still ask whether a SP-DEVS model satisfies a given specification. If the specification is modelled by SP-DEVS as well as the model, then it seems possible to couple the two and to apply the qualitative and quantitative analyses to check whether the specification is satisfied by the model.
• State Minimization: Although there was earlier work on the decidability of state minimization of SP-DEVS [10], a counterexample has recently been found. Thus state minimization of SP-DEVS is still an open problem.
• State Reduction with Tolerant Behavior for Scalability: Even though we can reduce the states of a SP-DEVS without changing its behavior, the result may still have too many states for the next level of hierarchical analysis. To solve this problem, we might employ a behavioral tolerance concept in which some less important behavior is allowed to differ so that the states can be reduced much further.
• Clarifying the applicability of formal verification to ordinary DEVS: We found that between SP-DEVS and ordinary DEVS there is a middle class of DEVS whose modelling power is greater than that of SP-DEVS and whose analysis power can be greater than that of ordinary DEVS if certain conditions are satisfied. This mixed model might be much more practical in terms of modelling as well as analysis.

References
[1] J.R. Büchi. On a decision method in restricted second order arithmetic. In E. Nagel et al., editors, Proceedings of the International Congress on Logic, Methodology and Philosophy of Science, pages 1–11, Stanford, CA, 1960. Stanford University Press.
[2] C.G. Cassandras and S. Lafortune. Introduction to Discrete Event Systems. Kluwer Academic Publishers, second edition, 1999.
[3] H.N. Gabow. Path-based depth-first search for strong and biconnected components. Information Processing Letters, 74(3-4):107–114, May 2000.
[4] Silicon Graphics. Standard Template Library Programmer's Guide. http://www.sgi.com/tech/stl, 1993–2004.
[5] G.P. Hong and T.G. Kim. A Framework for Verifying Discrete Event Models within a DEVS-based System Development Methodology. Transactions of The Society for Computer Simulation, 13:19–34, 1996.
[6] J.E. Hopcroft, R. Motwani, and J.D. Ullman. Introduction to Automata Theory, Languages, and Computation. Addison Wesley, second edition, 2000.
[7] M.H. Hwang. Identifying equivalence of DEVSs: Language approach. In A. Bruzzone and M. Itmi, editors, Proceedings of the 2003 Summer Computer Simulation Conference, pages 319–324, Montreal, Canada, 2003. SCS.

TABLE IV P ERFORMANCE OF M INIMIZING T RAFFIC L IGHT S YSTEM Origin

1/ta(s) S E S E S E S E

1 10 100 1,000

No. 197 259 1,817 2,419 18,017 24,019 180,017 240,019

No. 12 14 12 14 12 14 12 14

Compression ratio Time 93.91 1.00e−5 94.59 99.34 1.32e−4 99.42 99.93 2.00e+0 99.94 99.99 9.00e+1 99.99

Two-Step Reduction Clustering No. ratio Time 7 41.67 1.00e−6 8 42.86 7 41.67 1.00e−6 8 42.86 7 41.67 1.00e−6 8 42.86 7 41.67 1.00e−6 8 42.86

Total ratio 96.45 96.91 99.61 99.67 99.96 99.97 99.99 99.99

Time 1.10e−5 1.33e−4 2.00e+0 9.00e+1
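The ratio columns appear to report the percentage reduction relative to the preceding column group; this is an interpretation inferred from the reported figures rather than stated in the original table layout. It can be checked against the first S-row of Table IV:

\[
\underbrace{\Bigl(1-\tfrac{12}{197}\Bigr)\times 100 \approx 93.91}_{\text{compression}},\qquad
\underbrace{\Bigl(1-\tfrac{7}{12}\Bigr)\times 100 \approx 41.67}_{\text{clustering}},\qquad
\underbrace{\Bigl(1-\tfrac{7}{197}\Bigr)\times 100 \approx 96.45}_{\text{total}}.
\]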

TABLE V
PERFORMANCE OF MINIMIZING MACHINING CENTER
(same column conventions as Table IV)

1/ta(s) |   | Origin No. | Compression No. | ratio | Time    | Clustering No. | ratio | Time    | Total ratio | Total Time
1       | S |      3,877 |           1,099 | 71.65 | 3.90e-4 |          1,092 |  0.64 | 5.30e-4 |       71.84 | 9.20e-4
        | E |      6,130 |           1,689 | 72.45 |         |          1,681 |  0.47 |         |       72.56 |
5       | S |     33,053 |           8,959 | 72.90 | 5.00e+0 |          8,912 |  0.52 | 2.70e+1 |       73.04 | 3.20e+1
        | E |     48,606 |          12,991 | 73.27 |         |         12,943 |  0.37 |         |       73.37 |
10      | S |    102,058 |          26,732 | 73.81 | 8.00e+1 |         26,590 |  0.53 | 2.57e+2 |       73.95 | 3.37e+2
        | E |    145,036 |          37,686 | 74.02 |         |         37,543 |  0.38 |         |       74.11 |
20      | S |    348,518 |          92,318 | 73.51 | 1.46e+3 |         91,803 |  0.56 | 2.25e+3 |       73.66 | 3.71e+3
        | E |    482,356 |         128,720 | 73.31 |         |        128,182 |  0.42 |         |       73.43 |
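The Total time column likewise appears to be the sum of the compression and clustering times; again this is inferred from the reported figures rather than an explicitly stated convention. For example, for 1/ta = 1 in Table V:

\[
3.90\times 10^{-4}\,\mathrm{s} \;+\; 5.30\times 10^{-4}\,\mathrm{s} \;=\; 9.20\times 10^{-4}\,\mathrm{s}.
\]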

References
[1] J.R. Büchi. On a decision method in restricted second order arithmetic. In E. Nagel et al., editors, Proceedings of the International Congress on Logic, Methodology and Philosophy of Science, pages 1–11, Stanford, CA, 1960. Stanford University Press.
[2] C.G. Cassandras and S. Lafortune. Introduction to Discrete Event Systems. Kluwer Academic Publishers, second edition, 1999.
[3] H.N. Gabow. Path-based depth-first search for strong and biconnected components. Information Processing Letters, 74(3-4):107–114, May 2000.
[4] Silicon Graphics. Standard Template Library Programmer's Guide. http://www.sgi.com/tech/stl, 1993–2004.
[5] G.P. Hong and T.G. Kim. A Framework for Verifying Discrete Event Models within a DEVS-based System Development Methodology. Transactions of The Society for Computer Simulation, 13:19–34, 1996.
[6] J.E. Hopcroft, R. Motwani, and J.D. Ullman. Introduction to Automata Theory, Languages, and Computation. Addison Wesley, second edition, 2000.
[7] M.H. Hwang. Identifying equivalence of DEVSs: Language approach. In A. Bruzzone and M. Itmi, editors, Proceedings of the 2003 Summer Computer Simulation Conference, pages 319–324, Montreal, Canada, 2003. SCS.
[8] M.H. Hwang. Generating Behavioral Model of SP-DEVS Networks. In Proceedings of the 2005 DEVS Integrative M&S Symposium. SCS, 2005. Accepted.
[9] M.H. Hwang and S.K. Cho. Timed Analysis of Schedule Preserved DEVS. In A.G. Bruzzone and E. Williams, editors, 2004 Summer Computer Simulation Conference, pages 173–178, San Jose, CA, 2004. SCS.
[10] M.H. Hwang and F. Lin. State Minimization of SP-DEVS. In T.G. Kim, editor, Lecture Notes in Computer Science: AI, Simulation, and Planning, Jeju Island, Korea, 2004. SCS.
[11] T.G. Kim, S.M. Cho, and W.B. Lee. Framework for systems development: Unified specification for logical analysis, performance evaluation and implementation. In H.S. Sarjoughian and F.E. Cellier, editors, Discrete Event Modeling and Simulation Technologies, chapter 8, pages 131–166. Springer-Verlag, New York, 2001.
[12] Microsoft. Visual C++ .NET. http://msdn.microsoft.com/visualc/, 2002.
[13] R. Sedgewick. Algorithms in C++, Part 5: Graph Algorithms. Addison Wesley, Boston, third edition, 2002.
[14] B. Stroustrup. The C++ Programming Language. Addison Wesley, New York, third edition, 2002.
[15] R.E. Tarjan. Depth-first search and linear graph algorithms. SIAM J. Computing, 1(2):146–160, 1972.
[16] W. Thomas. Automata on Infinite Objects, volume B of Handbook of Theoretical Computer Science, chapter 4, pages 133–191. Elsevier Science Publishers B.V., 1990.
[17] B.P. Zeigler. Multifacetted Modeling and Discrete Event Simulation. Academic Press, London, Orlando, first edition, 1984.
[18] B.P. Zeigler. Hierarchical, modular discrete-event modelling in an object-oriented environment. SIMULATION, 49(5):219–230, 1987.
[19] B.P. Zeigler, H. Praehofer, and T.G. Kim. Theory of Modelling and Simulation: Integrating Discrete Event and Continuous Complex Dynamic Systems. Academic Press, London, second edition, 2000.
