Exploratory Performance Evaluation using dynamic and parametric Petri nets

Robert Esser
Department of Computer Science
University of Adelaide
Adelaide, Australia
[email protected]

Jörn W. Janneck
Computer Engineering and Networks Laboratory (TIK)
Swiss Federal Institute of Technology (ETH)
Zürich, Switzerland
[email protected]

KEYWORDS: exploratory simulation, performance evaluation, Petri nets, parameterization, dynamic structures

ABSTRACT

An approach, called exploratory simulation, is presented that facilitates the automatic exploration of the configuration and design alternatives of a system. The approach builds upon a few extensions to an otherwise conventional high-level Petri net formalism. As an example, a generic and fully parametric simulation environment is created with this modeling language and subsequently used to automatically evaluate the performance of a simple system.

1 INTRODUCTION

Simulation, and in particular performance evaluation, is often used to support the decision process in situations where a choice has to be made between a number of design alternatives, collectively called the design space of the system. In many cases the design space can be very large, and setting up a new simulation model for each alternative is a cumbersome, time-intensive, and error-prone process. Instead, one would like to automate this procedure by providing a parametric model template and having the simulation system set up a run for each different combination of parameters, instantiating the template for each combination. This approach, which we will call exploratory simulation, builds on a set of modeling concepts including parametric components. These parameters may be very different in nature, ranging from simple scalars describing a delay or capacity constraint in some part of the model, to algorithms that describe alternative computations taking place inside the model, to complete 'active' sub-models that serve, e.g., as embedded strategies controlling the behavior of (parts of) the model. In this paper we show how a concept of parametric and dynamic Petri nets, developed in [6, 7] and implemented in the Moses Tool Suite developed in the Moses Project [1], can be used to construct parametric model components. Furthermore, we demonstrate how the same concepts can be employed to construct not only the model but also the simulation environment itself. Being able to express the actual test environment in the terms of the modeling language provides full control over the structure of exploratory simulation experiments, while reusing the same small set of basic constructs. This paper is structured as follows: first we give a short overview of our concept of dynamic and parametric Petri nets. Then we discuss the general structure of exploratory simulation in terms of the modeling constructs provided by this class of Petri net, and give an example of a test environment. This is then applied to the performance evaluation of a very simple example system. Finally, we discuss the results and give an outlook on further work in this area.

Figure 1: A parametric binary operator component.

2 DYNAMIC AND PARAMETRIC PETRI NETS

Figure 2: An iterative stream processor.

Petri nets [11, 13] are a notation for modeling concurrent systems which combines a solid formal underpinning with an intuitive visual appearance. They have been extended in various ways in order to make them more suitable for real-world modeling tasks, in particular by adding concepts such as time (e.g. [10, 2, 8, 12, 4]) and compositionality (e.g. [8, 9, 3, 4]). Our approach takes a high-level time Petri net formalism similar to the one defined in [4] and adds three features (see [6, 7] for a more detailed treatment):



- A concept of components that are connected to their environment using interfaces over which tokens may flow.^1

- Components can have parameters that can be bound to any type of object, including functions and components.

- Components are objects and can be treated as tokens residing within other components. While residing on a container place, they are connected via their interfaces to the container place's interfaces.

The first two concepts are illustrated by the component BinaryOperator(f) (see fig. 1). In this figure a component is shown with two input interfaces, in1 and in2 (triangular icons having a line close and parallel to one side), and one output interface, out (a triangular icon having an extra line at an apex). The component BinaryOperator(f) takes a parameter f, in this case a binary function. Tokens may be accepted by a component via input interfaces and placed on the place(s) (circular icons) connected to the particular input interface.

^1 This concept borrows heavily from the one already defined in [4].

Once each place has at least one token, the transition (rectangular icon) becomes enabled. As in many high-level Petri nets (e.g. [8, 4]), this binds the tokens to the variable names on the arcs (a and b), and finally the transition fires. Its action in turn binds the result of applying f to the input tokens to the variable c. This value is then sent via the output interface to all connected components or places. Figure 2 shows the component Iterator(init) with one parameter, init. This parameter is expected to be a component and is used as the initial marking of the container place CP1 (double-edged circular icon). Note that there are two ways in which arcs can be connected to a container place: either to its input or output interfaces, denoted by smaller versions of the input and output interface icons distributed along the perimeter of the container place icon, or to the place itself (like a standard place). For instance, an input to the Iterator's in-interface is directly fed to the in-interface of the container place. The effect of this connection is that a token arriving at the Iterator's in-interface will be sent to the correspondingly named input interface of all components (tokens) residing on the place, initially the value of the init parameter. This component can then act on the input token and produce outputs at its next and out interfaces. Tokens emitted from the container place's out interface are immediately sent out of the Iterator itself, but tokens emitted from the container place's next output interface (which are expected to be components) are sent to place P1. This activates transition T1, which upon firing exchanges the current contents of the container place with whatever component was emitted from the container place's next output interface. In other words, the embedded component computes its own replacement, which handles

Figure 3: The interface of iterable components.

Figure 4: An iterative adder.

the next input to the Iterator, and so on. To signal the completion of this step, transition T1 also produces a token at the Ack output interface. This process relies on the assumption that the actual parameter of Iterator, as well as all subsequent components residing on the container place, have a set of appropriately named input and output interfaces (their signature), and that these interfaces behave as described above, i.e. that they conform to a protocol. The syntactic part of this protocol can be expressed as in the component Iterable() (see Fig. 3), which shows the interfaces an embedded component must provide. Suppose, for example, we are dealing with sequences of numbers, and we would like to define a component that iteratively adds all numbers in the stream and outputs the current sum at each step. This can be achieved by instantiating the Iterator component with its init parameter bound to an instance of the SumIterable(s) component depicted in Fig. 4. First note that the component provides the Iterable signature of Fig. 3. It has one parameter, s, which holds the current sum of the adding process (similar to accumulator variables in functional programming). The behavior of this component is very simple: when a token k enters through its In interface, the transition becomes enabled and upon firing does two things: (a) it computes the sum of s (the 'current sum') and k and writes this to its Out output interface; (b) it instantiates a new SumIterable component, with its current sum set to s + k, and writes this to its Next output interface. Since this happens while the SumIterable component resides inside the container place of an Iterator component, the new SumIterable instance is temporarily held as a token on place P1 of the Iterator. The 'old' component residing as a token on the container place is replaced with the new SumIterable component when the Iterator transition T1 fires. By setting the initial current sum to 0, the complete stream adder can be instantiated by the expression new Iterator(new SumIterable(0)). This example illustrates our main extension to standard high-level Petri net formalisms: the introduction of places that can have interfaces (container places), and arcs which can connect to these interfaces. If a token is a component, its input and output interfaces are matched (by name correspondence) to the input and output interfaces of the container place it resides on. Tokens flowing into the input interfaces of the container place are sent to the inputs of the contained components, while tokens flowing out of contained components are sent out of the corresponding output interfaces of the container place.^2 In general, the dynamic protocol assumptions of components should be made as precise as possible; in the following, however, for the sake of brevity we will make do with an informal 'behavioral' description similar to that given above.
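The two components of this section can be mimicked in ordinary sequential code. The following Python sketch is purely our own illustration, not the Moses formalism: the class and method names (put, step) are assumptions, interfaces become token lists and return values, and the firing rule becomes an explicit loop.

```python
# Sketch of BinaryOperator(f): a component whose behavior is fixed by a
# function parameter f. Input interfaces are modeled as token lists.
class BinaryOperator:
    def __init__(self, f):
        self.f = f                      # parameter: any binary function
        self.in1, self.in2 = [], []     # places behind interfaces in1, in2
        self.out = []                   # tokens emitted via interface out

    def put(self, interface, token):
        getattr(self, interface).append(token)
        # the transition is enabled once each input place holds a token
        while self.in1 and self.in2:
            a, b = self.in1.pop(0), self.in2.pop(0)
            self.out.append(self.f(a, b))   # action: c = f(a, b)

# Sketch of the self-replacement idiom: the embedded component computes
# both an output token and its own successor component ("next").
class SumIterable:
    def __init__(self, s):
        self.s = s                      # accumulator parameter

    def step(self, k):                  # -> (token for Out, component for Next)
        return self.s + k, SumIterable(self.s + k)

class Iterator:
    def __init__(self, init):
        self.current = init             # token on container place CP1

    def put(self, k):
        out, nxt = self.current.step(k)  # contained component "fires"
        self.current = nxt               # T1 swaps in the replacement
        return out                       # token leaving the out interface

adder = BinaryOperator(lambda a, b: a + b)
adder.put("in1", 3)
adder.put("in2", 4)                     # adder.out is now [7]

it = Iterator(SumIterable(0))           # new Iterator(new SumIterable(0))
sums = [it.put(k) for k in [1, 2, 3, 4]]
# sums == [1, 3, 6, 10], the running sums of the stream
```

The essential point the sketch preserves is that both the function f and the initial component are parameters, and that the Iterator never inspects its contents beyond the agreed signature.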

3 EXPLORATORY SIMULATION

In this section we show how the above concepts can be used to construct highly versatile simulation environments that automate the design space exploration process for parametric systems. Before proceeding, we briefly describe the general structure of exploratory simulation.

3.1 The structure of exploratory simulation

Assume that we have a system S with n parameters Pi, each of which takes values from a set of possible values Vi. A concrete instance of this system can be described as

    S(p1, ..., pn)   with   pi ∈ Vi

The design space therefore is some subset of V1 × ... × Vn, though for simplicity we will assume that it is the full cartesian product. In many cases (in particular when real-valued parameters are involved) the design space will be infinite.

^2 Tokens leaving a place output interface must be sent to a place in the surrounding environment; therefore, arcs leaving container place output interfaces must be connected, directly or indirectly, to places.
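For a finite rectangular design space, the system instances S(p1, ..., pn) correspond exactly to the tuples of the cartesian product. A small, purely illustrative Python sketch (the value sets below are assumed example data, not from the paper):

```python
from itertools import product

# Per-parameter value sets V1, V2 (assumed example values).
V1 = [0.1, 0.2, 0.3]            # e.g. a delay parameter
V2 = ["fifo", "lifo"]           # parameters need not be numeric
design_space = list(product(V1, V2))    # V1 x V2
# 3 * 2 = 6 parameter tuples, one system instance S(p1, p2) per tuple
```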

Figure 5: The test run signature.

As a decision support in systems design, simulation is used to evaluate instances of this design space with respect to a set of criteria. This is achieved by setting up a system instance, running it once or, if the system is not deterministic, a number of times, and creating and aggregating the relevant data from the run(s). Once this is completed, a new set of parameters is created and the corresponding system instance is set up. When automating this process, two activities should be distinguished. First, the actual experiment on a single element of the design space: setting it up, running it a number of times, providing the appropriate input data, and collecting and aggregating the output (the test run). These activities depend on the concrete system to be evaluated as well as on the system characteristics of interest (speed, resource needs, costs, etc.). Second, we need a mechanism that instantiates a test run for each element of the design space of interest, collects the output, and associates it with the pi used to create the system (the test environment). In contrast to the test run, this mechanism is generic in the sense that it can be designed to be independent of the actual system, its parameters, and their sets of possible values. This mechanism embodies the strategy used for design space exploration. The next section demonstrates how this is achieved using the concepts introduced in section 2.

3.2 A generic test environment

Before we create such a test environment, we will first define the signature of an individual test run. As can be seen in Fig. 5, we assume one input and one output interface, with the following implied protocol: the test run starts only after it has been sent a token at its start input interface; it completes by producing exactly one token, holding the result of the run, at its result output interface.^3 Depending on the goal of the experiment, this result can be anything from a simple number, in the case of a simple performance analysis (such as the one we will look at below), to a complex data structure containing information on many aspects of system behavior.

^3 We assume that nothing relevant takes place in the test run component after emission of the result token.

Figure 6: A generic test environment.

Abstracting a single test run like this, a generic test environment could look like the one in Fig. 6. Note that it has four parameters: init, step, cond, and runFactory. The first parameter, init, is a list of length n (the number of parameters of the model) defining the initial value of each parameter. step is a function that, given a list of values, computes the next list of parameter values. cond is a predicate that determines whether a given list of values lies in the design space. For the traversal of a rectangular design space, this means that the set of possible values for the i-th parameter is the smallest set Vi such that

    init_i ∈ Vi  ∧  ((cond(v) ∧ v_i ∈ Vi ∧ cond(step(v))) ⇒ step(v)_i ∈ Vi)

Assuming we have enumerated an element v = (v_1, ..., v_n) with v_i ∈ Vi, we need to construct the proper test run component for this parameter tuple. As we want the test environment to be generic, the actual creation of the test run is factored out; it is handled by the runFactory function, so that runFactory(v) is a test run component for the parameter tuple v. Although in the example in section 4 we use this mechanism only for stepping through ranges of scalars, it also facilitates much more general enumeration, e.g. over collections of functions or even components. In addition, this mechanism admits not only rectangular design spaces but design spaces of any shape; moreover, techniques for guided traversal, such as gradient descent or simulated annealing, can also be supported.

The operation of the TestBed component (Fig. 6) is straightforward: starting with a list of initial values on place P1, transition T1 fires, placing the list on place P2. When transition T2 fires, it executes the runFactory function, instantiating a new system and placing it on container place CP1; in addition, the list of values is stored on place P3, and place P4 receives a token enabling transition T3. When transition T3 fires, a token is sent via the start input interface of container place CP1, starting the test system. Once the test system has completed execution, a result token is generated and sent to place P5. Transition T4 is then enabled; upon firing it generates a token representing the result of the test run and sends it via the out output interface, and it places a token on place P1 holding either the next list of values, computed using the step and cond functions, or null, indicating that the test is complete. If a null token has been generated, transition T5 fires and the system deadlocks.

Note the way this model builds on the ability to instantiate new parametric components at runtime, connect them with their environment, and remove them when they are no longer needed; a dynamic and parametric concept of components is essential for any mechanism implementing this technique. The key to the generality of this procedure lies in its parameters: not only do its step, init, and cond parameters make it independent of the kinds of parameters the evaluated system depends on, it also employs a factory pattern (similar to the like-named design pattern, see [5]) by encapsulating the creation of the actual test run in the runFactory parameter, a function computing components.
This is one of the places where we draw on the flexible parameterization requirement described above; note, in particular, that most of these parameters are functions. Mechanisms that make use of functions and components as first-class objects of the modeling language facilitate the definition of much more general and reusable components.

4 A SMALL EXAMPLE

We now use the test environment constructed above to evaluate a small example system. First we introduce the system itself; then the system characteristic of interest is defined and an appropriate test run is constructed along the lines described in section 3.2. Finally, we conduct the experiment and present the results.

Figure 7: A system depending on two parameters.

Figure 8: A simple test run for the system in Fig. 7.

4.1 The example system and design goal

The example system shown in fig. 7 models two time-dependent synchronized activities, such as are used to model manufacturing systems. It has two parameters, d1 and d2,^4 which are delays. It describes the processing of an incoming token by two parallel activities (called A and B for brevity), with places and transitions named APx/ATx and BPx/BTx, respectively, coordinated by control transitions and places CTx/CPx situated in the center of the model. First, directly after the arrival of a token, activity A has AT1 firing with a delay of d1, producing 2 tokens at each firing, while B has BT1 firing with a delay of d2, producing 5 tokens at each firing. After a delay of 1, CT1 fires and then CT2, activating the second stage in the two activities: this time, AT2 and BT2 remove, one by one, the tokens created in the previous step, with delays of d2 and d1, respectively. Once all tokens are removed, the activities end with the firing of AT3 and BT3; when both activities have ended, CT3 is activated, fires, and produces an output token. Assuming that possible values for d1 and d2 are in the range 0.1...0.9, our task is to employ simulation to identify a combination of values for the delays such that the overall processing time of an incoming token is as short as possible.^5

^4 For the sake of clarity, this example deals only with the special case of a system that depends only on scalar parameters. In general, parameter values may be much more complex, comprising functions and active components.

^5 A system of this complexity might also be amenable to formal analysis. However, the procedure we present is independent of the actual complexity of the system, ignoring the fact that more complex systems tend to take longer to simulate.

4.2 Constructing a test run

When designing the test run, it is important to note that the system may not be completely deterministic for all values of d1 and d2. In particular, if either exactly divides 1, a conflict can occur between CT2 and AT1 or AT2. A test run must take this into account by conducting multiple runs of the system to be simulated.

The test run component in Fig. 8 performs 10 runs and returns the average duration. When a token arrives via the start input interface onto place P1, transition T1 is enabled. When transition T1 fires, it places 10 tokens onto place P2, in addition to placing a token holding the current time on place P3. Transition T2 sends a token to the example system residing on container place CP1. When a token is received from the example system on place P4, transition T3 is enabled and fires, placing one token each on places P5 and P6. This cycle continues until all tokens on place P2 are consumed, at which time place P6 holds 10 tokens, enabling transition T4. The firing of transition T4 produces a token holding the average duration of the example system for the parameters d1 and d2. Note that the model exploits the fact that the example system is stateless, i.e. previous runs do not influence subsequent runs; hence it is possible to instantiate a parametric subcomponent inside the container place as an initial token.^6

^6 If there is interference between runs, the example system should be instantiated before every run, using an approach similar to that used in the TestBed component (Fig. 6).

4.3 Experiment and results

In principle, the experiments could now be run. However, providing the proper parameters to the TestBed component (initial value, step function, and termination condition for each parameter, as well as the test run factory) is somewhat cumbersome, and one may wish to process the output of the TestBed component further (e.g. writing it into a database or a suitable exchange format, automatically identifying a minimum, etc.). Fig. 9 shows a component with six parameters (start and end values, plus an increment value, for each of the two delays) that generates the proper test environment from this information. Incidentally, it shows how function parameters are passed to components, and how they are put into data structures just like any other object. It also includes a transition that processes and formats the output.

Figure 9: The parametric experiment.

Figure 10: Simulation parameters dialog.

Starting this component brings up the dialog in Fig. 10. The first simulation experiment evaluated the system at 9 × 9 points in the design space, in order to gain a general overview of its structure. The result is shown in Fig. 12a. Two further experiments narrowed in on the area around the minimum. Note that the result of the second experiment considerably improves upon the minimum found in the first; finally, the third experiment looks closer at the area of minimum delay. See Fig. 11 for details on the experiment parameters and the minima obtained, and Fig. 12 for plots of the three result sets.

Figure 11: Experiment parameters and minima.

         d1            d2            step    test runs   minimum delay   at (d1, d2)
Exp 1    0.1 - 0.9     0.1 - 0.9     0.1     81          3.4             (0.4, 0.6)
Exp 2    0.3 - 0.6     0.5 - 0.7     0.01    600         3.04            (0.34 - 0.4, 0.51)
Exp 3    0.33 - 0.42   0.50 - 0.53   0.001   2700        3.004           (0.334 - 0.4, 0.501)
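The six-parameter wrapper of Fig. 9 essentially derives init, step, and cond from start/end/increment values. A sketch under assumed names (the grid function is our own, not the Moses API) that reproduces the 9 × 9 = 81 parameter tuples of the first experiment:

```python
# Build init/step/cond for a rectangular (d1, d2) grid from
# start/end/increment values (rounding guards against float drift).
def grid(start1, end1, inc1, start2, end2, inc2):
    init = (start1, start2)

    def step(v):
        d1, d2 = v
        d1 = round(d1 + inc1, 10)
        if d1 > end1 + 1e-9:                    # row exhausted: next d2
            d1, d2 = start1, round(d2 + inc2, 10)
        return (d1, d2)

    def cond(v):
        return v[1] <= end2 + 1e-9              # still inside the grid?
    return init, step, cond

init, step, cond = grid(0.1, 0.9, 0.1, 0.1, 0.9, 0.1)
count, v = 0, init
while cond(v):
    count += 1
    v = step(v)
# count == 81, matching the test runs of Exp 1 in Fig. 11
```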

5 CONCLUSION

In this paper we demonstrated how a small set of basic modeling constructs, extensions to an otherwise conventional high-level Petri net formalism, can be used to easily set up a sophisticated simulation environment that facilitates the automated exploration of a system's design space. We presented a simple exploratory simulator and applied it to the performance analysis of a small example system. Design space regions of decreasing size were simulated with increasing resolution, homing in on the region of optimal performance. The creation of the models and the actual simulations were carried out using the Moses Tool Suite, developed in the Moses Project at the ETH Zurich [1]. In the construction of the simulation environment we used the extensions proposed in [7]: function parameters, parametric model components treated as first-class objects of the modeling language, and dynamic model structures. Being able to formulate the simulation environment in terms of the modeling formalism itself means that users can freely configure experiments according to their needs and define libraries of custom simulation environments. It also reduces the overall conceptual complexity of the system itself and thus contributes to the orthogonality between the simulation environment and the modeled system. However, the actual test environment, while highly parameterizable and quite appropriate in a large number of situations, remains rather crude; its functionality could be improved in a number of ways, e.g. as follows:



- The enumeration and choice of admissible parameter combinations could be improved to allow for non-rectangular design spaces as well as design spaces 'with holes'.

- Making the enumeration and choice of new parameter combinations depend on previous results leads to general optimization algorithms, from simple hill climbing to genetic algorithms; the test run then becomes an evaluation or 'fitness' function of the parameter combination. For complex systems, simulation can be a time-consuming technique of evaluation, and usually makes the trade-off between precision and time a difficult one, especially when, as in the 'blind' exhaustive technique presented here, the number of simulations is exponential in the number of parameters.

- In order to make the decision support provided by exploratory simulation more readily accessible to users, it might be useful to furnish it with a user interface allowing easier navigation of the state space and presenting results in a more compact and understandable manner.
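To illustrate the second point, a guided traversal can reuse the test run as a fitness function. The following is a minimal, assumed hill-climbing sketch; all names are ours and the toy fitness function stands in for an actual simulation run:

```python
# Simple hill climbing over a discrete (d1, d2) grid, treating the test
# run as a fitness function to be minimized. Purely illustrative.
def hill_climb(fitness, start, step=0.1, lo=0.1, hi=0.9):
    def neighbors(p):
        d1, d2 = p
        for nd1, nd2 in [(d1 - step, d2), (d1 + step, d2),
                         (d1, d2 - step), (d1, d2 + step)]:
            if lo - 1e-9 <= nd1 <= hi + 1e-9 and lo - 1e-9 <= nd2 <= hi + 1e-9:
                yield (round(nd1, 10), round(nd2, 10))

    current = start
    while True:
        best = min(neighbors(current), key=fitness, default=current)
        if fitness(best) >= fitness(current):
            return current              # local minimum reached
        current = best

# Toy fitness with its minimum at (0.4, 0.6), standing in for a test run.
f = lambda p: (p[0] - 0.4) ** 2 + (p[1] - 0.6) ** 2
opt = hill_climb(f, (0.1, 0.1))
# opt == (0.4, 0.6)
```

Unlike the exhaustive traversal, such a strategy evaluates only a path through the design space, at the risk of stopping in a local minimum.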

Finally, it would be useful to have a range of such extensions predefined. It is important to note that all extensions can be created, enhanced, or modified by the user without any concepts beyond the usual repertoire of modeling techniques. We believe that this constitutes an important feature of flexible modeling and simulation environments.

References

[1] The Moses Project. Computer Engineering and Networks Laboratory, ETH Zurich (http://www.tik.ee.ethz.ch/moses).

[2] M. Ajmone Marsan, G. Balbo, G. Conte, S. Donatelli, and G. Franceschinis. Modelling with Generalized Stochastic Petri Nets. Wiley Series in Parallel Computing. Wiley, 1995.

[3] D. Buchs and N. Guelfi. CO-OPN: A concurrent object-oriented Petri net approach. In Proceedings of the 12th International Conference on the Application and Theory of Petri Nets, 1991.

[4] Robert Esser. An Object Oriented Petri Net Approach to Embedded System Design. PhD thesis, ETH Zurich, 1996.

[5] Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1995.

[6] Jorn W. Janneck. Compositional Petri net structures. Technical Report 60, Computer Engineering and Networks Laboratory, ETH Zurich, November 1998.

[7] Jorn W. Janneck and Martin Naedele. Modeling hierarchical and recursive structures using parametric Petri nets. In Adrian Tentner, editor, Proceedings of the HPC '99, pages 445-452. Society for Computer Simulation, 1999.

[8] Kurt Jensen. Coloured Petri Nets: Basic Concepts, Analysis Methods and Practical Use, volume 1: Basic Concepts. EATCS Monographs in Computer Science. Springer-Verlag, 1992.

[9] Charles A. Lakos. From coloured Petri nets to object Petri nets. In Proceedings of the 15th International Conference on Applications and Theory of Petri Nets, Lecture Notes in Computer Science, 1995.

[10] P. Merlin and D. J. Farber. Recoverability of communication protocols. IEEE Transactions on Communications, 24(9), September 1976.

[11] Tadao Murata. Petri nets: Properties, analysis, and applications. Proceedings of the IEEE, 77(4):541-580, April 1989.

[12] C. Ramchandani. Analysis of asynchronous concurrent systems by timed Petri nets. Technical Report 120, Project MAC, Massachusetts Institute of Technology, February 1974.

[13] Wolfgang Reisig. A Primer in Petri Net Design. Springer-Verlag, 1992.

Figure 12: Simulation results. [Plots of the delay P over (d1, d2) for experiments a), b), and c); the plot data itself is not recoverable from the source.]

Economics Chair and the hospitality of Columbia Business School. ..... It is easy to see that the principal's program is in fact separable in the incentive schemes:.