A Method for Metric-based Architecture Quality Evaluation
Hao Deng, Camilo Mercado, Mia Persson, Mikael Svahnberg, Robert Feldt
Blekinge Institute of Technology, Sweden
{hao.deng.m, camilo.mercado}@gmail.com, {mip, mikael.svahnberg, robert.feldt}@bth.se

Abstract
It is hard to build quality into software late in development, but good architecture evaluation techniques can enhance quality and mitigate risks at a relatively low cost. Metric-based architecture quality evaluation is a quantitative technique that uses architecture metrics to reflect and predict software quality. It usually relies on a suitable quality model, a set of related architecture metrics, and a technique for analyzing the metric values. Only a few articles in the literature discuss metric-based architecture quality evaluation and, to the best of our knowledge, no quality models for architecture quality evaluation have been proposed before. In this paper, we propose a new method, MAQE, to be used during the development of a new architecture and software system. MAQE includes a generic two-level quality model and weighting formulas for analysis. We also define a set of related object-oriented architecture metrics and build a maintainability quality model.

1. Introduction

An appropriate software architecture reflects not only the functional requirements but also the quality requirements [1]. Quality cannot be built into systems late in projects [2], but good architecture evaluation techniques can enhance quality and mitigate risks at a relatively low cost. According to Bosch [3], evaluation techniques fall into three categories: qualitative assessment, quantitative assessment and assessment of theoretical maximum. At the current level of practice, we have little means to predict theoretical maximum values for an architecture [3]. Qualitative assessment, on the other hand, often relies too heavily on experts' experience. Quantitative assessment involves making quantitative statements about the quality of the architecture [3]. With good measurement techniques, it can give stable and repeatable results.

Metric-based architecture quality evaluation is a quantitative evaluation method which collects architecture metrics from related architecture artifacts and uses the metric values as quality indicators to evaluate software quality. Most existing studies on metric-based architecture evaluation take a reverse engineering perspective: they reconstruct the architecture from the existing source code and perform the evaluation on the reconstructed architecture, for example in [4,5,6,7]. The disadvantage is that potential defects can only be identified at a late stage, when it is costly to correct them. From the forward engineering perspective, as far as we know, the existing evaluations are based on separate metrics without a unifying quality model to support them, for example in [9,10]. Architecture quality evaluation needs a method that can identify potential problems at the architecture stage, provide an overall evaluation based on multiple architectural views, give repeatable and relatively consistent results, and be performed efficiently. In this paper, we propose such a metric-based architecture quality evaluation method (MAQE), which includes a generic quality model and weighting formulas for analysis. In section 2, we introduce the new method. In section 3, we build a maintainability quality model with a set of object-oriented architecture metrics. Section 4 shows a case study on an open-source project. Finally, we give conclusions and future work in section 5.

2. Metric-based architecture quality evaluation method (MAQE)

In this section, we propose the metric-based architecture quality evaluation method, MAQE. Firstly, we discuss the limitations of traditional quality models for architecture quality evaluation and propose a generic quality model. Secondly, we define some weighting formulas to support the analysis. Finally, based on the new quality model and weighting formulas, we describe the new method.

2.1. A generic two-level quality model

Quality can be seen as a set of characteristics and the relationships among them, but every software product can have a unique set of characteristics [8]. Software quality models are usually built to depict these relationships. Previous software quality models can be summarized as traditional three-tier quality models, which include Quality Attributes (QA), Quality Characteristics (QC) and metrics, as Figure 1 shows.

Figure 1. A traditional quality model [8]

The traditional quality model can be abstracted into the UML meta-model shown in Figure 2.

Figure 2. The meta-model of traditional quality models

When applied to architecture quality evaluation, this model has some limitations. Firstly, most traditional quality models are built for evaluating the final product. At the architecture level, when no or only a little source code has yet been written, some of the QCs are not relevant or have different meanings, and some metrics are not available. Secondly, software architecture is composed of elements on multiple levels and the relationships between them [1]. A QA may have different focuses at different levels and can be decomposed into different sets of QCs; as a metaphor, the maintainability of a car and that of its engine involve different considerations. The quality of a software architecture depends not only on its overall structure, but also on the quality of each individual architectural element, so it is more reasonable to measure the quality of the architecture based on the qualities of its components. Finally, architecture metrics are collected from different levels of architecture elements. Without a hierarchy or model for how the levels relate, it is difficult to analyze metrics from different levels together and obtain an overall evaluation of a QA. Therefore, a new quality model for architecture quality evaluation is needed that addresses attributes at different levels.

In addition to the limitations stated above, the traditional quality model also needs to be improved for ease of evaluation. A metric is an indicator of a QC. Some metrics are direct indicators while others are reverse ones: direct means that a high metric value is good, whereas reverse means that a low metric value is good. Usually a weight can be assigned to a metric to indicate its degree of impact on a QC; the weight value is useful for later analysis. In our new model, we use signs together with the weights to indicate whether an indicator is direct or reverse. Based on these points, we extend the traditional quality meta-model into two levels of QAs, i.e. a system QA that depends on component QAs. Figure 3 shows the meta-model of the two-level quality model.

Figure 3. The meta-model of the two-level quality model

For architecture quality evaluation, we only extend the model down to the component level, because a deeper extension would involve low-level design elements which are not the key focus of software architecture. To better model architecture quality, we introduce a graphical notation to enrich the representation of the model. Figure 4 shows the notation.

Figure 4. The graphical notation for the two-level quality model
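To make the meta-model of Figure 3 concrete, the following is a minimal sketch of the two-level structure expressed as plain data types. The type and field names are ours, chosen only to mirror the meta-model (system QA decomposed into QCs, QCs indicated by weighted direct or reverse metrics, and the system QA additionally depending on component QAs); they are an illustration, not part of the method itself.

```python
# A minimal, illustrative encoding of the two-level quality meta-model
# (Figure 3). Names and types are assumptions chosen to mirror the figure.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Metric:
    name: str
    weight: float
    direct: bool          # True: high value is good; False: reverse indicator

@dataclass
class QualityCharacteristic:
    name: str
    metrics: List[Metric] = field(default_factory=list)

@dataclass
class ComponentQA:
    component: str
    characteristics: List[QualityCharacteristic] = field(default_factory=list)

@dataclass
class SystemQA:
    name: str
    characteristics: List[QualityCharacteristic] = field(default_factory=list)
    component_qas: List[ComponentQA] = field(default_factory=list)  # system QA depends on component QAs
```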

Figure 5 shows the generic two-level quality model. Evaluators can use the model as a framework to decompose a QA into QCs and to define metrics at different levels.

Figure 5. The generic two-level quality model (the system QA is decomposed into QCs QC1…QCn, each indicated by weighted direct or reverse metrics m1…mn; the component QA is decomposed analogously into QC1'…QCn' with metrics m1'…mn')

2.2. Weighting formulas to help analysis

After metric values are collected, we analyze them and relate them to quality characteristics and quality attributes. However, there are some challenges:
• The metrics are distributed both at system level and at component level.
• Some metrics are direct indicators and others are reverse ones.
• The relationship between metrics and QCs is many-to-many.
• The system QA depends on the component QAs.
Some weighting formulas can be developed to help the analysis.

1. Formula (1) integrates the metrics of both the system QA and the component QAs into the corresponding QCs. Before applying the formula, each metric value is normalized to the range 0 to 1 using min-max normalization. For each QC, suppose there are i direct metrics and j reverse metrics. Each metric is assigned a weight according to its impact on the QC:

QC = ( Σ_{a=1..i} w_a · m_a + Σ_{b=1..j} w_b · (1 − m_b) ) / ( Σ_{a=1..i} w_a + Σ_{b=1..j} w_b )   (1)

where m_a and m_b are the normalized values of the direct and reverse metrics, w_a and w_b are their weights, i is the number of direct metrics, and j is the number of reverse metrics.

2. Formula (2) integrates the component QCs into the component QA. Each QC is given a weight according to its impact on the QA. For each component, we calculate its QA value:

QA_com = Σ_{k=1..n} w_k · QC_k / Σ_{k=1..n} w_k   (2)

where n is the number of QCs of the component QA and w_k is the weight of QC_k.

3. Formula (3) integrates the system QCs and the component QAs into the system QA. We consider the component QA as one of the dimensions of the system QA, like each QC, and assign each dimension a weight. Each component can also be assigned a weight according to its importance to the QA:

QA_sys = ( Σ_{k=1..K} w_k · QC_k + w_com · ( Σ_{c=1..N} v_c · QA_com,c / Σ_{c=1..N} v_c ) ) / ( Σ_{k=1..K} w_k + w_com )   (3)

where K is the number of QCs of the system QA, N is the number of components of the architecture, w_k and w_com are the dimension weights, and v_c is the weight of component c.
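To make the aggregation concrete, the following is a minimal sketch of how formulas (1)–(3) as written above could be implemented. It is an illustration of the scheme, not the authors' tool; the example values at the bottom are hypothetical.

```python
# A minimal, illustrative implementation of the weighting scheme:
# min-max normalization, formula (1) for a QC, and the weighted
# aggregation used by formulas (2) and (3).

def normalize(values):
    """Min-max normalize raw metric values (one per architecture element) to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:                          # all elements equal: score them neutrally
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def qc_value(metrics):
    """Formula (1): metrics is a list of (normalized_value, weight, direction),
    where direction is '+' for direct and '-' for reverse indicators."""
    total = sum(w for _, w, _ in metrics)
    return sum(w * (m if d == '+' else 1.0 - m) for m, w, d in metrics) / total

def weighted_average(pairs):
    """Formulas (2) and (3): pairs is a list of (value, weight) dimensions.
    For the system QA, the averaged component QA is passed in as one dimension."""
    total = sum(w for _, w in pairs)
    return sum(v * w for v, w in pairs) / total

# Hypothetical example: one QC fed by a reverse metric measured on four elements.
raw = [2, 3, 1, 1]                         # e.g. a coupling count per component
norm = normalize(raw)                      # [0.5, 1.0, 0.0, 0.0]
print(qc_value([(norm[0], 1, '-')]))       # 0.5 for the first element
print(weighted_average([(0.8, 1), (0.6, 1), (0.7, 2)]))  # 0.7
```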

2.3. The description of MAQE

MAQE has the following five steps, which we detail below:
1. Decide the quality attributes to be evaluated.
2. Choose relevant architecture artifacts.
3. Build a quality model.
4. Use the weighting formulas to evaluate the quality attributes.
5. Analyze the results and draw conclusions.

Step 1: Decide the quality attributes to be evaluated, for example maintainability, reusability, etc. Choose the attributes based on the requirements.
Step 2: Choose relevant architecture artifacts. Architecture artifacts vary a lot from project to project. Evaluators should choose the architecture artifacts available in their projects to collect metrics from.
Step 3: Build a quality model for a specific QA. The two-level quality model described in section 2.1 is a generic framework, and it is up to the evaluators to specify the QCs and metrics relevant to the project. Once a suitable quality model has been set up based on empirical experience, future projects can easily reuse it. In particular, to calibrate the weights, applying the metrics needs to be combined with interviews and surveys of developers' and managers' views.
Step 4: Use the weighting formulas to evaluate the quality attributes and obtain the results.
Step 5: Analyze the results and draw conclusions. The purposes of the analysis are to:
• Identify outliers (elements with extreme metric values that might indicate design or architecture "bad smells") for later adjustment. The outlier architecture elements should be given special attention.
• Provide a quantitative basis for comparison among different architecture candidates.

3. A maintainability quality model for object-oriented projects

The core feature of MAQE is the two-level quality model of section 2.1. It is a generic model, and it is up to evaluators to specify the QCs and metrics relevant to their projects. In this section, we illustrate how to instantiate the generic model and build a maintainability quality model by collecting a set of object-oriented architecture metrics from selected UML models. First, we discuss the selection of UML models. Then, we define metrics from them. Finally, we construct the maintainability quality model.

3.1. Selection of UML models for collecting metrics

A software architecture should have at least a static view, a dynamic view and an allocation view [1]. UML can be used as an architecture description language and fulfills these requirements. Table 1 shows the UML models we selected to collect architecture metrics.

Table 1. Selected UML models to collect metrics
View       | UML model
Static     | Class diagram
Dynamic    | Sequence diagram, component diagram
Allocation | Deployment diagram
Scenario   | Use case diagram

3.2. A set of object-oriented architecture metrics for maintainability

From the selected UML models, we can define a set of architecture metrics for building the maintainability quality model.

3.2.1. Number of scenarios per use case (NSPUC)
NSPUC counts the number of scenarios of a use case. If the number of scenarios of a use case is too large, the use case is possibly too complex and requires more effort to understand and maintain. A complex use case can be divided into sub-use cases which implement sub-goals of the use case.

3.2.2. Response for a scenario (RFSC)
A scenario contains a sequence of interactions between architecture elements such as components. The RFSC metric counts all the invocations of services in a scenario. If the number of service invocations in a scenario is too large, the scenario is possibly complex.

3.2.3. Number of components per use case (NCOMPUC)
The NCOMPUC metric counts the number of components used in all scenarios of a use case. The more components a use case involves, the more difficult it is to maintain and reuse. When the use case needs to change, the change might affect more components and require more effort to implement.

3.2.4. Number of use cases per component (NUCPCOM)
The responsibilities of components should be assigned according to their related services. The metric counts the number of use cases whose scenarios contain the component under consideration. If a component appears in many use cases, it possibly includes some unrelated services that lead to its appearance in many use cases; in other words, the component has low cohesion and is difficult to maintain.

3.2.5. Coupling between components (CBCOM)
The CBCOM metric counts the number of components involved in couplings with a given component, regardless of service direction. Given component X and component Y, the possible relations are:
• R0: No communication. Component X and component Y have no communication.
• R1: Service consumer. Component X consumes the services of component Y.
• R2: Service provider. Component X provides services to component Y.
Both R1 and R2 are considered coupling relations. A high value of this metric means many dependencies between components: changes to one component might affect other components as well, and the component is hard to maintain.

3.2.6. Degree of component couplings (DCOMC)
As mentioned for the CBCOM metric, there are two types of coupling between components. It is not enough to only count the couplings; we also need their coupling degrees to understand the impact on quality. We can use the following calculation to get the DCOMC value for component x:

DCOMC(x) = w1 · #R1(x) + w2 · #R2(x)

where #R1(x) is the number of service-consumer couplings of x and #R2(x) is the number of service-provider couplings of x. Service-provider coupling has a higher weight than service-consumer coupling (w2 > w1), because changing the interface of a provided service causes "ripple effects" and all of its consumer components need to change. The proper weights must be learned from past projects.

3.2.7. Number of classes per component (NCPC)
A set of classes is built to implement the services within a component. The metric counts the number of classes of a component. Since more classes within a component might mean more effort to understand and change it, this metric can be used to measure component maintainability from the size perspective.

3.2.8. Response for a service (RFSER)
Similar to the metric RFSC, we define response for a service. A service is an interface provided by a component for other components to consume. The metric counts the number of calls used in response to a service invocation. The larger the number of calls, the more complex the service is.

3.2.9. Coupling between nodes (CBN)
Nodes are the physical entities on which components or other artifacts are deployed. The metric counts the number of nodes coupled with a node. If two nodes communicate, we consider them coupled. The more nodes a node is coupled with, the more difficult it is to deploy, maintain and test.

3.2.10. Summary of OO architecture metrics
Table 2 summarizes all the above OO architecture metrics for maintainability.

Table 2. Summary of OO architecture metrics for maintainability
Metric   | Level     | Category   | Direction | Artifact(s)
NSPUC    | System    | Size       | Reverse   | Use case diagram, sequence diagram
RFSC     | System    | Complexity | Reverse   | Sequence diagram
NCOMPUC  | System    | Size       | Reverse   | Use case diagram, sequence diagram
NUCPCOM  | Component | Cohesion   | Reverse   | Use case diagram, sequence diagram
CBCOM    | Component | Coupling   | Reverse   | Component diagram
DCOMC    | Component | Coupling   | Reverse   | Component diagram
NCPC     | Component | Size       | Reverse   | Component diagram, class diagram
RFSER    | Component | Complexity | Reverse   | Sequence diagram
CBN      | System    | Coupling   | Reverse   | Deployment diagram
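To make the definitions concrete, the sketch below derives four of the metrics from two simple inputs: a map from use cases to the components appearing in their scenarios, and a list of consumer-to-provider service dependencies between components. It is an illustration rather than the authors' tooling; the example data and the DCOMC weights are hypothetical (the text only requires the provider weight to exceed the consumer weight and leaves the values to calibration).

```python
# A small, illustrative computation of NCOMPUC, NUCPCOM, CBCOM and DCOMC.
# The DCOMC weights W1/W2 are assumed values; only W2 > W1 is required.
from collections import defaultdict

W1, W2 = 1.0, 2.0  # assumed weights for consumer (R1) and provider (R2) couplings

def use_case_metrics(use_case_components):
    """use_case_components: dict {use case: set of components in its scenarios}."""
    ncompuc = {uc: len(comps) for uc, comps in use_case_components.items()}
    nucpcom = defaultdict(int)
    for comps in use_case_components.values():
        for c in comps:
            nucpcom[c] += 1
    return ncompuc, dict(nucpcom)

def coupling_metrics(dependencies):
    """dependencies: iterable of (consumer, provider) component-name pairs."""
    consumes = defaultdict(set)   # R1: components whose services x consumes
    provides = defaultdict(set)   # R2: components consuming x's services
    for consumer, provider in dependencies:
        consumes[consumer].add(provider)
        provides[provider].add(consumer)
    components = set(consumes) | set(provides)
    cbcom = {c: len(consumes[c] | provides[c]) for c in components}
    dcomc = {c: W1 * len(consumes[c]) + W2 * len(provides[c]) for c in components}
    return cbcom, dcomc

# Hypothetical example data (not taken from the case study in section 4).
ucs = {"Withdraw Money": {"WebUI", "AccountCtr", "Account"},
       "List Account":   {"WebUI", "AccountCtr", "Account"}}
deps = [("WebUI", "AccountCtr"), ("AccountCtr", "Account")]
print(use_case_metrics(ucs))     # NCOMPUC per use case, NUCPCOM per component
print(coupling_metrics(deps))    # CBCOM and DCOMC per component
```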

3.3. Building the maintainability quality model

With the identified OO architecture metrics, we can build the maintainability quality model. The IEEE standard [12] defines maintainability as: "Maintainability is the capability of the software product to be modified. Modifications may include corrections, improvements or adaptations of the software to changes in environment, and in requirements and functional specification." From the definition, we know that maintainability covers changing and bug-fixing issues. On the other hand, maintaining the software also depends on the engineers' understanding of the architecture and its components. Therefore, we can consider maintainability at the architecture level from the following perspectives:
• Ease of change.
• Ease of testing.
• Ease of understanding and learning.
Figure 6 shows the quality model.

Figure 6. A maintainability quality model (system-level maintainability decomposed into understandability, changeability and testability over the metrics NSPUC, NCOMPUC, RFSC and CBN; component-level maintainability decomposed into learnability, modularity and testability over the metrics NUCPCOM, CBCOM, DCOMC, NCPC and RFSER)

At system level, maintainability is decomposed into understandability, changeability and testability. Understandability implies how well users can understand the related architecture parts in order to do maintenance. Changeability implies how easily the software can be modified to implement changes; the software should be designed to avoid ripple effects. Testability means how easily the software can be tested; the software needs to be tested after each modification. At component level, maintainability is decomposed into learnability, modularity and testability. Learnability implies how well users can learn the interface and internal structure of the component in order to do maintenance. Modularity implies that the component should be divided into proper modules which interact with each other to provide its services. Testability implies how easily the component can be tested. When new metrics are defined, they can be plugged into the model. The proper weight of each metric in this model should be calibrated to different project contexts in future work.
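To connect Figure 6 to the formulas of section 2.2, a tool would encode the model as data: which metrics feed which QC, with what weight and direction. The excerpt below is one hypothetical encoding of the component level, in the shape expected by the qc_value() sketch of section 2.2; the metric-to-QC assignments and weights shown are illustrative only and would be calibrated per project, as noted above.

```python
# A hypothetical encoding (illustrative assignments and weights, not taken
# from the paper) of the component-level part of the maintainability model:
# each QC maps to (metric name, weight, direction) triples.
component_maintainability_model = {
    "Learnability": [("NCPC", 1, "-"), ("RFSER", 1, "-")],
    "Modularity":   [("NUCPCOM", 1, "-"), ("CBCOM", 1, "-"), ("DCOMC", 1, "-")],
    "Testability":  [("RFSER", 1, "-"), ("CBCOM", 1, "-")],
}
```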

4. A case study

In this section, we perform a case study evaluating the maintainability of a Java EE open-source application, Duke's bank [13]. Duke's bank is a Java Enterprise Edition application which offers online banking services to customers and account and customer management services to administrators. The application is well documented and small in size. We have reconstructed the architecture manually in UML from the tutorial. Due to page limitations, we only show part of the architecture views of the application in the following figures. Figure 7 shows the use case diagram for customers. Figure 8 shows the component diagram.

Figure 7. Use case diagram for customers

Figure 8. Component diagram of Duke’s bank

The metrics we can collect from the available UML models are NCOMPUC, NUCPCOM, CBN, CBCOM, DCOMC, and NCPC. The following tables show the metric values of the architecture elements.

Table 3. Use case metrics
Use case                     | NCOMPUC
Create Customer              | 3
Remove Customer              | 3
Get CustomerInfo             | 3
Update Customer              | 3
Create Account               | 3
Remove Account               | 3
Remove Customer from Account | 4
Add Customer into Account    | 4
List Account                 | 3
Get Account History          | 3
Withdraw Money               | 3
Deposit Money                | 3

Table 4. Component metrics
Component   | NUCPCOM | CBCOM | DCOMC | NCPC
Admin       | 9       | 2     | 0.67  | 9
WebUI       | 4       | 3     | 1     | 18
AccountCtr  | 8       | 3     | 1.67  | 2
CustomerCtr | 5       | 3     | 1.67  | 3
TxCtr       | 2       | 1     | 1.33  | 3
Account     | 8       | 1     | 0.67  | 1
Customer    | 8       | 1     | 0.67  | 1
Tx          | 2       | 1     | 0.67  | 1

Table 5. Node metrics
Node               | CBN
Web server         | 3
Ejb container      | 1
Application server | 1
Database server    | 1

By applying the maintainability quality model defined in section 3.3 and the weighting formulas, we obtain the analysis results shown in Table 6 and Table 7.

Table 6. Component level analysis results
Component   | Learnability | Modularity | Testability | Maintainability
Admin       | 0.5          | 0.37       | 0.5         | 0.46
WebUI       | 0.34         | 0.52       | 0.34        | 0.4
AccountCtr  | 0.27         | 0.07       | 0.27        | 0.2
CustomerCtr | 0.36         | 0.29       | 0.36        | 0.34
TxCtr       | 0.81         | 0.84       | 0.81        | 0.82
Account     | 0.79         | 0.57       | 0.79        | 0.72
Customer    | 0.79         | 0.57       | 0.79        | 0.72
Tx          | 1            | 1          | 1           | 1
Average     | 0.61         | 0.53       | 0.61        | 0.58

Table 7. System level analysis results
Understandability | 0.79
Changeability     | 0.78
Testability       | 0.78
Components        | 0.58
Maintainability   | 0.73

The components AccountCtr and CustomerCtr both have relatively low maintainability values. We suggest some adjustments, for example using the Façade pattern [14] to unify the services of the controllers and thereby reduce the coupling between components. Table 8 and Table 9 show the analysis results after applying the Façade pattern.

Table 8. Component level analysis results after applying Façade pattern
Component   | Learnability | Modularity | Testability | Maintainability
Admin       | 0.63         | 0.5        | 0.63        | 0.59
WebUI       | 0.6          | 0.77       | 0.6         | 0.66
AccountCtr  | 0.4          | 0.2        | 0.4         | 0.33
CustomerCtr | 0.36         | 0.29       | 0.36        | 0.35
TxCtr       | 0.81         | 0.84       | 0.81        | 0.82
Account     | 0.79         | 0.57       | 0.79        | 0.72
Customer    | 0.79         | 0.57       | 0.79        | 0.72
Tx          | 1            | 1          | 1           | 1
Average     | 0.67         | 0.59       | 0.67        | 0.65

Table 9. System level analysis results after applying Façade pattern
Understandability | 0.79
Changeability     | 0.78
Testability       | 0.78
Components        | 0.65
Maintainability   | 0.75

After applying the Façade pattern, the maintainability values of the related components and of the overall architecture are improved. Thus, our method can be used to compare different architecture candidates.
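As a sanity check on the aggregation, the tabulated values are consistent with an equal-weight application of formulas (2) and (3): for example, Admin's maintainability in Table 6 is (0.5 + 0.37 + 0.5)/3 ≈ 0.46, and the system maintainability in Table 7 is (0.79 + 0.78 + 0.78 + 0.58)/4 ≈ 0.73. Equal weights are assumed here only to illustrate the calculation; in general the weights should be calibrated as discussed in section 2.3.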

5. Conclusion

In this paper, we propose a new metric-based architecture quality evaluation method, MAQE, which includes a generic two-level quality model and weighting formulas for analysis. For object-oriented projects, we instantiate a maintainability quality model using our method and define a set of architecture metrics. MAQE is a quantitative architecture evaluation method. We perform a case study on the Java EE Duke's bank application and find that, after applying the Façade pattern, the maintainability value calculated by our method increases, which suggests that our instantiation of the maintainability quality model is effective.

There is some future work. More case studies are needed to refine our method, particularly to validate the two-level quality model and the metrics, and to calibrate the weights. In future research we need to verify whether the evaluations and aggregate values calculated from the model really correspond to the qualitative views of the people who have been involved in using the architecture. We need to combine applying the metrics with interviews and surveys of developers' and managers' views on the actual maintainability of the architecture. We could also correlate the results by comparing the amount of maintenance performed on two architectures that score differently according to the model. Quality models for other quality attributes, such as portability, reusability and integrability, can be built in the future based on our generic model. Manually evaluating architectures is costly and time-consuming, and is almost impossible for large projects. An automated analysis tool can be built for projects using UML case tools; it is convenient to import an XML Metadata Interchange (XMI) file into the analysis tool and automate the evaluation. However, for projects using sketched UML, drawn informally on paper or whiteboards only for communication between stakeholders, manual evaluation is costly. A lightweight version of our method can be developed for such projects.
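As one example of what such automation could look like, the sketch below walks an XMI export and counts the classes found under each UML component, i.e. the raw input to an NCPC-style size metric. The element and attribute conventions assumed here (an xmi:type attribute with values such as "uml:Component" and "uml:Class", and classes nested under their owning component) follow common UML2/XMI exports but differ between case tools, so they are assumptions to adapt rather than a fixed format; the file name is hypothetical.

```python
# An illustrative sketch (not the authors' tool) of reading an XMI export.
# Assumptions: the exporter marks elements with an xmi:type attribute and
# nests classes under the component that owns them; both vary between tools.
import xml.etree.ElementTree as ET

def xmi_type(elem):
    # xmi:type is a namespaced attribute; match on the local name so the
    # sketch does not depend on the exact XMI namespace URI.
    for key, value in elem.attrib.items():
        if key == 'xmi:type' or key.endswith('}type'):
            return value
    return None

def classes_per_component(xmi_path):
    tree = ET.parse(xmi_path)
    counts = {}
    for elem in tree.iter():
        if xmi_type(elem) == 'uml:Component':
            name = elem.get('name', '<unnamed>')
            counts[name] = sum(1 for child in elem.iter()
                               if xmi_type(child) == 'uml:Class')
    return counts

if __name__ == '__main__':
    # 'dukes_bank.xmi' is a hypothetical file name for an exported model.
    print(classes_per_component('dukes_bank.xmi'))
```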

6. References

[1]. L. Bass, P. Clements, and R. Kazman, Software Architecture in Practice, Second Edition, Addison-Wesley, 2003.
[2]. S. Albin, The Art of Software Architecture – Design Methods and Techniques, Wiley, 2003.
[3]. J. Bosch, Design and Use of Software Architectures: Adopting and Evolving a Product-Line Approach, Addison-Wesley, 2000.

[4]. Th. Panas, R. Lincke, J. Lundberg, and W. Löwe, "A Qualitative Evaluation of a Software Development and Re-Engineering Project," in the 29th NASA Software Engineering Workshop, 2005.
[5]. R.T. Tvedt, M. Lindvall, and P. Costa, "A process for software architecture evaluation using metrics," in Proceedings of the 27th Annual NASA Software Engineering Workshop, 2003, pp. 191-196.
[6]. M. Lindvall, R. Tesoriero, and P. Costa, "Avoiding architectural degeneration: an evaluation process for software architecture," in Proceedings of Software Metrics, 2002, pp. 77-86.
[7]. R.T. Tvedt, P. Costa, and M. Lindvall, "Does the code match the design? A process for architecture evaluation," in Proceedings of Software Maintenance, 2002, pp. 393-401.
[8]. N. Fenton and S. Pfleeger, Software Metrics: A Rigorous and Practical Approach, Second Edition, PWS Publishing Company, Boston, MA, 1997.
[9]. J.S. Alghamdi, R.A. Rufai, and S.M. Khan, "OOMeter: A Software Quality Assurance Tool," in Ninth European Conference on Software Maintenance and Reengineering (CSMR), 2005.
[10]. J. Muskens, M. Chaudron, and C. Lange, "Investigations in applying metrics to multi-view architecture models," in Proceedings of the 30th Euromicro Conference, 2004, pp. 372-379.
[11]. S.R. Chidamber and C.F. Kemerer, "Towards a metrics suite for object-oriented design," in Transactions on Software Engineering, 1994, pp. 476-493.
[12]. IEEE Std 610.12-1990, "IEEE Standard Glossary of Software Engineering Terminology," 1990.
[13]. Sun Microsystems, "Duke's bank application," http://java.sun.com/javaee/5/docs/tutorial/doc/bncma.html.
[14]. E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, 1995.
