DESIGN AND IMPLEMENTATION OF A COMBINATORIAL TEST SUITE STRATEGY USING ADAPTIVE CUCKOO SEARCH ALGORITHM

A Thesis

Submitted to the Council of the College of Commerce, University of Sulaimani in Partial Fulfillment of the Requirements for the Degree of Master of Science in Information Technology

By Taib Shamsadin Abdulsamad

Supervised by
Lecturer Dr. Bestoun S. Ahmed

May 2015

Jozerdan 2715

Acknowledgements

First and foremost, I would like to thank Almighty God for giving me patience and, with His grace, helping me to complete this thesis.

I want to express my deep thanks to my supervisor, Dr. Bestoun S. Ahmed, who has put great effort into fulfilling this research. Without his help and guidance, I could not have written this thesis. I also want to express my gratitude to all the staff of my college, especially the head of the Statistics and Computer Department, Dr. Rezan Hama Rashid. I would also like to thank my family for their support and love.


Dedication

I dedicate this thesis to: my lovely family and kids, Matin and Blnd; my dear parents, without whose guidance I would not be where I am today; my loving brother and my sisters; and those who taught me even one word.


Abstract

Nowadays, software has become an innovative key for many applications and methods in science and engineering. Ensuring its quality is challenging because of the different configurations and input domains of each program. Ensuring software quality requires evaluating all possible configurations and input combinations against their expected outputs. However, such exhaustive testing is impractical because of the time and resource constraints that result from the large domain of inputs and configurations. Thus, different sampling techniques have been used to sample these input domains and configurations. Combinatorial testing can be used to effectively detect faults that are undetectable by other techniques. This technique uses combinatorial optimization concepts to systematically minimize the number of test cases by considering the combinations of inputs. This research proposes a new strategy to generate combinatorial test suites using Cuckoo Search concepts. Cuckoo Search is used for the design and implementation of a strategy to construct optimized combinatorial sets. The strategy consists of different construction algorithms that are combined to serve the Cuckoo Search. The new technique proved its efficiency and performance through different sets of experiments: in the experiments undertaken for evaluation, the strategy obtained the minimum results for 50 configurations out of 95. In addition, the effectiveness of the strategy is assessed by applying the generated test suites to a real-world case study for the purpose of functional testing. Results show that the generated test suites can detect faults effectively. Moreover, the strategy opens a new direction for the application of Cuckoo Search in the context of software engineering.


Table of Contents

Supervisor's Report
Linguistic Evaluation Certification
Examining Committee Certification
Acknowledgements
Dedication
Abstract
Table of Contents
List of Tables
List of Figures and Illustrations
List of Abbreviations and Nomenclature

CHAPTER 1  INTRODUCTION
  1.1 Overview
  1.2 Problem Statements
  1.3 Aim and Objectives
  1.4 Related Works
  1.5 Thesis Organization

CHAPTER 2  THEORETICAL BACKGROUND AND CONCEPTS
  2.1 Introduction
  2.2 Test Case Design Techniques
    2.2.1 Equivalence Class Partitioning
    2.2.2 Boundary Value Testing and Analysis
    2.2.3 Cause and Effect Graphing
  2.3 A Problem Definition Model
  2.4 Mathematical Preliminaries and Notations
  2.5 Existing Literature on Combinatorial Strategies
    2.5.1 Test Suite Generation Tools and Strategies
    2.5.2 A Brief Review of Generation Strategies
  2.6 Application of Combinatorial Testing
    2.6.1 Test Case Prioritization
    2.6.2 GUI Interaction Testing
    2.6.3 Fault Characterization
  2.7 The Use of Cuckoo Optimization
  2.8 The Cuckoo Search Algorithm (CSA)
  2.9 Summary

CHAPTER 3  THE DESIGN AND IMPLEMENTATION OF THE PROPOSED STRATEGY
  3.1 Introduction
  3.2 Design of CS Strategy
    3.2.1 Input-Factor Combination (IFC) Algorithm
    3.2.2 d-tuple Generation Algorithm
    3.2.3 Generate Test Suite Algorithm
  3.3 Optimization Process
  3.4 Running Examples
    3.4.1 Generation of CA
    3.4.2 Generation of Mixed Covering Array
  3.5 Summary

CHAPTER 4  EVALUATION AND DISCUSSION
  4.1 Introduction
  4.2 Evaluation Process
  4.3 Experimental Setup
  4.4 The CS Efficiency Comparative Experiments
    4.4.1 AI-based strategies
    4.4.2 Computational-based strategies
  4.5 The CS Performance Evaluation Experiments
  4.6 The CS Effectiveness Evaluation through an Empirical Case Study

CHAPTER 5  CONCLUSION AND FUTURE WORK
  5.1 Conclusion
  5.2 Suggestions for Future Studies

BIBLIOGRAPHY
PUBLICATION
Kurdish Abstract
Arabic Abstract
CoverPage_Arabic


List of Tables

Table 2.1 Decision Table for CEG with two statements
Table 4.1 Comparison with existing meta-heuristic algorithms for different configurations
Table 4.2 Size of variable input configurations when 3 ≤ k and 2 ≤ d
Table 4.3 Size of variable input configurations when d = 2, having 7 factors and 2 ≤ v
Table 4.4 Size of variable input configurations when 2 ≤ k factors and d = 4
Table 4.5 Size of variable input configurations when d = 4 and 5 ≤ k, each having 5 levels
Table 4.6 Size of variable input configurations when k = 10, each having 5 levels, when d = 2
Table 4.7 Size of variable input configurations when k = 10, each having 2 levels, when d = 2
Table 4.8 Size of variable input configurations when d = 2
Table 4.9 TCAS module (MCA(N; d, 2^7 3^2 4^1 10^2))
Table 4.10 Five Multi-Domain Configurations
Table 4.11 Test sizes and execution times for seven input factors, each having three levels, when d = 2
Table 4.12 Test sizes and execution times for input configuration when k = 7, each factor having 2 ≤ v levels
Table 4.13 Test sizes and execution times for input configuration when 4 ≤ k ≤ 10, each factor having three levels with d = 3
Table 4.14 Summary of the input factors and levels for the case study program
Table 4.15 Size of the test suite used for the case study


List of Figures and Illustrations

Figure 2.1 Test Case Design Techniques
Figure 2.2 Boundaries of Variable
Figure 2.3 Relationships of notation between cause and effect
Figure 2.4 Configurable Subsets of Firefox
Figure 2.5 Illustration of three examples of OA, CA, and MCA
Figure 2.6 Illustration of the d-tuples coverage
Figure 2.7 OTAT and OPAT use by tools and strategies
Figure 2.8 Application of CT
Figure 2.9 Pseudo code of the CSA [27]
Figure 3.1 Main Window of the Implemented Strategy
Figure 3.2 Flow Chart for the CS Strategy
Figure 3.3 Pseudo code of the IFC algorithm
Figure 3.4 IFC algorithm diagram
Figure 3.5 The IFC and BD for d = 2, k = 3
Figure 3.6 d-tuple generation diagram
Figure 3.7 Pseudo code of combinatorial test suite generation with CS
Figure 3.8 The test case generator pseudo code
Figure 3.9 Strategy in progress when each algorithm is executed and the final optimized set is generated
Figure 3.10 Relationship between CA size and exhaustive size
Figure 3.11 Combination Paths
Figure 3.12 The indexing of the search space
Figure 3.13 Process of removing and making new indexes
Figure 3.14 A Running Example to Explain the Generation of CA(6; 2^4)


Figure 3.15 A Running Example to Explain the Generation of MCA(9; 2, 4, (3^2, 2^2))
Figure 4.1 Main window of the empirical study program
Figure 4.2 Reaction of the test cases with the configuration for the number of mutations detected when d = 2
Figure 4.3 Reaction of the test cases with the configuration for the number of mutations detected when d = 3


List of Abbreviations and Nomenclature

ACA: Ant Colony Algorithm
AETG: Automatic Efficient Test Generator
AI: Artificial Intelligence
BD: Binary Digit
BVA: Boundary Value Analysis
CA: Covering Array
CATS: Constrained Array Test System
CEG: Cause and Effect Graphing
CPU: Central Processing Unit
CS: Cuckoo Search
CSA: Cuckoo Search Algorithm
CT: Combinatorial Testing
CTE-XL: Classification-Tree Editor eXtended Logics
d-tuples: d-combinations of input factors (all-interactions array list)
ECP: Equivalence Class Partitioning
GA: Genetic Algorithm
GUI: Graphical User Interface
IBM: International Business Machines
IFC: Input Factor Combination
IPO: In-Parameter-Order
IPOG: In-Parameter-Order-General
IPOG-D: In-Parameter-Order-General Double
ITCH: Intelligent Test Case Handler
LOC: Lines of Code
mAETG: Modified Automatic Efficient Test Generator
MCA: Mixed Covering Array
NP: Nondeterministic Polynomial time
OA: Orthogonal Array
OFOT: One Factor One Time
OPAT: One Parameter At a Time
OTAT: One Test At a Time
PICT: Pairwise Independent Combinatorial Testing
PSO: Particle Swarm Optimization
PSTG: Particle Swarm-based Test Generator
QA: Quality Assurance
RO: Random Optimization
SA: Simulated Annealing
SRS: Software Requirement Specification
TCAS: Traffic Collision Avoidance System
TConfig: Test Configuration
TS: Tabu Search
TVG: Test Vector Generator

CHAPTER 1

INTRODUCTION

1.1 Overview

Modern societies in today's digital era depend significantly on computers in almost every activity of their daily lives. Most hardware implementations are now being replaced by software. For this reason, software plays a very important role in improving and developing our lives. The growing dependency on software can be attributed to a number of factors. Unlike hardware, software does not wear out; thus, the use of software can also help to control maintenance costs. In addition, software is malleable and can easily be changed and customized.

With the advancement of computer hardware technology, software applications have grown dramatically in terms of lines of code (LOC) and functionality [1], in order to keep up with ever-increasing customer demands for new functionality. Ensuring the quality of such software is a great task, because checking quality is the most important step in the software development life cycle, spanning the whole cycle [2].

Quality Assurance (QA) defines the activities required to ensure software quality. It aims to deliver software with minimum defects that meets specific levels of functionality, reliability, and performance [3]. Moreover, it is the systematic pattern of all actions for providing and proving the ability of a software process to build good products [4]. It also tries to improve the development process from the requirement step till the end [5]. Thus, it improves software functionality, including safety and reliability.

One of the critical steps to ensure quality is testing. Testing is the process of evaluating a system with the aim of identifying any gaps, errors, or missing requirements. This process assesses the functionality and analyzes the software to evaluate its features, and it ensures that the software works correctly [6]. Furthermore, the aim of software testing is to identify as many faults as possible in the system [7, 8]. In general, testing is classified into two main methods: functional and structural testing [9, 10]. The functional testing method has been termed black-box testing and the structural method white-box testing [9-11]. More recently, a combination of both functional and structural testing has been introduced, the so-called grey-box testing method [9, 12].

In the functional testing method (i.e., black box), the tester ignores the internal mechanism of the system-under-test and focuses only on the outputs. The technique serves the overall functionality validation of the system, and hence identifies both valid and invalid inputs from the customer's point of view. Here, the tester does not need to be an expert in programming, and the tester and programmer are independent of each other [10, 13]. Thus, the testing is an unbiased process.

The structural testing method (i.e., white box) is used to find logical errors in software [10]. The tester needs to have knowledge of the internal mechanism of the system-under-test. In addition, he/she needs to use information regarding the data structures and algorithms embedded in the code [9]. In this method, the plans to design test cases are made based on the details of the software, such as the programming language, logic, and style [14].

On the other hand, the grey-box testing method uses some information about the internal code structure, with limited knowledge of the detailed program internals on the tester's side. Here, the tester has access to design documents to prepare effective test data and scenarios when preparing the test plan [12].

In all the testing methods mentioned above, creating a set of data for testing is an important activity called test data generation. In the literature, there are many methods for generating test data. Normally, these methods use the information available in the Software Requirement Specification (SRS), in which the knowledge about the input requirements is available. The tester tries to consider all possible input domains when selecting test cases for the software-under-test. However, considering all inputs exhaustively is impossible for many applications due to time and resource constraints. Hence, the role of test design techniques is very important. A test design technique tries to select test cases systematically using a specific sampling mechanism. This, in turn, optimizes the number of test cases to obtain an optimum test suite, which reduces the time and cost of the testing phase in the whole software development process.

In fact, test case design techniques are derived according to both ways of testing (i.e., functional and structural). There are many techniques for designing test cases, such as Equivalence Class Partitioning, Boundary Value Analysis, and Cause and Effect Analysis via Decision Tables on the functional testing side, and Statement, Branch, and Path Coverage on the structural testing side [10, 12].

In equivalence partitioning, the inputs and outputs of the software-under-test are identified. The input domain is divided into partitions that invoke similar behaviours [10], under the assumption that each group or partition exhibits similar behaviour. Test cases are designed by selecting one test case from each partition as a representative of that partition. Boundary value analysis, on the other hand, is based on the premise that errors occur more frequently near the boundaries of input values. Evidence shows that errors occur more often near the edges of a boundary; hence, boundaries are areas where testing is likely to yield defects [10, 15]. Cause and effect analysis uses a different style of test case design. Here, the method is used with a decision table. The logical conditions available in the internal design of the system appear in the table. Based on the SRS document, the input conditions are identified as causes and the actions for these conditions as effects. The conditions and actions are identified in the table as Boolean expressions (i.e., true or false). The test cases are arranged as combinations of causes and effects in the table, and each combination is then executed accordingly.

The above-mentioned methods are used for test case design. Most of the time, the tester tries to use more than one method, on the assumption that different faults may be detected using different methods of testing. However, evidence shows that some faults were not found by these methods, as they were triggered by combinations of the configurations of the software-under-test [16]. This, in turn, led to the emergence of the combinatorial testing technique.
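To make the partitioning and boundary ideas concrete, the following sketch derives equivalence-class representatives and boundary test values for a single integer input. It is an illustrative example only, not part of the thesis's implementation: the accepted range [1, 100] and the `is_valid` oracle are assumptions chosen for demonstration.

```python
# Equivalence class partitioning and boundary value analysis for a
# hypothetical input field that accepts integers in the range [1, 100].

LOW, HIGH = 1, 100

# One representative per equivalence class: below range, in range, above range.
equivalence_tests = [LOW - 50, (LOW + HIGH) // 2, HIGH + 50]

# Classic boundary values: just outside, on, and just inside each boundary.
boundary_tests = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

def is_valid(x):
    """Assumed oracle for the system-under-test: valid iff x lies in range."""
    return LOW <= x <= HIGH

for x in equivalence_tests + boundary_tests:
    print(x, "->", "valid" if is_valid(x) else "invalid")
```

Nine test cases thus stand in for the whole integer input domain, which is the space reduction these design techniques aim for.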

1.2 Problem Statements

With the rapid growth of software systems and their configurations, there is a chance of faults arising from the combinations of these configurations, especially in highly configurable software systems. While the traditional test design techniques are useful for fault discovery and prevention, they may not be adequate to handle faults caused by combinations of input components and configurations [16, 17]. Considering all the combinations of a configuration leads to exhaustive testing, which is impossible due to time and resource constraints [9, 18, 19]. For this reason, there is a need to minimize the number of test cases by designing effective test cases that have the same impact as exhaustive testing.

Covering all combinations with a minimized test suite can be viewed as a hard computational optimization problem [9, 20-22], as searching for the optimal set is an NP-hard problem [9, 21-25]. Hence, searching for an optimum set of test cases can be a painstakingly difficult task, and it is challenging to find a unified strategy that generates optimum results all the time. There are two directions for solving this problem efficiently and finding near-optimal solutions: the first uses computational algorithms with mathematical arrangements, and the other uses nature-inspired meta-heuristic algorithms [26].
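The scale of the problem can be made concrete with a short calculation. For k factors with v levels each, exhaustive testing requires v^k test cases, whereas a test suite of strength d only has to cover C(k, d) * v^d distinct d-tuples, many of which fit into a single test case. The figures below (k = 10, v = 5, d = 2) are illustrative and are not taken from the thesis experiments.

```python
from math import comb

k, v, d = 10, 5, 2   # 10 input factors, 5 levels each, pairwise coverage

exhaustive = v ** k                  # every combination of all factors
tuples_to_cover = comb(k, d) * v**d  # distinct d-tuples a covering array must hit
lower_bound = v ** d                 # one test covers at most comb(k, d) tuples,
                                     # so at least v^d tests are needed

print(f"exhaustive tests : {exhaustive}")       # 9765625
print(f"2-tuples to cover: {tuples_to_cover}")  # 1125
print(f"CA size >=       : {lower_bound}")      # 25
```

A pairwise covering array therefore needs no fewer than 25 tests here, against nearly ten million for exhaustive testing; finding a suite close to that lower bound is the optimization problem addressed by this research.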

Evidence in the literature shows that nature-inspired meta-heuristic algorithms can generate more efficient results than computational algorithms with mathematical arrangements. In addition, this approach is more flexible, as it can construct combinatorial sets with different parameters and values. Hence, its outcome is more applicable, since most real-world systems have different parameters and values rather than equal values among all parameters.

Cuckoo Search (CS) is a relatively new algorithm, developed by Xin-She Yang and Suash Deb [27], for solving global optimization problems efficiently [28]. The algorithm can work with NP-hard problems that cannot be solved by exact solution methods [29]. It has proven its applicability and efficiency in different real applications [25, 28, 30-32]. To this end, this research presents the design and implementation of a strategy to construct optimized combinatorial sets using the Cuckoo Search Algorithm (CSA), complementing earlier works in this direction. The underlying hypothesis is that the adoption of CS is useful for optimized combinatorial set construction.
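For orientation, the core of the CS algorithm of Yang and Deb can be sketched on a continuous toy problem as follows: new candidate solutions are generated by Lévy flights, a random nest is replaced when the candidate is better, and a fraction pa of the worst nests is abandoned each generation. This is a minimal generic sketch, with function names, parameter values, and the sphere objective chosen for illustration; it is not the discrete test-suite-generation variant developed in this thesis.

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Levy-flight step via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(fitness, dim, bounds, n_nests=15, pa=0.25, iters=200):
    """Minimise `fitness` over `dim` variables clipped to `bounds`."""
    lo, hi = bounds
    clip = lambda x: [max(lo, min(hi, xi)) for xi in x]
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=fitness)
    for _ in range(iters):
        # Generate a new solution by a Levy flight biased toward the best nest.
        i = random.randrange(n_nests)
        step = 0.01 * levy_step()
        new = clip([x + step * (x - b) for x, b in zip(nests[i], best)])
        # Replace a randomly chosen nest if the new solution is better.
        j = random.randrange(n_nests)
        if fitness(new) < fitness(nests[j]):
            nests[j] = new
        # Abandon a fraction pa of the worst nests and build fresh ones.
        nests.sort(key=fitness)
        for w in range(int((1 - pa) * n_nests), n_nests):
            nests[w] = [random.uniform(lo, hi) for _ in range(dim)]
        best = min(nests + [best], key=fitness)  # keep the elite solution
    return best

# Usage: minimise the sphere function, whose optimum is at the origin.
random.seed(1)
sphere = lambda x: sum(xi * xi for xi in x)
best = cuckoo_search(sphere, dim=3, bounds=(-5, 5))
print(best, sphere(best))
```

In the thesis's setting, the "position" of a nest would instead encode a candidate test case and the fitness would be the number of still-uncovered d-tuples it covers.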

1.3 Aim and Objectives

The aim of this research is to design, implement, and evaluate a new testing strategy for constructing combinatorial test suites. To realize this aim, the following objectives are adopted:

i. Investigating the application of Cuckoo Search and optimization for the design and implementation of combinatorial test suite construction.

ii. Investigating and evaluating the performance of the developed strategy against other computational and AI-based strategies in terms of generated test suite size and generation time.

iii. Investigating and evaluating the effectiveness of the test suites generated by the developed strategy for interaction fault detection using real-world case studies.

1.4 Related Works Nowadays, Researchers have tried various methods to solve combinatorial optimization and NP-hard problem. This section pays attention to general techniques that have been developed by researchers:

Test case generation is an active research area in CT. There are mainly two methods to generate a test suite: horizontal and vertical generation [8]. The vertical method generates one input factor at a time, i.e., One Factor One Time (OFOT), sometimes called One Parameter At a Time (OPAT); at the end of the generation process, the accumulated test cases form the final test suite. The horizontal method, in contrast, builds One Test At a Time (OTAT) [33].

The OFOT method starts with an initial test suite that consists of several selected factors [34]. To ensure combination coverage, the test suite is extended horizontally by adding one factor at a time and, upon obtaining new test cases, extended vertically. The OTAT method normally iterates through all elements of the combinations and produces a whole test case per iteration.

Most of the methods start by creating a large number of candidate solutions and then selecting the best test cases, i.e., those that cover most of the d-tuples. Searching for the best solution among this number of candidates requires an optimization mechanism. According to the literature, these techniques can be classified into four main groups: random algorithms, greedy algorithms, heuristic search algorithms, and meta-heuristic algorithms [8].


Random optimization (RO) algorithms randomly select test cases from the complete set of test cases based on some input distribution [8]. The selection is based on the coverage of the d-tuples. RO simply works by iteratively moving toward better positions in the search space, sampled around the current position.

In general, greedy algorithms recursively construct a set of objects from the smallest possible elements; the solution to a particular problem depends on solutions to smaller instances of the same problem. Greedy algorithms are used with the OTAT method to cover as many uncovered combinations as possible in each row of the final combinatorial test suite [35].
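This greedy, one-test-at-a-time idea can be sketched for pairwise (d = 2) coverage as follows. The sketch is illustrative only: it scores every possible candidate test, which is feasible only for tiny inputs (practical greedy tools such as AETG score a random sample of candidates instead), and the function name and structure are assumptions rather than the thesis's algorithm.

```python
from itertools import combinations, product

def greedy_otat_pairwise(levels):
    """Greedy one-test-at-a-time generator for a pairwise (d = 2) test suite.

    `levels[f]` is the number of levels of factor f.  Each round, every
    candidate test is scored by how many still-uncovered pairs it hits,
    and the best candidate is appended to the suite.
    """
    k = len(levels)
    # All pairs ((f1, l1), (f2, l2)) that the suite must cover.
    uncovered = {((f1, a), (f2, b))
                 for f1, f2 in combinations(range(k), 2)
                 for a in range(levels[f1]) for b in range(levels[f2])}
    suite = []
    while uncovered:
        best, best_gain = None, -1
        for cand in product(*(range(v) for v in levels)):  # tiny example: scan all
            gain = sum(1 for f1, f2 in combinations(range(k), 2)
                       if ((f1, cand[f1]), (f2, cand[f2])) in uncovered)
            if gain > best_gain:
                best, best_gain = cand, gain
        suite.append(best)
        for f1, f2 in combinations(range(k), 2):
            uncovered.discard(((f1, best[f1]), (f2, best[f2])))
    return suite

# Usage: four binary factors; pairwise coverage needs far fewer than 2^4 = 16 tests.
tests = greedy_otat_pairwise([2, 2, 2, 2])
print(len(tests), "tests:", tests)
```

Each appended test covers at least one new pair, so the loop terminates with full pairwise coverage in a small fraction of the exhaustive suite.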

Heuristic search algorithms are designed to solve problems more quickly [33, 36]. They are used to find an approximate solution in the search for a path to a target. A heuristic algorithm provides a way to find a near-optimal solution and estimates which neighbouring solution will lead to the target.

A meta-heuristic algorithm is a higher-level heuristic search algorithm designed to find and generate good solutions. It provides a suitable solution to an optimization problem. Many meta-heuristic algorithms exist in the literature for constructing combinatorial test suites, such as Ant Colony Optimization (ACO) [37], Particle Swarm Optimization (PSO) [38], and Tabu Search (TS) [39].

In general, the previous algorithms generate combinatorial test suites with the aim of constructing a minimum number of test cases that cover the whole interaction list. The random algorithm is suitable for generating test suites when there are few input factors and levels. However, as this number grows, it fails to produce efficient results. Greedy algorithms are more suitable in this case, and they generate results faster than meta-heuristic algorithms; however, they may not generate the most optimal test suites [40]. Nature-inspired meta-heuristic algorithms can generate more efficient results [8, 41]. In addition, this approach is more flexible than the others because it can construct combinatorial sets with different input factors and levels. Hence, its outcome is more applicable because most real-world systems have different input factors and levels.

1.5 Thesis Organization This thesis is organized into five chapters. The rest of the thesis is organized as follows. Chapter 2 (Theoretical Background and Concepts) reviews the state-of-the-art in software testing and test case generation strategies. The chapter starts by introducing the test case design techniques and illustrating how test cases are selected in each technique. Then, a simple real-world case study is given as a configurable software system to illustrate combinatorial testing. Thereafter, the chapter provides an extensive elaboration on how the combinations can be generated and how they can be covered. A theoretical background for combinatorial testing is then given. Finally, the chapter discusses and examines the existing literature on combinatorial test suite generation strategies and their different applications. Chapter 3 (The Design and Implementation of the Proposed CS Strategy) outlines the design and implementation of the proposed strategy, including its corresponding algorithms. The chapter illustrates each algorithm briefly with examples. The chapter also gives the justifications for using Cuckoo Search and shows how it is used in the strategy.


Chapter 4 (Evaluation and Discussion) highlights the evaluation of the proposed strategy. Benchmarking of the strategy is undertaken to evaluate its competitiveness by comparing the results achieved with those published and with publicly available, well-known strategy implementations. In addition, the chapter presents a case study using a reliable artifact to demonstrate the applicability and effectiveness of the test suites generated by the strategy. The presentation of the evaluation results at each stage and of the case study is accompanied by analysis.

Finally, in Chapter 5 (Conclusion and Future Work), the conclusions and findings of the research are presented, and the contributions of the research are clearly highlighted. In addition, the chapter highlights possible future work as a continuation of this research.


CHAPTER 2

THEORETICAL BACKGROUND AND CONCEPTS

2.1 Introduction In the previous chapter, the significance and role of software in modern daily life were explained. Elementary concepts of software quality assurance and software testing were also discussed, along with the importance of the testing process for software development. After that, the types of testing were classified into two principal types (i.e. functional and structural), and for each type an explanation of some basic ideas was given. In addition, a third approach (i.e. grey box), which combines both chief types, was discussed. Furthermore, the generation of test cases by the combinatorial testing approach was described, in which a combinatorial test suite is generated to cover all the combinations for a specific strength.

This chapter introduces the necessary background concepts and literature related to combinatorial testing. The chapter starts by reviewing test design techniques. After that, it identifies a problem definition model using a practical example and illustrates the notations and mathematical preliminaries for covering arrays. Then, the existing literature on combinatorial strategies is reviewed, and some strategies are explained and described. Furthermore, the applications of combinatorial strategies are considered. Finally, the justification and preliminaries of CSA are explained.


2.2 Test Case Design Techniques As mentioned earlier, an essential issue of the testing process is the generation of correct test cases. Theoretically, this process is endless; in practice, however, testing must stop at some point. To this end, the test suite must be reduced to its minimum size without affecting its effectiveness. Hence, different test case design and minimization techniques have been developed and studied in the literature. The following subsections focus on the three well-known techniques shown in Figure 2.1 (equivalence class partitioning, boundary value testing and analysis, and cause and effect graphing) and illustrate each technique with typical examples.

Figure 2.1 Test Case Design Techniques

2.2.1 Equivalence Class Partitioning Equivalence class partitioning (ECP) is one of the black box testing techniques used to design test cases. This technique separates the inputs into a finite number of equivalence classes in such a way that all cases in a single partition exercise the same functionality. For each class, one representative is selected to test the software-under-test [10, 42, 43]. If a fault is detected by a test case in a class partition, it is supposed to be detected by the other counterparts in the same partition.


For instance, consider a simple program that sets the date and shows the day of the week; three input factors need to be set, i.e. day (dd), month (mm) and year (yy). Assume that the ranges of these input factors are as follows: 1 ≤ dd ≤ 31, 1 ≤ mm ≤ 12 and 2000 ≤ yy ≤ 2020. A year less than 2000 or greater than 2020 is an invalid case, and a month less than 1 or greater than 12 is an invalid case; for the day, however, the function has three conditions, namely 1 ≤ dd ≤ 31, 1 ≤ dd ≤ 30, and 1 ≤ dd ≤ 28-29 for February. In this scenario, there is a finite number of date values that could be tested for this program; however, it is practically impossible to test all of them. Therefore, equivalence class partitioning is used here to partition the date values into classes depending on these specific ranges. Clearly, there are three classes of the date value: dd, mm, and yy. To test this program, a value from each partition is chosen.
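As a sketch, the partitioning of the date example can be expressed in Python. The class names, boundary values, and the `representatives` helper below are illustrative (they follow the ranges above but are not part of any testing tool):

```python
# Equivalence classes for the day/month/year example: one valid class per
# factor plus invalid classes just outside each range (illustrative sketch).
classes = {
    "dd": {"valid": range(1, 32), "invalid_low": [0], "invalid_high": [32]},
    "mm": {"valid": range(1, 13), "invalid_low": [0], "invalid_high": [13]},
    "yy": {"valid": range(2000, 2021), "invalid_low": [1999], "invalid_high": [2021]},
}

def representatives(classes):
    """Pick one representative value from each equivalence class."""
    reps = {}
    for factor, parts in classes.items():
        reps[factor] = {name: next(iter(values)) for name, values in parts.items()}
    return reps

reps = representatives(classes)
# e.g. reps["dd"]["valid"] == 1 and reps["yy"]["invalid_high"] == 2021
```

One test value per class suffices, since all members of a class are assumed to exercise the same functionality.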

2.2.2 Boundary Value Testing and Analysis Boundary value analysis (BVA) is another black box testing technique used to design test cases. This technique focuses on the boundaries of the input and the output, in which tests are selected to include representatives of the boundary values [3]. Each boundary variable has three positions (lower bound, nominal, and upper bound), and each bound has values just below and just above it (i.e. min−, min, min+ and max−, max, max+) [43]. Figure 2.2 illustrates the boundaries of a variable.

[Figure 2.2 depicts a variable's range with min−, min, min+ around the lower bound, a nominal value in the middle, and max−, max, max+ around the upper bound.]

Figure 2.2 Boundaries of Variable

For the scenario interpreted earlier under equivalence class partitioning, to obtain the boundary values for the three input factors (i.e. dd, mm, and yy), the tester has to choose test values at the sides of each boundary. Therefore, the tester must consider (0, 1, 2), (0, 1, 2), and (1999, 2000, 2001) as test cases around the lower boundaries.
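A small illustrative helper (not from the thesis) can enumerate the values on each side of a range's boundaries:

```python
def boundary_values(lo, hi):
    """Boundary-value inputs for a range [lo, hi]: the values just below, on,
    and just above each bound, plus a nominal value in the middle."""
    return sorted({lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1})

print(boundary_values(1, 31))        # day:  [0, 1, 2, 16, 30, 31, 32]
print(boundary_values(2000, 2020))   # year: [1999, 2000, 2001, 2010, 2019, 2020, 2021]
```

The values outside the range (e.g. 0 and 32 for the day) double as the invalid-class representatives from the previous subsection.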

2.2.3 Cause and Effect Graphing Cause and effect graphing (CEG) is another black box testing technique used to design and minimize test cases. This technique selects test cases that logically relate causes (i.e. inputs) to effects (i.e. outputs) [43]. Hence, a cause stands for an input condition, and an effect stands for an output condition. The tester identifies the causes, effects, and constraints from the specification of the software-under-test. Then, a combinational logic network graph is built and the Boolean operators (AND, OR, NOT) are applied. Each cause and effect is given a unique identifier; Figure 2.3 shows the relationships between causes and effects and how they are marked in the graph. Afterwards, the graph is converted into a decision table to organize the test cases, in which each column corresponds to one test case.

For the previously mentioned example, suppose that the specification of the application states that the date field (dd) should not contain any letters or special characters and accepts numbers ranging from 1 to 31. The same condition applies to the fields (mm) and (yy), varied only by the range of the input numbers: 1 to 12 for the month and 2000 to 2020 for the year. Hence, the input conditions (i.e. causes) are C1: the value of the date is from 1 to 31, C2: the value of the month is from 1 to 12, and C3: the value of the year is from 2000 to 2020; whereas the output conditions (i.e. effects) are E1: the field is out of range, and E2: true output. Based on these causes and effects, the relationships of the CEG can be identified as two statements: IF (C1 AND C2 AND C3) then (E2), and IF ((NOT C1) OR (NOT C2) OR (NOT C3)) then (E1).

Statement                                Description
IF (C1) then (E1)                        Positive true condition
IF (NOT (C1)) then (E1)                  Negative true condition
IF ((C1) OR (C2)) then (E1)              Minimum once true condition
IF ((C1) AND (C2)) then (E1)             All true condition

[The Notation column of the original figure shows the graph symbol for each relationship and is not reproduced here.]

Figure 2.3 Relationships of notation between cause and effect

The next step is to develop a decision table. Each row in the table represents a cause or an effect, so the number of rows depends on the number of causes and effects; each column represents a rule (i.e. a test case), so the number of columns depends on the number of statements. The cells in Table 2.1 can be filled with “0” for a false condition, “1” for a true condition, or “*” for don't-care values. Thus, T1 and T2 represent the test cases. In T1, all causes are true, so the second effect (E2) occurs; in T2, because the first cause equals “0”, C2 and C3 can be anything, and E2 does not occur.


Table 2.1 Decision Table for CEG with two statements

      T1    T2
C1    1     0
C2    1     *
C3    1     *
E1    0     1
E2    1     0
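The two CEG statements and the decision table can be checked with a short Python sketch; the `causes` helper is hypothetical, derived from the ranges in the example:

```python
def causes(dd, mm, yy):
    """C1, C2, C3 from the example: each field within its valid range."""
    return (1 <= dd <= 31, 1 <= mm <= 12, 2000 <= yy <= 2020)

def effects(c1, c2, c3):
    """E2 (true output) iff all causes hold; otherwise E1 (field out of range)."""
    e2 = c1 and c2 and c3
    return {"E1": not e2, "E2": e2}

print(effects(*causes(15, 6, 2010)))  # T1: all causes true -> E2
print(effects(*causes(32, 6, 2010)))  # T2: C1 false -> E1, regardless of C2, C3
```

The two calls reproduce the T1 and T2 columns of Table 2.1.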

2.3 A Problem Definition Model To illustrate and model the concepts of combinatorial testing, Mozilla Firefox is considered as a practical example. Mozilla Firefox is a well-known web browser with many options and configurations that the user can control without difficulty owing to its user-friendly Graphical User Interface (GUI). Figure 2.4 shows a subset of the Mozilla Firefox configuration in which many options of the scheme are combined to make a specific configuration. Configurations exist in various forms and are controlled in different ways, for example by clicking in a box to check or uncheck an option; users can change them by clicking commands while operating the browser. The dialogue box contains six options for specific operations, such as warning when closing multiple tabs, or warning when opening multiple tabs might slow the browser down, and each option has two possible values (checked and unchecked).

For testing the system, none of the above-mentioned testing techniques is applicable for considering the configurations. In addition, evidence has recently shown that different faults can be detected owing to these combinations [8, 44].


Figure 2.4 A Configurable Subset of Firefox

Addressing this issue of configuration combination can be clearly explained with the help of Figure 2.4. Here, to test the system exhaustively, the tester needs to consider all of the configuration options (i.e. all configuration combinations). The number of input factors (i.e. parameters) equals six (k = 6), each of which has two level values (v = 2); there are therefore 2×2×2×2×2×2 = 64 combinations. This test suite is considered applicable as long as the number of test cases is manageable. However, in practice, there can be multiple configurations and multiple levels for each of them. This leads to exponential growth in the number of test cases, which leads to combinatorial explosion.
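The 64 combinations can be enumerated directly, e.g. with Python's itertools (a sketch of the counting argument, not a testing tool):

```python
from itertools import product

# Six two-level options (k = 6, v = 2), as in the Figure 2.4 dialogue box.
levels = [(0, 1)] * 6
exhaustive = list(product(*levels))
print(len(exhaustive))  # 64, i.e. v**k = 2**6
```

Adding one more two-level option doubles this count, which is the combinatorial explosion discussed above.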

2.4 Mathematical Preliminaries and Notations Combinatorial optimization is introduced in its simplest form within the Orthogonal Array (OA) mathematical object. The formula OAλ (N; d, k, v) represents an N × k array in which, in every N × d sub-array, each d-tuple (i.e., d-combination of k) occurs exactly λ times, where λ = N/v^d. If each d-tuple occurs at least once, the λ parameter is dropped from the formula because λ = 1 [45]. The parameter d is the degree of the combination, k is the number of input factors (k ≥ d), and v is the number of levels associated with each factor. For example, OA (9; 2, 4, 3) (i.e. N = 9, d = 2, k = 4, v = 3) expresses an orthogonal array containing nine rows with four factors, each having three levels, with a combination degree equal to two. Figure 2.5 (a) shows this array clearly.
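The OA property can be checked mechanically. The sketch below verifies the index λ = N/v^d for a classical OA(9; 2, 4, 3) built over GF(3); this array is a standard construction and not necessarily the one printed in Figure 2.5:

```python
from itertools import combinations, product

def oa_index(array, d, v):
    """Return lambda if every d-column projection contains each of the v**d
    level combinations exactly lambda = N / v**d times, else None."""
    n, k = len(array), len(array[0])
    lam = n // v**d
    for cols in combinations(range(k), d):
        for combo in product(range(v), repeat=d):
            count = sum(1 for row in array
                        if all(row[c] == combo[i] for i, c in enumerate(cols)))
            if count != lam:
                return None
    return lam

# Classical OA(9; 2, 4, 3): rows (i, j, i+j mod 3, i+2j mod 3) over GF(3).
oa = [(i, j, (i + j) % 3, (i + 2 * j) % 3) for i in range(3) for j in range(3)]
print(oa_index(oa, d=2, v=3))  # 1, i.e. each pair of levels occurs exactly once
```

Here λ = 9/3² = 1, so the λ subscript is dropped and the array is written OA (9; 2, 4, 3).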

One approach toward interaction testing involves using a sampling strategy derived from mathematical objects called covering arrays (CA) [46]. A CA can be illustrated as a table containing all the test cases, where each row represents a test case and each column represents an input factor. In general, a CA can be defined as CAλ (N; d, k, v) [47, 48], where N represents the array size, d is the degree of the combinations, k is the number of input factors, and v is the number of levels of each input factor. Each combination must appear in the array at least λ times. The application of OA is limited by its requirement for uniform factors and levels; thus, this array is suitable only for small test suites [49]. To address this limitation, CA has been introduced to complement OA.

A covering array CAλ (N; d, k, v) is an N × k array with v levels (0 … v−1) for each of the k factors, such that every N × d sub-array contains all ordered d-tuples from the v levels at least λ times [50]. Here, each d-tuple needs to appear at least once, hence λ = 1; as a result, this parameter is dropped from the syntax and the formula becomes CA (N; d, k, v) [48]. The main aim of CA is to minimize N. The optimal array can then be expressed as CAN (d, k, v), as shown in Equation 2.1 [51]. Figure 2.5 (b) shows an example with the formula CA (9; 2, 4, 3).

CAN (d, k, v) = min {N : ∃ CA (N; d, k, v)} … … … … … … (Eq 2.1)

One serious problem with CA is that the levels for each input factor are considered to be uniform; in other words, each input factor must have an equal number of levels. However, in practice, the input factors most often have different numbers of levels. For this case, the mixed-level covering array (MCA) is notated. A mixed-level covering array, MCA (N; d, k, (v1, v2, …, vk)), is an N × k array on v levels, where the rows of each N × d sub-array cover all d-tuples of values from the d columns at least once [19]. For more flexibility in the notation, the array can be presented by MCA (N; d, vk) and can be used for a fixed-level CA, such as CA (N; d, vk) [24]. Figure 2.5 (c) shows an example of MCA (9; 2, 4, 3² 2²), where the size of the array equals nine, the strength equals two, and the number of input factors equals four, the first two input factors having three levels and the last two having two levels.

A generated CA must cover the entire d-tuples list. To understand this issue, a simple example is used in Figure 2.6. The covering array CA (4; 2, 2³) is used, in which the size equals four (N = 4) and the degree of combination equals two (d = 2), for three input factors (k = 3, i.e. A, B, C), each having two levels (v = 2), i.e. v1 = 1 and v2 = 2. The number of d-tuples to cover equals 12: for each of the factor pairs (A, B), (A, C), and (B, C) there are 2² = 4 value combinations. The exhaustive combinations in this case equal eight (2³), which corresponds to the maximum degree of combination, equal to the number of input factors.
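The coverage argument can be reproduced with a short sketch. The four-row array below is a standard CA(4; 2, 2³) (it may differ from the one drawn in Figure 2.6); each added row raises pair coverage by 25%:

```python
from itertools import combinations, product

def pair_coverage(rows, k, v):
    """Fraction of the required 2-tuples covered by the given rows."""
    required = {(c, combo) for c in combinations(range(k), 2)
                for combo in product(range(1, v + 1), repeat=2)}
    covered = {(c, (row[c[0]], row[c[1]]))
               for row in rows for c in combinations(range(k), 2)}
    return len(covered & required) / len(required)

# A standard CA(4; 2, 2^3) over levels {1, 2}:
ca = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]
for n in range(1, 5):
    print(n, pair_coverage(ca[:n], k=3, v=2))  # 0.25, 0.5, 0.75, 1.0
```

Four rows thus cover all 12 pairs, half the size of the eight exhaustive tests.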


[Figure 2.5 shows three 9 × 4 arrays side by side, each with columns k1–k4: (a) OA (9; 2, 4, 3), (b) CA (9; 2, 4, 3), and (c) MCA (9; 2, 4, 3² 2²); the individual cell values are not reproduced here.]

Figure 2.5 Illustration of three examples of OA, CA, and MCA

The left-hand array in Figure 2.6 shows the array with the minimum number of rows that covers all the d-tuples. The first row of this array covers three tuples (the red tuples, 25% of the d-tuples), leaving only nine tuples. The next row also covers three tuples (green), bringing the total with the previous row to 50% of the d-tuples. The next row covers another three tuples (blue), for a running total of 75% of the d-tuples. The last row covers all the remaining tuples (brown), reaching 100% of the d-tuples.


Figure 2.6 Illustration of the d-tuples coverage

2.5 Existing Literature on Combinatorial Strategies Since the problem of covering array generation is NP-hard, researchers have tried various methods to solve it. This section pays attention to the general techniques and tools that have been developed. Some of these tools are freely available, while others are only presented as evidence in the literature.


2.5.1 Test Suite Generation Tools and Strategies To date, a number of software tools and strategies have been developed for test suite generation. The first attempts started from the algebraic approach, using orthogonal arrays (OA) derived from mathematical functions. This approach constructs the arrays from the input factors and levels by predefined rules, without requiring any details of the combinations; the array is created directly by computing a mathematical function over the values [24]. Despite their usefulness, OAs are too restrictive because they exploit mathematical properties, thereby requiring the parameters and values to be uniform. To overcome this limitation, the Mutual Orthogonal Array (MOA) [52] has been introduced to support non-uniform values. However, a major drawback exists for both MOA and OA, i.e., a feasible solution is available only for certain configurations [24, 52].

Another approach existing in the literature is the computational approach, which uses both OFAT and OTAT. In this approach, a single test or a set of complete test cases is a candidate per iteration; the algorithm then searches for the test case that covers the most uncovered d-tuples to be added to the final test suite. Based on this approach, several tools and strategies have been developed in previous studies. The most well-known strategies of this kind are: Automatic Efficient Test Generator (AETG) by [53], Classification-Tree Editor eXtended Logics (CTE-XL) developed by [54], test vector generator (TVG) by [55], test configuration (TConfig) by [56], Modified Automatic Efficient Test Generator (mAETG) developed by [57], jenny by [58], Pairwise Independent Combinatorial Testing (PICT) developed by Microsoft [59], Constrained Array Test System (CATS) developed by [60], Intelligent Test Case Handler (ITCH) developed by International Business Machines (IBM) [8], In Parameter Order (IPO) by Lei and Tai [61], and its variant tools IPOG and IPOG-D [24, 62].

Nowadays, tools and strategies are still being developed to generate minimal combinatorial test suites. A few of them are available as freeware on the Internet [63]. Each strategy has its own features and advantages; none of them is the best for all input configurations. Sometimes they are used together and then the best result is chosen.

Most recently, important efforts have also grown to implement artificial intelligence (AI)-based strategies for combinatorial test suite generation. So far, GA, SA, TS, HS, ACA, PSO, and SSO have been developed and successfully implemented for small-scale combination degrees [8, 9, 64].

To construct combinatorial test suites, GA, SA, and TS were implemented by [65]. The implementation supports small combinations of input factors, d = 2 (i.e., pair-wise). The results confirmed that GA is the least efficient compared with SA and TS; in addition, TS is effective for small search spaces, while SA performs better for large search spaces. When the combination degree increased to three and greater, [57] developed and implemented SA for d = 3, for which a large search space is generated. The results confirm that SA produced better outcomes in finding the optimal solution.

On the other hand, [66] developed two artificial methods, GA and ACA, combined with a compaction algorithm. The results show that the generated covering arrays are usually small; however, they are not always optimal in size for the combination degrees 2 ≤ d ≤ 3. Another technique to construct covering arrays is the Particle Swarm-based Test Generator (PSTG) developed by [38]. This technique showed that PSO supports high combination degrees between the input factors, 2 ≤ d ≤ 6, which means that it handles large search spaces. The results show that PSO performs more effectively compared with the other techniques. However, the problem with conventional PSO is that the convergence speed decreases as the number of iterations increases, which hinders the particles from achieving the best value [67]. It also appears that PSO has a parameter-tuning problem, since its performance differs with different parameter values. In fact, most meta-heuristic algorithms use local search and global search with a randomly generated initial population [41]. Obtaining an optimal combinatorial test suite every time is next to impossible due to the NP-completeness of the problem.

2.5.2 A Brief Review of Generation Strategies As mentioned previously, the two directions for generating combinatorial test suites are basically the computational and the AI-based strategies. In this section, attention is paid, with brief recaps, to the tools that use computational and AI-based approaches. Figure 2.7 summarizes the strategies and tools used in both directions.

The implementations of the computational approach mainly use two directions of algorithms: sequential and parallel implementation algorithms. A sequential implementation builds the test cases one by one until completion, whereas a parallel implementation consists of multiple processing units that together construct the final test suite. A sequential algorithm serves better than a parallel one in that it is less difficult to implement. However, the sequential approach tends to consume more time compared to the parallel implementation, especially for large numbers of input factors and configurations [34].

Figure 2.7 OTAT and OPAT use by tools and strategies

As mentioned earlier, some of the tools implemented by various developers are now available for generating test suites. Many tools have been developed in the literature, such as AETG, mAETG, PICT, CTE-XL, TVG, jenny, TConfig, ITCH, IPO, IPOG, and IPOG-D.

The AETG strategy uses OTAT and supports only uniform degrees of interaction [24]. It is initialized by generating some candidate test cases, then selects the one that covers the most tuples as part of the final solution. Then, it randomly selects an order for the remaining input factors; for each remaining input factor, it chooses the value that covers the most uncovered d-tuples [34, 53]. Because of its unavailability for evaluation, [57] implemented this strategy again, with modifications, in a tool called mAETG.
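The AETG-style greedy loop described above can be sketched as follows. This is an illustrative re-implementation of the idea, not the actual AETG code; the candidate-pool size of 50 and the fallback step are arbitrary choices:

```python
import random
from itertools import combinations, product

def greedy_otat(levels, d=2, candidates=50, seed=1):
    """One-test-at-a-time greedy sketch: per iteration, generate random
    candidate tests and keep the one covering the most uncovered d-tuples."""
    rng = random.Random(seed)
    k = len(levels)
    uncovered = {(cols, combo) for cols in combinations(range(k), d)
                 for combo in product(*(range(levels[c]) for c in cols))}

    def gain(test):
        return sum((cols, tuple(test[c] for c in cols)) in uncovered
                   for cols in combinations(range(k), d))

    suite = []
    while uncovered:
        pool = [[rng.randrange(v) for v in levels] for _ in range(candidates)]
        best = max(pool, key=gain)
        if gain(best) == 0:  # force progress: build a test from an uncovered tuple
            cols, combo = next(iter(uncovered))
            best = [rng.randrange(v) for v in levels]
            for i, c in enumerate(cols):
                best[c] = combo[i]
        suite.append(best)
        for cols in combinations(range(k), d):
            uncovered.discard((cols, tuple(best[c] for c in cols)))
    return suite

suite = greedy_otat([2, 2, 2, 2])  # pairwise suite for four binary factors
print(len(suite))  # typically far smaller than the 2**4 = 16 exhaustive tests
```

AETG proper also greedily orders the remaining factors per candidate rather than sampling whole tests at random.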


The PICT strategy uses a greedy algorithm and OTAT. PICT contains two main phases: preparation and generation. The first phase computes all the information needed for the second phase. The generation process starts by marking the first uncovered tuple from the uncovered tuples list; then the “don't care” values are filled iteratively with the values that cover the most uncovered tuples [59]. While PICT does make pseudo-random choices, it is always initialized with the same seed value (unless the user specifies otherwise); as a result, two executions with the same input construct the same output [34].

The TVG algorithm uses OTAT and generates test cases that may differ between executions for the same inputs. TVG supports all interaction degrees. It has been implemented as a Java program with a GUI that covers the d-tuples, where d is specified by the tester [34].

The jenny strategy uses OTAT; it starts generating with 1-tuples and then extends to cover 2-tuples, and so on, until it covers all d-tuples (where d is specified by the tester). Jenny supports only uniform combination degrees. It covers most of the combinations with fewer test cases [34, 58].

TConfig is another test suite generation strategy that uses both directions, OTAT and OPAT. It depends on two main methods: the recursive block method and the IPO method. The first method is used to generate the pair-wise test suite, whilst the second method is used for higher uniform degrees of combination. The recursive block method uses the algebraic approach to generate a test suite based on OA; these arrays are used as initial blocks for the larger covering array, including all d-combinations, which can be generated by building covering arrays from orthogonal arrays [34].

The IPO strategy uses OFAT; it begins the generation process from 2-combinations and then extends by adding one parameter at a time based on horizontal extension. To ensure the coverage of all d-tuples, a new test case may be added from time to time based on vertical extension [20]. Based on this idea, [62] generalized the IPO strategy from pair-wise testing to multi-way testing and produced a newer strategy called IPOG. However, multi-way testing places heavy demands on time and space, since the number of combinations is frequently very large. For this purpose, and based on the IPOG strategy, a new strategy named IPOG-D was introduced [24]; it is sometimes called the Doubling Construct. The Doubling Construct algorithm is used to grow the initial test suite size, so the number of horizontal and vertical extensions needed by the IPOG-D strategy can be efficiently reduced compared to IPOG, which results in reduced execution time [34].
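The horizontal/vertical extension idea can be illustrated with a much-simplified pairwise sketch. This is an illustration of the IPO idea only, not the published IPO/IPOG algorithm; real IPO, for instance, also exploits don't-care values during vertical extension instead of defaulting unfixed factors to level 0:

```python
from itertools import product

def ipo_pairwise(levels):
    """Simplified in-parameter-order sketch for d = 2.
    levels: the number of levels of each factor."""
    k = len(levels)
    # start with all level pairs of the first two factors
    tests = [list(t) for t in product(range(levels[0]), range(levels[1]))]
    for p in range(2, k):
        # (earlier factor, its value, value of new factor p) still to cover
        uncovered = {(f, a, b) for f in range(p)
                     for a in range(levels[f]) for b in range(levels[p])}
        # horizontal extension: give each existing test the value of factor p
        # that covers the most still-uncovered pairs
        for t in tests:
            best = max(range(levels[p]),
                       key=lambda b: sum((f, t[f], b) in uncovered
                                         for f in range(p)))
            t.append(best)
            for f in range(p):
                uncovered.discard((f, t[f], best))
        # vertical extension: add new tests for any remaining uncovered pairs
        while uncovered:
            f, a, b = uncovered.pop()
            row = [0] * (p + 1)
            row[f], row[p] = a, b
            tests.append(row)
            for g in range(p):
                uncovered.discard((g, row[g], b))
    return tests

tests = ipo_pairwise([2, 3, 2])
print(len(tests))  # covers all pairs with fewer rows than the 12 exhaustive tests
```

Horizontal extension widens existing rows with the new factor; vertical extension lengthens the suite only when some pairs remain uncovered.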

In addition to the above-mentioned strategies and algorithms, there are nowadays several attempts to develop combinatorial strategies based on artificial intelligence (i.e. AI procedures). So far, the GA, ACA, SA, TS, and PSTG AI-based techniques have been implemented successfully to generate combinatorial test suites. In general, these techniques are used to find the optimal solution among a finite number of solutions. Each of them starts by initializing a random population and then iteratively updates the population according to specific algorithms and update rules.


GA is a heuristic search technique that has been widely used in solving problems ranging from optimization to machine learning [22]. It is initialized with random solutions denoted as chromosomes. After that, it makes new solutions by exchanging and swapping material between two good candidates. The swapping procedure is applied through specific operators, for example the mutation and crossover operators. Finally, it chooses the best among the solutions and adds it to the final test suite.

In ACA, an individual ant makes candidate solutions by starting with an empty solution and then iteratively adding solution components until a complete solution is generated. After the solutions are built, the ants offer feedback on them, and better solutions are used by more ants [68]. The searching operation is done by a number of ants, as the ants travel from one position to another to find the best path, which stands for the best value for a test case [34].

The search progress in SA consists of two main parts that drive the search procedure: the first is the acceptance probability of the current solution, and the second is the difference in objective value between the current solution and the neighbouring solution. SA allows less restricted movements through the search space and reduces the probability of the search getting stuck in local optima [41]. The algorithm starts randomly and then applies a number of transformations according to a probability equation, which depends greatly on the input factors [34].

The progress in TS relies on identifying neighbours, or a set of moves that may be applied to a given solution to make a new one. It stores recent solutions and moves in a data structure called the tabu list, which records information regarding solution attributes that is useful for guiding the movement from one solution to another. A good solution is selected by using adapted evaluation strategies that help improve the current solution [68].

The PSTG strategy uses the PSO algorithm, which initializes a random population at the beginning, in which each solution has its own velocity. The whole population is named a swarm, and each solution in the swarm is called a particle. The fitness function here is defined based on the coverage of the d-tuples: a solution becomes a good candidate when it covers most of the d-tuple combinations. The algorithm updates the search space periodically based on the update rule and the velocity of each particle. Here, the rule is based on parameters that adjust the movement of the particles and their speed of convergence. These parameters must be tuned carefully to obtain the optimal solution and not get stuck in local minima [38, 69].
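The particle update described above has the standard PSO form. The following minimal sketch shows a generic continuous PSO step, not the thesis's PSTG implementation; the values of w, c1, and c2 are illustrative and, as noted above, need careful tuning:

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.4, c2=1.4,
             rng=random):
    """One PSO update: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x),
    then x <- x + v. Poorly tuned w, c1, c2 slow or stall convergence."""
    for i, (x, v) in enumerate(zip(positions, velocities)):
        for j in range(len(x)):
            r1, r2 = rng.random(), rng.random()
            v[j] = (w * v[j]
                    + c1 * r1 * (pbest[i][j] - x[j])
                    + c2 * r2 * (gbest[j] - x[j]))
            x[j] += v[j]
    return positions, velocities
```

In PSTG, a particle's position encodes a candidate test case and the fitness is the number of d-tuples it covers; the update itself blends inertia with pulls toward the particle's personal best and the swarm's global best.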

2.6 Application of Combinatorial Testing Combinatorial testing (CT) has received a great deal of consideration from academia, especially in software engineering. The literature shows that CT plays a main role in software testing. A number of studies suggest that CT can be effective in different areas of research, not just software engineering and testing [8, 30, 70, 71]. The usefulness and applicability of CT have recently been addressed in different research areas, as illustrated by the diagram in Figure 2.8. Accordingly, the combinatorial technique has been applied successfully to improve both quality and efficiency in these areas.


Figure 2.8 Application of CT

CT can be used in many scientific research fields, each surrounded by different applications. Software testing takes a large share of the research in this direction to investigate covering array applications; the aim of using a covering array in the form of CT is to find faults in the software-under-test. The following subsections review some of these researches and applications in the software testing domain.

2.6.1 Test Case Prioritization The outcome of prioritization is usually a schedule of test cases such that those with the highest priority, as specified by a number of conditions, are executed earlier in the testing process. When the prioritization process is completed, a prioritized test suite is constructed; when those test cases are executed, the well-ordered test cases reveal faults early [8].


Mainly, prioritization can be performed in two ways: the first is reordering an existing combinatorial test suite based on prioritization criteria, and the second is generating an ordered test suite for CT that takes into account the importance of the combinations [8].

2.6.2 GUI Interaction Testing Among the many methods of interacting with software, the GUI is practically the most popular way for users to interact with any software. This interaction is done through menus and icons in windows that can be controlled by the mouse [71]. Every interface has actions performed by a user, every position on a dialogue means a specific task, and the tasks interact with one another through combinations. Combinatorial test suites have been used with graph models to effectively test GUIs.

Yuan and Memon tried to solve this problem in several earlier studies using small empirical studies [72, 73]. More recently, [71] conducted more complex empirical studies on GUI artefacts using new methods and algorithms. The results of those studies showed that, by increasing the combination degree and by controlling the relative positions of events, a large number of faults can be detected as compared with the earlier techniques for GUI testing.

2.6.3 Fault Characterization Failure diagnosis by finding failure-triggering schemas is also called fault characterization [8]. Fault characterization is a technique used to discover the position of faults in a given piece of software. With the rise of software complexity, several kinds of faults


cannot be diagnosed by traditional methods, especially when the system configuration spaces are large and resources are limited. Fault characterization helps to recognize the cause of a specific fault and saves considerable time by allowing the fault to be fixed quickly.

After detecting a fault, it is necessary to investigate the fault location and then remove it [8]. [21] developed an approach called "Skoll" to distribute and test individual configurations at remote user sides and transmit the results to a central server; the task of Skoll is fault characterization. This process is done by testing different configurations and features of the software-under-test at the remote user sides and collecting the results at the central server. Moreover, the Skoll approach is used with CAs where the combination degree is between 2 and 6. The results show that by increasing the degree of the combination, more precise fault characterization can be achieved.

2.7 The Use of Cuckoo Optimization As can be observed in this chapter, the nature-inspired and AI-based algorithms and strategies can achieve more efficient results as compared to the other algorithms and methods. However, due to the heavy computation of the search process for covering d-tuples, the AI-based algorithms take more computation time to generate the final combinatorial sets.

PSO tries to solve this problem by using lighter-weight computation in the update and search process as compared to the other algorithms. Evidence in the literature shows better results for PSO in most cases as compared to the other methods [38]. PSO is, however, not free from problems and drawbacks.

The performance of PSO generally depends on the values of its tuning parameters. In other words, PSO combines two roles of searching mechanisms:


exploration and exploitation. In the former, PSO searches for the global optimum; in the latter, it seeks more accurate optimum solutions by converging the search around a promising candidate. Selecting the right values for these parameters should therefore be based on a compromise between local and global exploration that facilitates faster convergence. Evidence shows that, depending on the complexity of the problem, different values of these parameters are required to achieve the optimum solution [74, 75]. Moreover, the search process in this algorithm can get stuck in local optima, making it difficult to find the best solution after a certain number of iterations. To solve this issue, it is better to find new algorithms that have few parameters to be tuned and do not suffer from those drawbacks.

Recently, CSA has proven its applicability as an efficient optimization method through different real applications [25, 28, 30-32]. Studies have verified that CS has good performance and is significantly better than other algorithms in many applications. [30] also report that CS obtains optimal solutions compared with GA and PSO.

Applications of the Cuckoo algorithm have been developed in different fields, including solving nonlinear problems, optimizing weights and training spiking neural networks, tuning the parameters of Support Vector Machines and radial basis functions, optimizing semantic web service composition, job scheduling, finding optimal cluster heads in wireless sensor networks, finding shortest paths, and clustering [28, 32].


CS dates back to 2009, when it was developed by Xin-She Yang and Suash Deb [27]. It is one of the newest nature-inspired meta-heuristic algorithms used to solve global optimization problems [28]. Most recently, the CS algorithm has been enhanced with the so-called Lévy flight method, which is used instead of simple random walks [28, 69].

The algorithm is mainly based on the behaviour of Cuckoo birds in nature. Cuckoo species lay their eggs in the nests of other birds. If a host bird discovers eggs that are not its own, it will either throw these alien eggs away or simply abandon its nest and build a new nest elsewhere. The main idealized rules of CSA are that each cuckoo lays one egg at a time and dumps it in a randomly chosen nest, that the best nests with high-quality eggs carry over to the next generations, and that the number of available host nests is fixed. An egg laid by a cuckoo is discovered by the host bird with a probability between 0 and 1. In addition, CS can work on NP-hard problems and finds the best solution among several solutions [25, 29].

Based on these prospects, the hypothesis of this research supposes that this algorithm could perform well to solve the combinatorial optimizations problems.

2.8 The Cuckoo Search Algorithm (CSA) As mentioned earlier, CS is one of the newest strategies applied to solve optimization problems. Mainly, the algorithm is used to solve NP-hard problems that need global search techniques [27, 28]. Figure 2.9 shows the pseudo code that illustrates the general steps of this algorithm.


begin
   Objective function f(x), x = (x1, ..., xd)T;
   Generate an initial population of n host nests xi (i = 1, 2, ..., n);
   while (t < MaxGeneration) or (stop criterion)
      Get a cuckoo randomly by Lévy flights;
      Evaluate its quality/fitness Fi;
      Choose a nest among n (say, j) randomly;
      if (Fi > Fj)
         replace j by the new solution;
      end
      A fraction (pa) of worse nests are abandoned and new ones are built;
      Keep the best solutions (or nests with quality solutions);
      Rank the solutions and find the current best;
   end while
   Postprocess results and visualization;
end

Figure 2.9 Pseudo code of the CSA [27]
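For illustration, the steps of Figure 2.9 can be sketched in Python. This is a simplified sketch, not the thesis implementation (which is written in C#); the objective function, bounds, and parameter values are arbitrary choices for the sketch.

```python
import math
import random

random.seed(1)

def levy_step(beta=1.5):
    # Mantegna's algorithm for a Lévy-distributed step length.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim=2, n_nests=15, pa=0.25, iterations=200):
    # Generate an initial population of n host nests.
    nests = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=f)
    for _ in range(iterations):
        # Get a cuckoo randomly by Lévy flights and evaluate its fitness.
        i = random.randrange(n_nests)
        new = [x + 0.01 * levy_step() * (x - b) for x, b in zip(nests[i], best)]
        # Choose a nest among n randomly; keep the better one (minimization).
        j = random.randrange(n_nests)
        if f(new) < f(nests[j]):
            nests[j] = new
        # A fraction pa of the worse nests is abandoned and rebuilt at random.
        nests.sort(key=f)
        for w in range(int((1 - pa) * n_nests), n_nests):
            nests[w] = [random.uniform(-5, 5) for _ in range(dim)]
        # Rank the solutions and keep the current best.
        best = min(nests + [best], key=f)
    return best

sphere = lambda x: sum(v * v for v in x)   # toy objective for the sketch
best = cuckoo_search(sphere)
```

With elitism (the best nest is always retained), the best solution can only improve over the iterations.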

According to Figure 2.9, the rules of the CSA are as follows: (1) Each cuckoo selects a nest randomly and lays one egg in it, in which the egg represents a solution in a set of solutions. (2) Part of the nests contains the best solutions (eggs), which will survive to the next generation. (3) The probability of the host bird finding the alien egg in a fixed number of nests is pa ∈ [0,1] [76]. If the host bird discovers the alien egg with this probability, then it will either discard the egg or abandon the nest to build a new one. Thus, Yang and Deb assumed that a fraction pa of the n nests is replaced by new nests. Lévy flight is used in the cuckoo algorithm to conduct local and global searches [77]. The rule of Lévy flight has been used successfully in stochastic simulations of different


applications, such as biology and physics. A Lévy flight is a random walk that takes a sequence of jumps selected from a heavy-tailed probability distribution. A step can be represented by the following equations for the solution x_i^(t+1) of cuckoo i:

x_i^(t+1) = x_i^(t) + α ⊕ Lévy(λ) .......... (Eq 2.2)

Lévy ~ u = t^(−λ), (1 < λ ≤ 3) .......... (Eq 2.3)

where α is the size of each step, in which α > 0 and depends on the scale of the optimization problem, and Lévy(λ) is the Lévy distribution. The algorithm continues to move the eggs to another position if the objective function finds better positions. The Lévy flight determines the next position of the egg depending on its current position. A random walk via Lévy flight is capable of exploring the search space thanks to its step size [28].
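The heavy-tailed step lengths of Eq 2.3 can be illustrated with a small Python sketch that draws steps by inverse-transform sampling from a power-law distribution (the exponent value below is an arbitrary choice within the stated range 1 < λ ≤ 3; this is an illustration, not the thesis code):

```python
import random

random.seed(7)

def power_law_step(lam=1.5):
    # Inverse-transform sampling: for U ~ Uniform(0, 1),
    # (1 - U) ** (-1 / (lam - 1)) follows a power law with tail
    # exponent lam, giving mostly small steps with occasional
    # very large jumps -- the signature of a Lévy flight.
    u = random.random()
    return (1.0 - u) ** (-1.0 / (lam - 1.0))

steps = [power_law_step() for _ in range(1000)]
```

Most samples stay close to the minimum step of 1, while a few jumps are orders of magnitude larger; this mix of many short local moves and rare long global moves is what makes Lévy flights effective for search.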

Another advantage of CSA over counterpart stochastic optimization algorithms, such as PSO and GA, is that it does not have many parameters to tune. The only parameter for tuning is pa. Yang and Deb [27, 30] obtained evidence from the literature showing that the generated results are largely independent of the value of this parameter, so the proposed value pa = 0.25 can be used.

To use the algorithm for combinatorial test suite generation, it is essential to adapt the algorithm into the generation strategy. Here, the fitness function plays an important role in the adaptation and application of CS. In this research, after initializing the population, the CSA takes the number of d-tuples covered by each nest in the population as the fitness function. This function selects the best rows, i.e., those that cover most of the combinations in the d-tuples list.
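A minimal sketch of such a coverage-count fitness function in Python (the tuple encoding with "don't care" entries follows Chapter 3; the names are illustrative, not the thesis C# code):

```python
DONT_CARE = "*"

def covers(test_case, tup):
    # A test case covers a tuple when every non-"don't care"
    # position of the tuple matches the test case.
    return all(t == DONT_CARE or t == c for t, c in zip(tup, test_case))

def fitness(test_case, d_tuples):
    # Fitness = number of still-uncovered d-tuples this row would cover.
    return sum(1 for tup in d_tuples if covers(test_case, tup))

tuples = [("*", "*", "1", "2"), ("*", "2", "*", "2"), ("1", "*", "*", "1")]
print(fitness(("2", "2", "1", "2"), tuples))  # → 2
```

The row (2:2:1:2) matches the first two tuples but not the third, so its fitness against this list is 2.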


2.9 Summary This chapter reviewed the state-of-the-art of software testing and test case generation strategies. The chapter introduced the test case design techniques and illustrated how to select the best test cases to form the final test suite. Then, the chapter demonstrated a simple real-world configurable software system, with Mozilla Firefox as the subject, and showed how the combinatorial testing approach is applied to it. The chapter also presented the mathematical notation for covering arrays and illustrated the coverage criterion. Afterwards, the existing literature on combinatorial test suite generation strategies was discussed, and different applications of combinatorial testing were reviewed. Finally, the justifications for using cuckoo optimization were explained.


CHAPTER 3

THE DESIGN AND IMPLEMENTATION OF THE PROPOSED STRATEGY

3.1 Introduction In the previous chapter, the test case design techniques were introduced and presented. After that, a simple real-world example illustrating combinatorial testing was identified. The background and definitions for covering arrays were also introduced, and the existing literature on combinatorial testing and its real-world applications was extensively reviewed. Moreover, explanations of CS and its use in the proposed strategy were provided.

This chapter discusses the design and implementation of combinatorial testing using CS for generating the test suites. It also discusses how CS drives the reduction and selects a solution among a finite set of solutions, and describes how to apply CS in the test generator strategy. For further explanation, two running examples are given at the end of the chapter to illustrate the procedure of the algorithm in detail.

The following sections illustrate how this strategy works in detail.

3.2 Design of CS Strategy The strategy uses CS to generate a combinatorial test suite. The output of the strategy covers all combinations among the input-factors (i.e., d-tuples) and in turn forms a covering array. Hence, the inputs of this strategy are the input-factors. As mentioned earlier, each input-factor has a specific level, and the degree of combination is


specified by the tester. Figure 3.1 shows the main Graphical User Interface (GUI) of the strategy, on which the algorithms are arranged.

Figure 3.1 Main Window of the Implemented Strategy

The input data to the strategy can be supplied either manually, by specifying the input factors, levels, and the combination degree, or by retrieving them automatically from an Excel file. In the same way, the output data can be shown on the console screen or exported directly to an Excel sheet. In both cases, the input data will be ready for processing by the algorithms of the strategy. Figure 3.2 is a flow chart illustrating all the algorithms shown on the main window in Figure 3.1.


Figure 3.2 Flow Chart for the CS Strategy

3.2.1 Input-Factor Combination (IFC) Algorithm This algorithm takes three inputs: the number of input-factors (k), the level of each input-factor (v), and the combination degree (d). In order to generate the d-tuples list, the strategy first generates a Binary Digit (BD) list that contains binary numbers from zero up to the space limit (SL). The SL is calculated by Equation 3.1. Figure 3.3 shows the pseudo code of this algorithm.


Input: data according to (k, v, d);
Output: Input-Factor Combination list (IFC);
1: Let SL be the space limit;
2: SL = 0 ... 2^k - 1;
3: Create binary numbers up to 2^k;
4: Count the "1"s;
5: If the count of "1"s = d
6: {
7:    Add to IFC
8: };

Figure 3.3 Pseudo code of the IFC algorithm
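A compact Python sketch of the IFC algorithm described above (the thesis implementation is in C#; the function name here is illustrative):

```python
def generate_ifc(k, d):
    # Enumerate binary numbers from 0 to SL = 2**k - 1 (Eq 3.1) and keep
    # only those whose number of "1" bits equals the degree d.
    sl = 2 ** k - 1
    ifc = []
    for n in range(sl + 1):
        bits = format(n, f"0{k}b")   # k-digit binary string
        if bits.count("1") == d:
            ifc.append(bits)
    return ifc

# Example from the text: d = 2, k = 3 -> SL = 7, three combinations pass.
print(generate_ifc(3, 2))  # → ['011', '101', '110']
```

The length of the returned list equals C = k! / (d! (k - d)!) from Eq 3.2, e.g. 6 masks for k = 4 and d = 2.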

The BD list is filled with binary numbers, in which the total number of elements in BD is equal to 2^k:

SL = 2^k − 1 .......... (Eq 3.1)

C = k! / (d! × (k − d)!) .......... (Eq 3.2)

When the BD list is created, the algorithm filters the binary list based on the combination degree. The algorithm counts the number of "1"s in each binary number and passes only those binary numbers that match the combination degree specified in the input at the beginning. For more illustration, Figure 3.4 shows how BD and IFC are created. According to the specified degree of combination, the check point compares each element in the BD list against the degree number (d): an element passes when the sum of its "1" bits equals d (i.e., Σ "1" = d). For example, if d = 2, then each binary digit must contain two


"1"s in which (011), (101), and (110) will pass the filter. The (IFC) is a list that represents input-factor combinations. Here, for each position that contains “0” a “don’t care” value of input-factor is inserted. However, the “1”s in the same binary element is replaced by a level for that particular input factor. For example: (011) means that there are three input-factors (k1, k2, and k3) respectively. In this case, the first input factor (k1 = 0) is counted as don't care while the combination is between the second and the third factors.

Figure 3.4 IFC algorithm diagram

Here, the algorithm discards those elements in the BD list that do not satisfy the combination degree condition and adds the rest to the IFC list. Figure 3.5 illustrates a running example of this algorithm with a simple diagram. The diagram shows an example with d = 2 and k = 3. Hence SL = ((2 * 2 * 2) - 1) = 7, which in turn counts binary numbers from 0 up to 7. The outcome of the algorithm satisfies


Equation 3.2, where C represents the number of combinations [78]; that is, the number of combinations is equal to the three elements in the IFC list.

Figure 3.5 The IFC and BD for d = 2, k = 3

3.2.2 d-tuple Generation Algorithm As previously mentioned, the IFC list contains a list of rows. Each row is a binary number that represents a specific combination of the input-factors. The combination characteristic of this binary number depends on the number of "1"s. Once this list is generated, the algorithm immediately inserts the levels for each factor whose position holds a true case. As explained earlier, the false cases (i.e., "0"s) are considered don't cares. Hence, these don't cares can be filled with any level of that specific input-factor, as long as the combination of the other factors is preserved. Later on, these don't care values can be used for optimization purposes. For better explanation, the don't care values are shown as "*". Hence, for generating the d-tuples list, the IFC list is


taken, and then the don't cares and factor levels (Ti) are added to the list. Figure 3.6 illustrates the d-tuple generation algorithm graphically.

Figure 3.6 d-tuple generate diagram

Simply, suppose there are three input-factors (k1, k2, and k3), each factor has two levels, and the degree of combination equals two. All possible combinations, which according to Equation 3.2 equal three, become (k2, k3), (k1, k3), and (k1, k2). In Figure 3.5, the IFC is a list recorded in the forms (011), (101), and (110). For the first combination (i.e., k2, k3), all possible interactions equal four (2 * 2 = 4); this operation is repeated for the second and third combinations, the results equal twelve cases in total, and these are then added to the d-tuples list. For this case, Figure 3.9 shows the real outputs. Figure 3.7 is a pseudo code that summarises the general steps of the above algorithm.
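The d-tuple expansion just described can be sketched in Python on top of the IFC masks (a hedged illustration, not the thesis C# code; levels are numbered from 1 as in the running examples):

```python
from itertools import product

DONT_CARE = "*"

def generate_d_tuples(ifc, levels):
    # For each IFC mask, substitute every "1" position with each of the
    # factor's levels and keep "*" (don't care) for every "0" position.
    d_tuples = []
    for mask in ifc:                      # e.g. mask "011"
        active = [i for i, bit in enumerate(mask) if bit == "1"]
        for combo in product(*[range(1, levels[i] + 1) for i in active]):
            tup = [DONT_CARE] * len(mask)
            for pos, val in zip(active, combo):
                tup[pos] = str(val)
            d_tuples.append(tuple(tup))
    return d_tuples

# Example from the text: k = 3, two levels each, d = 2 -> 3 * 4 = 12 tuples.
tuples_list = generate_d_tuples(["011", "101", "110"], [2, 2, 2])
```

For the three masks, each contributing 2 * 2 = 4 level pairs, the list contains exactly the twelve d-tuples described above.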


Input: input data according to each input-factor's levels;
Output: d-tuples list;
1: Let v = level for each input-factor;
2: IFC = input-factor combination list;
3: Check for the "1"s in each IFC element;
4: If a "1" is detected
5: {  enter each level of the input-factor;
6:    combine the values between them;
7:    according to the possible input-factor levels;
8: };
9: generate the combinations in accordance with Equation 3.2;
10: Let Ti be the number of true characters;
11: a don't care is a negative value (-1);
12: for each element in IFC
13: {
14:    add each Ti and (-1) to the d-tuples list;
15: };

Figure 3.7 Pseudo code of the d-tuple generation algorithm

3.2.3 Generate Test Suite Algorithm This algorithm is executed immediately after the generation of the d-tuples list. Here, the algorithm tries to find a test case that covers most of these tuples; this is essentially the fitness function of the CS algorithm. Hence, this algorithm adopts the CSA along with the other algorithms. Figure 3.8 shows the test case generator algorithm in detail.

The algorithm starts by initializing a random population. The dimension of this population depends on the number of input factors. The algorithm initially chooses a random solution to be the best solution, cBest. Then, according to the fitness function, the algorithm evaluates the other nests and compares their coverage with the


coverage of cBest. If another nest is found to have a better solution, then cBest is changed to that solution. This procedure continues until the coverage of all the nests in the population has been calculated. The algorithm will then arrange the population again.

If the algorithm finds a nest with the highest coverage, it adds this solution to the final test suite and then deletes the related tuples from the d-tuples list. This in turn prevents the algorithm from covering those tuples again. However, when the maximum coverage is not found, the population of the algorithm is updated by using Lévy flight procedures.

Here, cBest is used within the equations of the Lévy flight to update the population. The Lévy flight uses the following equations to update the population [30, 79].

beta = 3/2

sigma = [ gamma(1 + beta) × sin(pi × beta / 2) / (gamma((1 + beta) / 2) × beta × 2^((beta − 1) / 2)) ]^(1 / beta) .......... (Eq 3.3)

u = random × sigma
v = random          (u and v are drawn from a Normal distribution)
step = u / |v|^(1 / beta) .......... (Eq 3.4)

stepsize = s0 × step × cBest   (s0 = 0.01 ... 1) .......... (Eq 3.5)
Random = Random + stepsize × nextRandom
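Equations 3.3-3.5 correspond to Mantegna's algorithm for Lévy-stable step sizes. A small Python check of these terms (an illustrative sketch, not the thesis implementation; the cBest value below is a placeholder scalar):

```python
import math
import random

random.seed(0)

beta = 1.5  # beta = 3/2 as in Eq 3.3

# Eq 3.3: the scale factor sigma of Mantegna's algorithm.
sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
         / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)

# Eq 3.4: u and v drawn from normal distributions, step = u / |v|^(1/beta).
u = random.gauss(0, sigma)
v = random.gauss(0, 1)
step = u / abs(v) ** (1 / beta)

# Eq 3.5: the step size scaled by s0 (here 0.01) and the current best value.
s0 = 0.01
c_best = 2.0          # placeholder standing in for one cBest entry
stepsize = s0 * step * c_best
```

For beta = 3/2 the scale factor evaluates to sigma ≈ 0.6966, a fixed constant that can be precomputed once per run.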


Input: IFC list and determination of the search space;
Output: test cases (final combination list);
1: while the d-tuples list is not empty:
2: {
3:    initialize a random population with ranges according to (space size and length of data);
4:    for the given iteration number:
5:    {
6:       start evaluation and check the coverage;
7:       if the coverage equals the length of IFC
8:       {
9:          add the test case to the combination list and remove it from the d-tuples;
10:      }
11:      if coverage = 0 {
12:         continue;
13:      }
14:      else if coverage < previous_coverage
15:      {
16:         current solution = previous solution;
17:      } choose the next coverage;
18:      else if coverage > previous_coverage
19:      {
20:         previous solution = current solution;
21:      }
22:      update by cuckoo via Lévy flight;
23:      apply Equation 3.3;
24:      apply Equation 3.4;
25:      apply Equation 3.5;
26:   } end iterations;
27:   evaluate and check the coverage again;
28:   if the current solution has the maximum coverage
29:   {
30:      add it to the combination list and remove it from the d-tuples;
31:   }
32: } end;

Figure 3.8 The test case generator pseudo code

By updating the population, the search starts again for the best and maximum solutions. At the end of the specified iterations, the best solution is chosen to be added to the final test suite. This process continues until all tuples in the d-tuples list are removed and the list becomes empty. Figure 3.9 shows the main window of the strategy accompanied by the output screen for an example with three input factors, each of which has two levels, when the degree of the combination is two.
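Putting the pieces together, the generator loop can be sketched in Python: random candidate rows stand in for the nests, coverage is the fitness, and the best row per iteration is appended to the suite. This is a hedged sketch of the approach only; the Lévy-flight update of the real strategy is replaced here by fresh random sampling.

```python
import random
from itertools import combinations, product

random.seed(3)

def pairwise_tuples(levels):
    # Build the full 2-tuples list: every level pair for every factor pair.
    k = len(levels)
    tuples_left = set()
    for i, j in combinations(range(k), 2):
        for a, b in product(range(1, levels[i] + 1), range(1, levels[j] + 1)):
            tup = ["*"] * k
            tup[i], tup[j] = str(a), str(b)
            tuples_left.add(tuple(tup))
    return tuples_left

def covers(row, tup):
    return all(t == "*" or t == r for t, r in zip(tup, row))

def generate_suite(levels, candidates_per_iter=20):
    tuples_left = pairwise_tuples(levels)
    suite = []
    while tuples_left:
        # Sample candidate rows ("nests") and keep the one with best fitness,
        # i.e. the one covering the most still-uncovered tuples.
        pool = [tuple(str(random.randint(1, v)) for v in levels)
                for _ in range(candidates_per_iter)]
        best = max(pool, key=lambda row: sum(covers(row, t) for t in tuples_left))
        covered = {t for t in tuples_left if covers(best, t)}
        if covered:                       # add the row and delete its tuples
            suite.append(best)
            tuples_left -= covered
    return suite

suite = generate_suite([2, 2, 2])         # three factors, two levels each
```

Because every accepted row removes the tuples it covers, the loop terminates exactly when the d-tuples set is empty, mirroring the stopping condition of Figure 3.8.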


Figure 3.9 Strategy in progress when each algorithm is executed and the final optimized set is generated


3.3 Optimization Process Despite the explanations of the algorithms above, the performance of these algorithms and search processes requires additional techniques. When the input factors and levels get large, the search space also becomes very large. Hence, there is a need for a mechanism to reach the targeted tuples quickly in order to check their coverage. For this reason, an indexing mechanism is applied to reduce the search time and increase the performance.

Combinatorial optimization can reduce the number of test cases effectively while covering all the combinations. The degree of the combination must be defined; this definition starts from two and goes up to the number of factors, so for the case in Figure 2.4 it ranges up to six (i.e., d = 2 up to 6). Figure 3.10 illustrates the relationship between the number of possible test cases and the reduction in the size of the final test suite. The figure plots the total number of test cases against the variation of the combination degree d, together with the percentage of reduction relative to the exhaustive test suite:

d = 2: combinatorial size 7, reduction 89.07%
d = 3: combinatorial size 13, reduction 79.69%
d = 4: combinatorial size 26, reduction 59.38%
d = 5: combinatorial size 33, reduction 48.44%
d = 6: combinatorial size 64, reduction 0%

Figure 3.10 Relationships between CA size and Exhaustive size


It can be noted from the graph that when the degree is equal to two, the combinatorial size is equal to 7; this indicates an approximate reduction of the exhaustive size by 89%. However, by increasing the degree of combination, the combinatorial size increases, and hence the reduction dramatically decreases down to zero percent, at which point the test suite equals the exhaustive set.

Assume that the number of input-factors equals four (k1, k2, k3, and k4), each input-factor has two levels (vi = 2, 2, 2, 2), and the degree of the combinations equals two (d = 2). According to Eq 3.2, the number of input-factor combinations can be found as follows:

C = 4! / (2! × (4 − 2)!) = (4 × 3 × 2 × 1) / (2 × 1 × 2 × 1) = 24 / 4 = 6
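This count can be checked directly with Python's standard library (a trivial check of Eq 3.2, not part of the thesis tool):

```python
from math import comb, factorial

# Eq 3.2: C = k! / (d! * (k - d)!) for k = 4 factors and degree d = 2.
k, d = 4, 2
c_manual = factorial(k) // (factorial(d) * factorial(k - d))

print(c_manual, comb(k, d))  # → 6 6
```

`math.comb` computes the same binomial coefficient without evaluating the factorials explicitly.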

According to this result, there are six ways of combining the factors, as shown in Figure 3.11.

Figure 3.11 Combination Paths

For each way, the levels can be combined by multiplication [(k3, k4), (k2, k4), (k2, k3), (k1, k4), (k1, k3), (k1, k2)]; the results then equal [(2 * 2 = 4), (2 * 2 = 4), (2 * 2 = 4), (2 * 2 = 4), (2 * 2 = 4), (2 * 2 = 4)] respectively.


In accordance with these results, the search space is divided into six partitions, and an index for each partition is created by taking the running sum of each combination [(4), (4 + 4 = 8), (8 + 4 = 12), (12 + 4 = 16), (16 + 4 = 20), (20 + 4 = 24)]. Hence, six categories are created dynamically by the strategy, namely [(1 - 4), (5 - 8), (9 - 12), (13 - 16), (17 - 20), (21 - 24)]. Figure 3.12 illustrates this mechanism in detail for the given example.

Figure 3.12 The indexing of the search space
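The partition boundaries above can be computed as a running sum in Python (illustrative names only; the thesis implements this indexing inside its C# strategy, and the factor-pair ordering here is lexicographic rather than the figure's ordering):

```python
from itertools import accumulate, combinations

def partition_index(levels, d=2):
    # Size of each search-space partition = product of the levels of the
    # factor pair; the index boundaries are the running sums of these sizes.
    sizes = []
    for pair in combinations(range(len(levels)), d):
        size = 1
        for i in pair:
            size *= levels[i]
        sizes.append(size)
    ends = list(accumulate(sizes))
    starts = [1] + [e + 1 for e in ends[:-1]]
    return list(zip(starts, ends))

# Four 2-level factors: six partitions of four tuples each.
print(partition_index([2, 2, 2, 2]))
# → [(1, 4), (5, 8), (9, 12), (13, 16), (17, 20), (21, 24)]
```

Searching for a tuple then means jumping straight to its partition's index range instead of scanning the whole d-tuples list.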

The advantage of this mechanism is the speedup of the search process, as the strategy searches only for the related tuples within the given index range. The index numbers change dynamically whenever the best test case is found, because the related tuples in the search space are deleted immediately. Hence, when a candidate solution is sent to the final combination list, the index is re-ordered as illustrated in Figure 3.13.


Figure 3.13 Process of removing and making new indexes

Figure 3.13 shows a real example with three input-factors, each of which has three levels, when the combination degree is two. The index starts from 1 and goes up to 27. This range is classified into three categories [(k2, k3), (k1, k3), and (k1, k2)], and each category has a range of indexes [(1 - 9), (10 - 18), and (19 - 27)] respectively. When a candidate covers one element from each category, the indexes are directly re-ordered, building the new index [(1 - 8), (9 - 16), (17 - 24)]. This process continues until each category is emptied of elements and the index becomes [(0 - 0), (0 - 0), (0 - 0)].

3.4 Running Examples As described previously, CS supports both CAs and MCAs. The following subsections explain in detail how the CS strategy works to generate test suites and how the indexing is used.


3.4.1 Generation of CA For building a CA, the input-factors (k), the levels of each of them (v), and the degree of combination are needed to satisfy the CA notation CA(N; d, v^k). The following running example will produce CA(6; 2, 2^4) for four input-factors, each of which has two levels, with d = 2. Figure 3.14 shows this running example in detail.

Here, the d-tuples list contains 24 tuples. The tuples contain "1" and "2" to indicate the two levels of the input-factors and also contain "*" to indicate the "don't care" values. The index starts from 1 to 24 and is divided into six groups with four index values in each.

To generate the final combinatorial list, each solution added to the final list covers a number of elements in the d-tuples list. According to the fitness function, the first candidate (2:2:1:2) covers six tuples [(*:*:1:2), (*:2:*:2), (*:2:1:*), (2:*:*:2), (2:*:1:*), (2:2:*:*)], which is the maximum coverage; the strategy therefore adds this case to the final test suite and deletes these tuples from the d-tuples list. In the next step, the strategy re-orders the d-tuples index from 1 up to 18. The next candidate (2:1:2:1) also covers six tuples [(*:*:2:1), (*:1:*:1), (*:1:2:*), (2:*:*:1), (2:*:2:*), (2:1:*:*)]; these are deleted, and the new index runs from 1 to 12. However, the candidate (1:2:2:1) covers only five tuples, because in the first group there are no uncovered tuples left for it, although it covers tuples in the other groups [(*:2:*:1), (*:2:2:*), (1:*:*:1), (1:*:2:*), (1:2:*:*)]. As a result, seven tuples remain. The next candidate (1:1:2:2) covers four of them [(*:*:2:2), (*:1:*:2), (1:*:*:2), (1:1:*:*)], so three tuples remain. Finally, (1:1:1:1) covers the remaining tuples [(*:*:1:1), (*:1:1:*), (1:*:1:*)], and the d-tuples list becomes empty.
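The candidate rows listed in this running example can be checked mechanically. The following Python sketch (not part of the thesis tool) verifies that the five rows together cover all 24 pairwise tuples of four 2-level factors:

```python
from itertools import combinations, product

rows = [(2, 2, 1, 2), (2, 1, 2, 1), (1, 2, 2, 1), (1, 1, 2, 2), (1, 1, 1, 1)]

# Collect every (factor pair, level pair) actually covered by the rows.
covered = {(pair, (row[pair[0]], row[pair[1]]))
           for row in rows for pair in combinations(range(4), 2)}

# All 6 factor pairs * 4 level pairs = 24 required pairwise tuples.
required = {(pair, levels)
            for pair in combinations(range(4), 2)
            for levels in product((1, 2), repeat=2)}

print(len(required), required <= covered)  # → 24 True
```

Since the set of required tuples is a subset of the covered set, the listed rows form a complete pairwise covering array.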


Figure 3.14 A Running Example to Explain the Generation of CA(6; 2, 2^4)


3.4.2 Generation of a Mixed Covering Array The generation of an MCA follows the same procedure as CA generation. Here, MCA(9; 2, 4, (3^2, 2^2)) is taken as an example of a mixed covering array of size 9 with four input-factors (k = 4), in which the first and second factors have three levels while the third and fourth factors have two levels, when the combination degree is two. Figure 3.15 illustrates this example in detail.

The d-tuples for this example number 37 tuples, which are grouped by the strategy into six groups [(k3, k4), (k2, k4), (k2, k3), (k1, k4), (k1, k3), and (k1, k2)] respectively, in which the tuple counts are [(2 * 2 = 4), (3 * 2 = 6), (3 * 2 = 6), (3 * 2 = 6), (3 * 2 = 6), (3 * 3 = 9)]. The tuples contain "1", "2", and "3" to indicate the three levels of the input-factors and also contain "*" to indicate the "don't care" values.

For generating the final test suites, the first four candidate solutions [(2:1:1:2), (2:2:2:1), (3:3:2:2), (1:3:1:1)] each cover six tuples, which is the maximum number of tuples. Then, the strategy arranges the index numbers accordingly and dynamically until all tuples are covered. The strategy continues to select the best candidates, add them to the final test suite, and remove the related tuples from the d-tuples list.


Figure 3.15 A Running Example to Explain the Generation of MCA(9; 2, 4, (3^2, 2^2))


3.5 Summary This chapter explained the design and implementation of the developed combinatorial strategy. The chapter explained how the algorithms inside the strategy work and detailed each algorithm. The CSA procedures were shown, and its adaptation into the strategy was described in detail. This includes the procedures of test case generation and optimization. The next chapter shows the evaluation results of the proposed strategy.


CHAPTER 4

EVALUATION AND DISCUSSION

4.1 Introduction The design and implementation of the combinatorial testing strategy were presented in the previous chapter. Additionally, Chapter Three explained the design of CS for use in the combinatorial strategy, illustrated each algorithm inside the strategy, and showed the methodology of the selection and implementation processes.

This chapter presents the results of an extensive evaluation of the proposed strategy. The evaluation process assesses the strategy at different levels. For each level, a fair comparison is made with the counterpart strategies that are available for download or published in the literature. Careful consideration is given to making the comparison fair by taking into account the experimental samples as well as the experimental environment.

4.2 Evaluation Process The experiments are divided into three essential parts, following the literature [26, 38]: efficiency, performance, and effectiveness evaluations of the strategy. The efficiency experiments evaluate the proposed CS strategy in terms of the generated test suite size. The performance experiments evaluate the time taken by the strategy to generate the


final test suite. The effectiveness experiments show the applicability of the strategy and determine how effectively the generated test suites detect faults.

4.3 Experimental Setup For the first and second parts of the experiments (efficiency and performance), the environment consists of a laptop PC with Windows 7, a 2.6 GHz Core i5 CPU, and 4 GB of RAM. The CS strategy is coded and implemented in C# using Microsoft Visual Studio 2012.

For all the experiments, each table reports the smallest test suite size obtained by each strategy, and the optimal result is shown in bold. Some results are not reported in the literature and are thus marked "NA", which stands for "Not Available". In addition, some configurations are not supported by the tools and strategies, which cannot generate test suites for the exact configuration; these are marked "NS", which stands for "Not Supported". Some of the results are non-deterministic, especially for the AI-based strategies, because they depend on a degree of randomness. The published results of those strategies were achieved by running each strategy 50 times and then selecting the smallest test size, as conducted in the literature [57, 64, 66]. For a better indication of the results, the average size is also reported in the tables.

4.4 The CS Efficiency Comparative Experiments

As mentioned earlier, efficiency is measured by the size of the constructed test suites. All results are compared with those of strategies published in the literature and those freely available for download. This section is divided into two subsections: comparison with the AI-based strategies and comparison with the computational-based strategies.

4.4.1 AI-based strategies

The experimental results of this section are compared with strategies based on artificial intelligence, such as GA, SA, ACA, PSO, AETG, and mAETG. The original results are published in [38, 57, 66]. Table 4.1 shows the results obtained by these existing AI-based strategies for different configurations.

Table 4.1 Comparison with existing meta-heuristic algorithms for different configurations

| Configuration | AETG N | mAETG N | GA N | SA N | ACA N | PSO N | CS Best.N | CS AVG.N |
|---|---|---|---|---|---|---|---|---|
| CA(N; 2, 3^4) | 9 | 9 | 9 | 9 | 9 | 9 | 9 | 9.8 |
| CA(N; 2, 3^13) | 15 | 17 | 17 | 16 | 17 | 17 | 20 | 22.4 |
| MCA(N; 2, 5^1 3^8 2^2) | 19 | 20 | 15 | 15 | 16 | 21 | 21 | 22.6 |
| MCA(N; 2, 6^1 5^1 4^6 3^8 2^3) | 34 | 35 | 33 | 30 | 32 | 39 | 43 | 45.4 |
| MCA(N; 2, 7^1 6^1 5^1 4^6 3^8 2^3) | 45 | 44 | 42 | 42 | 42 | 48 | 51 | 52.4 |
| CA(N; 3, 3^6) | 47 | 38 | 33 | 33 | 33 | 42 | 43 | 44.8 |
| CA(N; 3, 4^6) | 105 | 77 | 64 | 64 | 64 | 102 | 105 | 108.2 |
| CA(N; 3, 5^7) | 229 | 218 | 218 | 201 | 218 | 229 | 233 | 236.2 |
| CA(N; 3, 6^6) | 343 | 330 | 331 | 300 | 330 | 338 | 350 | 360.4 |
| MCA(N; 3, 10^1 6^2 4^3 3^1) | NA | 377 | 360 | 360 | 361 | 385 | 393 | 399.8 |

Table 4.1 shows the smallest test suite sizes for the CA and MCA configurations, with combination degrees of 2 and 3. For these configurations, GA, ACA, and SA perform more efficiently than the other strategies and generate smaller sizes. PSO, AETG, mAETG, and CS produce comparable results; however, they fail to produce the best results for most of these configurations. Although the reported results for GA and ACA appear better, these algorithms use a "compaction algorithm" that further optimizes their output by combining rows of the constructed CAs. As a result, the reported results do not reflect the actual efficiency of GA and ACA. Regarding SA, the algorithm produces the best results for these configurations; however, due to its heavy weight, it fails to produce results for d > 3. CS performs well on these configurations, but there is little published evidence with which to evaluate CS against these algorithms directly, since they are not freely available.
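Strategies such as AETG, PSO, and CS all construct the array one test at a time: repeatedly propose candidate rows, keep the one that covers the most still-uncovered d-way value combinations, and stop when nothing is left uncovered. The sketch below illustrates only that covering loop on CA(N; 2, 3^4); it is written in Python for illustration (the thesis tool itself is in C#), and plain random candidates stand in for the Cuckoo Search generation of candidates, so it is not the author's CS implementation:

```python
import random
from itertools import combinations, product

def greedy_covering_array(levels, d, candidates=50, seed=1):
    """One-test-at-a-time construction: each iteration keeps the row
    that covers the most not-yet-covered d-way value combinations."""
    rng = random.Random(seed)
    k = len(levels)
    # All (factor-subset, value-assignment) tuples still to cover.
    uncovered = {
        (idx, values)
        for idx in combinations(range(k), d)
        for values in product(*(range(levels[i]) for i in idx))
    }

    def gain(row):
        return sum(1 for idx, values in uncovered
                   if tuple(row[i] for i in idx) == values)

    suite = []
    while uncovered:
        # Seed the search with a row built around one uncovered tuple,
        # so every iteration is guaranteed to make progress.
        idx, values = next(iter(uncovered))
        seeded = [rng.randrange(v) for v in levels]
        for i, val in zip(idx, values):
            seeded[i] = val
        best_row, best_gain = tuple(seeded), gain(tuple(seeded))
        for _ in range(candidates):
            row = tuple(rng.randrange(v) for v in levels)
            g = gain(row)
            if g > best_gain:
                best_row, best_gain = row, g
        suite.append(best_row)
        uncovered = {
            (idx, values) for idx, values in uncovered
            if tuple(best_row[i] for i in idx) != values
        }
    return suite

suite = greedy_covering_array([3, 3, 3, 3], 2)
print(len(suite))  # far fewer rows than the 3^4 = 81 exhaustive tests
```

With a stronger candidate generator (such as CS with Lévy flights), the same loop yields smaller suites; even this greedy random version lands well below the 81 exhaustive tests.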

4.4.2 Computational-based strategies

In this experiment, the CS strategy is evaluated and compared with the other available well-known computational-based strategies: Jenny, TConfig, ITCH, PICT, TVG, CTE-XL, and IPOG. These tools are available for download, and the results are obtained for specific CAs and MCAs.

The comparison organises the experiments into different sets. Since the basic components of a test suite are the input factors (k), the levels of these factors (v), and the combination degree (d), the experiments take these components as their bases. The experiments consist of seven sets, as follows:

i. Experiment 1: (k, d) are varied, but (v) is constant.
ii. Experiment 2: (v, d) are varied, but (k) is constant.
iii. Experiment 3: (v) is varied, but (k, d) are constant.
iv. Experiment 4: (k) is varied, but (v, d) are constant.
v. Experiment 5: (d) is varied, but (k, v) are constant.
vi. Experiment 6: (v) is varied, but (k) is constant, for the (TCAS) model.
vii. Experiment 7: (d) has a constant value, for a multi-domain configuration.

For each experiment, d is varied between 2 and 6 (2 ≤ d ≤ 6), since the literature indicates that most application faults are detected within these values. The experiments are chosen from different configurations to show the efficiency clearly. The configurations are as follows:
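The combination degree drives the size of the covering task: a suite of strength d over k factors must cover, for every d-subset of factors, every assignment of their levels. A small Python helper (illustrative only; the thesis tool itself is in C#) makes these counts concrete:

```python
from itertools import combinations

def interactions_to_cover(levels, d):
    """Number of distinct d-way value combinations that a covering
    array of strength d must cover, for factors with the given levels."""
    total = 0
    for factor_subset in combinations(levels, d):
        product = 1
        for v in factor_subset:
            product *= v
        total += product
    return total

# CA(N; 2, 3^4): C(4, 2) * 3^2 = 54 pairs to cover,
# versus 3^4 = 81 exhaustive test cases.
print(interactions_to_cover([3, 3, 3, 3], 2))   # 54
# CA(N; 2, 3^13): C(13, 2) * 3^2 = 702 pairs to cover.
print(interactions_to_cover([3] * 13, 2))       # 702
```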

• Configuration 1: (3 ≤ k ≤ 10) and (2 ≤ d ≤ 6), with (v = 3). Table 4.2
• Configuration 2: (2 ≤ v ≤ 5) and (2 ≤ d ≤ 6), with (k = 7). Table 4.3
• Configuration 3: (k = 10) and (d = 4), with (2 ≤ v ≤ 6). Table 4.4
• Configuration 4: (v = 5) and (d = 4), with (5 ≤ k ≤ 12). Table 4.5
• Configuration 5: (2 ≤ d ≤ 6), with (k = 10, v = 5) Table 4.6, (k = 10, v = 2) Table 4.7, and (k = 7, v = 3) Table 4.8
• Configuration 6: TCAS model (MCA(N; d, 2^7 3^2 4^1 10^2)). Table 4.9
• Configuration 7: (d = 4) for multi-domain configurations. Table 4.10

The following tables summarise each experiment and show the results obtained by running these strategies.

Table 4.2 Size of variable input configurations when 3 ≤ k ≤ 12, each having 3 levels, and 2 ≤ d ≤ 6

| d | k | Jenny | TConfig | ITCH | PICT | TVG | CTE-XL | IPOG | PSO | CS Best.N | CS AVG.N |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 2 | 3 | 9 | 10 | 9 | 10 | 10 | 10 | 11 | 9 | 9 | 9.6 |
| 2 | 4 | 13 | 10 | 9 | 13 | 12 | 14 | 12 | 9 | 9 | 10.0 |
| 2 | 5 | 14 | 14 | 15 | 13 | 13 | 14 | 14 | 12 | 11 | 11.8 |
| 2 | 6 | 15 | 15 | 15 | 14 | 15 | 14 | 15 | 13 | 13 | 14.2 |
| 2 | 7 | 16 | 15 | 15 | 16 | 15 | 16 | 17 | 15 | 14 | 15.6 |
| 2 | 8 | 17 | 17 | 15 | 16 | 15 | 17 | 17 | 15 | 15 | 15.8 |
| 2 | 9 | 18 | 17 | 15 | 17 | 15 | 18 | 17 | 17 | 16 | 17.2 |
| 2 | 10 | 19 | 17 | 15 | 18 | 16 | 18 | 20 | 17 | 17 | 17.8 |
| 2 | 11 | 17 | 20 | 15 | 18 | 16 | 20 | 20 | 17 | 18 | 18.6 |
| 2 | 12 | 19 | 20 | 15 | 19 | 16 | 20 | 20 | 18 | 18 | 18.8 |
| 3 | 4 | 34 | 32 | 27 | 34 | 34 | 34 | 39 | 30 | 28 | 29.0 |
| 3 | 5 | 40 | 40 | 45 | 43 | 41 | 43 | 43 | 39 | 38 | 39.2 |
| 3 | 6 | 51 | 48 | 45 | 48 | 49 | 52 | 53 | 45 | 43 | 44.2 |
| 3 | 7 | 51 | 55 | 45 | 51 | 55 | 54 | 57 | 50 | 48 | 50.4 |
| 3 | 8 | 58 | 58 | 45 | 59 | 60 | 63 | 63 | 54 | 53 | 54.8 |
| 3 | 9 | 62 | 64 | 75 | 63 | 64 | 66 | 65 | 58 | 58 | 59.8 |
| 3 | 10 | 65 | 68 | 75 | 65 | 68 | 71 | 68 | 62 | 62 | 63.6 |
| 3 | 11 | 65 | 72 | 75 | 70 | 69 | 76 | 76 | 64 | 66 | 68.2 |
| 3 | 12 | 68 | 77 | 75 | 72 | 70 | 79 | 76 | 67 | 70 | 71.8 |
| 4 | 5 | 109 | 97 | 153 | 100 | 105 | NA | 115 | 96 | 94 | 95.8 |
| 4 | 6 | 140 | 141 | 153 | 142 | 139 | NA | 181 | 133 | 132 | 134.2 |
| 4 | 7 | 169 | 166 | 216 | 168 | 172 | NA | 185 | 155 | 154 | 156.8 |
| 4 | 8 | 187 | 190 | 216 | 189 | 192 | NA | 203 | 175 | 173 | 174.8 |
| 4 | 9 | 206 | 213 | 306 | 211 | 215 | NA | 238 | 195 | 195 | 197.8 |
| 4 | 10 | 221 | 235 | 336 | 231 | 233 | NA | 241 | 210 | 211 | 212.2 |
| 4 | 11 | 236 | 258 | 348 | 249 | 250 | NA | 272 | 222 | 229 | 231.0 |
| 4 | 12 | 252 | 272 | 372 | 269 | 268 | NA | 275 | 244 | 253 | 255.8 |
| 5 | 6 | 348 | 305 | NA | 310 | 321 | NA | 393 | 312 | 304 | 307.8 |
| 5 | 7 | 458 | 477 | NA | 452 | 462 | NA | 608 | 441 | 434 | 440.2 |
| 5 | 8 | 548 | 583 | NA | 555 | 562 | NA | 634 | 515 | 515 | 517.8 |
| 5 | 9 | 633 | 684 | NA | 637 | 660 | NA | 771 | 598 | 590 | 593.8 |
| 5 | 10 | 714 | 773 | NA | 735 | 750 | NA | 784 | 667 | 682 | 688.0 |
| 5 | 11 | 791 | 858 | NA | 822 | 833 | NA | 980 | 747 | 778 | 780.2 |
| 5 | 12 | 850 | 938 | NA | 900 | 824 | NA | 980 | 809 | 880 | 882.2 |
| 6 | 7 | 1087 | 921 | NA | 1015 | 1024 | NA | 1281 | 977 | 963 | 970.8 |
| 6 | 8 | 1466 | 1515 | NA | 1455 | 1484 | NA | 2098 | 1402 | 1401 | 1410.8 |
| 6 | 9 | 1840 | 1931 | NA | 1818 | 1849 | NA | 2160 | 1684 | 1689 | 1695.4 |
| 6 | 10 | 2160 | NA | NA | 2165 | 2192 | NA | 2726 | 1980 | 2027 | 2035.4 |
| 6 | 11 | 2459 | NA | NA | 2496 | 2533 | NA | 2739 | 2255 | 2298 | 2302.2 |
| 6 | 12 | 2757 | NA | NA | 2815 | 2597 | NA | 3649 | 2528 | 2638 | 2640.6 |


Table 4.3 Size of variable input configurations when 2 ≤ v ≤ 5, each having 7 factors, and 2 ≤ d ≤ 6

| d | v | Jenny | TConfig | ITCH | PICT | TVG | CTE-XL | IPOG | PSO | CS Best.N | CS AVG.N |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 2 | 2 | 8 | 7 | 6 | 7 | 7 | 8 | 8 | 6 | 6 | 6.8 |
| 2 | 3 | 16 | 15 | 15 | 16 | 15 | 16 | 17 | 15 | 15 | 16.2 |
| 2 | 4 | 28 | 28 | 28 | 27 | 27 | 30 | 28 | 26 | 25 | 26.4 |
| 2 | 5 | 37 | 40 | 45 | 40 | 42 | 42 | 42 | 37 | 37 | 38.6 |
| 3 | 2 | 14 | 16 | 13 | 15 | 15 | 15 | 19 | 13 | 12 | 13.8 |
| 3 | 3 | 51 | 55 | 45 | 51 | 55 | 54 | 57 | 50 | 49 | 51.6 |
| 3 | 4 | 124 | 112 | 112 | 124 | 134 | 135 | 208 | 116 | 117 | 118.4 |
| 3 | 5 | 236 | 239 | 225 | 241 | 260 | 265 | 275 | 225 | 223 | 225.4 |
| 4 | 2 | 31 | 36 | 40 | 32 | 31 | NA | 48 | 29 | 27 | 29.6 |
| 4 | 3 | 169 | 166 | 216 | 168 | 167 | NA | 185 | 155 | 155 | 156.8 |
| 4 | 4 | 517 | 568 | 704 | 529 | 559 | NA | 509 | 487 | 487 | 490.2 |
| 4 | 5 | 1248 | 1320 | 1750 | 1279 | 1385 | NA | 1349 | 1176 | 1171 | 1175.2 |
| 5 | 2 | 57 | 56 | NA | 57 | 59 | NA | 128 | 53 | 53 | 54.2 |
| 5 | 3 | 458 | 477 | NA | 452 | 464 | NA | 608 | 441 | 439 | 442.2 |
| 5 | 4 | 1938 | 1792 | NA | 1933 | 2010 | NA | 2560 | 1426 | 1845 | 1850.8 |
| 5 | 5 | 5895 | NA | NA | 5814 | 6257 | NA | 8091 | 5474 | 5479 | 5485.2 |
| 6 | 2 | 87 | 64 | NA | 72 | 78 | NA | 64 | 64 | 66 | 67.2 |
| 6 | 3 | 1087 | 921 | NA | 1015 | 1016 | NA | 1281 | 977 | 973 | 978.4 |
| 6 | 4 | 6127 | NA | NA | 5847 | 5978 | NA | 4096 | 5599 | 5610 | 5620.8 |
| 6 | 5 | 23492 | NA | NA | 22502 | 23218 | NA | 28513 | 21595 | 21597 | 21610.8 |


Table 4.4 Size of variable input configurations when 2 ≤ v ≤ 6, each having 10 factors, and d = 4

| v | Jenny | TConfig | ITCH | PICT | TVG | IPOG | PSO | CS Best.N | CS AVG.N |
|---|---|---|---|---|---|---|---|---|---|
| 2 | 39 | 45 | 58 | 43 | 40 | 49 | 34 | 28 | 30.4 |
| 3 | 221 | 235 | 336 | 231 | 228 | 241 | 213 | 211 | 212.8 |
| 4 | 703 | 718 | 704 | 742 | 782 | 707 | 685 | 698 | 701.8 |
| 5 | 1719 | 1875 | 1750 | 1812 | 1917 | 1965 | 1716 | 1731 | 1740.2 |
| 6 | 3519 | NA | NA | 3735 | 4195 | 3335 | 3880 | 3894 | 3902.6 |

Table 4.5 Size of variable input configurations when d = 4 and 5 ≤ k ≤ 12, each having 5 levels

| k | Jenny | TConfig | ITCH | PICT | TVG | IPOG | PSO | CS Best.N | CS AVG.N |
|---|---|---|---|---|---|---|---|---|---|
| 5 | 837 | 773 | 625 | 810 | 849 | 908 | 779 | 776 | 781.8 |
| 6 | 1074 | 1092 | 625 | 1072 | 1128 | 1239 | 1001 | 991 | 1002.4 |
| 7 | 1248 | 1320 | 1750 | 1279 | 1384 | 1349 | 1209 | 1200 | 1205.4 |
| 8 | 1424 | 1532 | 1750 | 1468 | 1595 | 1792 | 1417 | 1415 | 1420.6 |
| 9 | 1578 | 1724 | 1750 | 1643 | 1795 | 1793 | 1570 | 1562 | 1672.4 |
| 10 | 1791 | 1878 | 1750 | 1812 | 1971 | 1965 | 1716 | 1731 | 1740.2 |
| 11 | 1839 | 2038 | 1750 | 1957 | 2122 | 2091 | 1902 | 2062 | 2070.6 |
| 12 | 1964 | NA | 1750 | 2103 | 2268 | 2258 | 2015 | 2223 | 2230.8 |


Table 4.6 Size of variable input configurations when k = 10, each having 5 levels, when 2 ≤ d ≤ 6

| d | Jenny | TConfig | ITCH | PICT | TVG | CTE-XL | IPOG | PSO | CS Best.N | CS AVG.N |
|---|---|---|---|---|---|---|---|---|---|---|
| 2 | 45 | 48 | 45 | 47 | 50 | 50 | 50 | 45 | 45 | 47.8 |
| 3 | 290 | 312 | 225 | 310 | 342 | 347 | 313 | 287 | 297 | 299.2 |
| 4 | 1719 | 1878 | 1750 | 1818 | 1971 | NS | 1965 | 1716 | 1731 | 1740.2 |
| 5 | 9437 | NA | NS | 9706 | NA | NS | 11009 | 9425 | 9616 | 9620.4 |
| 6 | NA | NA | NS | 47978 | NA | NS | 57290 | 50350 | 46098 | 48500.6 |

Table 4.7 Size of variable input configurations when k = 10, each having 2 levels, when 2 ≤ d ≤ 6

| d | Jenny | TConfig | ITCH | TVG | IPOG | PSO | CS Best.N | CS AVG.N |
|---|---|---|---|---|---|---|---|---|
| 2 | 10 | 9 | 6 | 10 | 10 | 8 | 8 | 9.0 |
| 3 | 18 | 20 | 18 | 17 | 19 | 17 | 16 | 17.4 |
| 4 | 39 | 45 | 58 | 41 | 49 | 37 | 36 | 38.2 |
| 5 | 87 | 95 | NS | 84 | 128 | 82 | 79 | 81.8 |
| 6 | 169 | 183 | NS | 168 | 352 | 158 | 157 | 160.2 |

Table 4.8 Size of variable input configurations when 2 ≤ d ≤ 6, k = 7, and v = 3

| d | IPOG | ITCH | TConfig | PSO | CS Best.N | CS AVG.N |
|---|---|---|---|---|---|---|
| 2 | 17 | 15 | 15 | 14 | 15 | 16.0 |
| 3 | 57 | 45 | 55 | 50 | 49 | 50.2 |
| 4 | 185 | 216 | 166 | 160 | 154 | 156.8 |
| 5 | 608 | NS | 477 | 444 | 434 | 439.2 |
| 6 | 1281 | NS | 921 | 955 | 966 | 970.2 |


Table 4.9 TCAS module (MCA(N; d, 2^7 3^2 4^1 10^2)), k = 12

| d | IPOG | ITCH | Jenny | TConfig | TVG II | PSO | CS Best.N | CS AVG.N |
|---|---|---|---|---|---|---|---|---|
| 2 | 100 | 120 | 108 | 108 | 101 | 100 | 100 | 104.2 |
| 3 | 400 | 2388 | 413 | 472 | 434 | 400 | 410 | 415.2 |
| 4 | 1377 | 1484 | 1536 | 1548 | 1599 | 1520 | 1537 | 1540.0 |
| 5 | 4283 | NS | 4621 | NS | 4773 | 4566 | 4566 | 4576.2 |
| 6 | 11939 | NS | 11625 | NS | NS | 11743 | 11431 | 11450.0 |

Table 4.10 Five multi-domain configurations (d = 4)

| Configuration | Jenny | TConfig | ITCH | PICT | TVG | IPOG | PSO | CS Best.N | CS AVG.N |
|---|---|---|---|---|---|---|---|---|---|
| MCA(N; 4, 3^4 4^5) | 457 | 499 | 704 | 439 | 487 | 463 | 447 | 439 | 445.2 |
| MCA(N; 4, 5^1 3^8 2^2) | 303 | 302 | 1683 | 310 | 313 | 324 | 292 | 292 | 295.4 |
| MCA(N; 4, 8^2 7^2 6^2 5^2) | 4580 | 4317 | 4085 | 4565 | 5124 | 4776 | 4506 | 4438 | 4450.6 |
| MCA(N; 4, 6^5 5^4 3^2) | 3033 | NA | NA | 2634 | 2881 | 3273 | 3154 | 2703 | 2730.2 |
| MCA(N; 4, 10^1 9^1 8^1 7^1 6^1 5^1 4^1 3^1 2^1) | 6138 | 5495 | 5922 | 5916 | 6698 | 5492 | 5906 | 6016 | 5845.2 |


The results in Table 4.2 and Table 4.3 show that some strategies, such as ITCH and CTE-XL, are not ready for use at higher strengths: they support combination degrees 2 and 3 only, although they obtain fair results when d ≤ 3. Similarly, Jenny, TConfig, PICT, TVG, IPOG, and PSO generate suitable results in some of the configurations when d ≥ 3. Overall, the CS strategy produces satisfactory results in all configurations and obtains the optimal solution in most cases.

In Table 4.4, results for both TConfig and ITCH are not available when v = 6. Here, the tools and strategies generate test suites whose results are reasonable relative to each other, but TConfig, ITCH, PICT, TVG, and IPOG do not show optimal sizes. On the other hand, Jenny and PSO generate better results than the above-mentioned strategies for most of the configurations, with PSO obtaining the most optimal sizes when v = 4 and v = 5, whereas CS obtains the most optimal sizes when v = 2 and v = 3.

Table 4.5 shows that Jenny, TConfig, PICT, TVG, and IPOG produce acceptable results but do not obtain the smallest test suite sizes. In addition, the TConfig result is not available for k = 12, and CTE-XL is not presented because it does not support degrees higher than 3. ITCH generates the smallest sizes when k = 5, 6, 11, and 12. For k = 10, PSO generates the best size. Finally, CS generates the smallest sizes for k = 7 up to k = 9.


Based on the 5th experiment, Tables 4.6 to 4.8 examine the results obtained by the strategies and tools for degrees varying from 2 up to 6, with the number of input factors and their levels fixed for each table: (k = 10, v = 5), (k = 10, v = 2), and (k = 7, v = 3), respectively. Table 4.6 shows that TConfig, TVG, CTE-XL, and IPOG generate acceptable results but no optimum test suite sizes, and TConfig and ITCH are unsuitable for degrees higher than 4. In addition, Jenny does not produce a result for d = 6, while it produces the most optimal size when d = 2. ITCH produces the smallest test suite size when the degree is less than 4, but it does not support d = 6. Similar to ITCH, CTE-XL does not support combination degrees higher than 3. PSO and PICT generate the best sizes in some configurations. CS generates acceptable results in most cases and obtains the most optimal sizes when d = 2 and d = 6. The results in Table 4.7 show that CS produces satisfactory results and obtains the smallest sizes in most of the cases, except for d = 2, where ITCH generates the best size. The last table of the 5th experiment, Table 4.8, shows that IPOG and TConfig do not produce optimum sizes, while PSO, ITCH, and CS produce satisfactory results and acquire the smallest sizes.

The configuration in Table 4.9 belongs to the traffic collision avoidance system (TCAS). TCAS is a specification model of a software module [24] with 12 input factors: seven 2-level, two 3-level, one 4-level, and two 10-level factors. The results show that ITCH, Jenny, TConfig, and TVG produce acceptable results but do not obtain the smallest sizes. ITCH and TConfig do not support degrees higher than 4, and TVG does not support d = 6. PSO generates the smallest sizes when d = 2 and d = 3, and IPOG generates results for most of the cases. The results of CS are acceptable throughout, and it obtains the most optimal size when d = 2 and d = 6.

For the multi-domain configurations in Table 4.10, CS produces the best results in some cases; where it is not the best, its sizes remain within an acceptable range — as is also true for the other strategies, each of which obtains the best size for some configurations and fails for others. Note that TConfig and ITCH results are not available for the fourth configuration.

Overall, the efficiency comparisons in Tables 4.2 to 4.10 confirm that the CS strategy outperforms the other strategies in most of the configurations. Although CS cannot obtain the optimal solution in some cases, its solutions remain within an acceptable range. In addition, CS supports all combination degrees of interaction, whereas some strategies are unsuitable for high degrees.

4.5 The CS Performance Evaluation Experiments

Performance is measured by the time taken to generate the test suite. Here, the evaluation considers only those strategies and tools that are publicly available for installation, since execution time depends on the specification of the system on which the strategy runs. The strategies considered for this experiment are Jenny, TConfig, ITCH, PICT, TVG, CTE-XL, IPOG-D, IPOG, and PSO. The experiments are divided into three groups based on d, k, and v, as follows:


i. Experiment 1: (d) is varied, and (v, k) are constant.
ii. Experiment 2: (v) is varied, and (d, k) are constant.
iii. Experiment 3: (k) is varied, and (v, d) are constant.

For each experiment, (d) represents the degree of the combinations, (k) the number of input factors, (v) the levels of each input factor, and (t) the time duration in seconds. The experiments are chosen from different configurations to show the performance clearly. The configurations are as follows:

• Configuration 1: (2 ≤ d ≤ 6), (k = 7) and (v = 3). Table 4.11
• Configuration 2: (2 ≤ v ≤ 6), (k = 7) and (d = 3). Table 4.12
• Configuration 3: (4 ≤ k ≤ 10), (d = 3) and (v = 3). Table 4.13

The results are summarized and reported in Tables 4.11 to 4.13. For each configuration, the most optimal solution (best size) is bolded, and the best generation time (in seconds) is bolded and underlined. Some tools and strategies cannot produce results in specific configurations, since they do not report any result even after a long execution time; these cases are marked "NS" (Not Supported).

For those strategies that depend on some degree of randomness, like CS, the strategy is run ten times, and the smallest solution, together with the average size and time for each test configuration, is calculated and reported. It should be noted that in some cases the best size is reported while the best time is not; this is a consequence of the implementation and the data structures applied in that strategy.
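The Best.N and AVG.N columns follow from exactly this bookkeeping over repeated runs. A minimal sketch in Python, where `run_strategy` is a hypothetical stand-in for one randomized generation run returning a suite size and an elapsed time:

```python
import random

def run_strategy(seed):
    """Hypothetical stand-in for one randomized generation run;
    returns (test_suite_size, elapsed_seconds)."""
    rng = random.Random(seed)
    return 14 + rng.randrange(4), round(1.5 + rng.random(), 2)

def summarize(runs=10):
    # Run the strategy `runs` times; report the best (smallest) size,
    # the average size, and the average generation time.
    results = [run_strategy(seed) for seed in range(runs)]
    sizes = [n for n, _ in results]
    times = [t for _, t in results]
    return min(sizes), sum(sizes) / len(sizes), sum(times) / len(times)

best_n, avg_n, avg_t = summarize()
print(best_n, avg_n, avg_t)
```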


Table 4.11 Test sizes and execution times (N/t, t in seconds) for seven input factors, each having three levels, when 2 ≤ d ≤ 6

| d | Jenny | TConfig | ITCH | PICT | TVG | CTE-XL | IPOG-D | IPOG | PSO | CS Best.N/t | CS AVG.N/t |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 2 | 16/0.32 | 15/0.23 | 15/17.2 | 16/0.59 | 15/0.22 | 16/0.24 | 18/0.19 | 17/0.443 | 15/0.21 | 14/1.68 | 15.2/1.88 |
| 3 | 51/0.57 | 55/1.86 | 45/36.72 | 51/0.81 | 55/0.41 | 54/2.35 | 63/0.36 | 57/0.614 | 50/4.19 | 50/0.133 | 52.4/0.18 |
| 4 | 169/0.62 | 166/17.4 | 216/42.62 | 168/1.46 | 167/0.82 | NS | NS | 185/1.357 | 155/11.11 | 156/3.30 | 157.2/12.50 |
| 5 | 458/1.73 | 477/187.52 | NS | 452/2.27 | 464/4.602 | NS | 735/0.86 | 608/2.264 | 441/40.83 | 440/13.43 | 439.2/15.40 |
| 6 | 1087/2.43 | 921/1122.8 | NS | 1015/11.547 | 1016/10.342 | NS | 1548/1.18 | 1281/3.97 | 977/103.72 | 963/20.41 | 970.2/22.40 |

Table 4.12 Test sizes and execution times (N/t, t in seconds) for input configurations when k = 7, each factor having 2 ≤ v ≤ 6 levels, with d = 3

| v | Jenny | TConfig | ITCH | PICT | TVG | CTE-XL | IPOG-D | IPOG | PSO | CS Best.N/t | CS AVG.N/t |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 2 | 14/0.18 | 16/0.54 | 13/22.4 | 15/0.26 | 15/0.18 | 15/0.32 | 14/0.14 | 19/0.81 | 13/0.32 | 12/6.50 | 13.8/7.80 |
| 3 | 51/0.57 | 55/1.16 | 45/37.12 | 51/0.81 | 55/0.41 | 54/2.55 | 63/0.33 | 57/0.534 | 50/4.21 | 50/0.133 | 52.4/0.18 |
| 4 | 124/1.21 | 112/3.43 | 112/91.3 | 124/0.94 | 134/0.35 | 136/5.7 | 112/0.33 | 208/0.84 | 116/21.34 | 118/1.49 | 118.8/1.85 |
| 5 | 236/2.16 | 239/17.53 | 225/112.1 | 241/1.9 | 260/1.75 | 267/20.5 | 292/0.84 | 275/1.95 | 225/35.6 | 233/5.52 | 235.2/6.25 |
| 6 | 400/3.21 | 423/84.58 | 1177/562.5 | 413/2.62 | 464/4.21 | 467/55.6 | 532/1.12 | 455/3.514 | 425/183.56 | 403/12.59 | 410.2/14.50 |


Table 4.13 Test sizes and execution times (N/t, t in seconds) for input configurations when 4 ≤ k ≤ 10, each factor having three levels, with d = 3

| k | Jenny | TConfig | ITCH | PICT | TVG | CTE-XL | IPOG-D | IPOG | PSO | CS Best.N/t | CS AVG.N/t |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 4 | 34/0.08 | 32/0.14 | 27/31.16 | 34/0.14 | 34/0.17 | 34/0.75 | 27/0.03 | 39/0.22 | 27/0.17 | 29/0.15 | 29.4/0.20 |
| 5 | 40/0.12 | 40/0.21 | 45/37.42 | 43/0.45 | 41/0.21 | 43/1.21 | 49/0.1 | 43/0.31 | 39/1.739 | 39/0.12 | 39.4/0.20 |
| 6 | 51/0.39 | 48/0.63 | 45/37.62 | 48/0.83 | 49/0.48 | 52/1.45 | 49/0.12 | 53/0.51 | 45/2.25 | 45/0.39 | 46.2/0.55 |
| 7 | 51/0.42 | 55/1.72 | 45/37.85 | 51/0.98 | 55/0.57 | 54/2.31 | 63/0.31 | 57/0.61 | 50/4.21 | 48/0.44 | 49.6/0.58 |
| 8 | 58/0.69 | 58/2.21 | 45/38.37 | 59/1.12 | 60/1.251 | 63/2.43 | 63/0.32 | 63/0.23 | 54/7.15 | 55/2.21 | 55.2/2.45 |
| 9 | 62/0.73 | 64/2.12 | 75/52.4 | 63/2.32 | 64/1.1812 | 66/4.39 | 71/0.97 | 65/1.36 | 58/7.13 | 60/2.39 | 60.8/3.25 |
| 10 | 65/0.9 | 68/4.51 | 75/52.67 | 65/2.34 | 68/2.12 | 71/4.94 | 71/0.11 | 68/1.92 | 62/12.18 | 64/3.15 | 66.2/4.10 |


Table 4.11 shows that CTE-XL supports only degrees of at most 3, and ITCH does not support degrees higher than 4. ITCH produces satisfactory results, and it generates the smallest size when d = 3; however, it needs a long time to generate this configuration. On the other hand, CS obtains the most optimal generation time when d = 3. When d = 4, Jenny obtains the minimum generation time, while PSO and CS obtain the smallest sizes. TConfig, PICT, TVG, IPOG, and PSO produce satisfactory results, but none of them generates the best solution. Although IPOG-D obtains the best time in most cases, it does not deliver a suitable size. In most of the cases, the CS strategy produces satisfactory times together with optimal sizes.

The results in Table 4.12 show that some strategies, such as TConfig, PICT, TVG, IPOG, and PSO, do not obtain the best solutions, although their outcomes are satisfactory. ITCH produces satisfactory results, especially for 3 ≤ v ≤ 5, where it generates the smallest sizes, but its generation times for these configurations are long. Jenny produces satisfactory results, especially when v = 6, where it obtains the smallest size, although IPOG-D produces the best generation time for this configuration. When v = 2, the CS strategy produces the most optimal size, but the best time is obtained by IPOG-D. Overall, IPOG-D has the better generation time; however, it fails to generate the best sizes.

The configuration in Table 4.13 examines the generated test suite sizes and their generation times as the number of input factors (k) grows. Here, TConfig, PICT, TVG, CTE-XL, and IPOG do not produce optimal solutions in either size or time. Jenny obtains the most optimal generation time when k = 4, 5, 8, and 9. ITCH generates the smallest test suite size when k = 6 and k = 8; however, its times are never optimal. PSO generates the most optimal sizes when k = 4, k = 5, and k = 8 up to k = 10, but it is also unable to obtain the best generation time. CS produces satisfactory results and generates the most optimal sizes for k = 5 up to k = 7; in addition, it obtains the best combination of time and size, especially when k = 5. Generally, IPOG-D obtains the minimum generation time in most of the cases.

4.6 The CS Effectiveness Evaluation through an Empirical Case Study

An artefact program is selected as the object of the empirical case study. The program is used to evaluate the personal information of new applicants for officer positions. The program consists of various GUI components that represent personal information and criteria, converting them to a weighted number. Each criterion affects the final result, which decides the rank and monthly wage of the officer; the final number is the resulting point score. The program is selected because it has a non-trivial code base and different configurations. Figure 4.1 shows the main window of the program.

The program regards its different configurations as input factors, each with different levels. For example, the user can choose among the "No Degree, Primary, Secondary, Diploma, Bachelor, Master, and Doctorate" levels for the "Degree" factor. Table 4.14 summarizes the factors and levels for the program.


Figure 4.1 Main window of the empirical study program

Table 4.14 Summary of the input factors and levels for the case study program

| No. | Factor | Levels |
|---|---|---|
| 1 | Degree | No Degree, Primary, Secondary, Diploma, Bachelor, Master, Doctorate |
| 2 | Children | none, 1, 2, 3, 4, More_than_4 |
| 3 | Read | checked, unchecked |
| 4 | Write | checked, unchecked |
| 5 | Speak | checked, unchecked |
| 6 | Understand | checked, unchecked |
| 7 | New graduate | checked, unchecked |
| 8 | Experience | checked, unchecked |
| 9 | English | checked, unchecked |
| 10 | Disability | checked, unchecked |
| 11 | Marital Status | Single, Married, Widow |
| 12 | Resident | Local, Outsider, Foreigner |

To this end, the input configuration of the program can be represented by one factor with seven levels, one factor with six levels, eight factors with two levels each, and two factors with three levels each. Thus, this input configuration can be written in MCA notation as MCA(N; d, 7^1 6^1 2^8 3^2). Exhaustive configuration testing would therefore require 7 × 6 × 2^8 × 3^2 = 96,768 test cases. In this study, a combinatorial test suite is generated from this input configuration to minimize the number of test cases.
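The exhaustive count is simply the product of the level counts in the MCA notation; a quick check:

```python
from math import prod

# Case-study input configuration, MCA(N; d, 7^1 6^1 2^8 3^2):
levels = [7] + [6] + [2] * 8 + [3] * 2

exhaustive = prod(levels)  # 7 * 6 * 2**8 * 3**2
print(exhaustive)          # 96768
```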

Table 4.15 Size of the test suite used for the case study

| Comb. degree (d) | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|
| Test suite size | 42 | 136 | 446 | 1205 | 2886 |

The program was injected with various types of mutations (faults) using MuClipse [80] to verify the effectiveness of the proposed strategy. MuClipse is mutation injection software that uses muJava as its mutation tool. MuClipse creates various types of faults within the original program to test the effectiveness of the generated test suites in detecting them.

In general, mutation testing provides two benefits for the test suites obtained from the strategy. First, it verifies the contribution of the different methods and variables defined in a class to the calculation process within that class. Second, it determines whether any test cases exhibit similar behaviour or reactions. Identifying such similar test cases and reducing the number of cases in the final test suite are important.
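The mechanics can be shown on a toy example: a mutant changes one operator in a simplified, invented scoring function, and a test case "kills" the mutant when the two versions disagree on its output. The real faults in this study were seeded by MuClipse/muJava, not hand-written like this:

```python
def score(degree_points, children):
    # Original (toy) scoring rule, invented for illustration.
    return degree_points * 10 + children

def score_mutant(degree_points, children):
    # Arithmetic-operator-replacement mutant: '+' became '-'.
    return degree_points * 10 - children

test_cases = [(3, 0), (5, 2), (1, 4)]

# A test case kills the mutant when the outputs differ.
killed = [tc for tc in test_cases if score(*tc) != score_mutant(*tc)]
print(killed)  # [(5, 2), (1, 4)] — (3, 0) cannot distinguish the mutant
```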

As shown in Table 4.15, when the combination degree is 2, 42 test cases were generated by the optimization algorithm, covering the entire code. muJava generated 278 mutation classes, which were then reduced to 70 mutation classes because similar mutation concepts produce the same effect. Figure 4.2 shows the reaction of these test cases to the 70 mutation classes.


Figure 4.2 Reaction of the test cases with the configuration for the number of mutations detected when d = 2 (bar chart; legend: "Failed to detect Mutation" vs. "Failed Test due to Mutation")


The blue strips in Figure 4.2 represent the number of mutation classes for which the test case still achieved a correct result: the injected mutation has no effect on the class calculation and final result, so the test case is unaffected. By contrast, the red strips represent the number of failed test cases caused by the injected mutation: the mutation has a direct effect on the calculated result and thus produces an incorrect result. In this study, when d = 2, 12 faults were not detected by the 42 tests.

The number of failed test cases across the mutation classes was used to determine which test cases have the same response. Cases with the same number of failed tests were compared to detect any behavioural similarity toward the mutations. From the obtained results, test cases 22 and 29 exhibited the same response for all mutation classes; as a result, test case 29 is redundant and can be deleted. The remaining test cases responded differently to the mutation classes and are thus retained.
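This redundancy check amounts to comparing each test case's pass/fail vector over all mutation classes: two cases with identical vectors (like test cases 22 and 29 above) add no distinguishing power, so one can be dropped. A sketch with invented vectors:

```python
def redundant_cases(kill_vectors):
    """kill_vectors: {test_case_id: tuple of pass/fail over all mutants}.
    Returns the test cases whose vector duplicates an earlier case's."""
    seen = {}
    redundant = []
    for case_id, vector in sorted(kill_vectors.items()):
        if vector in seen:
            redundant.append(case_id)  # same response as seen[vector]
        else:
            seen[vector] = case_id
    return redundant

# Invented pass/fail vectors over four mutation classes:
vectors = {
    22: (1, 0, 1, 1),
    29: (1, 0, 1, 1),  # identical to case 22 -> deletable
    30: (0, 1, 1, 0),
}
print(redundant_cases(vectors))  # [29]
```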

When the combination degree is 3, 136 test cases are obtained from the program testing strategy. Figure 4.3 shows the reaction of these test cases to the same 70 mutation classes used previously.


Figure 4.3 Reaction of the test cases with the configuration for the number of mutations detected when d=3


As shown in Figure 4.3, the 136 test cases were checked for similarity in the same manner as in the previous case, and test cases with identical responses to the mutations were deleted to further optimize the final test suite. Notably, many individual tests could not detect many mutations at once; however, the test cases as a whole successfully detected all the mutations, including the 12 faults missed by the test suite with a combination degree of 2. The higher combination degrees (i.e., the test suites for d > 3) were also able to detect the faults; however, since all the faults they detected were already detected by the test suite of d = 3, those results are not reported here to avoid redundancy.


CHAPTER 5

CONCLUSION AND FUTURE WORK

5.1 Conclusion

In this thesis, a strategy for combinatorial test suite generation using CSA is proposed. The strategy is applicable to software testing activities in which the combinations of input configurations are considered. CS has recently been proven an effective optimization algorithm for NP-complete problems. An extensive evaluation with different benchmarks and experimental cases has been presented to determine the strengths and weaknesses of the proposed strategy. Through the thesis, the following conclusions can be drawn:

1. CAs can be used successfully in combinatorial testing processes. Faults caused by the interaction of the input factors of the software under test can be detected effectively. As can be seen from Figure 3.10, combinatorial testing can reduce the number of test cases effectively by covering all the interactions of the input factors with a minimized set of test cases. This was tested through the case study, which supported the hypothesis: the minimized test suite detected all the faults detected by the exhaustive test suite.

2. Using CS to optimize the combinatorial test suites generates better results most of the time, with reasonable generation times, compared with its counterpart strategies.

3. The strategy does not rely on the CS algorithm alone to generate the final test suite. It combines different algorithms and mechanisms with CS to generate the combinations, check their coverage, and delete them once they are covered successfully. To this end, a novel indexing mechanism is also developed and implemented, enabling the strategy to search the coverage faster and improving performance.

4. The current strategy is implemented with a user-friendly GUI that supports flexible entry of the input factors and levels. The inputs can be entered manually by specifying the number of input factors and then entering the levels for each of them, or imported automatically from an Excel sheet. Likewise, the output of the strategy can be displayed on the standard Windows console or exported to an Excel sheet.

5.2 Suggestions for Future Studies

Based on the discussion of the CS strategy and the evaluation results, the following directions for future studies can be suggested:

1. This research can be extended to construct other strengths of input factors, such as variable-strength covering arrays (VSCA).
2. The ability to detect interaction failures and the applicability of the research make it suitable for other directions, such as mobile application testing, GUI testing, test case prioritization, and fault characterization.
3. The strategy can be extended to support higher interaction strengths (i.e., degrees greater than 6) and to increase the experimentation.
4. This research can be extended to use data structures other than arrays, such as hash maps or hash tables, to further reduce the index size.



PUBLICATION

[1] B. S. Ahmed, T. S. Abdulsamad, and M. Y. Potrus, "Achievement of minimized combinatorial test suite for configuration-aware software functional testing using the Cuckoo Search algorithm," Information and Software Technology, vol. 66, pp. 13-29, 2015, Elsevier. [ISI, indexed by Scopus; Thomson Reuters impact factor = 1.32].


Kurdish Abstract


Arabic Abstract

Nowadays, software has become one of the keys to innovation in many fields of scientific and engineering application. Ensuring quality is essential because of the different configurations in the input-factor domain of each program. Ensuring software quality requires evaluating all configurations and their interactions against the expected results; at the same time, exhaustive testing is impractical because the configuration space is large, so sampling techniques are used to represent these configurations. Combinatorial testing is used to detect faults that cannot be detected by other methods. This technique uses combinatorial optimization concepts to systematically minimize the number of test cases according to the combinations of inputs. The aim of this research is a new strategy for constructing combinatorial test suites using the search behavior of the cuckoo bird (Cuckoo Search). Cuckoo Search is used for the design and implementation of this strategy to build optimized combinatorial sets. The strategy consists of several different algorithms that work together to serve the Cuckoo Search. According to the experiment sets, this new strategy achieves two goals, efficiency and performance; for the experiments undertaken for evaluation, the strategy obtained the minimum results for 50 configurations out of 95. Moreover, the practical side of the strategy was assessed by applying the generated test suites in a real case study for functional testing, and the results show that the generated test suites detect faults. Furthermore, this strategy opens several avenues for applying Cuckoo Search in software engineering.

CoverPage_Arabic

Design and Implementation of a Combinatorial Testing Strategy Using the Adaptive Cuckoo Algorithm

A thesis submitted to the College of Commerce - University of Sulaimani in partial fulfillment of the requirements for the degree of Master of Science in Information Technology

By
Taib Shamsadin Abdulsamad

Supervised by
Lecturer Dr. Bestoun Salahaddin Ahmed
