close

Вход

Забыли?

вход по аккаунту

?

Ontology-based Unit Test Generation

код для вставкиСкачать
Ontology-based Unit Test Generation
by
Valeh Hosseinzadeh Nasser
B . S c , Amirkabir University of Technology, 2007
A THESIS S U B M I T T E D IN PARTIAL F U L F I L L M E N T OF T H E
R E Q U I R E M E N T S FOR T H E D E G R E E OF
Master of Computer Science
In the Graduate Academic Unit of Faculty of Computer Science
Supervisor(s):
Examining Board:
Weichang Du, Ph.D., Computer Science
Dawn Maclsaac, Ph.D., Computer Science
Przemyslaw R. Pochec, Ph.D., Computer Science, Chair
Harold Boley, Ph.D., Computer Science
Yevgen Biletskiy, Ph.D., Electrical and Computer Engineering
This thesis is accepted
Dean of Graduate Studies
T H E U N I V E R S I T Y OF N E W B R U N S W I C K
September, 2009
©Valeh Hosseinzadeh Nasser, 2009
1*1
Library and Archives
Canada
Bibliotheque et
Archives Canada
Published Heritage
Branch
Direction du
Patrimoine de I'edition
395 Wellington Street
Ottawa ON K1A 0N4
Canada
395, rue Wellington
OttawaONK1A0N4
Canada
Your file Votre reference
ISBN: 978-0-494-82648-5
Our file Notre reference
ISBN: 978-0-494-82648-5
NOTICE:
AVIS:
The author has granted a nonexclusive license allowing Library and
Archives Canada to reproduce,
publish, archive, preserve, conserve,
communicate to the public by
telecommunication or on the Internet,
loan, distribute and sell theses
worldwide, for commercial or noncommercial purposes, in microform,
paper, electronic and/or any other
formats.
L'auteur a accorde une licence non exclusive
permettant a la Bibliotheque et Archives
Canada de reproduire, publier, archiver,
sauvegarder, conserver, transmettre au public
par telecommunication ou par I'lnternet, preter,
distribuer et vendre des theses partout dans le
monde, a des fins commerciales ou autres, sur
support microforme, papier, electronique et/ou
autres formats.
The author retains copyright
ownership and moral rights in this
thesis. Neither the thesis nor
substantial extracts from it may be
printed or otherwise reproduced
without the author's permission.
L'auteur conserve la propriete du droit d'auteur
et des droits moraux qui protege cette these. Ni
la these ni des extraits substantiels de celle-ci
ne doivent etre im primes ou autrement
reproduits sans son autorisation.
In compliance with the Canadian
Privacy Act some supporting forms
may have been removed from this
thesis.
Conformement a la loi canadienne sur la
protection de la vie privee, quelques
formulaires secondaires ont ete enleves de
cette these.
While these forms may be included
in the document page count, their
removal does not represent any loss
of content from the thesis.
Bien que ces formulaires aient inclus dans
la pagination, il n'y aura aucun contenu
manquant.
1+1
Canada
Dedication
To my beloved husband, who has always been an inspiration to me and my dear
parents and brother, who have given me great support and love.
Valeh H. Nasser
n
Abstract
Various software systems have different test requirements. In order to specify adequate levels of testing, coverage criteria are used. The knowledge that is referred by
coverage criteria for test case selection is defined in test oracles. This thesis is devoted to application of knowledge engineering techniques to facilitate the enrichment
of test oracles with test experts' mental model of error-prone aspects of software and
definitions of custom coverage criteria.
The test oracles are represented in ontologies, which are highly extensible. The
coverage criteria are written in a rule language, using the vocabulary defined by the
test oracle ontology. This approach makes it possible for the test experts to add
the knowledge to the test oracles and compose new coverage criteria. To decouple
the knowledge that is represented in test oracles from the test selection algorithms,
reasoning is used for test case selection. Prevalent test case generation technologies
are then used for generating the test cases. The focus of this thesis is on unit testing
based on UML state machines.
m
Acknowledgements
I would like to express profound gratitude to my supervisors, Dr. Weichang Du
and Dr. Dawn Maclsaac, for their recommendations, encouragement and support
throughout the course of this thesis. I am also highly thankful to Dr. Harold Boley
for his invaluable suggestions.
Valeh H. Nasser
IV
Table of Contents
Dedication
ii
Abstract
iii
Acknowledgments
iv
Table of Contents
ix
List of Tables
x
1
Introduction
1
1.1
Test Generation and the Role of Test Experts
1
1.2
Thesis Scope
4
1.3
Structure of Thesis
5
List of Figures
1
2
Background
6
2.1
Specification of What Needs to Be Tested
8
2.1.1
UML State Machines
8
2.1.2
Coverage Criteria for State-machine-based Unit Testing . . . .
8
2.1.3
Mapping UML to OWL
9
2.2
Identification of Test Objectives Through Reasoning
v
14
2.3
3
Generation of Test Cases with Artificial Intelligence Planning
....
16
2.3.1
PDDL 2.1
17
2.3.2
A Mapping between PDDL and UML State Machine
19
A n Ontology-based M e t h o d for Software Testing
22
3.1
Method Overview
22
3.2
Syntax and Semantics of Specifications
24
3.2.1
Behavioral Model Ontology
24
3.2.2
Expert Knowledge Ontology
25
3.2.3
Test Objectives
29
3.2.4
Coverage Criteria Rules
30
3.2.5
Abstract Test Suite Ontology
32
3.2.6
Redundancy Checking Rule Templates
33
3.2.7
Implementation Knowledge Ontology
35
3.2.8
Executable Test Suite
37
3.3
3.4
Transformation Phases
38
3.3.1
Test Objective Generation Phase
38
3.3.2
Redundancy Checking Phase
40
3.3.3
Abstract Test Suite Ontology Generation Phase
41
3.3.4
Executable Test Suite Generation Phase
42
A Simple Example
43
3.4.1
Elevator Door Example
43
3.4.2
Specifications
43
3.4.2.1
Behavioral Model Ontology
43
3.4.2.2
Coverage Criteria Rule
46
3.4.2.3
Test Objectives
46
3.4.2.4
Expert Knowledge Ontology
46
vi
3.4.3
3.5
4
5
3.4.2.5
Abstract Test Suite Ontology
47
3.4.2.6
Redundancy Checking Rule Templates
48
3.4.2.7
Implementation Knowledge Ontology
48
3.4.2.8
Executable Test Suite
49
Transformation Phases
50
3.4.3.1
Test Objective Generation Phase
51
3.4.3.2
Redundancy Checking Phase
51
3.4.3.3
Abstract Test Suite Ontology Generation Phase
3.4.3.4
Executable Test Suite Generation Phase
Summary
. .
53
54
55
S y s t e m Design
57
4.1
System Overview
57
4.2
Design Classes
61
4.2.1
Test Objective Generation Subsystem
61
4.2.2
Redundancy Checking Subsystem
62
4.2.3
Test case Generation Subsystem
63
4.2.4
System Operation
66
4.3
System Operation Scenario
67
4.4
Summary
68
System Implementation
70
5.1
Realization of Design Classes
70
5.2
Detailed Design
71
5.2.1
The testStructureGenerator.generator Package
72
5.2.2
The testStructureGenerator.assessment Package
72
5.2.3
The testStructureGenerator.common Package
74
5.2.4
The testcaseGenerator.plannerinit Package
75
vii
5.3
6
The testcaseGenerator.plannerinit.datastructures Package . . .
5.2.6
The testcaseGenerator.plannerinit.datastructures.
78
PDDL package
80
5.2.7
The testcaseGenerator.plannerRunner Package
81
5.2.8
The testcaseGenerator.testWriter Package
83
Summary
85
System Demonstration and Evaluation
87
6.1
Case Study
87
6.1.1
Case Study: Traffic Light Class
88
6.1.2
Generated Test Suites
91
6.1.3
Limitations
92
6.2
6.3
7
5.2.5
Extensibility
95
6.2.1
Examples of Extension of Test Oracle with Expert Knowledge
95
6.2.2
Unit Testing Coverage Criteria from the Literature
97
6.2.3
Test Design Based on an Error Taxonomy
97
Summary
98
Conclusions
99
References
103
Appendix
109
A The Syntax of the Specifications
109
A.l
State Machine OWL Ontology TBox
109
A.2 Syntax of Coverage Criteria Rules
Ill
A.3 Expert Knowledge Ontology TBox
112
A.4 Syntax of Test Objectives
114
viii
A.5 Test Suite OWL Ontology TBox
115
A.6 Redundancy Checking Rule Template Syntax
115
A.7 Implementation Knowledge OWL Ontology TBox
117
A.8 The structure of the JUnit code
119
B Elevator Door Class Ontologies
B.l
120
Door State Machine OWL Ontology ABox
120
B.2 Door Test Suite OWL Ontology ABox
122
B.3 Door Implementation Knowledge Ontology
124
B.4 Door JUnit Test Suite
125
C Code for Using Jena A P I , OO J D R E W A P I and POSL g e n e r a t i o n l 2 6
C.l
Reading an OWL File with Jena API
126
C.2 Writing a POSL File
128
C.3 Creating the OWL Test suite with Jena API
129
C.4 Using OO JDREW for Reasoning
130
D Traffic Light Example
132
D.l Traffic Light State Machine OWL Ontology ABox
132
D.2 Test Objectives and Corresponding Paths for the All Transition Coverage
138
D.3 Test Objectives and Corresponding Paths for the All Transition Pair
Coverage
139
E Unit Testing Coverage Criteria from the Literature in POSL
Vita
IX
141
List of Tables
2.1
Several UML state-machine-based coverage criteria
10
2.2
Mapping of the UML elements to OWL in ODM
14
2.3
Specification of the UML Transition class and Effect property in OWL
(from [1])
15
2.4
Mapping the UML state machine specification to PDDL
20
3.1
Semantics of classes of state machine ontology TBox
26
3.2
Semantics of properties of state machine ontology TBox
27
3.3
Examples of test objectives
29
3.4
Semantics of classes of abstract test suite ontology TBox
33
3.5
Semantics of properties of abstract test suite ontology TBox
33
3.6
Semantics of classes of implementation knowledge ontology TBox
3.7
Semantics of properties of implementation knowledge ontology TBox
. .
36
36
A. 1 Mapping between SHOIQ (D) and Horn Logic statements (from Grosof
et al. [2])
113
A.2 Syntax of test objectives
114
x
List of Figures
1.1
Scope space of testing activity
2
1.2
Data flow diagram of an automated test case generator
3
1.3
Levels of control of test experts over automated test generation
2.1
RuleML example from [3]
11
2.2
The UML state machine superstructure overview from [4]
13
2.3
A PDDL 2.1 example [5]
18
2.4
An example of a UML state machine and the equivalent PDDL spec-
...
3
ification
21
3.1
Phases of transformation of specifications
23
3.2
Part of TBox of state machine ontology
26
3.3
A general expert knowledge ontology
28
3.4
Part of TBox of the abstract test suite ontology
32
3.5
Part of TBox of implementation knowledge ontology
37
3.6
Elevator door class and its state machine
43
3.7
Part of ABox of the door state machine
45
3.8
Part of ABox of implementation knowledge ontology of the Door class
50
4.1
High level data flow diagram of the system
58
4.2
Technologies for realizing of the data flow diagram of the system . . .
59
XI
4.3
The class diagram and activity diagram of the test objective generation subsystem
4.4
61
The class diagram and activity diagram of the redundancy checking
subsystem
62
4.5
Test case generation subsystem
64
4.6
The activity diagram of the test case generation subsystem
65
4.7
The high-level activity diagram of system operation
67
5.1
Mapping of design classes to implementation classes
71
5.2
The testStructureGenerator.generator package
73
5.3
The classes of the testStructureGenerator.assessment package
74
5.4
The classes of the testcaseGenerator.plannerinit package
76
5.5
The testcaseGenerator.plannerinit package
77
5.6
The testcaseGenerator.plannerinit.datastructures package
78
5.7
The classes of the testcaseGenerator.plannerinit.datastructures package 79
5.8
The testcaseGenerator.plannerRunner package
82
5.9
The testcaseGenerator. test Writer package
83
5.10 The classes of the testcaseGenerator. test Writer package
84
6.1
Traffic light state machine
89
6.2
Traffic light state machine ontology
90
6.3
Traffic light test suite ontology
90
6.4
Traffic light implementation knowledge
91
6.5
Expert knowledge ontology TBox: use of an unreliable library . . . .
95
6.6
Expert knowledge ontology TBox: Boundary values
96
6.7
Extraction of expert knowledge from a portion of Beizer bug taxonomy
[6]
98
A.l An ontology describing the vocabulary of a coverage criteria rule . . . 112
xn
Chapter 1
Introduction
The goal of software engineering is production of software that conforms to quality
and functional requirements [7]. A crucial software engineering activity is software
testing, which examines the conformance of software to requirements specifications.
This activity can be very costly. The quality of a test suite has a direct relation to
the number of errors it reveals, while it is negatively affected by its size. In order
to reduce costs and elevate the quality of the testing activity, automated testing has
been promoted since the 1970s [8].
In automated testing, offering a test expert the opportunity to input their knowledge can assist in the generation of a high quality test suite. This knowledge can
include error-prone aspects of software and a specification of what needs to be tested,
based on known priorities. This thesis investigates the use of knowledge engineering
to increase control of a test expert over the generated test suite.
1.1
Test Generation and the Role of Test Experts
The scope of a testing activity can be specified in a three dimensional space as
depicted in Figure 1.1. The three axes specify different aspects of a testing method
1
and scope can be specified by a set of points in that space. The X-axis specifies what
is being tested, which can be a unit, integration of units, or the system. The Y-axis
specifies the software artifact based on which the test cases are generated.
This
source, which specifies the behavior of the system under test is called a test oracle [9].
Test oracles can be code (in white-box testing) [10, 11], design (in gray-box or modelbased testing) [11, 12], or requirements (in black-box testing) [10, 11]. The Z-axis is
the coverage criteria specification, which denotes criteria for tests to be generated.
The specification of coverage criteria must be based on knowledge available through
the test oracle. For instance, if the coverage criteria requires that all concurrency
relations be tested, this information must be available in the test oracle. The focus
of this work is on model-based unit testing, which is testing the smallest unit of the
system under test based on its abstract model specification. Particularly, scope is
limited to exploiting knowledge engineering to enhance generation of unit tests from
a UML state machine representation of a unit under test.
z
4
GUI
Concurrency
Code Coverage
Boundary Testing
Exceptions
Unit-
code
—•—
Design
*"•• '••'»'
Requirements
— •
Y
•
Integration,
System-
X: The software under test
Y: The software artifact on which tests generated are based (Test Oracle)
Z: The specification of what tests should be generated (Coverage Criteria)
Figure 1.1: Scope space of testing activity
An automated test generator generates a test suite, which is a collection of test
cases based on the test oracle and coverage criteria specifications as depicted in
Figure 1.2. Different test case generators provide different means for a test expert to
intervene and control what test cases are generated as depicted in Figure 1.3. One
method is allowing test experts to identify test cases directly. While this method
may be required in some cases, it is not efficient. A second method is to provide
support for test experts to choose among a selection of coverage criteria but this
method can be overly restrictive. A third method which is promoted in this work,
is to provide a language for test experts to specify their own coverage criteria rules
and to extend test oracles with extra knowledge that may be necessary to address
the criteria.
Test Oracle:
Specification of
software under test
^
Example: The UML
state machine of a
Test Suite
Coverage Criteria:
Specification of what
tests should be
generated
ma^WMw^waimaw^
Example Cover
every transition of the
state machine
Figure 1.2: Data flow diagram of an automated test case generator
'Extend test oracle a n d \
compose
j
^coverage criteria r u l e s x
Figure 1.3: Levels of control of test experts over automated test generation
The third method (i.e. extending test oracles and defining custom coverage criteria) which provides the highest level of control to the test experts, can enhance
the quality of generated test suites, owing to the fact that test oracles are abstract
representations of software and removal of essential knowledge which can be caused
by a poor abstraction can be a barrier to identification of error-prone test cases [13].
Benz [13] demonstrates that utilization of a test oracle that includes the error-prone
aspects of software and domain-specific coverage criteria can enhance the quality of
the generated test suite. Error prone aspects, also used in Risk Based Testing [14],
are software elements that are likely to produce an error and can be: (1) domain
specific (such as concurrent methods, database replications, network connection), (2)
based on general test guidelines and experience (such as boundary values), or (3)
system specific and revealed in interactions of testers with developers and designers
(such as use of an unreliable library).
1.2
Thesis Scope
The objective of this research is to provide a method for test experts to extend test
oracles and specify custom coverage criteria for unit testing. For this purpose, the
application of knowledge engineering in model-based unit testing is explored and an
ontology-based test case generation method is developed. The test oracle considered
is a UML state machine, also called a state-chart, which is a design-level abstraction
of a unit's behavior.
The use of knowledge engineering allows decoupling the test oracle and coverage
criteria specifications from test selection algorithms. Hence, it makes it possible
to extend test oracles and define coverage criteria. Both test oracles and expert
knowledge are defined in ontologies. Coverage criteria rules are defined based on
the vocabulary specified in these ontologies. Then reasoning algorithms are used for
test case selection. Finally, prevalent test case generation technologies are used for
generating the test cases.
4
1.3
Structure of Thesis
The rest of this thesis is organized as follows: Chapter 2 is background and literature
review. Chapter 3 introduces the ontology based methodology for test case generation. Chapter 4 describes design of a system, which is based on the ontology-based
test case generation method. Chapter 5 delineates implementation of the system
prototype. Chapter 6 demonstrates the performance of the system in test case generation for a simple class and evaluates its extendability. Chapter 7 concludes this
work.
5
Chapter 2
Background
The ontology-based methodology for software testing is designed based on the principle of separation of concerns. An automated tool for software testing can be decomposed into three separate concerns: specification of what needs to be tested,
identification of test objectives, and generation of test cases for the identified test
objectives.
Specification of what needs to be tested is addressed by the definition of test
oracles and coverage criteria, which specify the correct behavior of the software and
requirements for the generated test suite [15], respectively. This aspect of software
testing is crucial, because it impacts the quality of the generated test suite. In the
ontology based testing method, ontologies and rules are used for the specification pf
what needs to be tested.
The second aspect of test generation is identification of test objectives based on
the specification of what needs to be tested. A test objective delineates a single
test case. There are several approaches to identification of test objectives: Explicit
identification of test objectives by a test expert [16]; the use of identification algorithms with rules implicitly built into them [17]; provision of a language for defining
coverage criteria rules and use of identification algorithms that rely on the specifica-
6
tion language [18]; and translating coverage criteria into temporal logic for use with
a model checker to identify test objectives [19]. The second aspect can be tightly
coupled with the first aspect, because the identification algorithm is often tightly
coupled with the specification of what needs to be tested. In order to decouple these
two aspects, in an ontology-based testing method, ontology-based reasoning is used
for identification of test objectives.
The third aspect in test generation is generation of test cases for the identified
test objectives. Test cases can be generated based on a test oracle with several
approaches. One approach is to use graph traversal algorithms [17, 20]. In this
approach the test oracle is translated into a graph. A graph traversal algorithm is
then used to generate paths in the graph. Another approach is using model-checking
tools [19]. A model checker is set to find a path (a test case) in the model with the
specified requirements (delineated by a test objective). A third approach is using AI
planners [21]. Artificial intelligence (AI) planners are programs that given a domain
definition analogous to a state machine and a problem definition, which describes
the properties of the goal state of the state machine, generate a plan to take the
state machine to the goal state [22]. To use AI planners for test case generation, a
test oracle (e.g. a state machine) is translated into a domain definition. The test
objective, which specifies the test case to be generated is translated into a problem
specification. Then an AI Planner is used to generate plans to reach the identified
goals in the specified domains.
In the following sections the technologies that are used to address the aspects of
test case generation in the ontology-based test generation method are described.
7
2.1
Specification of W h a t Needs to Be Tested
In this work the test oracle is an ontology-based representation of a UML state
machine, which can be generated from existing UML state machines represented in
XMI format. The method strives to support a variety of coverage criteria, which are
represented in a rule-based language.
2.1.1
UML State Machines
A UML state machine is a model that is used for specifying the transitions in the
state of a unit [4]. A basic UML state machine has a set of transitions and a set of
states, one of which is a start state and one or more are final states. A transition has a
source state and a destination state. It also has an event that triggers the transition,
a guard which specifies conditions under which the transition can be triggered, and
an action which specifies the behavior of the system when the transition is passed.
XML Metadata Interchange (XMI) [23] is an XML-based standard format supported
by the OMG, which is used for exchanging UML diagrams including UML state
machines.
2.1.2
Coverage Criteria for State-machine-based Unit Testing
Many methods that use UML state machines for test case generation are based on
some coverage criteria. A coverage criterion is an indicator of how much testing is
enough. Zhu et. al. [15] identifies two roles for coverage adequacy criteria rules: (1)
they are the explicit specification for test selection; (2) they determine what needs
to be observed. A coverage criterion specifies requirements for the generated test
suite; for instance for each state in the state machine one test must exist in the test
suite.
8
In [15], Zhu et. al categorize coverage criteria as structural testing, fault-based
testing, and error-based testing. Structural testing coverage criteria uses structural
features of the system under test (such as All Transition [17], All Transition Pair
[17], Full Predicate [17], Faulty Transition Pair[20], All Content Dependence[24],
Session Oriented [21], and 2-Way criterion [21]). Error-based testing coverage criteria use knowledge of error-prone locations (such as the criteria used in Boundary
Testing [25]). Fault-based testing uses measurement of fault detecting ability of the
test suite (such as mutation-based plannable criteria [21]). Table 2.1, summarizes
the listed UML-state-machine-based coverage criteria, including criteria reviewed by
McQuillan et. al. [26].
2.1.3
Mapping UML to OWL
The ontology-based test case generation method uses an ontology-based representation of UML diagrams for test case generation. To do this, the UML state machines
are represented in OWL ontologies.
Based on Gruber [29], Studer et. al. [30] define an ontology as follows
'An ontology is a formal, explicit specification of a shared conceptualization. Conceptualization refers to an abstract model of some phenomenon in the world by having identified the relevant concepts of that
phenomenon. Explicit means that the type of concepts used, and the
constraints on their use are explicitly defined. Formal refers to the fact
that the ontology should be machine-readable. Shared reflects the notion
that an ontology captures consensual knowledge, that is, it is not private
of some individual, but accepted by a group.'
One state-of-the-art ontology language that is widely used to specify ontologies
is OWL-DL [31] which is based on description logic. OWL stands for Web Ontology
9
Table 2.1: Several UML state-machine-based coverage criteria
All-Transitions Coverage criterion (AT) [17]: For each transition tr in the state
machine, there exists a test t in the test suite such that t causes tr to be traversed.
All-Transition Pair coverage (ATP) [17]: For each pair of adjacent transitions
(tr, tri) in the state machine, there exists a test t in the test suite such that t causes
tr and tri to be traversed in sequence.
Full Predicate coverage (FP) [17]: For each clause c in each precondition p on
transitions of the state machine there exists a test ti in the test suite T such that ti
causes c and p to evaluate to true and there exists test £2 in T such that t% causes c
and p to evaluate to false.
Complete Sequence (TT) [17, 27]: For each complete sequence s defined by the
test engineer there exists a test t in the test suite such that t causes s to be traveresed.
Belli et. al. impose a restriction on the length of the paths to make them finite [20].
Faulty Transition Pair coverage (FTP) [20]: Faulty transitions, which are transitions that are illegal in a state and lead to an error state, are added to the state
machine. Then, similar to All Transition Pair coverage, for all transition pairs (tr, tr'),
where tri is an illegal transition, there exists a test t in the test suite such that t causes
tr and tri to be traversed in sequence.
All Content Dependence Relationships coverage [24]: A function J2 has a
content dependence relationship with a function / 1 , if and only if, the value of a
variable, which is defined in /1 is used in f%. For each content dependence relationship
r, there exists a test t in the test suite such that t tests r.
Session Oriented criterion [21]: A transition tr is a self-loop, if both endpoints of
tr are in the same node of the state machine. For a node s, V is the set of system
state variables, which are updated by the transitions enabled in s. If it is possible
to partition V into Vi and V2, such that Tri be the set of self-loop transitions that
update the variables in V\, and TV2 be the set of non-self-loop transitions that update
the variables in V2, then s is a candidate for Session Oriented criterion.
The transitions in set Tri need to be sequenced before those in Tr?. Once a transition
tr G TT2 is sequenced (ending in a state si which is not equal to s), a path from si
back to node s must exist in the test to execute the transitions in Tri and verify the
state.
2-Way criterion [21]: Two self-loop transitions tri and tr2 in a given node are
independent if (1) the results represented by tri and tr2 are not exceptions, and (2)
tri (^2) is not a reader operation for tr2 (tri). For each pair of independent self-loop
transitions tri and tr2, a test case with the sequences < tri,tr2 > and < tr2,tri >
must exist.
User-Defined Test Objective[18]: The User-Defined criterion specifies some of the
states, their values, the transitions, and paths of the state machine to be included in
the test suite and forces others to be excluded.
Boundary Testing criteria [25, 28]: A boundary state is state, where at least one
state variable has a value at an extremum - minimum or maximum - of its subdomains. When the system is in a boundary state, the operations in a model must
be tested with boundary inputs.
10
Hatural Language:
"A customer is premium if their spending
has been min 5000 euro in the previous year."
RuleML:
<Implies>
<head>
<Atom>
<Rel>premium</Rel>
<Var>customer</Var>
</Atom>
</head>
<body>
<Atom>
<Rel>spending</Rel>
<Var>customer</Var>
<Ind>min 5000 euro</Ind>
<Ind>previous year</Ind>
</Atom>
</body>
</Implies>
Figure 2.1: RuleML example from [3]
Language and is a W3C standard for representing ontologies. Protege [32], a tool
developed by Stanford University can be used for composing OWL ontologies. In
description logic the terminology, which includes concepts and the relations between
them, is defined in a TBox (Terminological Box), and the instances, which are the
individuals and the relations between them are denned in an ABox (Assertional Box).
The Jena API [33] and the OWL API [34] are two open source Java interfaces that
enable reading OWL files, their in memory manipulation, and writing them to file.
In a knowledge based system, rules can be used to derive implicit knowledge in
given knowledge via a reasoning algorithm. The Rule Markup Language, RuleML,
is an XML-based markup language for specifying rules [35]. An example of a rule in
RuleML 0.91 is provided in Figure 2.1.3 (from [3]). The sPositional Slotted Language
(POSL) is a shorthand notation for RuleML [36].
The Ontology Definition Metamodel (ODM) [37] [37], which was not finalized at
the time of writing this thesis is a specification adopted by the OMG and defines a
set of mapping rules between UML models and OWL ontologies. The UML formal
11
super structure specifications [4] describe the elements in UML diagrams formally.
The ODM sketches how this formal specification can be mapped to the OWL representation, but it does not directly provide the ontology. However, a set of ontologies
that represent UML diagrams and roughly conform to the ODM are provided by
Lehtihet [1].
Table 2.2 summarizes an overview of mappings of the UML representation to the
OWL representation. Based on these transformation rules, the UML superstructure
elements, (or the elements of the UML metamodel) are mapped to the OWL elements. The following example describes how a portion of the UML superstructure
is mapped to its OWL representation.
FinalState, State, Namespace, Vertex, and RedefinableElement are classes in the UML
superstructure. The following generalization relationships hold among these classes:
A FinalState is a State. A State is a Namespace, RedefinableElement, and a Vertex.
A RedefinableElement has a boolean attribute isLeaf. Kernel and BehaviorStateMachines are two packages. The Namespace and RedefinableElement classes are from
the Kernel Package and the State, FinalState, and Vertex classes are from the BehaviorStateMachines package.
The mapping of this portion of the UML superstructure to OWL is done as
follows:
According to the rule # 2 , for each of the classes in the hierarchy, an OWLClass is
generated, which bear the corresponding generalization relationship. According to the
rule # 3 , for the isLeaf attribute, an OWL property is generated. According to rule
# 1 , for the BehaviorStateMachines and Kernel packages, two ontologies are generated,
which include OWLClasses which their corresponding UML classes are owned by the
corresponding packages.
12
Package
BehaviorStateMachines
UML :;CotnmonBehavior$:.
BasicBehaviors. Behavior
«enumeratian>:
TransitiooKmd
«enumerat)an.--->
PseudostateKtrtd
internal
tocal
external
•submachine ,
initial
deepHistory
shallowRistory
join
fork
junction
choice
entryPotnt
exitPotnt
terminate
(subsets namespace}
•stateMaehme
^
a T
L.
UML.Ciases-.Kerml
Namespace
0.1 0
]~
{subsets namespace)
•staleMachme
{subsets ownedMember}
•region
{subsets namespace)
•container
UML'.Cfass&s. Kerne!. {subsets namespace)
NamedElBm&ni
•container
Region
UML Cfassos. Kernel.
Namespace
{subsets ownedMember)
•transition
{subsets ownedMember}
+region
•outgoing
kind ; TcansittonKind
•incoming
{subsets
ownedEtement)
•effect
{subsets
,
ownedElernentJ
kind, PseudostateKind
•entry
UML: Glasses;
Kernel:.Namvspac0
{subsets namespacej
•state
{subsets
ownedMember)
•connection Point
•exit
UML:
.t^Sf!
2&; GommonB&haviors.
{subsets
-•
ownedElementi {subsets ownor},/jSCompas!te . Batefin
•connectionPomt
•stale; ,%orthogonai - Boolean
: /tsSimple: Boolean
; /tsSubmactitneStaie. Boolean
{subsets
ownedDement}
• exit
BasicBehaviors::
Behavior
0.1
0,-1
{subsets
ownedEiement}
•doAciivity ^
ConrJectionPosnt Reference •* connection +s,tale
n o
• suhmachmeState
0*7"'
0.1
{subsets owner)
•deferrable Trigger
•ownmgState
FinalState
UML::CommonBehaviors::
Communications: Tngger
{subsets ownedElement}
•stalelnvanant
UML: Classes. Kernai: Const rami
{subsets ownedRule)
•guard
Figure 2.2: The UML state machine superstructure overview from [4]
13
Table 2.2: Mapping of the UML elements to OWL in ODM
#
1
2
3
4
5
6
7
T h e U M L Superstructure E l e m e n t
Package
Class
Attribute
Binary Association
Association Classes
Multiplicity
Association Generalization
R e p r e s e n t a t i o n in O W L
Ontology
OWLClass
Property
Object Property
N-ary Associations
OWLRestriction
SubPropertyOf or SubClassOf
Figure 2.2 illustrates elements contained in the UML behavioral state machine
package of the UML superstructure. A state machine has a number of regions, which
contain transitions and vertexes which can be states. The transitions and states have
incoming and outgoing, source and target associations with each other. A transition
can have an association with a guard, a trigger, and an effect. A state can be a final
state. A state has an association with a constraint which specifies the condition that
holds when the system is in that state. The OWL specification for the UML state
machine is generated based on the mapping rules in Table 2.2. As an example, the
OWL code that defines the Transition class and the Effect property of a transition
are listed in Table 2.3 [1].
2.2
Identification of Test Objectives Through Reasoning
Reasoning is concerned with derivation of implicit knowledge in knowledge represented in a knowledge representation language. In the ontology-based test case generation method, the explicit knowledge is represented in test oracle ontologies and
coverage criteria rules; the implicit knowledge is a collection of test objectives; reasoning is used to derive test objectives from the test oracle ontologies and coverage
criteria rules.
The use of reasoning on the ontologies for test objective generation delivers a
14
Table 2.3: Specification of the UML Transition class and Effect property in OWL
(from [1])
<owl:Class rdf: I D = ' T r a n s i t i o n ^
<rdfs:subClassOf rdf:resource='feKernel;RedefinableElement'/>
<rdfs:subClassOf rdf:resource='feKernel;NamedElement'/>
<rdfs:subClassOf>
<owl:Restriction
<owl:onProperty rdf:resource='#Transition.guard'/>
<owl:maxCardinality rdf:datatype 3 'fexsd;int'>K/owl:maxCardinality>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty r d f : r e s o u r c e 3 ' # T r a n s i t i o n . t r i g g e r ' / >
<owl:maxCardinality rdf:datatype 3 'fexsd;int'>K/owl:maxCardinality>
</owl:Restriction
</rdfs:subClassOf>
<rdfs:subClassOf>
<owl: R e s t r i c t i o n
<owl:onProperty r d f : r e s o u r c e 3 ' # T r a n s i t i o n . e f f e c t ' / >
<owl:maxCardinality rdf:datatype 3 'fexsd;int'>K/owl:maxCardinality>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<owl:ObjectProperty rdf:ID='Transition.effect'>
<rdfs:domain r d f : r e s o u r c e 3 ' # T r a n s i t i o n ' / >
<rdfs:range rdf:resource 3 'feBasicBehaviors;Behavior'/>
<rdfs:comment rdf:datatype 3 'fexsd;string'
>subsets ownedElement</rdfs:comment>
</owl:Obj ectProperty>
highly extensible test oracle and coverage criteria. Software can be decomposed into
algorithms and the knowledge manipulated by the algorithms. Knowledge engineering helps decrease the dependency of algorithms on the knowledge, by encoding the
knowledge into ontologies and rules, and providing generic reasoning algorithms,
which operate on the knowledge. Hence, ontologies can be modified without changing the algorithms, and vice versa. The implemention of the method in this work
uses a reasoner, 0 0 jDREW [38, 39] that supports RuleML.
15
2.3
Generation of Test Cases with Artificial Intelligence Planning
AI planning is an area in artificial intelligence and is concerned with finding a plan
to solve a problem within a domain [22]. A domain is specified as a set of state
variables and actions that manipulate state of the system. A problem specifies a
start state, a goal state, and possibly some constraints on the generated plan. A
plan is a set of actions that takes the system from the start state to the goal state
and conforms to the constraints. AI planners are algorithms that generate a plan
based on the specified domain and problems.
One of the applications of AI planning is in test case generation [40, 16, 21]. To
apply AI planning in test case generation, the problem of generating a test case based
on a test oracle (e.g. state machine) is translated into the problem of finding a plan
to solve a problem within a domain specification. For this purpose, the test oracle
and specifications and the specifications of the required test cases are translated into
the domain description and problem description. The simplest test case is a list of
actions (path) from the initial (start) state to the goal (final) state of the domain
(state machine). To define more complex test cases additional predicates are added
to the domain and problem description to put constraint on the generated plan. In
UML-state-machine-based test case generation, UML state machine specification is
mapped to problem and domain description.
The specifications of the domain and problem are written in a planning language.
The conceptual model of many planning languages represent the system as a state
transition system [41]. Planning Domain Description Language (PDDL) [42] is a
planning language that has become the de-facto standard language for AI planning.
It was originally designed in 1998 for the International Planning Competition and
has hence been maintained for it. PDDL can be mapped to a state machine.
16
2.3.1
P D D L 2.1
PDDL 2.1 [5] is the planning language used in the 3rd International Planning Competition. The goal of PDDL 2.1 is to support encoding realistic problems. In this
regard, one of the features added to PDDL in this extension is support for numbers.
The support for numbers and numeric operations are an indispensable part of state
machine specification for many software systems. Also, PDDL 2.1 defines a metric
that is required to be maximized or minimized in a plan. This can be used to specify
constraints on the cost of the generated paths. One of the planners that use PDDL
2.1 is Metric-FF [43]. Metric-FF performed outstandingly in IPC 3 in domains with
numeric variables (or fluents) [44]. Below a general overview of the used features of
PDDL 2.1 is provided.
Figure 2.3 shows an example of a domain and a problem specification [5]. In
PDDL 2.1 an AI planning problem is decomposed into two parts: domain description
and problem description. The components of each of them are described using the
vehicle example below.
The name of the domain is metricVehicle.
A PDDL domain describes the types,
predicates, functions and actions in a system for which a plan is to be devised.
In PDDL, everything is in Lisp prefix notation. The keywords are preceded by ':'
(such as -.requirements).
The requirements
declaration specifies what constructs of
PDDL language is used by the domain description. The types declaration describe
the types of objects in the environment; in this domain there are two types defined:
vehicle and location.
The predicates are constructs with boolean values; For in-
stance 'at' is a predicate that takes two arguments v and p; for each pair of objects
of type vehicle and location, it returns either true or false.
In PDDL, variables
start with '?' and types are denoted by a '- ' before the type name. The
functions
are constructs with numeric results (numericfluents).
fuel-level
For instance the
specifies a numeric value for each vehicle v. The set of predicates and functions
17
define (domain metncVehicle)
:requirements :strips :typing :fluents)
:types vehicle location)
:predicates (at ?v - vehicle ?p - location)
accessible ?v - vehicle ?pl ?p2 - location) )
:functions (fuel-level ?v - vehicle)
fuel-used ?v - vehicle)
fuel-required ?pl ?p2 - location)
total-fuel-used))
taction drive
parameters (?v - vehicle ?from ?to - location)
precondition (and (at ?v ?from)
accessible ?v ?from ?to)
>= (fuel-level ?v) (fuel-required 'from 'to)))
effect (and (not (at ?v ?from))
at ?v ?to)
decrease (fuel-level ?v) (fuel-required ?from ?to))
increase (total-fuel-used) (fuel-required ?from ?to))
increase (fuel-used ?v) (fuel-required ?from ?to)))
define (problem metricVehicle-example)
:domain metncVehicle)
:objects
:ruck car - vehicle
'aris Berlin Rome Madrid - location)
: mit
at truck Rome)
at car Paris)
= (fuel-level truck) 100)
= (fuel-level car) 100)
accessible car Paris Berlin)
accessible car Berlin Rome)
accessible car Rome Madrid)
accessible truck Rome Paris)
accessible truck Rome Berlin)
accessible truck Berlin Paris)
= (fuel-required Paris Berlin) 40)
= (fuel-required Berlin Rome) 30)
= (fuel-required Rome Madrid) 50)
= (fuel-required Rome Paris) 35)
= (fuel-required Rome Berlin) 40)
= (fuel-required Berlin Paris) 40)
= (total-fuel-used) 0)
= (fuel-used car) 0)
= (fuel-used truck) 0)
:goal (and (at truck Paris)
at car Rome))
rmetrac minimize
(total-fuel-used))
Figure 2.3: A PDDL 2.1 example [5]
18
collectively define the state of the system. The actions declaration denote how the
state of the system can be changed. Each action has a set of parameters, a precondition, and an effect; for instance action drive has three parameters v, from, and to,
which respectively have the types: vehicle, location and location. The
precondition
specifies a boolean condition that must hold for an action to be executable. It uses
predicates, relational operators on numeric fluents ( < = , > = , =, < • <), and boolean
logic operators (and, or, not), universal and existential quantifiers. The effects
de-
scribe the conditions that hold after an action is executed. The effect describes the
changes that an action makes in the system state. It specifies new values for predicates and numeric fluents. The numeric operations that are supported in PDDL 2.1
are /,*,+,—,increase,decease,assign,scale-up and scale-down.
The effects can
have conditions and universal quantifiers.
The name of the problem described in Figure 2.3 is metric—vehicle
a problem for the domain metricVchicle.
Berlin,
example. It is
There are 4 objects in the problem: car,
Rome and Madrid, which respectively have the types: vehicle,
location, and location.
location,
The initial state of the system is described by assigning
values to predicates and numeric fluents. The goal state is specified by condition
that holds on predicates and numeric fluents. The metric specifies a criteria that
needs to be minimized or maximized in the devised plan.
2.3.2
A Mapping between P D D L and UML State Machine
In order to use an AI planner for test case generation, the state machine specification
should be translated into their input specification [40, 16, 21]. The UML state
machine specification can be mapped to PDDL domain description and problem
description. The mapping rules to generate PDDL specification from a state machine
and an example are presented in Table 2.4 and Table 2.4, respectively.
19
Table 2.4: Mapping the UML state machine specification to PDDL
state ost.
- Transition tr from ost^ to ost.,
(:types sx
(:active s
(:action tr
state)
state)
denotes active state
(OStJ=OStj)
- Guard of the transition is
condition g on state variable sv
- action a is manipulated in
manipulates sv
:parameters(?st, - st ?st3 - s3)
precondition ( (active 'stj
(g (sv)))
: effect ( (active 'stj)
(Not active 'stj
(a (sv))
)
- Transition tr from ostx to ost3
(:action tr
(OSti'=OSt-|)
- Guard of the transition is
condition g on state variable sv
- action a is manipulated in
manipulates sv
:parameters('st, - s j
precondition ( (active 'stj)
(g (sv))
:effect(a (sv))
)
Numeric state variable n-var
(:functions
(n-var))
Boolean state variable b-var
(:predicates
(b-var))
state ost.
Initial values of the state
variables are set in the problem
(lobjects osti
sj
(unit (active startstatename)
(not active otherstates)
(initialize state variables))
Goal state and goal values for
the state variables
(:goal (active goalstate)
(predicate of the value of boolean
variables)
(function of the value of numeric
variables)
) _
s Cost minimization
(metric minimize (+ (total-time)))
20
Transition^
Transition
cck() [a-count==3]
Ci!ck{J[a-count-=4)/pnnt f b ' ) t a<oimt=i
iitlonS
Transitions
Transittrtttl
Cfick{) (a-coum<4]/print ('a'), a-count++
\
Y
o
Gick{) [a-counK3]/print ('a'), a-count-*-*
Dufin1.T ie'-s'-r ptx 3* *
(define (domain StatemachineName)
(:requirements
:typing :fluents)
(:types
si,s2,s3 - state)
;p*^l-~ Q L C I i p t ^<.Ai.
(define (problem pi)
{:domain StatemachineName)
(:objects ostl - si
ost2 - s2
ost3 - s3)
(:init (active ostl)
(not active ost2)
(not active ost3)
(= (a-count) 0))
(:goal (active ost3))
(:predicates
(active
7
s - state))
( :functions
(a-count))
(imetric minimize
(+ (total-time))))
(:action transition!
:parameters ( ?stl - st)
:precondition (
(active ?stl)
(< (a-count) 4))
:effect
(increase (a-count) 1))
(:action transition2
rparameters (?stl - si ?st2 - s2)
:precondition (
(active ?stl)
(= (a-count) 4))
:effect (
(and (active ?st2)
(not active ?stl)}
(assign a-count 0)))
{:action transition3
:parameters (?st2 - s2)
:precondition (
(active ?st2)
(< (a-count) 3})
:effect (increase (a-count) 1})
( :action
transition4
:parameters ( 9 st2 - s2 ?st3 s3)
:precondition (actave , s t 2 )
:effect (and (active ^st3) (not
active ?st2))))
Figure 2.4: An example of a UML state machine and the equivalent PDDL specification
21
Chapter 3
An Ontology-based Method for
Software Testing
The ontology-based software testing methodology is described as a series of transformations of specifications from the test oracle to the executable test suite. To be
more precise, an ontology-based representation of the behavioral model of the system
under test, an expert knowledge ontology, an implementation knowledge ontology,
and coverage criteria rules are used to generate executable test cases.
The rest of this chapter is organized as follows: Section 3.1 provides an overview
of the method. Section 3.2 delineates the specifications that are transformed. Section
3.3 describes the phases of the transformation. Section 3.4 describes the specifications
and the transformation phases with a simple example. Section 3.5 summarizes the
method.
3.1
Method Overview
The method generates an executable test suite in four phases. Figure 3.1 illustrates
the phases of transformations, and their inputs and outputs. Phase 1 generates a set
22
of test objectives. After phase 1 is completed, phase 2 and phase 3 are performed
repeatedly to generate an abstract test suite. Then in phase 4, based on the abstract
test suite, the executable test suite is generated.
In each phase, the inputs are
transformed into the outputs as follows:
Coverage
Criteria
Rules
Redundancy
Checking
Rule
Templates
Test Oracle
Expert
Knowledge
Ontology
Behavioral
Model
Ontology
Implementation
Knowledge
Ontology
Non-redundant
Test
Objectives
Test
Objectives
Abstract
Test Suite
Ontology
Figure 3.1: Phases of transformation of specifications
Phase
1- Test Objective
Generation:
Behavioral model specifications, ex-
pert knowledge, and coverage criteria rules are used to generate a set of test objectives.
Phase
2- Redundancy
Checking:
Behavioral model specifications, test ob-
jectives, a partially generated abstract test suite ontology, and redundancy checking
rule templates are used to select a non-redundant test objective, one at a time.
Phase
3- Abstract
Test Suite
Ontology
Generation:
An abstract test
case is generated and added to the partially generated abstract test suite ontology,
for each non-redundant test objective.
Phase
4~ Executable
Test Suite
Generation:
Behavioral model specifica-
tions, abstract test suite ontology, and implementation knowledge ontology are used
to generate the executable test suite.
The goal of the method is to facilitate exploitation of test experts' knowledge
23
in automated test generation. To achieve this the method promotes the separation
of three concerns of test case generation; namely, specification of what needs to
be tested, identification of test objectives, and generation of test cases. To exploit
a test expert's knowledge, the specification of what needs to be tested and the
identification of test objectives need to be decoupled. The decoupling allows the test
experts to freely manipulate the specification of what needs to be tested, without
the need to modify hardcoded test objective identification algorithms. Given this, a
test expert can enrich the extensible ontology-based test oracle and specify custom
coverage criteria freely. Hence, the method responds to the need for supporting the
specification of arbitrary test cases, implementation knowledge, invariants on model
elements, distinguished states [45], and knowledge about error-prone aspect of the
system, while supporting standard coverage criteria, as well as additional coverage
criteria rules which are based on test experts' mental model.
3.2
Syntax and Semantics of Specifications
This section describes the specifications that are used in the process of test case
generation and their syntax and semantics. The specifications are either provided
by external entities as inputs or generated by the system as intermediate or final
outputs.
3.2.1
Behavioral Model Ontology
The behavioral model ontology is the ontological representation of the test oracle
of the system under test. Various software test case generation methods are based
on different behavioral models, which can be modeled in an ontology. For the UML
models, the Ontology Definition Meta Model (ODM) [37] ontologies for the UML
diagrams can be used and can be automatically generated from the XML Metadata
24
Interchange (XMI) [23] representation of existing models.
The TBox of a prototype ontology for the UML state machine is depicted in
Figure 3.2. The complete ontology TBox in OWL is included in Appendix A.l. The
semantics of the classes and properties of this ontology are described in Tables 3.1
and 3.2. The ontology describes the structure of a state machine by defining the state
machine's structural elements and the relationships between them. The structural
elements include: states, transitions, guards, actions, state variables, and events.
The guards and actions have a String property called description, which describes
their semantics and can be parsed by a software. The description properties read
and manipulate the values of state variables.
3.2.2
E x p e r t Knowledge Ontology
Figure 3.3 depicts a general expert knowledge ontology. Appendix A.3 includes the
syntax of this ontology. AClassFromStateMachineOntology is a class that is defined in the behavioral model ontology. ExpertKnowledgeClass3 is a subclass of this
class defined in the expert knowledge ontology. ExpertKnowledgeClassl is a class
defined in the expert knowledge ontology which is connected to the AClassFromStateMachineOntology by thw ObjectPropertyl property. ExpertKnowledgeClass2
is a class defined in the expert knowledge ontology which is connected to the ExpertKnowledgeClassl by the ObjectProperty2 property. DataProperty2 is an attribute
of AClassFromStateMachineOntology and its range is boolean. DataPropertyl is an
attribute of ExpertKnowledgeClassl and its range is string.
The expert knowledge ontology extends the behavioral model ontology and provides the knowledge that is beyond the behavioral model and is used for specification
and identification of test cases. This ontology further describes the elements in the
behavioral model ontology by importing the behavioral model ontology and adding
additional classes and properties to it. It can describe new classes, relationships, and
25
sm:StateMachine
srjw<ars \
snftstajes
sm:StateVariable
I
sm:AbstractState
sm:FinalState
sm:Condition
sm:StartState
sm:Call
sm:Behaviour
TBox Definitions:
class2 <
prope
y
classl
5S!|^?S^^^^^^^^^S^^S^^S''
Figure 3.2: Part of TBox of state machine ontology
Table 3.1: Semantics of classes of state machine ontology TBox.
#
1
2
Class
StateMachine
AbstractState
3
State
4
StartState
5
FinalState
6
StateVariable
7
8
Transition
Behavior
9
Call
10
Condition
Description
An instance of a StateMachine class represents a single UML state machine.
An AbstractState is the parent of the three types of state in a state machine: StartState, FinalState, and State.
A state is a child of AbstractState and represents a state of the state
machine, which is not a start state or a final state.
A StartState represents the start state of a state machine. It doesn not
have any incoming transitions and a state machine can only have one start
state.
A FinalState represents the final state of a state machine. A final state
does not have any outgoing transition.
A StateVariable represents a state variable of the class that state machine
describes its behavior. State variables are used in the state machine guard
and action descriptions.
A Transition represents a transition of the state machine.
A Behavior describes a set of changes in the state variables. It is used as
action of a transition.
A Call represents a call event in the state machine. It is used as event of
a transition.
A Condition represents a constraint on the state variable values. It is used
as guard of a transition.
26
Table 3.2: Semantics of properties of state machine ontology TBox.
#
1
Property
states
Domain
StateMachine
Range
State
2
transitions
StateMachine
Transition
3
vars
StateMachine
StateVariable
4
out
State,
State
Start-
Transition
5
in
State,
State
Final-
Transition
6
from
Transition
State,
State
Start-
7
to
Transition
State,
State
Final-
8
9
event
guard
Call
Condition
Transition
Transition
10
guardDesc
String
Guard
11
behaviourDesc
String
Behaviour
12
name
String
StateVariable,
Call
27
Description
It is a collection that contains the states
of the state machine.
It is a collection that contains the transitions of the state machine.
It is a collection that contains the state
variables that are used in the specification of its guards and actions.
It is a collection that contains the transitions, which their source is the state.
This property is the inverse of the from
property.
It is a collection that contains the transitions, which their destination is the
state. This property is the inverse of
the to property
It is a state, which is the source of the
transition. This property is the inverse
of the out property.
It is a state, which is the destination
of the transition. This property is the
inverse of the in property.
It is the event on a transition.
It is the constraint on the state variable
that must hold, before a transition can
be fired.
It is a string that describes the guard
condition.
It is a string that describes how state
variables are changed by an action. The
behaviorDesc of the transition emanating from the start state describes the
initial value of the state variables.
It is a string that denotes the names of
the entities.
sm:ACIassFromStateMachineOntology
e
roperty—> boolean
ehObje^Propertyl
ek:ExpertKnowledgeClass3
ek:ExpertKnowledgeClass1 — e
2Ef£2L_> string
ek:ObjectPtooperty2
ek:ExpertKnowledgeClass2
TBox Definitions:
class2<
property
c|ass1
Figure 3.3: A general expert knowledge ontology
attributes. The relationships introduced in this ontology can be between two classes
in the behavioral model ontology classes, between a class in the behavioral model
ontology and a class in the test expert ontology, or between two classes in the test
expert ontology. The domain of the attributes can be from the behavioral model
ontology or the expert knowledge ontology classes.
This ontology describes test experts' mental model and is an extension point that
facilitates support of various coverage criteria rules. The additional knowledge that
is referred by coverage criteria rules are added to this ontology. Examples of pieces of
knowledge that can be included in this ontology are: knowledge about use of an unreliable library, boundary values of state variables, exceptions, variable definition and
use, concurrency relationships, and user interaction points. Two potential sources
for extraction of expert knowledge are error taxonomies and commonly accepted coverage criteria. The ontology has the advantage of retaining this knowledge, which is
gathered by test experts.
28
3.2.3
Test Objectives
A test objective delineates a test-case. It consists of two parts: predicate and parameters. The syntax of test objectives is shown below.
[Predicates separated by comma],[Parameters separated by comma]
The predicateList and the parameterList are lists of predicates and parameters
separated by a comma. A predicate in the predicateList has one or more parameters,
which are listed respectively in the parameterList. The use of 'list' syntax provide
the flexibility to add predicates and parameters to a test objective.
A test objective specifies a condition that must hold on some model elements in
a corresponding test case. The predicates specify the conditions. The parameters
specify instances of the behavioral model ontology classes or their values. The test
objectives provide a language for test experts to define the test cases abstractly.
Several test objectives and their semantics are described in table 3.3. The syntax of
the test objectives is listed In Appendix A.4.
Table 3.3: Examples of test objectives
2
3
Name
Cover transition
Cover state
Immediate
4
After
5
Full
cate
6
At
transition
state
variable has
value
#
1
predi-
Arguments
transitionl
Semantic
A test case that passes transitionl of the state machine
state1
transitionl,
transition2
transitionl,
transition2
conditionl,
predicatevalue,
clauselvalue,
clause2value, ...
A test case that passes statel of the state machine
A test case that passes transition2 immidiately after
transitionl of the state machine
A test case that passes transition2 after transitionl of
the state machine
A test case in which, when the system is at a state t h a t
is the source of the transition that have conditionl as
guard, the value of the condition is predicatevalue and
the clauses in the predicate have the values listed; The
values are boolean. The condition is in conjunctive normal form.
A test case in which, when the system is at a state t h a t
is the source of the transitionl, the values of the state
variables are as indicated by the parameters.
transitionl,
statevariablel,
value 1,
statevariable2,
value2, ...
29
A test objective describes the structural properties of a test case, which directly or
indirectly make it a candidate as a test objective. For instance, in unit testing based
on the UML state machines, a test objective can specify that transition tri of the state
machine should be traversed immediately after transition tr2 is traversed. This test
objective can be directly required because every possible sequence of two transitions
is required to be covered. This sequence can be indirectly required because the two
transitions have a definition-use relationship [24], which is required to be tested.
A test case can be described by combining test objectives.
For instance the
combination of the the two test objectives for a single test case can be defined as
shown below.
[immediate.after],
[transitionl,transition2,transition2,transitions]
This test objective consists of two predicates: immediate and after. Each of the
predicates have two parameters: The parameters of the immediate predicate are
transitionl and transition2. Parameters of the after predicate are transition2 and
transition3. This test objective means that a single test case that passes transit i o n immediately after transionl, and some time after passing transition2, passes
transition3 must be included in the test suite.
3.2.4
Coverage Criteria Rules
A general rule has the form shown below. ':-' is the deduction symbol. The right
hand side of the deduction symbol describes the premises. The left hand side of the
deduction symbol describes the conclusions which are derived when the premises are
satisfied.
conclusions :- premises.
The coverage criteria rules are test case selection rules that specify what test
cases should be generated. As shown below, in general a rule consists of two parts:
30
a head and a body. The body of the rule specifies a test objective selection criteria.
The head of the rule specifies a test objective. The test objective selection criteria
specify the conditions that should hold on some model elements for them to be a
part of the structure of a test case. The test objective specifies the structure of the
test cases.
t e s t objective :- t e s t objective selection c r i t e r i a .
They can be expert-defined, system/domain-specific, or standard. Coverage criteria rules refer to the vocabulary, which are defined by the behavioral model and an
expert knowledge ontology. The body of a rule specifies conditions on the parameters
of the test objective, which is defined in the head of a rule. Appendix A.2 defines
how coverage criteria rules are related to the ontologies that define the vocabularies.
The general syntax of a coverage criteria rule in POSL is shown below.
coverage([PredicateNamel, PredicateName2, . . . ] , [?C1, ?C3, ? V a l u e l , . . . ] ) : Classl(?Cl),Class2(?C2),Class3(?C3),Propertyl(?Cl,?C2),Property2(?C2,?C3),
OtherRule(?C3,?Value1),... .
0therRule(?C3,?Value1):- Attribute!.(?C3,?Valuel), Attribute2(?C3,?Valuel), . . . .
Classl, Class2 and Class3 are classes denned in the ontologies. ?C1, ?C2 and
?C3 represent three instances of these classes, respectively. Propertyl is an object
property, whose domain includes ?C1 and whose range includes ?C2.
Property2
is an object property, whose domain includes ?C2 and whose range includes ?C3.
Attributel is a data property, whose domain includes ?C3 and whose value is represented by ?Valuel. Attribute2 is another data property, whose domain includes ?C3
and whose value is represented by ?Valuel.
The classes and values that are among the test objective parameters must belong
to the behavioral model ontology.
The other elements can belong to either the
test expert ontology or the behavioral model ontology. This is because the expert
31
knowledge is an extension point of the system, but the test objectives are hard
coded. Hence, the expert knowledge elements can not appear in the test objectives.
A consequence of this is that in the above rule, Classl and Class3 necessarily belong
to the behavioral model ontology, but Class2 can belong to either the behavioral
model ontology or the test expert ontology.
3.2.5
Abstract Test Suite Ontology
The abstract test suite ontology describes the test suite. It is linked to the behavioral
model ontology. Abstract means that it is implementation-neutral and programminglanguage-neutral, depending merely on the design model. Figure 3.4 depicts the
TBox of the abstract test suite ontology.
sm:Transition <
ts:VaIue
<
ts:value
TBox Defi n itions:
ts:hasCall
(s:Step
ts:VariableValue
ts:variable
class2 <—property
y sm:StateVariable
c|ass1
Figure 3.4: Part of TBox of the abstract test suite ontology
The abstract test suite ontology consists of a set of abstract test cases.
An
abstract test case is specified by a list of steps and the values state variables after
each step. A step corresponds to an event that changes the state of the system. It
is called an abstract test case because it is a programming language independent
description of a test case.
The semantics of the classes and properties of the Abstract Test Suite Ontology
is described in Tables 3.4 and 3.5. Appendix A.5 provides the abstract test suite
ontology TBox in OWL. An abstract test case is represented by the class Test in the
32
ontology. A Test consists of a set of Steps. Each Step has a link to the next step
of the test case. A Step is described by the value of the state variables after it is
executed and its corresponding transition of the state machine.
Table 3.4: Semantics of classes of abstract test suite ontology TBox
#
1
Class
Test
2
Step
3
VariableValue
Description
A Test describes a test case in the test suite. A test case consists of a
number of steps.
A Step corresponds to a pass of a transition of the state machine. A step
provides information about the transition, parameters, the new state of
the system, and the next step.
A VariableValue is a pair of a state variable and its value. The system
state is specified by a set of variablevalues.
Table 3.5: Semantics of properties of abstract test suite ontology TBox
#
1
2
3
Property
nextStep
hasStep
arg
4
outcome
5
hasCall
6
hasVariable
7
hasBooleanValue
3.2.6
Domain
Step
Test
Transition
Description
specifies the step that is after this step.
specifies the collection of steps of a test case.
specifies the values of the arguments of a
event of the Transition.
Step
VariableValue specifies the values of the state variables after a step is executed.
specifies the transition of the state machine,
Step
Transtion
which is passed when the step is executed.
Step
VariableValue specifies the name of the variable that its
value is being defined.
specifies the value of a boolean variable.
VariableValue boolean
Range
Step
Step
VariableValue
Redundancy Checking Rule Templates
Redundancy checking rule templates are used to generate redundancy checking rules
for test objectives. As redundancy checking rule facilitates checking whether test
objectives are already satisfied by a test suite. A test objective is satisfied by the
test suite if a test case that satisfies the test objective already exists in the test
suite. The body of a rule describes the characteristics of a test case that satisfies the
corresponding test objective. The head of a rule is a predicate that means the test
33
objective is satisfied. The redundancy checking rules follow the following form:
The t e s t objective i s s a t i s f i e d by the t e s t s u i t e :the s t r u c t u r a l c h a r a c t e r i s t i c s of a t e s t case that s a t i s f i e s the t e s t objective.
The body of the rule describes the characteristics that hold about a test case
in the test suite ontology, if the test objective is satisfied by it. The characteristics
of the test case are defined by the vocabulary specified by the abstract test suite
ontology and the behavioral model ontology. A general redundancy checking rule in
POSL is shown below.
Test Objective: [predicatename],[parameterl, parameter2, parameter3]
Redundancy Checking Rule:
e x i s t ( ) :- t e s t ( ? t ) , h a s s t e p ( ? t , ? s t e p l ) , h a s c a l l ( ? s t e p l , parameterl),
arg(parameterl, ?variablevaluel), variable(?variablevaluel, parameter2),
value(?variablevaluel, parameter3).
The redundancy checking rule above is generated for a test objective with predicatename as predicate and parameterl, parameter2, and parameter3 as parameters. If
the body of the rule can be unified with the knowledge that is defined by the test
suite and behavioral model ontologies the test objective is satisfied. The body of the
rule defines that in order for the test objective to be satisfied by the test suite the
following conditions must hold:
• There is a test referred by ?t, which has a step referred by the '/stepl variable;
• parameterl is the name of the transition of stepl;
• ?variablevaluel is an argument of the parameterl and the name of its variable is parameter2
and the name of its value is parameter3.
The test objective predicate of the above rule can be called 'transitionhaveinputvariablevalue'. The redundancy rule checks whether there is a test with parameterl
34
as transition, which would have these values: parameter3 for the variable: paramet e r in its arguments.
For every test objective a redundancy checking rule is generated.
For every
test objective predicate, there exists a redundancy checking rule template.
The
redundancy checking rule template of a test objective predicate describes how a
redundancy checking rule is generated for a test objective that uses that predicate.
Appendix A.6 describes the syntax of the redundancy checking rule templates, and
how the test suite and behavioral model ontology are related to the redundancy
checking rules.
3.2.7
Implementation Knowledge Ontology
The implementation knowledge ontology specifies the implementation-dependant
knowledge, which is essential for translating the abstract test suite ontology to an
executable test suite. This ontology is linked to the behavioral model ontology and
extends it with implementation information.
Figure 3.5 depicts a portion of TBox of this ontology. The knowledge represented
by this ontology, which is programming-language-dependent, can include: variable
getters and setters, implementation names of methods, classes, namespaces, constructors, etc. This ontology can be automatically populated, if the source code is
available.
This ontology helps postponing the task of generation of actual executable test
cases to after the implementation is done. The semantics of the implementation
knowledge ontology is described in Tables 3.6 and 3.7. Appendix A.7 provides the
implementation knowledge ontology TBox in OWL.
35
Table 3.6: Semantics of classes of implementation knowledge ontology TBox
#
1
2
3
4
5
Class
ImplementedClass
ImplementedMethod
ImplementedGetterMethod
ImplementedSetterMethod
ImplementedStateVariable
Description
represents an implementation
represents an implementation
represents the getter method
mentedMethod.
represents the setter method
mentedMethod.
represents an implementation
of a class.
of a method.
of a state variable. It is an Impleof a state variable. It is an Impleof a state variable.
Table 3.7: Semantics of properties of implementation knowledge ontology TBox
#•
1
Property
hasCall
Domain
Call
Range
ImplementedMethod
2
inverseofHasCall
ImplementedMethod
Call
3
hasStateVariable
StateVariable
ImplementedStateVariable
4
inverseofHasStateVariable
ImplementedStateVariable
StateVariable
5
7
hasGetterMethod
hasSetterMethod
packageName
ImplementedGetterMethod
ImplementedSetterMethod
String
ImpementedStateVariable
ImpementedStateVariable
Step
8
name
String
9
hasClass
ImplementedClass
ImplementedClass,
ImplementedMethod,
ImplementedState Variable
StateMachine
6
36
Description
specifies a call from the state machine
ontology that is implemented by the
ImplementedMethod. The inverse of
this property is InverseofHasCall.
specifies an
ImplementedMethod
that implements the Call from the
state machine ontology. The inverse
of this property is HasCall.
specifies a state variable from the
state machine ontology that is implemented by the ImplementedStateVariable. The inverse of this property
is InverseofHas-StateVariable.
specifies an ImplementedStateVariable that implements the StateVariable from the state machine ontology.
The inverse of this property is Has
StateVariable.
specifies a getter method of the ImplementedState Variable.
specifies a setter method of the ImplementedState Variable.
specifies name of the package t h a t
contains the class.
specifies name of the entity.
specifies an ImplementedClass, which
its behavior is described by the state
machine.
imp:name
imp :pack»guName
imp:lmplementedClass < imP:hasClass
iB e
'^ ^
impxlassname
sm;S t a teMachine
,mp\sVar
sm:StateVariable
^
impihasStateVariable
imp: ImpIementedState Variable
imp:hasGetterMethod
imp:hasSettepMethod
\
imp:lmplementedGetterMethod
imp:TmplementedSetterMethod
is
V
imp:lmplementedMethod
.
,
. ,-. ,,,, ., ,
imp:lmplementedCaNMethod
imp:inverse of hascall
^_
~ ~
cm-Call
Figure 3.5: Part of TBox of implementation knowledge ontology
3.2.8
Executable Test Suite
An executable test suite is the main output of the method, and is the result of the
abstract test suite being translated using the implementation knowledge. This test
suite is written in a programming language for an implementation of the system
under test.
The general procedure of a simple executable test case is shown below. After
each step of a test case the values of the state variables are read and verified by
comparing with the expected values. After an object is created, the values of the
state variables are verified using their getter methods. Then a method of the object
is called and the values of the state variables are verified against the expected values
again. A step in the abstract test suite ontology corresponds to a method call and
state variable verification. If the values of the state variables are not as expected
or the method throws an exception, the test case fails. Finally the object is deleted
from the memory.
- A constructor is called to create an object.
37
- The values of the state variables are verified.
- A method of the object is called.
- The values of the state variables are verified.
- The object is deleted from the memory.
Some exceptions are expected to be thrown after a method call. These exceptions
can be defined as expected exceptions. In this case, if the expected exceptions are
not thrown the test case fails.
If the API of an automated testing framework such as JUnit is used, the tests
can be executed and verified automatically. Appendix A.8 describes the structure of
JUnit test cases.
3.3
Transformation Phases
This section describes the phases of transformation of the specifications. Phase 1
generates test objectives. Phase 2 and phase 3 incrementally generate abstract test
suite. Phase 4 translates the abstract test suite into an executable test suite.
3.3.1
Test Objective Generation Phase
During this phase initial test case selection is conducted and the output, which is a
very high level test suite is presented as a set of test objectives. These objectives are
generated based on the behavioral model, the expert knowledge, and the coverage
criteria rules. This information set is externalized and segregated into rules and
ontologies, having only the decision making algorithm hard-coded. This makes the
overall method extensible to support various coverage criteria.
The behavioral model is a model that specifies the behavior of the system. In
object-oriented design and development regardless of this method, the model can
be a state machine diagram, sequence diagram, or activity diagrams. The model is
38
represented in an ontology, which enables reasoning, as well as extension. Expert
knowledge is represented in an expert knowledge ontology. This ontology provides
information that is needed for decision making, but is not included in the standard
system behavioral models. This ontology imports the behavioral model ontology
and adds new classes, properties, etc. to it. The expert knowledge ontology and
behavior model ontology together define the knowledge that is used for test case
selection. Based on the vocabulary defined by these two ontologies, coverage criteria
rules define what test cases should be included in the test suite. Coverage criteria
rules use the information provided by the ontologies to define specifications for the
test cases that should be included in the test suite for the coverage criteria to be
satisfied. The generated test suite is specified by a set of test objectives.
The set of test objectives are the specifications of test cases that are required to be
included in the test suite. The test objectives enable the system to separate the test
selection process from test generation process. This separation has two advantages:
First, different test generation algorithms can be used to generate test cases from
the test objectives. Second, it makes it possible to use different test case selection
strategies. Also an advantage of using test objectives is that the test objectives serve
as a language that the test expert can use to compose the test suite manually or
modify the generated test suite.
The behavioral model and expert knowledge ontologies together define the system and additional knowledge that are needed for decision making with regards to
selecting test objectives. Coverage criteria, conversely, define selection rules that
specify what test cases should be included in the test suite based on the provided
knowledge. For instance, a rule can be defined as follows:
A test case that passes the transition with a wrong input :A transition calls a method in its action,
and the method receives an input from the user,
39
and the input can be wrong based on the business l o g i c .
For this example, expert knowledge should specify the methods that interact with
the users and the erroneous input values that are likely to be given to the system.
Then test cases can be identified that satisfy the coverage criterion. The first part
of the rule specifies the test objectives that must be selected and the second part
specifies test objective selection criteria.
During this phase formal knowledge representation languages such as OWL-DL
and rule languages such as Rule ML can be used to represent the specifications.
Reasoning engines such as 0 0 jDREW can be used for identification of the test
objectives.
3.3.2
Redundancy Checking Phase
The goal of this phase is to avoid generating test cases for test objectives that are
already satisfied by the test suite. When a test case is generated for a test objective
and added to the test suite, it is possible that the generated test case satisfies another
test objective that is required to be satisfied later. To avoid generating test cases
for a test objective that is already satisfied by the test suite, the test suite should
be examined, before passing the test objective to phase 3 for test case generation. A
non-redundant test objective is given to the abstract test suite ontology generation
phase for test case generation, before the system continues to check another test
objective for redundancy.
Information sources used during this phase are: the test objectives and the corresponding test objective redundancy checking rule templates, the test suite ontology,
and the behavioral model ontology.
For each test objective, using the redundancy checking rule template of the test
objective predicate, a redundancy checking rule is generated by replacing the param-
40
eters of the template with the arguments of the test objective. The test objective
redundancy checking rules refer to the test suite ontology. The test suite ontology
provides information about the steps of the test cases and the values of state variables at each step. The rules also refer to the behavioral model ontology to examine
other properties of the steps of the test cases in the test suite.
Referring to the information provided by the behavioral, the test suite can be
examined to determine whether there is a test case with the specification given by
the redundancy checking rule, and therefore decide whether a test case should be
generated for the given test objective.
As in phase 1, during this phase formal knowledge representation languages such
as OWL-DL and rule languages such as Rule-ML can be used to represent the specifications. Reasoning engines such as OO-JDREW can be used for examining the
test suite for the existence of the test objectives.
3.3.3
Abstract Test Suite Ontology Generation Phase
The goal of this phase is generation of an abstract test suite, which is implementationindependant. The abstract test suite is written in an ontology and merely describes
the test cases of the test suite by specifying their steps and the values of the state
variables at each step.
The use of the abstract test suite ontology instead of generating the test cases
directly has several advantages: First, the test cases can be generated before the
detailed design is final and implementation decisions are made. Second, it enables
the system to reason on the test suite to specify whether a test case with the given
specification exists in it (in the redundancy checking phase). Third it makes it
possible to extend the system to design coverage criteria that based on the test suite
decide whether enough test cases are included in the test suite.
Information sets used in this phase are test objectives and the behavioral model
41
ontology. A path in the behavioral model ontology is generated, such that it conforms
to the requirements of the given test objective. The generated path is the abstract
test case for that test objective and is added to the abstract test suite ontology. Then
the redundancy checking phase continues to provide another test objective, and the
test suite ontology is generated incrementally.
This phase performs test case generation while the former two phases perform
test case selection. An advantage of separating this phase from the other phases is
that other technologies that have widely been used for test case generation can be
used, such as: AI Planning, Model Checking, or Graph traversal algorithms.
3.3.4
Executable Test Suite Generation Phase
This phase generates a test suite that can be executed by an automated software
testing framework. Separation of this phase enables support for various programming languages and automated testing frameworks. The abstract test suite ontology,
which is generated in the abstract test suite ontology generation phase, is given as
input to this phase. Also the behavioral model ontology, and implementation knowledge ontology are the inputs to this phase.
The implementation knowledge ontology imports the behavioral model ontology
and adds information that is required for generating an executable test case. This
information can include names that are used in the implementation, method schemas
and the order of their parameters, the names of the setters and getters of variables,
the names of packages or namespaces, etc. With this information the steps of test
cases are translated into an executable test case.
42
3.4
A Simple Example
In this section a simple door class example is used to describe the method.
The
specifications and the phases of the method are described.
3.4.1
Elevator Door Example
The class under test is an elevator door class. Figure 3.6 depicts the class diagram
and the state machine model of the door class. The class has one state variable
named Open that has a getter method: isOpen. The door is initially closed. There
are two methods: PressOpenKey and PressCloseKey, which can be called when the
door is closed and open, respectively. The PressOpenKey method changes the value
of Open to true and the PressCloseKey method changes the value of Open to false.
Door
boolean Open;
+ PressOpenKeyC);
+ PressCloseKeyC);
+ isOpenC);
+ DoorC);
+ boolean isOpenC);
new() [] /
<delete()n//gjv
J-XCsio^ . }
\ PressCloseKeyO
PressOpenKeyC /
w
Open=true,
/ Close=true,
v
\
£<•- OjjeiMM}
Figure 3.6: Elevator door class and its state machine
3.4.2
Specifications
In this section the specifications that are used or generated by the method are described.
3.4.2.1
Behavioral Model Ontology
Figure 3.7 illustrates part of the ABox of the door class state machine. An instance
of the ontology Class StateMachine called doorstatemachine represents the state ma43
chine of the door class. The two states of the system, namely open and closed are
defined by instantiateing State. The system has also a StartState named startstate
and FinalState named fmalstate_l. Four transitions of the state machine namely:
starttoopen, opentoclosed, closedtoopen, and closedtofinal are defined by instantiateing Transition. The state variable, Open is defined by instantiating StateVariable.
These structural elements are referenced by the doorstatemachine instance through
the properties called: states, transtions, and vars respectively. The starttoclosed,
which is from the startstate to the closedstate is traversed when an object of the
Door class is created. The Event of this transition is named new and corresponds to
a call to the constructor of the Door. The Action of this transition is named init; it
initialises the value of the Open state variable to false. The listing below shows part
of this ontology in OWL. Appendix B.l includes the OWL description of the door
state machine.
<smuri : T r a n s i t i o n
r d f : 11)=' r! ;i i t t 01. ' 'IM I\">
< s m u r i : From>
<smuri : S t a r t S t a t e
r d f : II >= s-• i • i-s .. I i ' >
< s m u r i : Out rdf : r e s o u r c e s
' 1' <
(--<,<-
;
i \>v
>l< n u,- I'; - t -.> r (* o< 1 u- < d " / >
</smuri : StartState>
< / s m u r i :From>
< s m u r i : A d ion>
<smuri : Behaviour
rdf : 1(1=' in ' • >
<smuri : Behaviour-desc
'!• i.'">
r d f : d a t a t y p e = ' In ,•<
Open=f a l s e ; < / s m u r i : B e h a v i o u r . d e s O
</smuri : Behaviour>
< / s m u r i : Act nni>
< s m u r i :To>
<smuri : State
rdf:U"'=
i>
<'
>
>
</smuri : State>
< / s m u r i :To>
< s m u r i : I \ onl>
<smuri:Call
.-,•>•,>.• \ i i.i;., JOul
r d f : 111=
:•'.,>
</smuri : Call>
< / s m u r i : l'~\ rn1>
44
V.IINI/IV---'
sm:StateMachine
simians
dooi doorstatemdchine -«*
/
,,.
/
V
smva's
m
/
n
i
'
I
i
s , t
\
t
sm:StateVariaWe sm:AbstractState
\t
s , a
'
^
sm states
\
dooi open
~" — —&»
~~ -~ ^
i
door openstate
- „
^
door closedstate
-
"•*•"!,_
i
^rn transilions
i
-
*
'
c
*
•
dooi )indlstate_1
"smrFinal State
*""•*""-a*.
y
door startslate
/
\
I
i
I
;m:StartState
sm to
/
;/
{
1
I
!
1
X
{
J
i
I
«|
1
~ "
/
'
'/
'
sm from
/
'
f
sm:Condition
/
'
door opentoclosed
smy
door closecUofinal -s$.
sm:Transition
\l,
door closedtoopen "*
J//
srtKaction
dooi starttoclosed **~
A
sm_ovenl
sm:Behaviour
door new - *
•*•* ~ sal action
TBox Definitions:
Instantiation:
ABox Definition:
class2 < P r o P e r t y
c lass1
individual •*
classl
individual? < • property' — - irdividuall
wmmm^xma^mm^mmm^^^w^^
WWWBPF
Figure 3.7: Part of ABox of the door state machine
45
</smuri : Transition>
<smuri
stateVariable
rdf:ll)=
<•)< n >
< s m u r i : n,im<> r d f : d a t a t y p e = ' ' t i p
•.„ ,x.i . i t '
Jnfjl
\ U! ,->c M n> v '
>•
>Open</smuri:<-J
nai)K>
<smuri : InitBooleanValue rdf : datatype= bttp
\,vv " ! c,i<. Jut) I W I V i n u ' l u ' i l ' I'l
> t r u e < / s m u r i : InitBooleanValue>
</smuri : s t a t e V a r i a b l e >
3.4.2.2
Coverage Criteria Rule
The coverage criteria rule used in this example is 'transition coverage' which requires
that all of the transitions of the state machine be covered at least in one test case.
The code of this coverage criteria in POSL is shown below. It means that if there
exists a transition ?tl, then generate the test objective [covertransition][?tl].
coverage([covertransition],[?tl]):-transition(?tl).
3.4.2.3
Test Objectives
The following test objectives are generated for transition coverage:
[covertransition],[starttoclosed]
[covertransition],[closedtoopen]
[covertransition],[opentoclosed]
[covertransition],[closedtofinal]
Generation of test cases for these test objectives ensure that all of the transitions in
the state machine are covered at least once.
3.4.2.4
Expert Knowledge Ontology
This example does not need extension of the state machine ontology with expert
knowledge. The only knowledge that is used for selection of test cases is the transitions of the state machine, which is described by the state machine ontology.
46
3.4.2.5
Abstract Test Suite Ontology
The listing below delineates a test in the abstract test suite ontology. This test
includes two steps: testOstepO and testOstepl. In testOstepO, transition starttoclosed
of the state machine is traversed. In testOstepO, transition closed to final of the
state machine is traversed. The value of the Open state variable is indicated as
the outcome of each state. In the test below Open is false after both of the steps.
Appendix B.2 includes the OWL description of the test suite.
<j.0:test
rdf:about= http
\,->i\. * I U I I < . r n r u -1 u >
<j . 0 : h a s s t e p >
<j.0:step
r d f : a b o u t = h' i }> \»u\\
<j . 0 : h a s c a l l
rdf : resource=
,il< h i on , i. • i 0^1 < pti >
(i 1 <
1 -fi , i n p u t
fi-.i-i u d - ' - M i I '•« 1-i-i.l
/>
< j . 0 : outcome>
<j . 0 : v a r i a b l e v a l u e
r d f : a b o u t = hi t p
\ ',",
. *<
,ii.
>
( i - ' ')-l ;i I I . o< 'o«( i| . > , i \ ,ii ui t) >
<j.O:hasvariable
rdf:resource=
< j . 0: h a s b o o l e a n v a l u e
1 > ir
I •*< r~ i t p u i
rdf : d a t a t y p e = l i l t p
• ]<> >i o> I •?<•<>< 'i / >
i w . • > oi
JOnl \ \ I v l u n , .
IHH-U di,
>false</j .0: hasbooleanvalue>
</j .0: variablevalue>
< / j . 0 : outcome>
<j . 0 : n e x t s t e p >
<j.0:step
r d f : a b o u t = 1M >
<j.0:hascall
•»" >> M I H I
rdf:resource=
; ii*
IO'I.'
* in • . ,> I >
t-> - i . i p i
J . H C .,- I -
,>-1 r \ ( \ >,
/>
< j . 0 : outcome>
< j . 0. v a r i a b l e v a l u e
11
I 'I'
[
r d f : a b o u t = "utj
d t < (i I <<l 1 1 1 1 . J
<j.O:hasvariable
< j . 0: h a s b o o l e a n v a l u e
>false</j
,.
i , ( , , (I
rdf: resource=
• ,< \<iM i o n .
>
ill
I M>'- MI .I H < • c
1
rdf d a t a t y p e = I ' >
0: hasbooleanvalue>
</j .0 variablevalue>
</j . 0- outcome>
</j .0: step>
< / j . 0 : nextstep>
< / j .0: step>
</j . 0 . hasstep>
47
• \ •- < ! ..
>> • •<"•< i
'">'
/>
<j.O:hasstep
rdf : r e s o u r c e = I n t o
\><>\\ \,<l.n umi.'i ( «, UM < i> I ° / >
</j .0:test>
3.4.2.6
Redundancy Checking Rule Templates
The redundancy checking rule template for a test objective that uses covertransition
test objective predicate is as follows:
$covertransition
e x i s t O :- t e s t ( ? t ) , hascall (?stepnamel,#0), hasstep(?t, ?stepnamel).
A redundancy checking rule is generated by replacing the argument of the coverage criteria in the place of # 0 . The number following # indicates the index of the
parameter which is replaced in the parameters of the test objective. The resulting
coverage criteria means that if there is a test ?t, that has a step named ?stepnamel,
and the ?stepnamel has a transition # 0 , then the test objective is satisfied by the
test suite.
3.4.2.7
Implementation Knowledge Ontology
Figure 3.8 depicts the implementation ontology of the door class. This ontology
describes the names of the class and the enclosing packages - Door and Elevator,
respectively. The method of the Door class, which corresponds to the pressopenkey
call of the statemachine is PressOpenKey. The name of the member variable of the
Door class that corresponds to the open state variable is Open and its getter method
is named isOpen. The listing below shows part of this ontology in OWL. Appendix
B.3 includes the OWL description of the implementation knowledge ontology of the
Door class.
<imp: implementedGetterMethod
rdf:H1=
<imp : name rdf : d a t a t y p e = " 1 <' i>
-1) n •>< f (,. • <t\li
v, v, ,» > ,.•>
i m p : ii<iiiR>
48
><>(*
h..d'>
\ ,11 :xl t rn,, M M I M
>is0pened</^J
</imp: implementedGetterMethod>
<imp : i m p l e m e n t e d M e t h o d
rdf : ID= Op< nK' \ I'n —-ltr pl< im r u d M<M hod' >
<imp: hasCall>
<rdf
Description
rdf. about=
<imp. i n v e r s e _ o f . h a s C a l l
(. <
rdf
1 »'.
resources
O ' H I I K I d'ji^iLiiDKim ii (dMfiliud
</rdf
IT pu
do n ' \ 1 '• j > - i i' n » " '
ti •
\ --•. ' - i ipi.l
domain
>
m 1.*
/>
Description>
</imp • hasCall>
< i m p : lirinic r d f : d a t a t y p e = In * p
\\v\
>\ ', on< JOu
\* i [ .->< him,.'- i i i n ;
>PressOpenKey<<- J
/ i m p : iidiii(>
< / i m p : implementedMethod>
<imp : i m p I e m e n t e d S t a t e V a r i a b l e
rdf : 11W O p ' HM j l i \ n
<imp:ii<uuo r d f : d a t a t y p e = (n p
mi
\ . oi'
20u
i'd<
>
X d l N l u . i . - I i i .n>
> 0 p e n < / i m p • <-j
nmiiO
<imp: hasGetterMethod
rdf . r e s o u r c e s
i' (>>>< ,i< <K,r i ,( I "M> 1'od
i||(
1 -(>--, n i p , t
i-'o'M .nn>
ml/
/>
<imp: h a s S t a t e V a r i a b l e >
<rdf: Description
rdf' about=
t ,%
<imp. i n v e r s e . o f . h a s S t a t e V a r i a b l e
)
</rdf
.-Op( 1^1 i n \ ,u i,'tilt
\ ~i r - u put
do •] in. IT np< n >
rdf: resource=
lilo
I •>.!<- i . n i i l
rlum. p
>
/>
Descnption>
</imp. hasStateVariable>
</imp. implementedStateVariable>
3.4.2.8
Executable Test Suite
The code below shows a test case in the JUnit test suite of the door class. The test
suite is the DoorTest class, which extends the TestCase class of the JUnit framework.
Methods of the DoorTest class are the test cases. The test case named testO creates
an object of the Door class and then checks whether the value of the Open state
variable is initialized to false. Then the object is deleted by the garbage collector
after the method is exited. Appendix B.4 includes the JUnit test suite of the Door
class.
49
doonmp Door ««
imp:name
imp:packageName
'--'
W
~~~
door doorstatemdchine
T
/
mp:lmplemehtedClass < imp.ha.Clas.
mi tk name
imp^sVar
imp:hasStateVariable
npiha^Vj
impiclassname
imp:lmplementedStateVariable
sm:StateMachine
sm:StateVariable
\
door open
imp:hasSettepMethod
\
^^"-~^_
. _. , . ;
r
-*^
\
^ _ _ _
h^sStateVaridhle
imp:hasGerterMethod
~~~~———^.
f
V
doonmp OpenStateVaridble
imp:lmplementedGetterMethod
«
imp:lmplementedSetterMethod
\^-~^_
>" P h^<,l" "•' M l < '1,H'
doonmp isOpcnGctlerMelhod
imp:lmplementedMethod
imp:inverse_of__hascall
-sm:Call
imp:lmplementedCallMethod
doonmp OpcnKeyPresslmplcmcntdtionMcthod ^ „
lm
P "n
i1-1.
of
'>< -.< a i
-door ptessopenkey
Figure 3.8: Part of ABox of implementation knowledge ontology of the Door class
package unittests,
import Elevator Door,
public class DoorTest extends TestCase {
//
public void testO() {
Door uot = new Door(),
assertFalse (
i >pc K il
1
< ,uot isOpened() ) ,
}}
3.4.3
Transformation Phases
This section describes the transformations of the specifications at each phase.
50
3.4.3.1
Test Objective Generation Phase
In this phase, based on the state machine ontology of the Door class and the coverage
criterion, a set of test objectives is generated. The coverage criteria in this example
is all-transition coverage:
coverage([covertransition],[?tl]):-transition(?tl).
Thus, for every transition defined in the state machine a test objective is generated that covers that transition. Given the Door class state machine ontology with
the four transitions defined below, the variable ?tl in the body of the coverage criteria rule is unified with four values. Hence, the four test objectives which are listed
below is generated:
<smuri T r a n s i t i o n
rdf:ll)=
-t.i
<smun T r a n s i t i o n
rdf :II)=
(!,»< •
<smuri T r a n s i t i o n r d f : ! r > = ..,>.
<smuri T r a n s i t i o n
rdf :!!)=
i(1.>.
' n
- . (i
' l o o pt n
» U > i l o - ' il
- . 1 1 . „(
< <<,
i i'
. .. </smuri : Transition>
>
>
.
. </smuri : Transition>
>
.
.</smuri : Transition>
>
. . .</smuri : Transition>
[covertransition],[starttoclosed]
[covertransition],[closedtofinal]
[covertransition],[closedtoopen]
[covertransition],[opentoclosed]
3.4.3.2
Redundancy Checking Phase
A test objective is selected at a time. First [covertransition],[starttoclosed] is selected.
Based on the redundancy checking template for the covertransition test objective
predicate, a redundancy checking rule is generated by replacing the parameters in
the template as below:
TestObjective:
[covertransition],[starttoclosed]
51
Redundancy Checking Rule Template:
$covertransition
e x i s t O : - t e s t ( ? t ) , hascall(?stepnamel,#0), hasstep(?t, ?stepnamel).
Redundancy Checking Rule:
e x i s t O :- t e s t ( ? t ) , hascall(?stepnamel, s t a r t t o c l o s e d ) , hasstep(?t, ?stepnamel).
Then the test suite ontology is examined based on the generated redundancy
checking rule. At this point the test suite ontology ABox is empty, because there is
not any test case generated yet. Hence the [covertransition],[starttoclosed] is given
to the next phase for test case generation.
After the test case is generated for the [covertransition],[starttoclosed] test objective, another test objective is selected. The [covertransition],[closedtofinal] test
objective is selected and the following redundancy checking rule is generated for it:
Redundancy Checking Rule:
e x i s t O : - t e s t ( 7 t ) , hascall(?stepnamel, closedtofinal), h a s s t e p C t , ?stepnamel) .
This time, there is a test named testO, which has a step testOstepO, which has
a call closedtofinal.
The corresponding piece of the test suite ontology ABox is
shown below. Therefore, the [covertransition],[closedtofinal] test objective is already
satisfied by a testO and is discarded. Then, another test objective is selected, until
all of the test objectives are examined.
<j
0 test
rdf
a b o u t = 1 ' ,)
'>
n '
1 n >
<j 0 h a s s t e p >
<j
0 step
<j
</j
</j
0
rdf
0 hascall
a b o u t = 1 t ,%
rdf
\,
i
1
resource=
0 step>
test>
52
I
'
>
, , ,
,
/>*-
3.4.3.3
Abstract Test Suite Ontology Generation Phase
For a given test objective, based on the door state machine, a path is generated
from the start state to the final state to satisfy the test objective. For the [covertransition], [starttoclosed] test objective, a path needs to be generated to cover the
transition, starttoclosed. As shown in the listing below, a test case corresponding
to the path is added to the test suite ontology ABox for reasoning. The test case,
which is generated for the [covertransition], [starttoclosed] coverage criteria has two
steps: testOstepO and testOstepl. The transition, which is passed at each step is
used as the value of the hascall property of the step: starttoclosed for testOstepO
and closedtofinal for testOstepl. The order of the steps is specified with the nextstep
property of a step. The teststepl is the next step of the testOstepO. Also at each
step the value of the Open state variable is specified in the outcome property of the
step.
<j.0:test
rdf:about=
h ( i \>
\\v
ii<]. ( 0 ' i . . - ' ( -
ii
>
<j . 0 : h a s s t e p >
<j.0:step
rdf: about= h'lp
<j.0: hascall
vvv
rdf: r e s o u r c e =
< j . 0 : outcome>
...
\ ilil M J , .- 1 '
lil<
'I -,'r-
i i j - "•[•i>
u ;MI 1
;u(;i
>
M\'U's';!iu.!l
! o-*-'
r/>
< / j .0 : outcome>
<j . 0 : n e x t s t e p >
<j.0:step
r d f : a b o u t = h i ' •>
<j . 0 : h a s c a l l
< j . 0 : outcome>
\»\ w \,>U,i
rdf:resource=
...
; I1 <
I >-
C O P l * ' < '«
• -
O-n f) 1 * >
l»l i •
Joo;
^^ i
< / j . 0 . outcome>
</j .0: step>
</j . 0 : nextstep>
</j .0:step>
</j . 0 : hasstep>
<j O . h a s s t e p
rdf • r e s o u r c e = ! ' ,>
•'
\
</j .0: t e s t >
53
li.i
(M.i
•
(
(-
1
/>
J
<
*>-<'•
l o
r
, •.
'/>
3.4.3.4
Executable Test Suite Generation Phase
Based on the generated test suite ontology and implementation knowledge, the executable test suite is generated. For testing a Java class, JUnit automated testing
framework can be used. To generate JUnit code, the name of the class, which is Door,
and the name of the package, which is Elevator, are retrieved from the implementation knowledge ontology. The test suite is generated by extending the TestCase
class of the JUnit framework. For each test, a method is generated. Hence for testO
a method called testOQ is added to the class. For each step of a test case in the
abstract test suite ontology, from the hascall property of the step, the name of the
transition which is traversed at that step is extracted. For testOstepO, the name
of the transition is starttoclosed, which has the event init. In this case, since the
source of the starttoclosed transition is the start state, the init event is mapped to
the constructor of the class. The generated test case for testO is shown below.
p>j'>h<
Door
\oid
uot
testO()
= mn,
assertFalse(
{
Door() ;
'OIMKII
-
i.
> i
, u o t i s O p e n e d () ) ;
}
If a transition of a step is not from a start state and to a final state, based on
the name of the transition, the name of the event of the transition is taken from
the state machine ontology. The name of its event is used to extract the implementation information from the implementation knowledge ontology. For instance
for the pressopenkey event, the following piece of implementation knowledge ontology is used to extract the name of the corresponding implemented method which is
PressOpenKey.
<imp i m p l e m e n t e d G e t t e r M e t h o d
rdf
<imp-1 di.if- r d f ' d a t a t y p e = h 11>
111= i 0;><m<J<'. c .
\\\
v i or"
iiairr>
54
'
!
.
>
'00 1 » . \ 1 V I I , ' , I
> i s O p e n e d < / i m p : «->
</imp. implementedGetterMethod>
<imp : implementedMethod
rdf : 10= Op< i l\< \ I'rc:" iinph r<i'n'< u U< u^nl >
<imp. hasCall>
<rdf: Description
rdf:about=
<imp: in v e r s e . o f . h a s C a l l
()>'
I -<>-.,)rpu'
rdf: r e s o u r c e s
( I p c i K c \>'i «-lrr]il.-ujt i K d W t l n . ' l
</rdf
(>](
ilooi <>v 1% p T .^~oT>rnki > >
t <-< r~ i n o a i
i l o n n m p o>*,;.
/>
Description>
</imp : hasCall>
<imp:ruitip rdf . d a t a t y p e = IM t |>
,<\>v> >5 <>i; 'JO" I \ \ I v | i , i i,
inm
>PressOpenKey</<- J
imp: narnf>
< / i m p . implementedMethod>
For a state variable, the name of the getter method is extracted from the implementation knowledge ontology. Based on the value of a state variable at each step,
which is specified by the outcome property of the step, the correct value of the state
variable is asserted using the JUnit assert methods. For instance, assertFalsef'isOpened
is false",uot.isOpened()) requires that the value returned by the isOpened getter method
be false, otherwise this test case fails.
3.5
Summary
The ontology based test case generation method uses ontologies, rules, and reasoning to promote separation of concerns in case generation; this makes the method
flexible for supporting various coverage criteria, test domains, and software models,
and increases the control of test experts. The method generate test cases from the
behavioral model specification of the system, coverage criteria specification, expert
knowledge, and implementation knowledge. Based on these inputs it generates test
cases in four phases.
The behavioral model specification, coverage criteria specification and expert
knowledge, which describe different aspects of the problem domain are used to generate test objectives in the test objective generation phase. The test objectives,
55
which are in the solution domain, describes the required test suite in a very high
level language. The test objectives are translated into abstract test cases. An abstract test case might satisfy several test objectives. Hence, before an abstract test
case is generated for a test objective, the partially generated abstract test suite ontology is examined for the existence of a test case that already satisfies the test
objective. This is done using redundancy checking rules which are generated for
every test objective. The abstract test suite ontology describe the test cases as a list
of steps and the states of the system. This test suite is not executable and is based
on the design and requirements. The abstract test suite ontology is then translated
into an executable test suite using the implementation knowledge.
The system transforms the high level testing requirements and system specification into a lower level test suite in each phase. During each phase some off-the-shelf
tool can be used to perform the main transformation algorithm. The inputs of the
system are ontologies and rules, which are highly modifiable. Reasoning algorithms,
which are independent of what is being expressed, are used to make the system
extensible to support extensions for various coverage criteria.
56
Chapter 4
System Design
This chapter describes the design of a prototype that automates transformations of
specifications based on the the ontology-based methodology described in Chapter
3. An overview of the system and the flow of data in it is discussed in Section 4.1.
Section 4.2 describes the subsystems and their interactions. An operation scenario
of the system, which is established for demonstration of an implementation of the
system is discussed in Section 4.3. Section 4.4 summarizes this chapter.
4.1
System Overview
Figure 4.1 shows a data flow diagram of the design of the system, which has three
main processes: Test Objective Generation which is responsible for automating the
test objective generation phase of the method, Redundancy Checking which is responsible for automating the redundancy checking phase of the method, and Test
Case Generation which is responsible for the abstract test suite ontology generation
and the executable test suite generation phases of the method.
The Test Objective Generation process uses a reasoner to generate the test objectives based on a behavioral model ontology, an expert knowledge ontology, and
57
<Ruies>
Standard and
ExpertDefined
Coverage Criteria
v input
Test Redundancy
Rule Templates
input ,
<Omology>
Expert
Knowledge
input
<Ontology>
Behavioral Model
Executable
Testcases
<0ntology>
Test Suite
output
•3 Output
<Ontology>
Implementation
Knowledge
input
Figure 4.1: High level data flow diagram of the system
coverage criteria (phase 1 of the method). The Test Redundancy Checking process
also uses a reasoner to examine a test suite ontology for the existence of a test case
that satisfies a given test objective, using test objective redundancy checking rule
templates (phase 2 of the method). The Test case Generation process performs
the abstract test suite ontology generation and the executable test suite generation
phases of the method. It consists of four subprocesses. The Initialization subprocess,
initializes the inputs to the Test case Generator subprocess based on the behavioral
model ontology and selected test objectives. The Test case Generator subprocess
generates test cases. This subprocess can be implemented using different technologies including AI planning, graph traversal, or model checking. The generated test
58
<POSL>
Standard and
Expert-Defined
Coverage Criteria
input '•
<OWL-DL>
UML State Machine
Model
<OWL-DL>
Expert
Knowledge
. input
Test Redundancy
Checking Rule
Templates
input
output
-, output
<QWL-DL>
Implementation
Knowledge
i input
Figure 4.2: Technologies for realizing of the data flow diagram of the system
cases are written to the test suite ontology by the Ontology Test Writer subprocess.
At this point, the abstract test suite generation of the method is done. Finally, the
Executable Test Writer subprocess generates the executable test cases for a particular language. For generating test cases in different languages different Executable
Test Writers can be used.
The data flow diagram depicted in Figure 4.1 is abstract and can be realized with
various technologies. Figure 4.2 depicts a realization of the system for state machine
based unit testing with technologies including: OWL-DL, POSL, and 0 0 jDREW,
and an AI planner named Metric-FF [43].
The UML state machine, expert knowledge, implementation knowledge, and test
59
suite are represented in OWL-DL [31]. TBox ontologies define the concepts and
relationships among them, and ABox ontologies import the TBox ontologies to instantiate the elements defined in them. The TBoxes are reusable while the ABoxes
are for a single unit under test. The Ontology Definition Metamodel (ODM), which
is adopted by the OMG, has a section that describes the UML 2.0 metamodel in
OWL-DL. However, a prototype ontology which is much simpler and less modifiable
is suitable and used for this work. The XMI [23] representation of the UML state
machine can be converted to the ontology-based representation automatically. The
implementation knowledge can be automatically imported, when the source code of
the unit is available.
The Test Objective Generation and Redundancy Checking processes use 0 0
jDREW [38] for reasoning tasks. The OWL-DL Ontologies are first transformed
into POSL [36]. The coverage criteria and the test redundancy rule templates are
provided and translated to POSL respectively.
The Test case Generation process uses an AI planner called Metric-FF [43] in
the Test case Generator subprocess. The inputs to Metric-FF are the problem and
domain description in the PDDL 2.1 language [5], which are provided by the PDDL
Generation subprocess.
The inputs of the planner are initialized based on data
from the state machine and structure predicates of a test objective. The generated
test cases include methods to be called at each step, their inputs, and the expected
values of the state variables. The generated test cases are then given to the Test
Suite Writer subprocess to be written back to the Test-Suite Ontology in OWL-DL,
from which the JUnit test cases are generated by the JUnit Test Writer subprocess.
60
4.2
Design Classes
This section details the high level class diagrams for the system, which has three
main subsystems: the test objective generation subsystem, the redundancy checking
subsystem, and the test case generation subsystem.
4.2.1
Test Objective Generation Subsystem
Figure 4.3 depicts the high level class diagram for the test objective generation
subsystem together with its activity diagram.
The TestObjectiveGenerator uses
POSLReasoner for generating POSL files and reasoning on them.
Test Objective Generation
T
TestObjectiveGenerator
OntModel theOntModel;
String coverageCriterion;
+ process(Stnng resultFileAddress)
- OntModel readOnto()
- void readCoverageCriteriaO
thePOSLReasoner
POSLReasoner
Reosoner reasoner,
+ POSLWriter(OntModel ontModel, String
FileAddress)
+loadRules(String rule)
' + reason(String query, String
resultFileAddress)
theOntModel = theTestObjectiveGenerator.readOnto();
coverageCntonon =
theTestObjectiveGenerator.readCoverageCriteria();
thePOSLReasoner.POSLWriter(theOntModel,
outFileAddress);
thePOSLReasoner.loadRules( coverageCriterion)
thePOSLReasoner reason(query, resultFileAddress)
www^mmwtimwjmt'wmwfw
Figure 4.3: The class diagram and activity diagram of the test objective generation
subsystem
The TestObjectiveGenerator has a process method. When the process method
is called, the readOnto and readCoverageCriteria methods are called to read the
state machine ontology and coverage criteria, which are stored in the theOntModel
and coverageCriterion properties, respectively. To read the ontology, the TestOb-
61
jectiveGenerator can use Jena API or OWL API. The process method then uses
POSLReasoner for converting the ontology stored in the theOntModel property into
POSL and reason on it. For this purpose, it first calls the POSLWriter to write the
ontology into POSL format. Then, it calls the loadRules method to load the generated POSL and a coverage criterion. Finally, the reasoning method is called with a
query and the results are written to the resultFileAddress. The POSLReasoner uses
0 0 jDREW for loading the POSL rules and performing reasoning.
4.2.2
Redundancy Checking Subsystem
Figure 4.4 depicts the high level class diagram for the redundancy checking subsystem
together with its activity diagram. The RedundancyChecking uses POSLReasoner
for writing POSL files and reasoning on them.
Redundancy Checking
RedundancyChecking
OntModel theOntModel;
String redundancyRule;
+ boolean process(Stnng testObjectivej
- String generateRedundancyCheckmgRule(String
testObjective)
• OntModel readTestSuiteOntoQ
mMnmm^w»>*'
?
theOntModel = theRedundancyChecker readTestSuiteOnto();
redundancyRule =
theRedundancyCheckmg.
generateRedundancyCheckmgRule(testObjective);
wmmmmmmmmmm
wwgww*w
mm^www^^wm^amm.
owlToPOSL
POSLReasoner
Reasoner reasoner;
+ POSLWriter(OntModel ontModel, String
FileAddress)
+loadRules(String rule)
+ reason(Stnng query, String
resultFileAddress)
thePOSLReasoner POSLWnter(theOntModel, outFileAddress),
thePOSLReasoner loadRules( redundancyRule)
thePOSLReasoner reason(query, resultFileAddress)
T~
S^Wraj^^p^B^^^^psps^w^TOSSW^^SSSas^^SBP
«WPWII»WIW»l»»«
Figure 4.4: The class diagram and activity diagram of the redundancy checking
subsystem
The RedundancyChecking has a process method. When the process method is
called, the readTestSuiteOnto method is called to read the test suite ontology, which
62
is stored in the theOntModel property. Then it calls the generateRedundancyCheckingRule, which returns a redundancy checking rule for the given test objective. The
generated redundancy rule is stored in the redundancyRule property. The process
method then uses POSLReasoner for converting the ontology stored in the theOntModel property into POSL and reasons on it. For this purpose, it first calls the
POSLWriter to write the ontology into POSL format. It also calls the loadRules
method to load the generated POSL and the redundancy checking rule. Finally, the
reasoning method is called with a query and the results are written to the resultFileAddress. The POSLReasoner uses 0 0 jDREW for loading the POSL rules and
performing reasoning.
4.2.3
Test case Generation Subsystem
Figure 4.5 depicts the high level class diagram of the test case generation subsystem.
It has five classes: theTestGeneratorController uses other classes for test case generation and controls the process. PDDLGeneration initializes the input of the planner
by generating PDDL domain and problem files. TestcaseGeneration generates test
cases. OntologyTestSuiteWriter adds the generated test cases to the test suite ontology.
JUnitTestSuiteWriter, generates the JUnit test suite from the test suite
ontology. To change the test case generation technology, the PDDLGeneration and
the TestcaseGeneration classes are replaced with a class that initializes the inputs
of the new test case generator and a class that implements the test case generator,
respectively.
The theTestGeneratorController controls the process of generation of test cases
from test objectives. Figure 4.6 depicts how the process method uses the other classes
for test case generation. First, it uses PDDLGeneration for initializing the input to
the planner. Then, for every test objective, it uses the RedundancyChecking class to
check whether it is satisfied. If it is not, it proceeds to use the TestcaseGeneration
63
PDDLGeneration
PDDLDomain[] domains;
PDDLProblem[] problems;
PDDLDomain baseDomain;
PDDLProblem baseProblem;
+ void addState(String name, boolean isFinal, boolean
isStart)
+ void addTransition(String name, String source, String
destination)
+ void addGuard(String transition, String desc)
+ void addEffect(String transition, String desc)
TestcaseGeneration
+ Test runPlanner
(String PDDLDomainFile,
String PDDLProblemFilej
+ void generatePDDLProblemsAndDomains
(String testObjectiveFileAddress)
+String[] getPDDLDomains();
+String[] getPDDLProblems();
- void writeDomain()
- void writeProblemf
Wf^^^^^^RB^&^^ffl^J&^^^^^^^g^ffl^^^&gWWB^
TestGeneratorController
PDDLDomain[] domains;
PDDLProblem[] problems;
RedundancyChecking theRedundancyChecking;
+ void process(String testObjectiveFileAddress,
String JUnitFileAddress)
OntologyTestSuiteWriter
OntModel theOntModel;
+ write(Test test)
WMMm^^m^t^Mm^mmmwiiimvmwmBt
JUnitTestSuiteWriter
OntModel testOntModel;
OntModel implOntModel;
readlmplementationKnowledgeOnto (String
ImplementationOntoFileAddress)
+ generate(String testSuiteFileAddress, String
JUnitFileAddress)
,
mwmmmpw
WW®W%WF &$%S%®&^®W!W8!
Figure 4.5: Test case generation subsystem
to generate the tests and the OntologyTestSuiteWriter to write the tests to the test
suite ontology. Finally, after all of the test objectives are examined, it uses the
JUnitTestSuiteWriter to generate the JUnit file.
The PDDLGeneration stores a basePDDLDomain, which is constructed based on
the state machine and a basePDDLProblem, which requires the planner to generate
a path from the start state to a final state of the state machine. These two objects
are constructed by calling the addState, addTransition, addGuard, and addEffect
methods of the PDDLGeneration class. The generatePDDLProblemsAndDomains
method creates the PDDLDomains and PDDLProblems for the test objectives by
altering the basePDDLDomain and basePDDLProblem for every test objective. The
64
Test Suite Ontology and Executable Test Suite Generation
Activity diagram of TestGeneratorController.process()
?
/* Precondition: Create basePDDLDomain and
basePDDLProblem by calling the PDDLGeneration methods. V
tfiePDDLGeneration.generatePDDLProblemsAndDomains
(testObjectiveFileAddress);
domains= tf?ePDDLGeneration.getPDDLDomains()
problems=tf7ePDDLGeneration.getPDDLProblems()
Figure 4.6: The activity diagram of the test case generation subsystem
65
generated PDDLDomains and PDDLProblems are stored in the 'domains' and 'problems' properties, which are retrieved by calling the getPDDLDomains and the getPDDLProblems methods.
Then, for every test objective, the process method of the RedundancyChecking subsystem is called. If the process method returns true, the test objective is
redundant and is ignored. If it returns false, the runPlanner method of the TestcaseGeneration class is called to generate a test case. Then the write method of the
theOntologyTestSuiteWriter class is called with the generated test as its parameter.
This method adds the generated test case to the test suite ontology. The system
then proceeds to pick another test objective for processing.
When all of the test objectives are processed, the readlmplementationKnowledgeOnto method of the JUnitTestSuiteWriter is called, which loads the implementation ontology into its implOntModel property.
Then, generate method of the
JUnitTestSuiteWriter is called to load the test suite ontology in its testOntModel
property and creates the JUnit test suite accordingly.
4.2.4
System Operation
To operate the system, after creating the objects of the classes and initializing them,
the process method of the TestObjectiveGenerator is called to create the test objectives. Then the process method of the theTestGeneratorController is called to create
the JUnit test suite for the test objectives (Figure 4.7).
66
System Operation
I
/rcreate objects and set their parameters including
addresses of the input and output files*/
>««S«*'9PWA^<<^S*MSA?
"
theTestObjectiveGenerator.process
(testObjectiveFileAddress);
theTestGeneratorController.process
(testObjectiveFileAddress, JUnitFileAddress);
msm0t®m®tsm$$mm>iwM$^
Figure 4.7: The high-level activity diagram of system operation
4.3
System Operation Scenario
The system can be used by test experts to compose domain/system specific coverage
criteria. The new coverage criteria can be specified by a test expert or selected from
a coverage criteria library. The system design can be extended to support plug-in
based extension to include new coverage criteria. In the present design, to add new
coverage criteria rules, they are composed in POSL and are appended to the coverage
criteria file. The extra knowledge that is referred to the coverage criteria rules is
added to the expert knowledge ontology. The system can be extended to populate
the expert knowledge ontology from other sources such as formal documents and the
system code. Besides, the system can be extended to import the test oracles into
ontologies from their XMI format exported from the existing UML diagrams. If the
implementation knowledge ontology is used the generated test cases are executable
and can be automatically executed, otherwise the generated test cases are manually
executed.
The system can be extended to support commonly used coverage criteria and
67
coverage criteria which are developed for discovery of a specific class of errors. Development of the coverage criteria for discovery of a specific class of errors could be
done using an error taxonomy as a reference. Another approach is to specify domain
specific coverage criteria such as coverage criteria for GUI testing or concurrency
testing. This can be helpful for software companies that work in a specific domain.
However, there is a limitation on the coverage criteria that can be added to the
system without changing the code; they should only use the test objective predicates
that are supported by the system. If new test objective predicates are to be used in
the coverage criteria, the redundancy checking rule templates for the test objective
predicates should be added to the system by appending the rule in POSL to the
corresponding file. Besides, the code of PDDLGeneration subsystem needs to be
modified to support initialization of the AI Planner with new test objective predicates. Hence, for the system to be effective, a comprehensive test objective predicate
language is required to enable the test experts to define various test objectives and
the system should be extended to support the test objective predicate language.
4.4
Summary
This chapter describes the design of a system for the ontology-based test case generation method. Several specifications are transformed by the system into JUnit test
cases. The system has three main subsystems: test objective generation, redundancy
checking, and test case generation.
The test objective generation subsystem, which performs the redundancy checking phase of the method, generates the test objectives.
The redundancy checking subsystem performs the redundancy checking phase of
the method and is used by the test case generation subsystem before generating a
test case for a test objective.
68
The test case generation subsystem performs the abstract test suite ontology
generation and the executable test suite generation phases of the method. It has
five classes: theTestGenerator Controller uses other classes for test case generation
and controls the process. PDDLGeneration initializes the input of the planner by
generating PDDL domain and problem files. TestcaseGeneration generates test cases.
OntologyTestSuiteWriter adds the generated test cases to the test suite ontology.
JUnitTestSuiteWriter, generates the JUnit test suite from the test suite ontology.
69
Chapter 5
System Implementation
This chapter details an implementation of the system for our ontology-based test
case generation method, which realizes the design described in Chapter 4. In the
system implementation, the term 'test structure' is used instead of test objective,
and the term 'test structure assessment' is used instead of 'test objective redundancy
checking'.
The rest of this chapter is organized as follows: Section 5.1 describes which
implementation classes realize the design classes. Section 5.2 describes the packages,
the classes included in them, their responsibilities, their relationships, as well as how
they operate together to realize the system behavior. Section 5.3 summarizes this
chapter.
5.1
Realization of Design Classes
Table 5.1 shows the mapping of the design classes to the implementation classes.
There are 8 packages that work together to realize the system behavior: The testStructureGenerator.generator uses the testStructureGenerator.common to generate
test objectives. The testStructureGenerator.assessment reads the test suite ontology
70
uses the testStructureGenerator.common for redundancy checking. The testcaseGenerator. plannerinit initializes input of the planner and controls test case generation.
The PDDL packages provide data structures for in-memory representation of PDDL
problems and domains. The testcaseGenerator.plannerRunner provide functionallity
for running the AI planner. The testcaseGenerator.testWriter provide functionality
for writing test suite ontology and executable test suites.
Figure 5.1: Mapping of design classes to implementation classes
Design Class
TestObjectiveGenerator
RedundancyChecking
POSLReasoner
i PDDLGeneration
PDDLDomain
PDDLProblem
TestcaseGeneration
TestGeneratorController
OntologyTestSu ite Writer
JUnitTestSuiteWriter
Test
5.2
Implementation Class
:estStructureGenerator.generator.StructureGeneratorController
:estStructureGenerator.generator. Expert KnowledgeReader
:estStructureGenerator.generator.StateMachineOWLReader
:estStructureGenerator.assessment.TestSuiteOWLReader
:estStructureGenerator.assessment.AssessmentRuleGenerator
:estStructureGenerator.common.LPWnter
estStructureGenerator.common.Reasoner
estcaseGenerator.plannerinit.PDDLWriter
estcaseGenerator plannerinit.TestStructureReader
estcaseGenerator.plannerinit.PDDLConstructor
estcaseGenerator.plannerinit.datastructuresTestStructurePDDLMap
estcaseGenerator.plannerinit.datastructures.PDDLProblem
estcaseGenerator.plannerinit.datastructu res.PDDLDomain
estcaseGenerator.plannerinit.datastructures.DomainProblemPair
estcaseGenerator.plannerRunner.runPlanner
estcaseGenerator.plannerinit PlannerManager
estcaseGenerator.plannerinit.datastructures.TestBuilder
estcaseGenerator.testWriterlmplemetationKnowledge
estcaseGenerator.testWriterJUnitWriter
estcaseGenerator.testWriterOWLTestSuiteWriter
estcaseGenerator.testWriterTest
estcaseGenerator.testWnterStep
Detailed Design
This section describes the system packages, the classes included in them and their
responsibilities. It also describes the relationship between the classes, their implementation and how they operate together to realize the system behavior.
71
5.2.1
T h e testStructureGenerator.generator Package
This package includes three classes (Figure 5.2): StructureGeneratorController,
StateMachineOWLReader and ExpertKnowledgeReader.
This package uses the classes in testStructureGenerator.common jointly with the
testStructureGenerator.assessment package for performing reasoning tasks.
The
StructureGeneratorControUer's generate method calls the process methods of the
StateMachineOWLReader and ExpertKnowledgeReader.
When their process methods are called, they parse the UML state machine
and the expert knowledge represented in OWL-DL using the Jena API. Then calls
the methods of the testStructureGenerator.common.LPWriter class for writing it in
POSL format. The process method of the StateMachineOWLReader also calls the
methods of testcaseGenerator.plannerinit.PDDLConstructor class for constructing a
PDDL file from the UML state machine. Code that demonstrates the use of Jena
API for parsing OWL-DL files is shown in Appendix C.l. The StructureGeneratorController then uses
testStructureGenerator.common.Reasoner for reasoning on the POSL, coverage criteria rules to generate the test structures.
5.2.2
T h e testStructureGenerator.assessment Package
This package has two classes named TestSuiteOWLReader and AssessmentRuleGenerator(Figure 5.3). It uses the classes in testStructureGenerator.common jointly with
the testStructureGenerator.generator package for performing reasoning tasks.
The TestSuiteOWLReader has a process method that parses the TestSuite OWL
file and uses the testStructureGenerator. common. LP Writer class to convert it POSL.
It uses the Jena API for parsing the test suite OWL file. A code that demonstrates
the use of the Jena API for reading OWL-DL ontologies is shown in Appendix C.l.
72
Reasoner
from testStructureCenerator.common
p r i v o t e jdrew.oo.td.BackwgrdReasoner
reasoner;
Reasoner ( )
boolpan
reason ( String, String, String, boolean, String )
t
*jpjaw,m ^jjj0jjjj^^w^a
fggsansr"
StructureGeneratorController
String ccAddress;
String testStructureAddress;
String thePOSLAddress;
+ StructureGeneratorController ( Reasoner, String, String )
+ void generate ( String )
ekReader
ExpertKnowledgeReader
com.hp.hpl.jena.ontology.OntModel m;
< ExpertKnowledgeReader ( LPWriter,
String )
+ void process (String)
smReader
StateMachineOWLReader
com. hp. hpl. j ena. ontol ogy. OntModel in;
PDDLConstructor pddlConstructor;
StateMachineOWLReader ( LPWriter,
String )
+ void process (String)
IPWriter
LPWriter
from testStructureGenerator.common
4 LPWriter(String)
+ void i n i t ( )
void finalizeO
void addIndividual(String, String)
void addIndividualDataPropertyString(String, String, String)
void addIndividualDataPropertyInteger(String, String, String)
void addIndividualDataPropertyBoolean(String, String ind, String)
void addIndividualObjectProperty(String, String, String)
void addSubclass(String, String)
void addSubproperty(String, String)
Figure 5.2: The testStructureGenerator.generator package
73
AssessmentRuleGenerator
HashMgp < String , String > templates;
+ AssessmentRuleGenerator ( String, String )
* void generateAssessmentRule ( String )
String getquery ( String AssessmentRule )
f^W^iPJ«|j««i^Jij>li«^^
TestSuiteOWLReader
com.hp.hpl.jeng.ontology.OntModel m;
+ TestSuiteOWLReader C LPWriter )
+ void process ( String )
IPWriter
LPWriter
from testStructureGenerator.common
+ LPWriter(String)
h void i n i t ( )
void finalize()
void addlndividualCString, String)
void addIndividualDataPropertyString(String, String, String)
void addIndividualDataPropertyInteger(String, String, String)
void addlndividualDataPropertyBooleanCString, String ind, String)
void addlndividualObjectPropertyCString, String, String)
void addSubclass(String, String)
void addSubproperty(String, String)
Figure 5.3: The classes of the testStructureGenerator.assessment package
The AssessmentRuleGenerator reads and parses the assessment rule template file
and generates a test structure assessment rule for a given test structure. The generateAssessmentRule method generates an assessment rule for a given test structure,
by replacing the parameters of the test structure in the corresponding rule template.
The getquery method returns an assessment query for a test structure, which is the
head of the generated assessment rule followed by a dot.
5.2.3
The testStructureGenerator.common Package
The classes in this package are jointly used by the testStructureGenerator.assessment
and testStructureGenerator.generator packages. This package includes two classes:
LPWriter and Reasoner. The Reasoner class uses 0 0 jDREW's BackwardReasoner
for reasoning. The reasoner class has a public method called reason. When it is
74
called, it parses a knowledge-base, coverage criteria or assessment rules, and a query,
and performs reasoning. A code that demonstrates how 0 0 jDREW is used for
parsing a knowledge base and a query, and reasoning is shown in Appendix C.4.
The input to 0 0 jDREW reasoner is in POSL format. The LP Writer is used to
convert the elements in OWL-DL knowledge-based to POSL knowledge, based on
the mappings described in Appendix A. 2. Some of the LP Writer methods for writing
the POSL file are shown in Appendix C.2.
5.2.4
T h e testcaseGenerator.plannerinit Package
This package (Figures 5.4 and 5.5) contains four classes: The PlannerManager class
is a controller class that uses other classes to assess the test structure for redundancy, generate PDDL files and generate the plans. The PDDLConstructor uses
classes from testcaseGenerator.plannerinit.datastructures to create and store the inmemory representation of PDDL problems and domains. The PDDLWriter class
writes the in-memory representation of PDDL domains and problems into files. The
TestStructureReader class reads the test structures and uses PDDLConstructor to
create PDDL files for them.
The PDDLManager is a controller class. It has a process method that is called
to process the test structures. When the process method is called, it first uses an
object of the TestStructureReader class to process the test structures and modify the basePDDLProblem and basePDDLDomain objects, which are constructed
from the UML state machine to enforce the generated plan to satisfy the test structure. The PlannerManager then uses PDDLWriter to write the generated PDDL
files. It uses testcaseGenerator.plannerinit.datastructures. TestStructurePDDLMap
to maintain the connection between the test structures and the PDDL domains and
problems files. Finally, for every test structure it uses the testStructureGenerator.assessment.AssessmentRuleGenerator class to generate an assessment rule, and
75
TestStructureReader
PlannerManager
TestStructurePDDLMap
map;
PDDLDomain baseDomain;
PDDLProblem baseProblem;
+ TestStructureReader(String,
PDDLConstructor, TestStructurePDDLMap,
final PDDLDomain, final PDDLProblem)
+ void processO
+ ArrayList<String> readTestStructureFileQ
SS^^^^^^^^^^^P^^^^^?
PDDLConstructor
PDDLDomain domain;
PDDLProblem problem;
< PDDLConstructor ( )
+ void init C )
* state C boolean, boolean, Stri ng )
+ void transition ( String )
+ void from C String, String )
+ void to C String, String )
+ void precondition_EventName ( String,
String )
+ void precondition_event ( Stri ng,
String )
+ void precondition_guard ( Stri ng,
String )
+ void precondition_GuardDesc ( String,
String )
+ void effect ( String, String )
+ void effectDesc ( String, Stri ng )
+ void stateVariable ( String, boolean )
i void finalize ( )
PDDLConstructor pddI Constructor;
TestStructurePDDLMap
map;
PDDLWriter writer;
TestBuilder testBuilder;
4- PlannerManager ( TestBuilder, String,
String )
-t PDDLDomain getNewPDDLDomain ( )
+ PDDLProblem getNewPDDLProblem ( )
t- PDDLConstructor getPDDLConstructor ( )
+ void PDDLDomain getClonePDDLDomain ( )
+ void PDDLProblem getClonePDDLProblem ( )
+ TestStructurePDDLMap getCurrentMap ( )
+ TestStructurePDDLMap getNewMap C )
t void process ( String, String, String )
PDDLWriter
+ public PDDLWriter ( )
+ void writeDomin ( PDDLDomain, String,
String )
* void writeProblem ( PDDLProblem, String,
String )
P W W W
Figure 5.4: The classes of the testcaseGenerator.plannerinit package
converts the Test Suite from OWL to POSL using testStructureGenerator.assessment.
TestSuiteOWLReader and testStructureGenerator.common.LPWriter classes, and
uses the testStructureGenerator.common.Reasoner to assess the test structure for
redundancy. If the test structure is not redundant, it uses a
testcaseGenerator.plannerRunner.runPlanner object to run the planner and create
test cases.
The PDDLConstructor provides methods for constructing in-memory representation of PDDL domains and problems from UML state machines (i.e. objects of
the PDDLDomain and PDDLProblem classes). The PDDLConstructor methods are
called by the testStructureGenerator.generator.StateMachineOWLReader class. The
generated PDDLDomain object is equivalent to the UML state machine. The gen76
.^instantiates
-instant iates.>
•••.instantiates*
PDDLProblem
from
PDDLDomain
from
testcaseGenerator.plannerinit
.datastructures
testcaseGenerator.plannerinit
.datastructures
Figure 5.5: The testcaseGenerator.plannerinit package
erated PDDLProblem requires that a path from the start state of the state machine
to the final state be generated. The generated PDDLDomain and PDDLProblems
objects are basePDDLDomain and basePDDLProblem objects. They are cloned and
PDDL elements are added to their copies, by the TestStructureReader in order to
enforce the test structure. A mapping between the UML statemachines and PDDL
domain and problem is described in Section 2.3.1.
The TestStructureReader class reads the test structures and modifies the
basePDDLDomain and the basePDDLProblem to enforce the planner to generate
plan that satisfies the test structure. For adding the test structure predicates to
the system this class needs to be modified to support generation of problems and
plans for it. For instance, for the 'covertransition' test structure predicate, a 'Passed'
predicate is added to the PDDL domain and is set true when the action corresponding
to the transition, which needs to be covered is traversed. In the PDDL problem the
'Passed' predicate is added to the goals.
The PDDLWriter is used to write the PDDL domains and problems. The syntax
77
of PDDL domain and problem is described in section 2.3.1.
5.2.5
The testcaseGenerator.plannerinit.datastructures Package
This package (Figure 5.6) provides classes for in-memory representation of PDDL
Domains and Problems as well as their mapping to the test structures. The classes
included in this package are shown in Figure 5.7.
j TestStructurePDDLMap
problem
Figure 5.6: The testcaseGenerator.plannerinit.datastructures package
The TestStructurePDDLMap maps a test structure to the corresponding PDDL
Problem and Domain files. This class also generates a unique file address for the
PDDL domain and problems. A test structure is a key for the HashMap, which
returns a DomainProblemPair. The DomainProblemPair is an aggregation of a pair
of PDDLDomain and PDDLProblem. The PDDLDomain and the PDDLProblem
classes are in-memory representations of PDDL domains and problems. The design of the data structures that implement PDDL domain and problem classes is
based on the BNF of PDDL 2.1 [5]. Both the PDDLDomain and PDDLProblem
are an aggregation of PDDL Constructs. The components of a PDDLDomain are
Actions, Predicates, Functions, and Types. The components of a PDDLProblem are
Objects, Inits, and Goals. The PDDLDomain and PDDLProblem classed provide
78
TestStructurePDDLMap
HashMap<String, DomainProblemPair?
structureToIndex
TestStructurePDDLMap ( )
String getDir(structure)
String getFileName( PDDLProblem )
String getFileName( PDDLDomain )
void add(String, DomainProblemPair)
DomainProblemPair get(String)
StringQ getTestStructuresQ
DomainProblemPair
PDDLDomain
domain;
PDDLProblem problem;
DomainProblemPairCPDDLDomain, PDDLProblem)
PDDLDomain getDomainO
PDDLProblem getProblemO
Cloneable
Cloneable
PDDLProblem
TypedElement firstStateObject;
TypedElement finalStateObject;
ArrayList<TypedElement> objects;
ArrayList<Init> mits;
ArrayList<GoalDesc> goals;
String name;
+ PDDLProblem()
<- void addObjectCTypedElement )
< void initObjectlteratorO
+ TypedElement nextObjectO
+ boolean hasNextObjectO
+ ArrayList<TypedElement> getObjectsC)
+ void addlnitCInit init)
4 void initlnitlteratorC)
-i Init nextlnitO
+ boolean hasNextlnitO
+ void addGoal(GoalDesc gd)
+ void initgoallteratorO
+ GoalDesc nextGoal()
i boolean hasNextGoal C)
t- setFinalStateObjectCTypedElement )
+ void TypedElement getFinalStateObject()
t void setFirstStateObject(TypedElement)
+ TypedElement getFirstStateObject()
+ String getNameO
> void setNameCString name)
PDDLDomain
HashMap<String, Action> actions;
HashMap<String, Predicate?
predicates;
HashMap<String, Function?
functions;
HashMap<String, Type> types;
String name;
-t PDDLDomainO
+ void addAction(Action act)
i Action getAction(String name)
t void initActionlteratorO
i Action nextActionC)
+ boolean hasNextAction()
* void addPredicateCString,
ArrayList<TypedElement>)
-> void initPredicatelteratorC)
+ Predicate nextPredicateO
+ boolean hasNextPredicateO
+ void addFunction(String,
ArrayList<TypedElement>)
-r void initFunctionlteratorO
•< Function nextFunctionC)
. boolean hasNextFunction()
* void addPDDLType(String, String)
T void addPDDLTypeCString typeName)
+ Type getPDDLType(String)
J- void initPDDLTypelteratorO
Type nextPDDLType()
- boolean hasNextPDDLTypeC)
t String getName()
•• void setNomeCString name)
^$^^^^B5S^^wm^^2^.«~<??
tmzmmvmmsmmxt
Figure 5.7: The classes of the testcaseGenerator.plannerinit.datastructures package
79
methods for iterating over their components as well as adding components to them.
Both implement the Cloneable interface. In their clone methods, they call the clone
methods of their components. The other data structures that are used for implementing in-memory representation of PDDL domains and problems are included in
the testcaseGenerator.plannerinit.datastructures.PDDL package.
5.2.6
The testcaseGenerator.plannerinit.datastructures.
PDDL package
This package includes classes, which are used for in-memory representation of PDDL
domain and PDDL problem. The classes included in this package are components of
the composite PDDLDomain and PDDLProblem classes. The design of this package
is based on BNF of PDDL 2.1. Only the relevant portion of the PDDL 2.1 is
implemented. For every variable in BNF a class is added to the design. For every
rule in the BNF, attributes with type of the classes corresponding to the variable
on the right side of the rule are added to class corresponding to the left side of the
rule. For instance, for the rule below from the PDDL 2.1 BNF, two classes Effect
and CEffect are added to the class design and the Effect class has a linked list of
CEffects in its andList attribute.
<effect> ::= ( and <c-effect>? )
In cases for which a variable in BNF goes to several regular expressions, an enum
is defined, which has values corresponding to the rules that are used to expand the
variable. A Type property, which holds the type of the defined enum, is added to the
class corresponding to the variable. The Type property specifies which attributes
of the class have valid values. When an object of a class that has Type attribute
is created, the Type of the object is set, based on what rule is used to expand the
variable and the values of the corresponding attributes of the rule are set. When
80
retrieving the data of an object of a class that has Type attribute, the value of the
Type attribute is checked to determine what attributes have valid values.
The classes of this package provide methods for iterating over their components
and setting some of their attributes. They also implement the Cloneable interface.
In their clone methods they call the clone methods of their components.
5.2.7
The testcaseGenerator.plannerRunner Package
This package includes the runPlanner class which is responsible for running the AI
Planner. The AI planner used in this implementation is Metric FF [43]. Figure 5.8
shows the runPlanner class diagram. The run method executes the planner. The
planner takes two arguments: the name of the PDDL domain file and the name of
the PDDL problem file. Then, the run method parses the output and builds an
object of the Test class. The format of the command to run the planner is as follows:
ff -o domain -f problem
A portion of the generated plan for the [covertransition] [opentoclosed] test objective is listed below. The steps of the plan start with a #number starting from # 0 .
After each step, the names of the predicates that are true are listed. In the listing
below, the OPEN predicate corresponds to the open state variable of the class. The
PASSED predicate is set to true when the OPENTOCLOSED PDDL action, which
corresponds to the covertransition test objective parameter is traversed. The ACTIVE predicate indicates the active state of the UML state machine at each step.
The predicates that are added to impose the test objective are ignored when the
plan is translated to an object of the Test class.
Output:
ff: parsing domain file
domain 'STATEMACHINE' defined
81
Test
from testcaseGenerator.testWriter
String comment;
ArrayList < Step >
test;
+ Test ( String )
+ void addStep ( String )
+ String getTransitionName O
+ void addOutcome ( String, String,
boolean )
+ void beginlterationO
+ boolean hasNext O
+ Step getNextStep C )
+ String getComment ( )
Step
String transitionName;
HashMap < String , Boolean
outcome;
+ Step ( String )
+ void addOutCome ( String, boolean)
+ String getTransitionName O
+ void beginlteration Q
+ boolean hasNext O
+ String getNextVar ( )
+ boolean getvalueCString varName)
wmniw^^^^^^^w^^^w^
T
<T.nstantiates»
i
runPlanner
+
+
+
+
runP'
void
void
Test
anner ( )
init(String)
finalizeC)
run ( S t r i n g , String )
Figure 5.8: The testcaseGenerator. planner Runner package
. . . done.
ff: parsing problem f i l e
problem ' P I ' defined
. . . done.
ff: found legal plan as follows
#0: STARTTOCLOSED CLOSEDSTATEOBJECT STARTSTATEOBJECT
(ACTIVE CLOSEDSTATEOBJECT)
#1: CLOSEDTOOPEN CLOSEDSTATEOBJECT OPENSTATEOBJECT
(OPEN)
(PASSED)
(ACTIVE OPENSTATEOBJECT)
#2: OPENTOCLOSED OPENSTATEOBJECT CLOSEDSTATEOBJECT
(PASSED)
82
(ACTIVE CLOSEDSTATEOBJECT)
# 3 : CLOSEDTOFINAL FINALSTATE_1DBJECT CLOSEDSTATEOBJECT
(PASSED)
(ACTIVE FINALSTATE.IOBJECT)
time spent:
5.2.8
0.00 seconds instantiating 4 easy, 0 hard action templates
T h e testcaseGenerator.testWriter Package
This package includes classes which are responsible for writing a test case in the
OWL file and JUnit file (Figure 5.9). A Test object is an in-memory representation
of a test case and contains a linked list of objects of the Step class. A TestBuilder
class is responsible for managing OWLTestSuiteWriter and JUnit Writer to write
test suites. The OWLTestSuiteWriter class provides methods for adding a test to
an OWL test suite ontology. The JUnit Writer class provides methods for writing a
JUnit test case. The ImplementationKnowledge class provides methods for retrieving
implementation knowledge from the implementation knowledge ontology. Figure 5.10
shows the classes of this package.
.
. Steps
1 S t e p
\
ImplemetationKnowledgEle he-
Figure 5.9: The testcaseGenerator.testWriter package
83
ImplemetationKnowledge
+
+
+
+
+
+
+
implementationKnowledge ( S t r i n g , String)
String getGetterlmplName ( S t r i n g )
String getMethodlmplName ( S t r i n g )
String getClassImplName ( )
String getPackagelmplName ( )
String getCall ( S t r i n g )
S t r i n g [ ] getAUBooleanVariablelnds ( )
TestBuilder
ImplementationKnowledge imp;
JUnitWriter jUnitWriter;
OWLTestSuiteWriter o w l W r i t e r ;
+ TestBuilder ( S t r i n g ,
ImplementationKnowledge, String )
+ void buildTest ( Test )
+ void f i n a l i z e ( )
'»jK?SK?£»BS*33SSf>Sf«
JUnitWriter
String outputFileAddress;
JUnitWriter ()
+ void init ( String, String)
+ void createJunitTestCaseFile ()
+ void createfileHeader ()
+ void createTestCase ()
+ void addCreateTestObjectStep ( StringfJ)
+ void addCallStep (String, String[])
+ void addBooleanFunctionOutcome ( String, String, String[], boolean )
-t void endTestCase ( )
t void addcomment(String)
w
jSMtSW^IPSSiWiBlWPP
OWLTestSuiteWriter
String theOWLTestSuiteAddress;
+ OWLTestSuiteWriter (String, String)
+ void createTestsuite ()
+ void loadTestSuite ()
+ Individual createTestlndividual (String)
4 Individual createStepIndividual ( String,
Individual )
+ Individual createVariableValuelndividual ( String,
Individual )
+ void setStateVariable ( Individual, String )
+ void setStateVariableValue ( Individual, boolean )
+ void setNextStep ( Individual, Individual )
+ void setTransition ( Individual, String )
+ void saveTestSuite ( )
^^ijas^^ggyjia^^
Test
Step
String comment;
ArrayList < Step >
test;
+ Test ( String )
+ void addStep ( String )
+ String getTransitionName ()
+ void addOutcome ( String, String,
boolean )
+ void beginlterationO
f boolean hasNext ()
+ Step getNextStep ( )
+ String getComment ( )
wmtmmwwmmmmmmwwMmmewm,
String transitlonName;
HashMap < String , Boolean > outcome;
+ Step ( String )
+ void addOutCome ( String, boolean)
+ String getTransitionName ()
+ void beginlteration ()
+ boolean hasNext ()
-i String getNextVar ( )
+ boolean getvalue(String varName)
pppwpwwswssp
Figure 5.10: The classes of the testcaseGenerator.testWriter package
84
A Test object, which implements a test case contains a linked list of Step objects.
It provides methods for creating the Step objects and iterating over its steps. A Step
object, which implements a step of a test case, contains the value of state variables
as the outcome of the step and the name of the transition of the state machine that
is passed. It also provides methods for adding outcomes to the step and retrieving
the outcomes.
The TestBuilder class is used to build JUnit and OWL test suites from the Test
objects. The buildTest method parses a given Test and calls the methods of the
objects of OWLTestSuiteWriter and JUnitWriter to build the test suite in OWL
format and JUnit format, respectively.
The OWLTestSuiteWriter uses the Jena API for adding test cases to the test
suite. It provides methods for loading an OWL Test Suite, creating a test case,
adding the steps to the test case, and setting the properties of the test steps. The
JUnitWriter provides methods for writing a JUnit file, including methods to create
the file header, methods to create a test case and the steps of the test case. Some of
the OWLTestSuiteWriter methods for creating the OWL Test Suite and writing to
file are shown in Appendix C.3. An object of the ImplemntationKnowledge is used
by the TestBuilder to retrieve the implementation knowledge. It has methods for
retrieving the class name, the package name, the implementation names of the state
variables and methods, and the getter methods of the state variables.
5.3
Summary
The system implementation uses the Jena API for manipulating OWL ontologies, 0 0
jDREW for reasoning, and an AI planner, which is named Metric FF, for generation
of test cases. There are 8 packages that work together to realize the system behavior: The testStructureGenerator.generator uses the testStructureGenerator.common
85
to generate test objectives. The testStructureGenerator.assessment reads the test
suite ontology and uses the testStructureGenerator.common for redundancy checking. The testcaseGenerator.plannerinit initializes input of the planner and controls
test case generation.
The testcaseGenerator.plannerinit.datastructures and test-
caseGenerator.plannerinit. datastructures.PDDL packages provide data structures for
in-memory representation of PDDL problems and domains.
The testcaseGener-
ator.planner Runner provide functionallity for running the AI planner.
The test-
caseGenerator.testWriter provide functionality for writing test suite ontology and
executable test suites.
86
Chapter 6
System Demonstration and
Evaluation
In this chapter performance of the system is demonstrated and its potential for extensibility is explored (extensibility includes support for various coverage criteria and
expert knowledge). Performance of the system in generation of test cases for a traffic
light controller class, and several system limitations are delineated in Section 6.1.
Section 6.2 examines extensibility of the system by discussing examples of extension
of the test oracle with expert knowledge and definition of custom coverage criteria,
definition of rules for several coverage criteria defined in literature, and definition of
coverage criteria based on an error taxonomy. Section 6.3 the summarizes the this
chapter.
6.1
Case Study
System performance is demonstrated for test case generation for a traffic light controller class.
87
6.1.1
Case Study: Traffic Light Class
Figure 6.1 depicts the state machine of a cross road traffic light controller. The traffic
light stays green for at least the 'long time interval' in one direction and turns yellow
when a car is sensed in the other direction. Then it remains yellow for the 'short
time interval' before it becomes red. If a pedestrian crosses a road when its light is
in a green state, the light goes to blink state for the 'blink time interval'. There is
a correspondance between the state machine elements and the class under test: The
state variables correspond to the member variables. The events correspond to the
public methods. The actions simulate how the state variables are changed by the
methods. The traffic light class does not have a timer and delegates the counting
responsibility to another class which produces call events.
Figure 6.2 depicts part of the traffic light state machine ontology, which is detailed
in Appendix D.l. Part of the test suite and part of the implementation knowledge
ontologies are depicted in Figures 6.3 and 6.4, respectively. As an example coverage
criteria in POSL for all-transition coverage and all-transition-pair coverage as well
as the query, which is asked from 0 0 jDREW to generate test objectives, are shown
below.
- All Transition Coverage: coverage([covertransition],[?tr] ) :- t r a n s i t i o n ( ? t r ) .
- All Transition Pair Coverage: coverage([immediate],[?a,?b])
transition(?a),
:-
transition(?b),notEqual(?a,?b),from(?a,?state),to(?b,?state).
- Query: coverage(?predicates, ?args).
88
r
LongTimelntervalO [ ] /
lti=true;
tymwiimm -)<
lti=false;
Road1Green=true;
Road2Green=false;
Road1Yellow=false;
Road2Yellow=false;
Road1Red=false;
Road2Red=false;
Road1Pedestnan()[]/
Road1Blmk=true;
BlinkTimelntervalO [ ] /
Road1Blink=false;
ShortTimelntervalO
[]/
Road1Green=true;
Road2Yellow=false,
Road2Red=true;
SenseRoad2() [ Itirfrue ] /
Road1Yellow=true;
Road1Green=false;
^'imMft^teyj
ShortTimelntervalO [ ] /
Road2Green=true;
Road1Yellow=false,
Road1Red=true;
Road2Pedestrian() [ ] /
Road2Blink=true;
SenseRoad1()[lti=true]/
Road2Yellow=true;
Road2Green=false;
LongTimelntervalO [ ] '
lti=true;
BlinkTimelntervalO [ J /
Road2Blink=false;
LongTimelntervalO [ ] /
lti=true;
Figure 6.1: Traffic light state machine
89
-srmStateMachine
tltrdilicLiuM&M - * AV
I
\
smears
sm:AbstractState
/ \ \ ^
s
tes
, n states
.is
sm:State Variable
•"- x _^
/
tl R a a d K ireon
smttransitions
smtState
^n tiansiticns
^
i
* ———
l 3T to
v
\
isVa
I road j i O f c i
*•»
- > t1 o a d l y e i b w ,
T
* J
l L IConJition
in
"...
. -•
\U
\»
~ -'
SIT guaid
%^"'
sm:Final State
sm:Condition
" ^ ^
smtgnard
ti r o a d l g - p & n t o r o a d l y >llow
__
sm:StartState
sm-.tronT 1
„sm:Transition
ti
s-<<tor
i
!
_ ^ > - * ienseRodd2
"~JT^"
9» tl r n a r l l g o y o l l
sm: Behaviour
"
TBox Dominions
Instantiation
ABox Dptimtion
class2<
Pr°Pert>
r d i i U»S1 ••*
T noma!? •*• « y t v
classl
classt
I <3iv<"i..«i
Figure 6.2: Traffic light state machine ontology
sm ro «J I greentoroddl gr( ( n
sm stdrttorodrilgreen
it
\
si a<T *P5i r n
W i
\<n
ci
ill? st test!)
/"
tltrst K stOst(>pO 4 ^ "
smtTransition - t
ts:Value t ^ v a l u e
ts:VariableValuets:varlable
llt( si 'est! ->U p 1 o i tom<
tltost testOstppOoutc omc
TBox Definitions
Instantiation
ABox Definition
>sm:StateVariable
class2 < P r ° P e r t y
mividi i l l >*
classl
classl
Ji!_I^IJii*j.-!!^.."5...—-•_~ZJl!!yjT,!lJL'-.j
Figure 6.3: Traffic light test suite ontology
90
-4!:lrr)fficliahtSM
imp:name tlimprtratficLightlMPCIass -*•
imp:pack&geName T
imp:hasClass
imp:lmplementedClass <
-sm:StateMachine
imD:inverseOfHasStateVariable ^ m .gf B f 0 \/ a ..i B hi n
imp:classname
f
imp:lmplementedStateVariable
throadl green
?
t
imp:hasSette£A$€tfio<l
!
np inverseOfj lasStatev/anable
tlimp:roadl Green
imp:lmplementedCetterMethod
\
imp:lmplementedSetterMethod
is\a
i
f
\
r n p huMmpC u'l'-T Mrliuxi
\
^ ^ ^
imp:lmplementedMethod
thmp isRoadl Green
imp:inverseOfHasCail
impHmplementedCallMethod^-
tlimp.X'neRoadl ««
TBox Definitions:
Instantiation:
ABox Definition:
-sm:Call
u seneroadl
class2c P r ° P e r g
claSS1
mdividua11 •*
classl
individuag<-piopoity ~ ~'^dyiduajl
M»«**tJW*«!»K«
Figure 6.4: Traffic light implementation knowledge
6.1.2
Generated Test Suites
All Transition Coverage
Appendix D.2 includes the generated test objectives
and the corresponding paths in the state machine. There were 14 test objectives
identified as expected for the all-transition coverage criteria. Four test cases were
generated to satisfy the coverage criterion. Nine of the test objectives were alreadysatisfied by the test suite at the time they were introduced to the system.
All Transition-Pair Coverage
Appendix D.3 includes the generated test objec-
tives and the corresponding paths in the state machine. There were 28 test objectives
identified as expected for the all-transition-pair coverage criteria: Eight test cases
91
were generated to satisfy this coverage criterion. Three of the generated test objectives were impossible, because the guard condition of the second transition made
it impossible to pass it immediately after the first transition. Seventeen of the test
objectives were already satisfied by the test suite at the time they were introduced
to the system.
6.1.3
Limitations
Non-Optimal Redundancy Checking
The demonstration shows that the re-
dundancy checking process reduces the number of generated test cases by avoiding
generating redundant test cases. However, an examination of the generated test suite
for the all-transition coverage reveals that [covertransition], [roadlgreentoroadlblink]
test objective is covered in two test cases:
Test Objective #2: [ c o v e r t r a n s i t i o n ] ,
[roadlgreentoroadlblink]
Test #2: starttoroadlgreen - roadlgreentoroadlgreen roadlgreentoroadlblink - roadlblinktoroadlgreen roadlgreentoroadlyellow - roadlyellowtoroad2green road2greentoroad2green - road2greentoroad2yellow road2yellowtofinal
Test Objective #5: [covertransition],
[road2yellowtoroadlgreen]
Test #3: starttoroadlgreen - roadlgreentoroadlgreen roadlgreentoroadlyellow - roadlyellowtoroad2green road2greentoroad2green - road2greentoroad2yellow road2yellowtoroadlgreen - roadlgreentoroadlgreen roadlgreentoroadlyellow - roadlyellowtoroad2green road2greentoroad2green - road2greentoroad2yellow road2yellowtofinal
The test objective satisfied by test # 2 is satisfied by test # 3 too. This is because
the redundancy checking works backwards. That is, it only checks whether a test
92
case that covers the current test objective already exists in the test suite. It does not
guarantee that a test objective will not be satisfied by a test case generated later.
Thus the method does not necessarily generate a test suite of optimal size. The order
of selection of test objectives affects the size of the generated suite.
T i m e Efficiency
For larger units under test, test case generation can be slow
because of the use of reasoners. The reasoning algorithms are not fast and this
is the trade-off between efficiency and generality. In spite of the simplicity of the
traffic light example, the time spent on reasoning collectively for the transition pair
coverage was about 42 seconds. The system calls the reasoning algorithm in phase
1 for the generation of the test objectives and in phase 2 for redundancy checking of
every generated test objective. Hence for large models the system may not operate
interactively.
Modeling Complexity
Creating even a trivial UML state machine diagram is
complex and error prone. In spite of the simplicity of the traffic light class, its
state machine includes 8 states and 14 transitions. Furthermore, what complicates
the modeling is that, in order to use a model for automated test case generation,
state variables and behaviours that manipulate them must be formally described in a
machine-processible language. Additionally, creating a UML state machine ontology,
augmenting it with expert knowledge, and creating the implementation knowledge
ontology are intricate tasks.
The complexity of modeling is a challenge in generation of test cases from formal
models. Some aspects of the complexity can be tackled with utilization of automated
tools. The ontology-based representation of the UML state machine can be automatically generated from the XMI representation of existing UML state machine of
the current system. To reduce the complexity of modeling expert knowledge, it may
be possible to provide for tools to populate expert knowledge ontology automatically
93
in the presence of code or a formal document that conveys the knowledge in another
format. Likewise, to deal with the complexity of implementation knowledge ontology
creation, the ontology can be populated by reverse engineering the code.
P D D L Expressiveness
In this system, AI planning is used for the generation of
paths corresponding to the test cases, from the start state to a final state of the
UML state machine. The expressiveness of the input language of the AI planner,
which is PDDL 2.1, highly limits extensibility of the current system. The traffic
light state machine did not require a highly expressive language, merely involving
manipulation of boolean variables, which is straightforward with PDDL 2.1. However the primitives supported by PDDL 2.1 are only limited to boolean and integer
variables, which makes modeling complex data structures (such as arrays, stack, and
queues) intricate, if possible at all. However, the system is designed so that its path
generator module can be replaced by other powerful technologies.
T h e Limitation of Logic Programming
In the demonstration, two simple cov-
erage criteria rules, namely, all-transition and all-transition-pair coverage criteria,
were used. However, writing rules can be complicated and the expressiveness of the
rule languages (e.g. POSL in the current system) are limited due to the limitations
of the reasoning algorithms. The constraints on the expressiveness of the rule languages limit the coverage criteria rules and redundancy checking rules that can be
supported by the system. For instance, in Horn logic no universal identifier can be
in the body of the rules. Hence, rules like the example below can not be expressed
in POSL.
Predicate2(x) :- for all x, Predicatel(x)
This limitation can be overcome for finite domains by listing all possible values
of x in a list and recursively iterating over the list to check the values of Predicate 1
94
with them. But such an 'extensional' approach can not be very efficient for large
domains.
6.2
Extensibility
This section explores the potential extensibility of the system for supporting various
coverage criteria. Ways to incorporate expert-defined coverage criteria, standard
coverage criteria from the literature, and coverage criteria definitions based on error
taxonomies are presented.
6.2.1
Examples of Extension of Test Oracle with E x p e r t
Knowledge
Test expert's knowledge about use of an unreliable library is modeled in an ontology
depicted in Figure 6.5. A coverage criterion that tests every call to the unreliable
library once is defined below. The test structure of the head of this coverage criterion
indicates that at the transition t, the state variable sv must have value val.
ek:UntestedLibrafy^k!beIon^t^,ck:Method
eJo&si
elotxses
ek:VariableValue
/ N.
sm:Behaviour
f
alue ekifc^Var
\
sm:Value srruStateVariable
k
smraction
sm:Transition
Figure 6.5: Expert knowledge ontology TBox: use of an unreliable library
coverage([AtTransitionStateVariableHasValue],
[?t,?sv,?val]):-
u n t e s t e d l i b ( ? l ) , metliod(?m), belongsto(?m,?l) ,beliaviour(?b), uses(?b,?m),
t r a n s i t i o n ( ? t ) , a c t i o n ( ? t , ? a ) , variablevalue(?vv), risky (?m,?vv), value(?val),
s t a t e v a r i a b l e (?sv), hasvar (?vv,?sv), h.asvalue(?vv,?val).
Another example is a coverage criteria that defines if a state variable has a boundary value in a state, a transition that has a behavior that uses the state variable
95
should be tested [28]. Figure 6.6 visualizes the expert knowledge and the coverage
criteria can be as follows:
coverage([AtTransitionStateVariableHasValue], [?t,?sv,?val]) :state(?s), vaiablevalue(?vv), hasvar(?vv,?sv), hasvalue(?vv,?val),
hasboundary(?sv,?val), transition (?t), from (?t,?s), behaviour(?b),
action(?t,?b), usevariable(?b,?sv).
hasboundary(?sv,?val):-lowerbound(?sv,?val) .
hasboundary(?sv,?val):-upperbound(?sv,?val).
ek:hasLowerBound
sm:StateVariable
• ek:hasUpperBound "
ek:us :Var
asVar
ek:VariableValue
a
sm:Behaviour
A
hasVarij bleValue
sm action
sm:Transition-
->sm:Va!ue
n
ek:has 'alue
sm:from
->sm:State
Figure 6.6: Expert knowledge ontology TBox: Boundary values
Other coverage criteria can be implemented by representing the knowledge in
an ontology and defining rules that refer to vocabulary defined by the ontology.
For instance, to implement all-content-dependance-relationship coverage [24], which
requires that every use of a variable be sequenced after every definition of the variable
in a test case, the definition-use relationships among the behaviors and guards can
be added to the ontology. Another example is faulty transition pair coverage [20],
which required error states, the transitions leading to the error states be modeled in
an ontology; then a rule that generates objectives for passing the faulty transitions
is generated.
96
6.2.2
Unit Testing Coverage Criteria from the Literature
In the traffic light experiment, the use of two coverage criteria (i.e. all-transition coverage and all transition-pair-coverage) were demonstrated. There are other coverage
criteria in the literature, which can be used to generate test cases for more complex
systems. Appendix E includes several coverage criteria specified in POSL. However,
the expert knowledge ontology required to use them is not provided; in order to
use them, the knowledge referenced by the coverage criteria should be defined in an
expert knowledge ontology.
6.2.3
Test Design Based on an Error Taxonomy
Error taxonomies can serve as a checklist for test experts to design test cases. Figure
6.7 depicts a portion of a error taxonomy provided by Beizer[6]. With this system,
there are two ways to utilize a error taxonomy for designing test cases:
• First, based on a leaf node in the bug taxonomy a condition that leads to an
error can be specified by the test expert in the expert knowledge ontology.
The coverage criteria can be designed to generate a test objective whenever an
error-prone condition occurs. For instance, for testing deadlocks, the conditions
that are suspected to lead to a deadlock can be specified by a test expert in
the expert knowledge ontology and the coverage criteria to generate the test
objectives based on these conditions can be designed. The deadlock prone
conditions that are specified in the expert knowledge ontology can be simply
specifications of the cases that are suspected by the test expert to generate a
deadlock.
• The second method to design test cases is to use test objectives directly as a
language to manually enumerate specifications of test cases that are required
to be generated.
97
) root
(
[
j 3xxx Structural Bugs
J 31 xx Control Flow Bugs
311x General Structure
(
The arbitrary path in the state
machine that is suspected to be
u n a c h i e v a b l e s h o u l d be
specified.
Jj 3116 Dead-end code
The condition that could cause
the code to never stop running
should be specified.
Figure 6.7: Extraction of expert knowledge from a portion of Beizer bug taxonomy
[6]
6.3
Summary
This chapter demonstrates the system's potential extensibility. Several cases of extending the system for representing various expert knowledge to support coverage
criteria are studied. There are performance limitations including: time efficiency,
modeling complexity, expressiveness of PDDL, the expressiveness of POSL, and nonoptimal test suite generation. However, the system supports extensions for various
coverage criteria from the literature. Furthermore, it supports addition of custom
knowledge by test experts and the specification of custom domain/system-specific
coverage criteria.
98
Chapter 7
Conclusions
This thesis is an initial study on the use of knowledge engineering techniques in
software testing, and focusing on model-based unit testing. The ontology-based test
case generation method uses ontologies for generating test objectives and redundancy
checking. Test cases can be generated from the test objectives using prevalent test
case generation methods such as AI planning as demonstrated in this work, graph
traversal, and model checking. This work presented a novel approach for test case
generation, which is based on a separation of concerns and the manipulation of a
number of specifications.
In this system, what needs to be tested is denned by coverage criteria rules. The
knowledge that is used by coverage criteria rules is added to the expert knowledge ontology, which is linked to the state machine ontology. First, test objectives which are
high level descriptions of structures of test-cases are generated. The non-redundant
test objectives and state machine model are then translated into PDDL problems
and domains to be processed by an AI planner. The plans generated by the AI
planners are then written into a test suite ontology, which allows reasoning about
whether a test objective already exists in the test-suite ontology, and are translated
into JUnit executable test cases using an implementation knowledge ontology.
99
Several components of the system are highly modifiable. Separation of concerns
made this possible by externalizing knowledge and using general reasoning algorithms
and test case generation algorithms. Specifically with this approach:
1. The abstract test case generator can be interchanged to use other AI planners,
model checkers, or graph traversal algorithms.
2. The executable test case generator can be interchanged to generate test cases
in various programming languages.
3. The coverage criteria rules can be extended by adding required information to
the expert knowledge ontology.
This flexibility and the developments made to achieve it will benefit users, the
testing research community, and the ontology research community.
The ontology based test case generation method provides three main benefits
for users: (1) It increases the test experts' control over the generated test suite,
potentially increasing its quality. (2) It enables the definition of test requirements
and the generation of test cases early in the software engineering life-cycle, before
implementation decisions are made. (3) It retains the test experts' knowledge via
the TBox of the expert knowledge ontology, which is reusable.
The system can also be extended to produce reports on errors. Moreover, with the
advent of other knowledge-based software engineering systems, such as traceability
management by Zhang [46], the integration of software management systems and
automated software engineering systems can be fruitful for generating reports and
controlling the software development life-cycle. For instance, high level requirements
can be connected to the design, code, developers, test requirements, and discovered
errors, so that the quality of the software and the software development process can
be tracked.
100
While knowledge engineering has been exploited in other software engineering
research domains, the use of knowledge engineering in test case generation is a new
topic. Providing an enriched test oracle enables blending white-box and black-box
testing into gray-box testing. While rigid test oracles and coverage criteria hinder
a test expert in using their knowledge to control automated testing, an extensible
coverage criteria allows them to add knowledge to the test oracle and define coverage
criteria accordingly. Either arbitrary test cases can be defined or rules can be used
to generate test objectives.
Rules can be defined based on a model's structural
elements and/or generally accepted, domain-specific, and/or system specific error
prone aspects of a software. It can be useful for the testing research community to use
a system as the modifiable framework for defining coverage criteria and testing their
efficiency. Another benefit of the system for the testing research community is that
the test case generation technology can be changed. Different test case generators
can be tested and evaluated without re-programming for the specification of what
needs to be tested and re-identification of test objectives. Furthermore, by changing
the implementation knowledge ontology and the executable test case generator, test
cases can be generated in various programming languages.
The system uses an integration of Description Logic for describing models, and
Logic Programming for reasoning on coverage criteria, which is an ongoing research
topic in the knowledge representation research community. This work can serve as a
practical example for this community. The limitations in expressiveness of knowledge
that the current integration methods pose in this work can be studied to serve as a
practical example for evaluating them.
Although the system is extendable, manipulating the coverage criteria and knowledge requires knowledge engineering skills. Further research into test experts' mental
model, and a method of presenting the system to a test expert abstractly, so that
it can be easily learnt, is required. A coverage criteria library and a plug-in based
101
extension of the system can be helpful. Further research needs to be done into how
test experts describe test cases and a language of test objectives needs to be devised accordingly, for the test expert to specify the structure of test cases with test
objectives. Also, this work concentrated on model-based unit testing and the next
step is using knowledge engineering in integration and system testing. Finally, how
testers use the system in a real project settings should be studied, to evaluate the
error-finding capability of the system and its usability.
102
References
[1] Elyes Lehtihet. www.tssg.org/public/ontologies/omg/uml/2004/
UML2-Super-MDL-041007.ovl, May 2005.
[2] Benjamin N. Grosof, Ian Horrocks, Raphael Volz, and Stefan Decker. Description logic programs: combining logic programs with description logic. In
WWW
'03: Proceedings of the 12th international conference on World Wide Web, pages
48-57, 2003.
[3] Harold Boley, Benjamin Grosof, and Said Tabet. RuleML Tutorial, May 2005.
http://www.ruleml.org/papers/tutorial-ruleml-20050513.html.
[4] UML 2.0 Superstructure Specification. Technical report, Object Management
Group (OMG), August 2005.
[5] M. Fox and D. Long. PDDL2. 1: An extension to PDDL for expressing temporal
planning domains. Journal of Artificial Intelligence Research, 20(2003):61-124,
2003.
[6] B. Beizer. Software testing techniques. Number Appendix. Van Nostrand Reinhold Co. New York, NY, USA, 1990.
[7] R.S. Pressman. Software Engineering: A Practitioner's Approach. Boston, 2005.
[8] K.V. Hanford. Automatic Generation of Test Cases. IBM Systems
9(4):242-257, 1970.
103
Journal,
[9] T. Illes and B. Paech. An Analysis of Use Case Based Testing Approaches Based
on a Defect Taxonomy. INTERNATIONAL
TION PROCESSING-PUBLICATIONS-IFIP,
FEDERATION
FOR
INFORMA-
227:211, 2006.
[10] J. Ryser, S. Berner, and M. Glinz. On the State of the Art in Requirementsbased Validation and Test of Software; University of Zurich, Institut fur Informatik, Zurich. Berichte des Instituts fur Informatik, 98, 1998.
[11] TJ Ostrand and MJ Balcer. The category-partition method for specifying and
generating fuctional tests. Communications of the ACM, 31(6):676-686, 1988.
[12] M. Sarma and R. Mall. System Testing using UML Models. In Asian
Test
Symposium, 2007. ATS'07. 16th, pages 155-158, 2007.
[13] S. Benz. Combining test case generation for component and integration testing.
In Proceedings of the 3rd international workshop on Advances in model-based
testing, pages 23-33, 2007.
[14] J. Bach. Risk-based Testing. Software Testing and Quality Engineering Magazine, 1(6), 1999.
[15] H. Zhu, P.A.V. Hall, and J.H.R. May. Software unit test coverage and adequacy.
ACM Computing Surveys (CSUR), 29(4):366-427, 1997.
[16] A.E. Howe, A. Mayrhauser, and R.T. Mraz. Test Case Generation as an AI
Planning Problem. Automated Software Engineering, 4(1):77-106, 1997.
[17] J. Offutt and A. Abdurazik. Generating Tests from UML Specifications. LECTURE NOTES IN COMPUTER
SCIENCE, pages 416-429, 1999.
[18] G. Friedman, A. Hartman, K. Nagin, and T. Shiran. Projected state machine
coverage for software testing. In Proceedings of the 2002 ACM SIGSOFT
inter-
national symposium on Software testing and analysis, pages 134-143, 2002.
104
[19] S. Rayadurgam and MPE Heimdahl. Coverage based test-case generation using
model checkers. In Engineering of Computer Based Systems, 2001. ECBS 2001.
Proceedings. Eighth Annual IEEE International
Conference and Workshop on
the, pages 83-91, 2001.
[20] F. Belli and A. Hollmann. Test generation and minimization with" basic" statecharts. In Proceedings of the 2008 ACM symposium on Applied
computing,
pages 718-723. ACM New York, NY, USA, 2008.
[21] A. Paradkar. Plannable Test Selection Criteria for FSMs Extracted From Operational Specifications. In Proceedings of the 15th International Symposium on
Software Reliability Engineering (ISSRE'04), pages 173-184, 2004.
[22] S.J. Russell, P. Norvig, J.F. Canny, J. Malik, and D.D. Edwards.
Artificial
intelligence: a modern approach. Prentice hall Englewood Cliffs, NJ, 1995.
[23] Object Management Group. XML Metadata Interchange (XMI) specification.
http://www.omg.org/technology/documents/formal/xmi.htm, 2007.
[24] Y. Wu, M.H. Chen, and J. Offutt.
UML-Based Integration Testing for
Component-Based Software. In Cots-Based Software Systems:
Second Inter-
national Conference, pages 251-260, 2003.
[25] E. Bernard, B. Legeard, X. Luck, and F. Peureux. Generation of test sequences
from formal specifications: GSM 11-11 standard case study.
SoftwarePractice
& Experience, 34(10):915-948, 2004.
[26] J. McQuillan and JF Power. A Survey of UML-Based Coverage Criteria for
Software Testing. Technical report, Technical Report, National University of
Ireland, Maynooth, Co. Kildare, Ireland, 2005.
[27] R.V. Binder. Testing object-oriented systems. Addison-Wesley, 2000.
105
[28] N. Kosmatov, B. Legeard, F. Peureux, and M. Utting. Boundary Coverage
Criteria for Test Generation from Formal Models. In Proceedings of the 15th
International
Symposium on Software Reliability Engineering, pages 139-150,
2004.
[29] T.R. Gruber et al. A translation approach to portable ontology specifications.
Knowledge acquisition, 5:199-199, 1993.
[30] R. Studer, V.R. Benjamins, and D. Fensel. Knowledge engineering: principles
and methods. Data & Knowledge Engineering, 25(1-2): 161-197, 1998.
[31] S. Bechhofer, F. van Harmelen, J. Hendler, I. Horrocks, D.L. McGuinness, P.F.
Patel-Schneider, L.A. Stein, et al. OWL Web Ontology Language Reference.
WSC Recommendation,
10, 2004.
[32] N.F. Noy, M. Sintek, S. Decker, M. Crubezy, R.W. Fergerson, and M.A. Musen.
Creating Semantic Web Contents with Protege-2000. IEEE
SYSTEMS,
INTELLIGENT
pages 60-71, 2001.
[33] J.J. Carroll, I. Dickinson, C. Dollin, D. Reynolds, A. Seaborne, and K. Wilkinson. Jena: implementing the semantic web recommendations. In
International
World Wide Web Conference, pages 74-83. ACM Press New York, NY, USA,
2004.
[34] S. Bechhofer, R. Volz, and P. Lord. Cooking the Semantic Web with the OWL
API. Lecture Notes in Computer Science, pages 659-675, 2003.
[35] The rule markup initiative, August 2008. http://www.ruleml.org.
[36] H. Boley. POSL: An Integrated Positional-Slotted Language for Semantic Web
Knowledge, http://www.ruleml.org/submission/ruleml-shortation.html, 2004.
106
[37] Ontology Definition Metamodel. Technical report, Object Management Group
(OMG), August 2009. http://www.omg.org/docs/ptc/07-09-09.pdf.
[38] M. Ball. OO jDREW: Design and Implementation of a Reasoning Engine for
the Semantic Web. Technical report, Technical report, Faculty of Computer
Science, University of New Brunswick, 2005.
[39] M. Ball, H. Boley, D. Hirtle, J. Mei, and B. Spencer. The OO jDREW Reference
Implementation of RuleML. LECTURE
NOTES IN COMPUTER
SCIENCE,
3791:218, 2005.
[40] A.M. Memon, M.E. Pollack, and M.L. Soffa. Hierarchical GUI Test Case Generation Using Automated Planning. IEEE TRANSACTIONS
ENGINEERING,
ON
SOFTWARE
pages 144-155, 2001.
[41] M. Ghallab and P. Traverse Automated Planning: Theory and Practice. Morgan
Kaufmann, 2004.
[42] D. McDermott, M. Ghallab, A. Howe, C. Knoblock, A. Ram, M. Veloso,
D. Weld, and D. Wilkins.
PDDL-the planning domain definition language.
The AIPS-98 Planning Competition Comitee, 1998.
[43] J. Homann. The Metric-FF Planning System: Translating\ Ignoring Delete
Lists" to Numeric State Variables. Journal of Artificial Intelligence
Research,
20:291-341, 2003.
[44] Competition results, 2002.
http://planning.cis.strath.ac.uk/competition/.
[45] A. Paradkar. A quest for appropriate software fault models: Case studies on
fault detection effectiveness of model-based test generation techniques. Information and Software Technology, 48(10):949-959, 2006.
107
[46] Y. Zhang, R. Witte, J. Rilling, and V. Haarslev.
An Ontology-based Ap-
proach for Traceability Recovery. In 3rd International Workshop on Metamodels,
Schemas, Grammars, and Ontologies for Reverse Engineering (ATEM
Genoa, October, volume 1, pages 36-43, 2006.
108
2006),
Appendix A
The Syntax of t h e Specifications
A.l
State Machine OWL Ontology TBox
<?xml version= 1 0 7 >
< r d f RDF
\mhir d f = 111 I
n
u >
•> u i l u ^ x s p = 1 1 >
» V <\
Mill 1- OWl= I I I )
w
i
vii I n - x s d =
t )
\ \ !
( c
o
>>
1'J i.
i 1 i \ 11
1
) c
1
II 1
i i SO'i 0"- (
Ol ' J'UJz f ~ o i '
t
(10
V [ ^i< 1 i m
- V
vtv I n - s w r l = u t i p
• ;
or
S )i
Mill n- s w r l b = ! n |v
\ n i
r > Ifl f
1
r 1
\ i n l n s r d f s = nJ f i>
\ i
i oi
_'0')( u
r (1
< K in i
xml h l H =
ill
I < ! - 1 11 > It -1 1
11 ( llIlK
' >
<owl Ontology r d f a b o u t =
lit
I I1
i pu
i t n it hi It
> 1 />
< o w l ( 1 i-<> r d f I P = i u \
iiilit
(
/>
<owl
( las-
r d f !i)= 1 n
IM i < >
<rdfs
subClassOf>
< o w l C 1 i i - r d f 11)= \ K i n
S
</rdfs
subClassOf>
< / o w l ( 1 !-•->
< o w l ( U s - r d f 11)= l
! f >
<rdfs
resources
subClassOf
rdf
i<
/>
i
f
I -1 -
I |HI
lit
i
<
I -
1 Ml
I t
ii i d i
»I 1 (
1
\ >I
1
i
it
/«-.
M i
>
</owl ( l a ->
<owl ( 1 i - rdf
<rdfs
li)=
subClassOf
--,1
rdf
'• t
i >
resource= I
>
</owl
( li-
<owl
<owl
( 1 i s - r d f 11)= i i
( l,i - r d f 11)= -.
<owl
( I i - - r d f \\i=
<owl
<owl
f I ,-~ r d f !!;= 1 1 / >
ObjectProperty
r d f [[)= 1
</owl
<owl
<rdfs
<rdfs
r d f 11)=
>i ' ii i
'
I
/>
\l i. ii < / >
i i ii / >
u
ii
/>
domain r d f r e s o u r c e =
range rdf resource=
>
r
•
l> l
l>
I
i
/>
1
i
ObjectProperty>
ObjectProperty
range
\
r d f 1))= -
range rdf r e s o u r c e s
domain r d f r e s o u r c e =
i
f Ii
</owl
ObjectProperty>
<owl O b j e c t P r o p e r t y
r d f 11)=
<rdfs
t f "•"
( 1 li->
<owl
<rdfs
<rdfs
1
rdf resources
it
1
>
|>1 '
ii)
i t
I
i li>i 1
lit
l i
{ 1 i
t
t
/>
/>
V
->
i >
I I
<rdfs
domam>
< O w l ( 1 !->•->
109
1 f
>
1
1
I
CI
/>
i
/-
<owl unionOf rdf parseType= < 1 Ir ( i o n >
I -f 1 i p u t -1 a f t
111 XI l i n n
o I-- M it ' />
<owl Cl,ii<- rdf about= filn
\ \ t tr '- - i ippu
u
U
t
<owl ( 1 iss rdf about= l i l t
U "" IVit I tit1Init
int t uoi1 1 [ IIEU
i n a1^! s l im
/>
</owl unionOf>
</owl C ! t-~>
< / r d f s domam>
<owl inverseOf>
<owl O b j e c t P r o p e r t y rdf 11)= If; / >
</owl inverseOf>
</owl O b j e c t P r o p e r t y >
<owl O b j e c t P r o p e r t y rdf 1)1= V t n r >
< r d f s domain rdf r e s o u r c e s I I
I i i- mp it
i < in < li n
1 1r i
i i />
< r d f s range rdf r e s o u r c e = i It
I s< r
i ii| i t ~-i > I < m n ' i )<
i i i n n i U i i />
</owl O b j e c t P r o p e r t y >
<owl O b j e c t P r o p e r t y rdf about= I u
I -t
ipu' s in
n i i d n n m I Oi >
< r d f s range>
<owl ( I is->
<owl unionOf rdf parseType= ( o l h t t i
i >
<owl ( l a s - , rdf about= ' 11 <
1 t » 11 p 11 -i 1i1 n " l i n t o 1' M iti />
<owl ( l a s s rdf about= i i I •
(si
11 pul si H
ni it I irn ' \ I 1 i n i I "- H t / >
</owl umonOf>
</owl ( ! is->
< / r d f s range>
< r d f s domain rdf r e s o u r c e = h i
I -t i - m,) it ~
< itlmi
1i iu
ioi
/>
<owl mverseOf rdf r e s o u r c e = t i l
' ti
i ipi i
x< nni
n
> in / >
</owl O b j e c t P r o p e r t y >
<owl O b j e c t P r o p e r t y rdf 10= I i i I I K i >
< r d f s range rdf r e s o u r c e s t It
Is
«, i i p n t M I M i u'i i o i l I i n si t ion / >
< r d f s domain rdf r e s o u r c e s fiU
i - i
i ID t
I < nirliiiK <
s i . \
u t />
</owl O b j e c t P r o p e r t y >
<owl O b j e c t P r o p e r t y rdf 11)= *-1 i t - >
< r d f s range rdf r e s o u r c e s t 1
1 ( i - input
tit'
iiit'iiu
u i \ h- n
H \< / >
< r d f s domain rdf r e s o u r c e s t i l l
' - i h iipit
t < n j'hint < I ^ '<
H I HI / >
</owl O b j e c t P r o p e r t y >
<owl O b j e c t P r o p e r t y rdf ll)= ( IUK >
< r d f s domain rdf r e s o u r c e s fil<
I ti- n
n
t
m < Imic oi — 1 i i
i >i / >
< r d f s range rdf r e s o u r c e s
it
' u
i p i i si 11 < i n t i i
oi
( uh in
/>
</owl O b j e c t P r o p e r t y >
<owl O b j e c t P r o p e r t y rdf 11)= Out >
<owl inverseOf>
<owl O b j e c t P r o p e r t y rdf 10= h u n / >
</owl inverseOf>
< r d f s range rdf r e s o u r c e s I n
I Mt
H i t -> . t c i ,, i K
I
I - i n n />
< r d f s domain>
<owl ( Ois->
<owl unionOf rdf parseType= ( ; l ! t t t x i >
<owl ( l a s - rdf a b o u t s
11,
1 ii
i pu
>t < ID H ' uw n I -, i t M , i ( / >
•Cowl ( l i s s rdf a b o u t s f i l«t
t pu
- i
u 1 i it
I v I t < />
</owl umonOf>
</owl ( l<is^>
< / r d f s domam>
</owl O b j e c t P r o p e r t y >
<owl O b j e c t P r o p e r t y rdf a b o u t s s
i
j
i
i
i In
>
< r d f s domain rdf r e s o u r c e s I I
i
i
'
/>
< r d f s range>
<OWl ( 1 l s s >
<owl unionOf rdf parseType= v
i
c
>
<owl ( I is- rdf about=
i
I
,
,
,
In
I ->
is
i
/>
<owl ( l i s - rdf a b o u t s
11 (
i
i i
s p
i
iii
i
/>
</owl umonOf>
</owl ( I- >
< / r d f s range>
<owl mverseOf rdf r e s o u r c e s ( I
i
i
pi
i
mi
t i />
</owl O b j e c t P r o p e r t y >
<owl D a t a t y p e P r o p e r t y rdf 11)= l in -> n
\ '
>
< r d f s range rdf r e s o u r c e s Hip
i \
'
_'H)1 V l S i n v - ,
i
/>
< r d f s domain rdf r e s o u r c e s | il
I s i
i i >• i
i t
ntlint
i i t i i i t l
/>
110
</owl: DatatypeProperty>
<owl D a t a t y p e P r o p e r t y rdf:II>= i k i i . u i o u j <k»<
T
< r d f s . domain rdf • r e s o u r c e = l i h
~t i •- input
< r d f s : r a n g e rdf: resource= iitp
',',., v i otp,
< / o w l : DatatypeProperty>
<owl • D a t a t y p e P r o p e r t y rdf:II>= C J i:, -• d<
>
< r d f s : r a n g e rdf: r e s o u r c e s b l i p
\,\ i u i oi,.'
< r d f s . d o m a i n rdf r e s o u r c e = ' f i 1c
I M: i •- input
< / o w l : DatatypeProperty>
< o w l . D a t a t y p e P r o p e r t y rdf: 10= n.tjix >
< r d f s range rdf: r e s o u r c e = M t p «'»" u > o-p
< r d f s domain>
>
~tit<
HHIJIIH <• I •< Hi lu \ -Mir / >
2l'0l WIL^u'wuui^-l t n ^ / >
JOu I W U ^ i l n . i i. i i
/>
- i , . i o iiMrlmn o , I' ( 01 dii 10.1 / >
2' l (J t XM1.V ii'ii.a - t i n ; ; ,
/>
< o w l : ('),is^>
< o w l : unionOf rdf • parseType= Colli i I nm >
< o w l : CUm-. r d f : a b o u t = i i K
1-.M- u pir M I K n.-u hi HO O,\ !,--• << • < \ ,i 11<> b'< ' / >
<owl : ( ] , ) . - ' rdf : about= ' 1 1 r
I -a s , 1 p i' -' i h in if Lux o>\ 1'. ( -i! I / >
< / o w l : unionOf>
< / o w l : CI<i'-<.>
< / r d f s : domain>
</owl: DatatypeProperty>
</rdf:RDF>
<!
I
( U ,it ( d
! .iif o r d
A.2
>i 1 'I
< iln
I'l Hi l
>
«.
, i I n ( t\\I
I 1 ' 1,
ll
i
! 1.
fall [a
i if!
II'
1)
|>l o w " i
I
Syntax of Coverage Criteria Rules
The general syntax of a coverage criteria rule in POSL is shown below:
coverage([PredicateNamel, PredicateName2, . . . ] , [?C1, ?C3, ? V a l u e l , . . . ] ) :Classl (?C1), Class2(?C2), Class3(?C3), Propertyl(?Cl,?C2), Property2 (?C2,?C3),
0therRuleHead(?C3,?Valuel),... .
0therRuleHead(?C3,?Valuel):- Attributel(?C3,?Valuel),
Classl, Class2 and Class3 are classes defined in the ontologies. ?C1, ?C2 and
?C3 represent three instances of these classes, respectively. Propertyl is an object
property, which it's domain includes ?C1 and it's range includes ?C2. Property2
is an object property, which it's domain includes ?C2 and it's range includes ?C3.
Attributel is a data property, which it's domain includes ?C3 and it's value is represented by ?Valuel.
A part of the ontologies which define the vocabulary used by this coverage criteria
is an ontology as illustrated in Figure A.l. The namespaces are not specified in
the figure: while Classl, Class3 belong to behavioral model ontology, the other
properties, class and attribute can belong to either expert knowledge ontology or
behavioral model ontology. The classes and values that are among the test objective
parameters must belong to the behavioral model ontology. The other elements can
belong to either the test expert ontology or the behavioral model ontology. The
expert knowledge is an extension point of the system, while the test objectives are
hard-coded. Hence, the elements defined by the expert knowledge ontology can not
be used as test objective parameters.
An example coverage criterion is defined below. This coverage criteria rule is
designed based on the requirements. The body of the rule specifies the selection
111
Class2
Prop rtyl
Proi>erty2
Class3
Attributcl
>• string
Classl
TBox Definitions:
class2<—property
c|ass1
Figure A.l: An ontology describing the vocabulary of a coverage criteria rule
condition, and the head of the rule specifies the test case that is selected. It means
that if method ?m has a parameter ?p, and ?p is received from a user, and it can
have a wrong value ?val, and the ?p is received as a parameter in the event of the
transition ?t that executes ?p in its action, create a test that traverses transition ?t
when the value of ?p is ?val.
Rule: coverage([atTransitionEventParamHasValue],[?t,?p, ? v a l ] ) : method (?m), parameter(?p), hasParameter(?m, ?p), fromUser(?p),
wrongInput(?p,?val), c a l l ( ? e ) , hasParameter(?e,?p), t r a n s i t i o n ( ? t ) ,
behavior(?b), executes (?b,?m), a c t i o n ( ? t , ? b ) , event(?t, ? e ) .
Semantic: Generate a t e s t case that executes the method at some step for
a l l possible wrong inputs :- A method receives an input from the user, and
the input can be wrong based on the business logic.
Table A.l describes how the statements in SHOIQ (D) ontology are mapped to
the statements in Horn logic [2]. SHOIQ (D) is a description logic that OWL-DL
is based on it. and Horn logic is a rule language that POSL is based on it. This
relationship is used to convert the expert knowledge ontology and the behavioral
model ontologies to POSL in order to integrate them with the coverage criteria
rules.
A.3
Expert Knowledge Ontology TBox
The expert knowledge is an extension point of the system that provides for addition
of the knowledge to the behavioral model ontology. This ontology is linked to the
behavioral model ontology and extends it with the knowledge, which is used for test
case selection. The following listing illustrates how a general test expert ontology
can extend the behavioral model ontology.
112
Table A.l: Mapping between SHOIQ (D) and Horn Logic statements (from Grosof
et al. [2])
SHOIQ (D>
CCD
QCP
T CVP.C
T CVP .C
a:C
<a,b> : P
C^D
P+CP
HORN
D(x) *- C(x)
P(x,y) *- Q(x,y)
C(y)^-P(x,y)
C ( y ) ^ P (y,x)
C(a)
P(a,b)
D(x) «- C(x)
C(x) «- D(x)
P(x,y) - Q(x,y)
Q(x,y) *- P(x,y)
P(x,y) *- Q(x,y)
Q(yA)«- P(x,y)
P(x,z) *- P(x,y) AP(y,z)
c, n c2 CD
D(x) * - C2(x)
CED,nD2
C, (x) *- D(x)
C2(x)*-D(x)
D(x) - C, (x)
D(x) *- C 2 (x)
C(y) *- P(x,y) A D(x)
P-Q
P-Q
C, U C2 CD
C CVP.D
AC,(X)
<7xml version= 1 0 7>
< r d f RDF
MU1US= i 1 ' y
in
1 ii i o i
i \'ji n k i u n "\ • O 1
Ainln
p r o t e g e = h >p
p i o t r x si i i f o i 1 r i ' u i>hi
n
(i
Miilus r d f = n t l p
<\<
•
ir
) "i n_ >i
<li - i o i
xmiii-. x s d = it t p
i n
•> <i\f. -*' »0 1 \ M i ^I ' l l
x m l n s r d f s = In p
-• i i >\ ^ JOuO u i i<\i
i „ ,i
Miihi- o w l = n i p
>\ v > oi
?hi J p
x m l b<isi = l i i p
n<
i < li < ( m
> I ' n o li .1
1 >
<owl O n t o l o g y rdf a b o u t =
>
<owl i m p o r t s rdf r e s o u r c e = ) > «
I u
li p u
- 1 1 I
11
</owl Ontology>
< o w l ( l , i ~ , r d f II)= 1 > >< K
n t( '
/>
< o w l ( 1 c - r d f 11)= ( x >t « I i \> " < i j - 1 />
< o w l ( I i~s r d f I P = ]•••>(
hi
11 ( ' i -•. , >
< r d f s s u b C l a s s O f rdf r e s o u r c e = h i i
i
1
li
II
\(
I i
( r
i !-•
ii«
.i
</owl C 1 is >
<owl ObjectProperty rdf 1U= O
<rdfs domain rdf resource= i
<rdfs range rdf resource=
p
</owl ObjectProperty>
<owl ObjectProperty rdf II>= o>
<rdfs domain rdf resource= t
\(
[l
1 1 ( 1 1-
il
i i<
<>III o <
)ii
K 111!
11
(
i 11
/>
ll
/>
Uu>
<rdfs range rdf resource= > \pi
</owl ObjectProperty>
<owl DatatypeProperty rdf ll)= I n
<rdfs domain rdf resource= [ s|
<rdfs range rdf resource= li i p
p
/>
/>
'
/>
/>
i
113
1 >
i< 1
'00]
/>
\
I-Hiu
/>
</owl : DatatypeProperty>
<owl : D a t a t y p e P r o p e r t y rdf:II)= Da al'i > p ' M \ 2 >
< r d f s : r a n g e rdf • r e s o u r c e = I r t p
>vvn v ,i (.14 2u')l X * [[.->< h, uw- H«I]I HI / >
< r d f s - d o m a i n rdf: r e s o u r c e = ( > ' r
I M I - 11 pu 1 r i i ' t i]ia']ii>< o\ I "\ ( 'I l~* 1 M M f l ^ f !1 < \ J U < l i ! ] l ( U l M O l ( i i ^
</owl • DatatypeProperty>
< / r d f .RDF>
<•
( u a i o d , ,Hi I ' K , H » (
-i
l u i o i d
A.4
i dij
' / >
\ ii t (Ml. I \ I ; J o J.i
i
Jul'.
H'1
'>,
>
,))"i',V'
>
Syntax of Test Objectives
Table A.2: Syntax of test objectives
2
3
Name
Cover transition
Cover state
Immediate
4
After
5
Full predicate
6
At transition
state variable
has value
#
1
Arguments
transitionl
Syntax
[covertransition], [transitionl]
statel
transitionl,
transition2
transitionl,
transition2
condition 1,
predicatelvalue,
clause lvalue,
clause2 value,
[coverstate], [statel]
[immediate], [transitionl ,transition2]
transitionl,
statevariablel,
valuel,
statevariable2,
value2, ...
[after], [transitionl ,transition2]
[fullpredicate],
[conditionl,
clauselvalue, clause2value,...]
predicatelvalue,
[attransitionstatevariablehasvalue], [transitionl,
statevariablel, valuel, statevariable2, value2,...]
A test objective denotes the specification of a test-case. It consists of two parts:
predicates and parameters. A test objective follows the syntax shown below. The
predicateList and the parameterList are lists of predicates and parameters separated
by comma. A predicate in the predicateList has one or more parameters, which are
listed respectively in the parameterList. The use of 'list' syntax provide the flexibility
to add predicates and parameters to a test objective. Table A.2 describes the syntax
of several test objectives.
[predicateList],[parameterList]
114
A.5
Test Suite OWL Ontology TBox
<7xml version= 1 0 ? >
<rdf RDF
Mill lis p r o t e g e = hi f p
p < it< t
i in oi>! ( I n phi int I pioii
\ m l n s r d f = M 11)
i i \ \ * r i I ll'i ()' 2) i << I -\ rit i
ixinliis x s d = I il i
\ \\\ \ ! o
'OOl M i l ^i Inn
v
mliis r d f s = n u p
t \ s \ i r i,.' JO'iU 0 r d ' - )( n i
\ i n l n - owl= l i t )
*\
v . (iij ..Qui 0
til
xmliis=
I]I
I <r i n ) it
(Si s u i '
(i I
xml b<ist = f i I
I - ( n HID it 11 -t
i 111 < I >
<owl O n t o l o g y r d f a b o u t = il<
I ir
i pn
t< t - i i oi I >
<owl i m p o r t s r d f r e s o u r c e = f i It
I ~< i
i it) i
i ' i in ' n
t
/>
</owl Ontology>
<owl ( l i s - r d f ll)= s u p / >
<owl ( I i s - r d f i n = t -t / >
<owl ( U s s r d f !l)= \ i n bit J u . / >
<owl O b j e c t P r o p e r t y rdf ID= I is
I >
< r d f s r a n g e r d f r e s o u r c e = f It
I ' i s i pi I M«t< m i
u m l f D it on / >
</owl ObjectProperty>
<owl O b j e c t P r o p e r t y rdf ID= li t \ n i ibl
>
< r d f s r a n g e r d f r e s o u r c e = i l < l s ( r s i ipul «l u i i i i< n< < \ i ^
I < \ ir bl
/>
< r d f s domain r d f r e s o u r c e = l i l t
I si r input
s
ml
u
i n l I n t m />
</owl ObjectProperty>
<owl O b j e c t P r o p e r t y r d f ll)= i ,> >
< r d f s r a n g e r d f r e s o u r c e = i It
I >
i i pn t I u t i i i i i
i
I t< \
bit / >
< r d f s domain r d f r e s o u r c e s l i l i
I < i - n i p n si i
i < Inn t v l - . I i i
it i / >
</owl ObjectProperty>
<owl O b j e c t P r o p e r t y rdf ll;= o i l t c n
>
< r d f s r a n g e r d f r e s o u r c e = I It
I
ip l it i
u i
i
i I
Ii
/>
< r d f s domain r d f r e s o u r c e = I It
\ -(iin > it
(
i 11 t o
I | />
</owl ObjectProperty>
<owl O b j e c t P r o p e r t y rdf II)= i \ l
p >
< r d f s domain r d f r e s o u r c e = l i l t
I s< r in > M <
-u I t t. <. t p / >
</owl ObjectProperty>
<owl O b j e c t P r o p e r t y rdf 11;= i s s i ( p >
< r d f s range rdf r e s o u r c e = i i l t
t tiipi t ic I - m t > i I p / >
< r d f s domain rdf r e s o u r c e = f l i t
1 cis m > t (
suite < I < t />
</owl ObjectProperty>
<owl D a t a t y p e P r o p e r t y rdf 11)= 1 - >o
u \ h t >
< r d f s domain r d f r e s o u r c e = (11 > I
r i n> it
i it < >vt
i I t \ i it / >
< r d f s range rdf r e s o u r c e s
it,
t\<
to
(HI \ h S m r i I o t i / >
</owl DatatypeProperty>
< / r d f RDF>
<'
< I I <i
If I
I i '
II
>
I
()\\1 1 i
i
I I
f
<
I
I
A.6
Redundancy Checking Rule Template Syntax
A general rule consists of two sections- a head and a body The body of the rule
describes the characteristics of a test case that satisfies the corresponding test objective The head of the rule is a predicate that means the test objective is satisfied
A redundancy checking rules follow the form below
The test objective is satisfied by the test suite :the structural characteristics of a test case that satisfies the test objective.
115
The body of the rule describes the characteristics of test case in the test suite
ontology that satisfies the test objective. The characteristics of a test case in the test
suite ontology are defined by the the vocabulary from the abstract test suite ontology
and the behavioral model ontology. The general syntax of a redundancy checking
rule in POSL is shown below. The redundancy checking rule template indicates the
parameters of the test objective by # followed by the position of the parameter in
the parameter list.
Test Objective:
[predicatename] , [parameter!., parameter2, parameter3]
Redimdacy Checking Rule Template:
e x i s t O : - t e s t ( ? t ) , h a s s t e p ( ? t , ? s t e p l ) , h a s c a l l ( ? s t e p l , parameterl),
a r g ( # l , ?variablevaluel), variable(?variablevaluel, #2),
value(?variablevaluel, #3).
Redundancy Checking Rule:
e x i s t O : - t e s t ( ? t ) , h a s s t e p ( ? t , ? s t e p l ) , h a s c a l l ( ? s t e p l , parameterl),
argCparameterl, ?variablevaluel), variable(?variablevaluel, parameter2),
value(?variablevaluel, parameter3).
The redundancy checking rule above is generated for a test objective with predicatename as predicate and patameterl, parameter2, and parameter3 as parameters.
If the body of the rule is unified with knowledge defined by the test suite, behavioral
model, and expert knowledge ontologies the test objective is already satisfied. The
body of the rule defines that for the test objective to be satisfied by the test suite
the following conditions must hold: There is a test referred by ?t, which have a step
referred by ?stepl; parameterl is the name of the transition of stepl; ?variablevalue
is an argument of the parameterl and the name of its variable is parameter2 and the
name of its value is parameter3.
The knowledge referred by the body of the rule is defined by the test suite ontology and the behavioral model ontology. This knowledge includes the tests in the
test suite ontology, the steps of the tests and the order of the steps, the values of
the state variables after the steps are executed, the transitions corresponding to the
steps and their arguments. The transition is from the behavioral model ontology and
indirectly connects a step of a test case to the knowledge about it's guard, action,
source state, destination state, etc.
Table A.l describes how the statements in SHOIQ (D) ontology are mapped to
the statements in Horn logic [2]. SHOIQ (D) is a description logic that OWL-DL
is based on it. and Horn logic is a rule language that POSL is based on it. This
relationship is used to convert the test suite ontology and the behavioral model
ontology to POSL in order to integrate them with the redundancy checking rules.
The semantics of a test objective and the corresponding redundancy checking rule
template, which have the same structure as the rule described above is as follows,
(transitionl), (inputvariablel), and (wronginputvaluel) are the parameters of the
test objective. # 1 , # 2 and # 3 indicate where the test objective parameters are
replaced in the redundancy checking rule template. The number following the #
116
indicates the position of the parameter in the parameter list of the test objective.
By replacing the three parameters of the test objective in the redundancy checking
rule template, a redundancy checking rule is generated for the test objective as shown
below
Test Objective:
A test case that executes the (transition!) transition
with the (inputvariablel) input having the (wrongmputvaluel) value.
Redundancy Checking Rule Template:
The test objective is satisfied by the test suite:- There is a test case that
has a step, which passes a #1 of the state machine and an argument
of the transition have the name #2 and the value #3.
Redundancy Checking Rule:
The test objective is satisfied by the test suite:- There is a test case that
has a step, which passes (transitionl) of the state machine and an
argument of the transition have the name (inputvariablel) and the
value (wrongmputvaluel) .
A.7
Implementation Knowledge OWL Ontology
TBox
< 7 x m l v e r s i o n = i ') ? >
< r d f RDF
MI 1111 r d f= 1 t1 *
\
. 0
1 iy ' i) ' '
1(
ni *
nilns x s d = 1 i
\ v \ {
/>()0] \ 11 ] ,,
1 I! > 1 1 1 ill 11 I 11
i 1< 11
\ 1
\ m l n = ri 1
1 (j
v
\ « « l 01
K 'id t
id
- h( i 1
inhis r d f s = ni i |
v , 0
_'002 0
]
xinla- owl= 1 i )
\
xml h i-r =
11 < I r
111 1 11 i n f i l l
11
1 on
n 1 >
<owl O n t o l o g y r d f a b o u t =
iI
1 -< i
i ,m
n p < t i it I H
1 mm
<owl i m p o r t s r d f r e s o u r c e = t i l
I
i 11,111
1 t
</owl Ontology>
s
/>
<owl ( 1 !-- r d f IP= 11 i p ' < 1 H n ( '
hi
/>
<owl ( 1 i s - rdf IH= 111 p i 111 1 1 < f s. 1 ( \
<owl ( 1 iss r d f Il>=
)l 1 t 1 Si 1 r 1 1 ' d >
<rdfs subClassOf>
<owl ( ] 1- r d f ]])= i , h
1 1
I
/>
< / r d f s subClassOf>
< / o w l ( is >
<owl I 1 1, rdf !!>=
1
i
>
< r d f s s u b C l a s s O f rdf r e s o u r c e =
I
t \
>
^ ' />
/>
< / o w l ( 1- >
<owl O b j e c t P r o p e r t y r d f IJ'=
< 1
< r d f s domain r d f r e s o u r c e = i
< r d f s range rdf r e s o u r c e =
1
</owl ObjectProperty>
<owl O b j e c t P r o p e r t y rdf Il)= 111 '
< r d f s range rdf r e s o u r c e =
I
it 1 I 1
l((ii
M h <! / >
< r d f s domain r d f r e s o u r c e = ( I
1
I 1! ( II' It
-II \ 1 ) > />
</owl ObjectProperty>
>
11
11
(
1 11 1
1 i|>
r
11
t
11 1 1
>
1 1 c 1
1 1 1 h 11
117
't r
><
1 <
1 1 (
1
1 i\
/>
1
I
>\ 1
1 1
11
/>
<owl O b j e c t P r o p e r t y r d f 11)= 1 U - M K i \ d HUI >
< r d f s r a n g e r d f r e s o u r c e = 1 It
! < i - i n p u t n n p h m it i i<
v
l l l j p ' t II t I] t' ' S ( l t<
<rdfs
domain
rdf
< 11 o 1
resource=
i ii()Ie I K l i t t < i S ! 11 ( \
< 1'
/>
lilt
11 i i b b
I s v- i n p i t
lnpltii'nt
lion <> l-"«
/>
</owl ObjectProperty>
<owl O b j e c t P r o p e r t y rdf 10= i n u i - i ol _ t i - M I < \ I i l l < >
< r d f s r a n g e r d f r e s o u r c e = ! 1<
I l r t ^ L ipt i it ipb nit n u > t o i>
111! ) ( H h II t c <|S I it I \
<rdfs
<owl
<owl
</owl
</owl
<owl
<owl
<owl
</owl
<rdfs
1 1b (
/>
domain r d f r e s o u r c e = f i 11
T-.ii- i n n t
t i
in u In in i> i t it < V o i i >h / >
inverseOf>
I n v e r s e F u n c t i o n a l P r o p e r t y r d f 11)= i i-^ it< \ i n ib < / >
mverseOf>
ObjectProperty>
O b j e c t P r o p e r t y r d f 11)= n \< ' i , t I b - ( I ' I >
inverseOf>
I n v e r s e F u n c t i o n a l P r o p e r t y rdf I b = I -( 11 / >
mverseOf>
r a n g e r d f r e s o u r c e s ( 1<
I -< i - i ipnt it ipl< in u i o i t i n ipb in iititPI t i ( /<•
>
<rdfs
</owl
<owl
<rdfs
<rdfs
<owl
<owl
<owl
domain r d f r e s o u r c e = til<
I t i - mp i
i '< n i ' d i i H
ObjectProperty>
D a t a t y p e P r o p e r t y rdf 11)= nim< >
I
range rdf r e s o u r c e = ' n i p
T \ >
_'0'i| \ \ll WUit it
domain>
C las->
u n i o n O f r d f p a r s e T y p e = ( lli<1ioti >
( l<i^- r d f a b o u t = f i 1 < 1 -< i
i ]> it i in >b i i< t t ii ion o\ 1
\
11 1 tl
/>
<owl ( 1 1-- r d f a b o u t =
<owl ( U - - r d f a b o u t =
</owl umonOf>
I II ) i lilt II I I l S | li
r j 11
11t
I -< r I »l
</0Wl
i |m
i pn
I '
i
/>
i _, / >
i i i i i ' c i i d I i i i m ov
in pit n < i t <' Mt 11 o< / >
in p < i i i I I it i >\ i u ip < im it < t< i
/>
< lrl-->
< / r d f s domain>
</owl DatatypeProperty>
<owl D a t a t y p e P r o p e r t y rdf 10= > >t I . \ i n >
< r d f s domain rdf r e s o u r c e = 1 11 < i i t - t it> u
nn il nit n
ii l
v
it ul n < il
1( i -
>
< r d f s range rdf r e s o u r c e =
up
\
* i f" < 2001 \ \ I I S i ini i - f r „ / >
</owl DatatypeProperty>
<owl I n v e r s e F u n c t i o n a l P r o p e r t y r d f a b o u t =
lit
I t r i ip
M I D I I I K H IHOI O
1 i ( i'I >
< r d f t \ p< r d f r e s o u r c e = l i t t p
\i
i oi
'002 n" t
1 n i f t i i i i l l ' t I I I < />
< r d f t \ p t r d f r e s o u r c e = lit t p
\ i
it
2002 ff
Clip i [ i -.
/>
<owl i n v e r s e O f r d f r e s o u r c e ^ 1 l<
l -i r- i ipt i ii ipb IK nt u
( l(
I l\ ( t
<rdfs
<rdfs
(
o l l i t-(
11
/>
range rdf r e s o u r c e = ( 1 <
domain r d f r e s o u r c e s iiW
I '
t-r
i i pi i -t u < i i u i i <
( il I / >
i i it i n > l i « t H O i o I j n > i n' i
t
I
/>
</owl I n v e r s e F u n c t i o n a l P r o p e r t y >
<owl I n v e r s e F u n c t i o n a l P r o p e r t y rdf
I -v i
*
I l< >
< r d f t \ ')c r d f r e s o u r c e = h j
< r d f I \ »( r d f r e s o u r c e s li t i
\
< r d f s domain rdf r e s o u r c e s (il
1
I < !l< it
<owl i n v e r s e O f
iv
I
'-I 1 ( \
rdf
(
1
l1
i
i ( \
l l
<
i i t i
l<
i l l
du
I
i
_< '2
' 0
i , u i
1<
n
i
11
iu i
in
' l
i
1
< i i 11'
Ob
1'
<
i
\I
,i
it i
/>
/>
i|
I
i i
K i
I
/>
< r d f s range rdf r e s o u r c e s
'
I
</owl I n v e r s e F u n c t i o n a l P r o p e r t y >
< / r d f RDF>
<l
il
/>
resources
i -
about=
K)\
|
'
<
I
>
118
i
I
i t
>U I
i
/>
<
I <)
/<-
A.8
The structure of the JUnit code
T h e structure of a generated JUnit test is as follows:
package unittests;
import junit.framework.TestCase;
import myPackage.myClass;
public class myClassTest extends TestCase {
@Test public void testO {
myClass objectName = new myClassO;
assertTrue("comment",obj ectName.getStateVariablelValue());
assertFalse("comment",objectName.getStateVariable2ValueO) ;
objectName.method();
assertTrue("comment",obj ectName.getStateVariablelValue 0 ) ;
assertFalse("comment",obj ectName.getStateVariable2Value());
}
}
This example shows a testsuite that tests the class myClass. A test suite is
created by extending the TestCase class of the jUnit framework. The name of the
extended class in the example above is myClassTest. A test case of the test suite is
defined by defining a method to the myClassTest. First an object of the class under
test is created. After the constructor of the class under test is called, the assert
methods are used enforce the values of the variables. To assert the value of boolean
state variables, he assertTrue and assertFalse method called. If this not the case, the
the test case fails. After that, a method of the object is called. This is followed by
assertion of the value of state variables. In the Eclipse IDE a JUnit test suite can
be executed and verified automatically. JUnit also provides for checking whether
exceptions are thrown when they are expected.
119
Appendix B
Elevator Door Class Ontologies
B.l
Door State Machine OWL Ontology ABox
< 7 xml v e r s i o n =
t) ?>
<rdf RDF
\ i n l n - rdf= li I >
\ n % > f > <i'j) uJ
\iiilns xsd= I ! ,1
u
i
20(11 v i l
xniln- r d f s = n i p
\\
\ > >r 2 t i 0 ' ) 0
x i n l n - owl= ] U p
> < ' c ^ 20H2 0 ~
s.mIn 1 - s m u r i = ' i I <
I •- i - i n p u
- t i t
\llll||v=
] 1(
t -,( I
I II >l t
0 ( (I
22 ri I - i! ix
vrlii n
i 1 - t i n *
AI
u if I n n
o 1
(>•]'
xml !)<)•><= (i 1
I i- nip it ii (i
*I >
<owl Ontology rdf about= i i l i
I <
i put <l it
<owl imports rdf resource= lilt
1 -< i i i > p i (
I;
< / o w l Ontology>
< s m u n Behaviour rdf II)= (If f ' l u d
r >
< s m u n B e h a v i o u r - d e s c rdf d a t a t y p e = h j
m \
>0pen=f a l s e , < / s m u r i B e h a v i o u r - d e s O
< / s m u n Behaviour>
< s m u n T r a n s i t i o n rdf 10= - t u t t o
>
< s m u n From>
< s m u n S t a r t S t a t e rdf 11)= t t'< >
<smun Outrdf resource= ' i l l I
- i p n <lo< i
</smuri S t a r t S t a t e >
< / s m u r i From>
< s m u n \< t Kni>
<smuri Behaviour rdf 11)= m i >
< s m u n B e h a v i o u r . d e s c rdf d a t a t y p e = 'i i
v
>Open=f a l s e , < / s m u n B e h a v i o u r _ d e s c >
< / s m u r i Behaviour>
< / s m u n \ ( t uni>
<smuri To>
< s m u n S t a t e rdf 11>= I > t<
>
< s m u n Out>
<smuri T r a n s i t i o n rdf II)=
i
>
<smuri To>
< s m u n S t a t e rdf 1D= |
>
<smuri Out>
< s m u n T r a n s i t i o n rdf 11)= p t
i >
< s m u n \<i in rdf r e s o u r c e = n
i i >u
< s m u n I \ i tit>
< s m u n Call rdf Il)= pi
' - k
>
s
<smuri ij ii JK rdf d a t a t y p e =
tit
<
'')I
>pressclosekey</smun
<smun
Call.desc
i~"
n I >
< u ulniit
'
>r
o J />
2 0 0 ] X" 1 1| - f ' « t 1>-
i 1 ~i t t t o
-td / >
S t
t s I i i •-1
20''i \
>i
< 1 <
I1 v
fini -t
/>
i
IDI.IO
rdf d a t a t y p e = ' t P
' o _
120
'
>,
JO X ' I > K t s
I M I
X/smuri : Call-desO
</smuri : Call>
< / s m u r i : I \ ( ut>
< s m u r i : T o rdf : r e s o u r c e ^ I m
I '•'•v-, m put dor>: t v.'-< 1 osi d >- M" / >
< s m u r i : F r o m r d f : r e s o u r c e = f l(
U < ^ uipi.i <'o"i o? U o | n n- > < / >
</smuri : Transition>
< / s m u r i : Out>
< s m u r i : I n rdf: resource= I i I (
1 -,< i - i up )< do<o c\\ • < lo-^c d 11 "P' n / >
</smuri State>
< / s m u r i :To>
< s m u r i : Ac I ion>
< s m u r i : B e h a v i o u r r d f : l l ) = op< n t • >< dnoi >
< s m u r i : B e h a v i o u r . d e s c rdf : d a t a t y p e = ' In t\> ,'«v,« v i o r " , 200 I X M I V I H I ' L U M I I I M '
>0pen=true ;</smuri : Behaviour.desc>
</smuri • Behaviour>
< / s m u r i : \< I ion>
< s m u r i :From r d f : r e s o u r c e = i 1 <
I •' rs nipu t <ioor o\ 1 - < l<>»' d-t >i t' / >
< s m u n : F\ < ut>
< s m u r i . C a l l r d f U)= > ' < ' - o p ( ,k<. >
< s m u r i • jwiino r d f . d a t a t y p e = h u p
'••'•'•<< *•> ' " ' 2<>() i \ M I N \> in i - I - T '
> p r e s s o p e n k e y < / s m u r i :iuinr>
</smuri • Call>
< / s m u r i : I',. onl>
</smuri : Transition>
< / s m u r i Out>
< s m u r i : Out>
< s r a u r i : T r a n s i t i o n r d f : i O = ' c i o I'lMni, i i! >
< s m u r i : F r o m r d f • r e s o u r c e s f'lc
l - < i ~ m p m <k)<,i CI»IJ . l o ^ ( K i , n i / >
< s m u r i : l u nt>
< s m u r i : C a l l rdf:II)= d i l H - />
< / s m u r i • I A < >n>
< s m u r i :To>
< s m u r i : F i n a l S t a t e rdf-1T '= l i » > l t a t t 1 >
< s m u r i : I n r d f . r e s o u r c e = f 11 <
I -< i - nip n d o n ' c\ , ( | « i d nf i al / >
</smuri . FmalState>
< / s m u r i To>
</smuri • Transition>
< / s m u r i . Out>
< s m u r i : In r d f : r e s o u r c e = ( i l l
l < v n i p ' t <[',,•- ov ' *op M : 0( O- d / >
< s m u r i : In r d f : r e s o u r c e s f i d
i -i-ii inp j | ' d u o ; owl. »t n ' I oi lu» a / >
</smuri : State>
< / s m u r i :To>
< s m u r i : I vc nt>
< s m u r i : C a l l r d f : l l ) = nc„ >
< s m u r i : ivitm r d f : d a t a t y p e = ' b l i p
\ \w n I o, , 21'V 1 W l l W i n n > - - i ' ' ' '
> n e w < / s m u r i : n,nri(>
</smuri : Call>
< / s m u r i . ll\ < m >
</smuri • Transition>
<srnuri . S t a t e M a c h i n e rdf \\)= •lu.i'-i i n
in ' >
<smuri : S t a t e s rdf' r e s o u r c e s ( ' '
< ~(
Miii
d< o r (> i i i i ,
>l i '
/>
<smuri T r a n s i t i o n s rdf: r e s o u r c e s
i t
-' i - i ' p j
C o
o
<>' <.
• (i'
/>
< s m u r i T r a n s i t i o n s rdf • r e s o u r c e s j ,
i n P 11 Ji <•
»
< i o» > c
i
/>
<smuri : S t a t e s rdf: r e s o u r c e s i ,
I •
• - I ' - di. i u l
- 1 ( - « i t I.
/>
< s m u r i : T r a n s i t i o n s rdf r e s o u r c e s 111
' r- i n n >t ( | ( (,
vo-. v '
|>( n / >
, 'ii,
do,,
,, i ' , , , - < . - i I < / >
< s m u n : S t a t e s rdf • r e s o u r c e s
i -i
• p i
d< ..'
•-,
')',
- , , t,
/>
< s m u n • S t a t e s rdf r e s o u r c e s
.
( ,
<smuri • S t a t e _ V a n a b l e s >
< s m u r i . s t a t e V a r i a b l e rdf ID= o.u > >
.'li
Vli
< s m u r i . ii.iiiK r d f d a t a t y p e s , ,,
• >
> 0 p e n < / s m u n . naim>
'0 1 v MJ s K,I i
< s m u r i : I n i t B o o l e a n V a l u e rdf d a t a t y p e s
>true</smuri . InitBooleanValue>
</smun stateVanable>
</smun : State_Vanables>
< s m u n . T r a n s i t i o n s rdf : r e s o u r c e s
ii(
- < i - iHI
/>
< / s m u n . StateMachine>
121
< / r d f RDR>
<'
< 1 H ( fl
-t i n t o i o
B.2
It 11 I ' m i t
du
>
»
\ 11 1 OW1
I ' l l J !\
i
i
I
t.tlll'l
1 •>'
t
>
|H()1
Door Test Suite OWL Ontology ABox
< r d f RDF
\itilns
Mdlri-
rdf=
xsd=
I Up
I t )
i n
w-,\
\ ! , r
\ i o
\mliiS
j 0=
(
l -(
II p u
1 f
, 9 0 ' ) 0 ' 22
t> f - v n t
M)n
\ M I ~>< IK n
K M
-
I
\ H--H
< •< I
\mln- r d f s = lilip
v i ' nt
2 '00 0' id 1 ^ n i n
v
m l n s owl= l i p
v \ 5 . i ' ^ JliOi 0 t i m l ' >
<owl O n t o l o g y rdf a b o u t s lu ,i
-- -H \ > I h <om >
<owl i m p o r t s r d f r e s o u r c e = lil<
1 -<i^ input -t u in if In i r.< I />
<owl i m p o r t s rdf r e s o u r c e s (He
! ic r^ i n p u i <loi
o
/>
<owl i m p o r t s r d f r e s o u r c e s MU
( I I » iirnil
i
-uiti
U />
</owl Ontology>
< j 0 t e s t rdf a b o u t = n t i p
\
\ w h ( > n l c -t 1 >
<j 0 h a s s t e p >
<j 0 s t e p rdf about= h t t p
-v wil< h ( o n u i i - l i - l t |>l >
< j 0 outcome>
< j 0 v a r i a b l e v a l u e rdf a b o u t s hit p
u s \ i'< 1 ( >u t < -1 I 1 < I t fi 1 1 1 ! IT \ 11 u< 11 >
< j 0 h a s b o o l e a n v a l u e rdf d a t a t y p e s lu 11
< * \
i o
OiM \ 1! S (ii 1 1 . 1. '
>false</j 0 hasbooleanvalue>
< j 0 h a s v a r i a b l e rdf r e s o u r c e s ( i
i
•-•,11111 d >. r < 1 1 i( II / >
</j 0 vanablevalue>
< / j 0 outcome>
< j 0 h a s c a l l r d f r e s o u r c e s f 1 11
'
1- HI it I< 0
\ 1 < 1 (>' > 1 n
/>
</j 0 step>
</j 0 hasstep>
<j 0 h a s s t e p >
<j 0 s t e p rdf a b o u t s ' n i p
,
u l d i m i U » i -t \>2 >
< j 0 n e x t s t e p r d f r e s o u r c e s In i>
m i \ 1< u <oii 1 c <- 1 i <-) 11
i />
< j 0 outcome>
< j 0 v a r i a b l e v a l u e rdf a b o u t s li ^^
\\\\
n ) rrm u t 1 >' 1 1 ' ; ' 0 - . I ' 11 ^ lit 0 >
< j 0 h a s b o o l e a n v a l u e rdf d a t a t y p e s
rip
\ \ <5 t\<, 211OI V I I S rn <i b > ,\< i n
>false</j 0 hasbooleanvalue>
< j 0 h a s v a r i a b l e rdf r e s o u r c e s f 1,
1 1 11 pin 1 1
1
) •> / >
</j 0 vanablevalue>
< / j 0 outcome>
<j 0 h a s c a l l rdf r e s o u r c e s lii
1 s< - iu 1 1 d t
1
) 0
/>
</j 0 step>
</j 0 hasstep>
<j 0 h a s s t e p >
<j 0 s t e p r d f a b o u t s n i |
%
il h < i < 1 u p
>
< j 0 n e x t s t e p rdf r e s o u r c e s | >
1 %
1 1 r
/>
< j 0 outcome>
r
ii( 1 >
< j 0 v a r i a b l e v a l u e rdf a b o u t s n i p
i u 1 s t)
<
! ( 1 l l )l 1
<j 0 h a s b o o l e a n v a l u e rdf d a t a t y p e s u p
\
O'l N i l 1 - , n \
>true</j 0 hasbooleanvalue>
<j 0 h a s v a r i a b l e rdf r e s o u r c e s 1
\ ,
1 11
1 1 1 n1 > > />
</j 0 variablevalue>
< / j 0 outcome>
! C
<j 0 h a s c a l l rdf r e s o u r c e s 1 <
(
1 ,
i
/>
</j 0 step>
</j 0 hasstep>
<j 0 h a s s t e p >
<j 0 s t e p r d f a b o u t s , t t t
(1
n u t
r pO >
< j 0 n e x t s t e p rdf r e s o u r c e s I t )
I 1 <oi
< 1 •/>
< j 0 outcome>
122
rdf:about= liilp
www > a l ( l tom-f 1, - i - l ,1 1 1 I (»< ' m <<J 1 \ „ r v • IK 0
< j . 0 h a s b o o l e a n v a l u e rdf : d a t a t y p e = i M p
,-\\\" " i 01 S , 2 ' H ) l \ ' M ! S c . ' i ' i w b j o l t i n
> f a l s e < / j .0: hasbooleanvalue>
<j .0 . h a s v a r i a b l e rdf : r e s o u r c e = f i ' o
l-rii p u t tl< OI ( ) . 1 - n p e n / >
</j .0: variablevalue>
< / j . 0 : outcome>
< j . 0 . h a s c a l l r d f : r e s o u r c e = lii<
I s o i ^ ' u i p u i d o o i . • * , ' » - ( a l i o i l o - u ! />
</} . 0 - s t e p >
</j .0 : hasstep>
</j .0.test>
rdf.about= http
\ w » val^ti n m - t i - t O >
<J . 0 ' t e s t
hasstep>
<J .0
v\\u» w i h l i <om<t<>M0 U p i' >
<J . O . s t e p r d f : a b o u t = 'slip
r d f . r e s o u r c e = li!<
I s c r - m p ' i l d o o i . v 1 / -1 n 1 l o < l o - o d 7>
<J . O h a s c a l l
<. . 0 o u t c o m e >
ill 0
\ v " *<tlH. C <. n r I ( -> ')<• <ir i 1111 1 o - < d ' , 11 \
<. . 0 . v a r i a b l e v a l u e r d f . a b o u t = hi i p
t M I - i i . p n t iK OI vU\ I OfX 11 / >
<. . 0 : h a s v a r i a b l e r d f : r e s o u r c e = t i l t
< j . 0 : h a s b o o l e a n v a l u e r d f : d a t a t y p e = 'uttp / >v,v, -» ! oi o ' " 0 1 X \ ! L S ( l K i h d , 1. J O ! . r , u
> f a l s e < / j .0: hasbooleanvalue>
</j .0: variablevalue>
</j
0:outcome>
<. . 0 • n e x t s t e p >
<J . 0. v a r i a b l e v a l u e
<<.
<.
<.
<.
>
.0
.0'
.0
.0
. 0:
step
r d f . a b o u t = ' Ml i p
- M \ * \ <u> h I U " I I J ( ( - I I ) M ' |I
hascall
rdf. r e s o u r c e =
lili
1 -' i - i t i p i t
i1m>' < " , f i n
outcome>
variablevalue
rdf.about= Imp
v w
• i h h < < \> i - - t 0 '
hasvariable
r d f : r e s o u r c e = ('If
I < • » i t i p u ' <]• ,.l f \ 1,
< j . 0 . h a s b o o l e a n v a l u e r d f : d a t a t y p e = n1n>
\ , M , V ! in ;> J O U l
> f a l s e < / j 0: h a s b o o l e a n v a l u e >
</j .0. variablevalue>
< / j . 0 outcome>
</j .0: step>
< / j 0: n e x t s t e p >
</j
0:step>
</j 0 : hasstep>
<j . 0 h a s s t e p rdf: r e s o u r c e = Irip
</j .0-test>
</rdf:RDF>
» «
>>!<l.
123
u n r ^ t i ^ 1 0-t ( p
< v o l
o
na'
/>
>
/>
u l l f . l n , > 11
..,;>< n' / >
V 1 I '-.'IMI.
>
• i < ' IK U
li . o h
,t,
>
B.3
Door Implementation Knowledge Ontology
<7xml v e r s i o n = ) o 7>
< r d f RDF
\mln^= fill
\ < i >. ) u,> it i 01 i n i
1
xmlii^ p r o t e g e = n i p
> t< < -i 11 "ota < In p i n
I p it
\niln 1 - r d f = M i i
v \ i '
10'1) U2 2 n i l
mi
11^
xmliK imp= f ii
{ t -. u p it in p l d i i n 1 n u n
\1
xuilns x s d = 111 )
T
5
„ 200 i \ \ l l - * l n
\mlu-. r d f s = H i p
\v
> or 2000 I
1 ' •- c u m i '
> mills owl= litl i
i- \
5 u „ 21)1)2 07 o\ 1
\nilns s m u r i =
il<
1 »< i - j n p u
-id t
iidinu o 1
xuilui d o o r o n t o = l i l t
1 t
inpti (In i m 1
xml b i - ( = ' i 1
1 - i ^ in i it do I IT p
11 >
<owl O n t o l o g y r d f a b o u t = 11 1 < 1 < r
i p it <l >< r i i > i 1 >
<owl i m p o r t s r d f r e s o u r c e = 1 i 1<
1 - J - i ii| it i u >1( mi n I i n i 1 / >
<owl i m p o r t s r d f r e s o u r c e s ( i ] (
I s< r inptil d oi
/>
</owl Ontology>
<imp i m p l e m e n t e d G e t t e r M e t h o d rdf 11 )= i )pi ir K , ( l t < i \1 ll > >
<imp n inn r d f d a t a t y p e = l i t i p
\
i > <>' mil \ I ii h i j
i i i»
>is0pened</imp
IHIIIO
</imp implementedGetterMethod>
<imp i m p l e m e n t e d M e t h o d r d f 11)= Opt i M \ 1 ic -^inij It i u nl < \ k h <1 >
<imp h a s C a l l >
<rdf D e s c r i p t i o n rdf about= i i ' i
1^
i pu d < i n\ I \> < s | • n < >\
<imp m v e r s e _ o f _ h a s C a l l r d f r e s o u r c e = l i l t
I -t i - i i »i i doorm p \
O < llkt \ I
111 p t \\r utt (I 1 t)
/>
</rdf Descnption>
</imp hasCall>
<imp iiirni r d f d a t a t y p e s h t i p
>o
Qui \ l ^ V i u - . t n
> P r e s s 0 p e n K e y < / i m p ntiii<>
</imp implementedMethod>
<imp i m p l e m e n t e d S t a t e V a r i a b l e rdf 11)= Opt i s u \ i n
>
<imp ij inu r d f d a t a t y p e s I m p
\ n
\
>n(ii \ t t i i
,,
> 0 p e n < / i m p n HIK>
<imp h a s G e t t e r M e t h o d r d f r e s o u r c e = (i (
I c
i >i
1 o HI p
i Dp i < H,( ttc Ui t 1) d / >
<imp h a s S t a t e V a n a b l e >
<rdf D e s c r i p t i o n rdf about= 1 < 1 <
ii ,m
U>o
t 1 <> n >
<imp i n v e r s e . o f . h a s S t a t e V a r i a b l e r d f r e s o u r c e = l i t
1 <
inu i 1
( ) ,'. 1 S |
1 S \
U
1 t
orin p
/>
</rdf Descnption>
</imp h a s S t a t e V a n a b l e >
</imp implementedStateVariable>
<imp i m p l e m e n t e d C l a s s rdf ll)= 1)> i ( 1 <- >
<imp packageName r d f d a t a t y p e = 1 I
» \
> E l e v a t o r < / i m p packageName>
<imp ii mi r d f d a t a t y p e s I m p
o
20)
lr
\ Ih 1 u
'<-
inn \ \ ' s i i
> D o o r < / i m p n UIK>
</imp implementedClass>
< r d f D e s c r i p t i o n rdf about=
I
<imp m v e r s e - o f . h a s C a l l >
<imp i m p l e m e n t e d M e t h o d r d f 10= < i r\
<imp i JIIH r d f d a t a t y p e s
MI
>PressCloseKey</imp i imO
<imp h a s C a l l r d f r e s o u r c e = t 1
r
</imp implementedMethod>
</imp i n v e r s e . o f _ h a s C a l l >
</rdf Description>
< / r d f RDF>
<'
' 1
1 1
1
H
i pu
1
>
i t - 1n i i u
I > \ I -( ,
ii t
1
1
ID >
i
I
1
124
>
1
'
1
/>
r
c \'
B.4
Door JUnit Test Suite
paokngi u n i t t e s t s ;
impor I j u n i t . f r a m e w o r k . T e s t C a s e ;
i m p o r t E l e v a t o r . Door ;
public
( I J ^ DoorTest
p n bl i< v ojil t e s t O () {
- < .1 r i l i < i 1 - u i i H i
IID'II ini'J i>ii>l)li ii'
evKrirN
- -vMl iH !.,<.< .1
p r o h i t m {t
Door uot = m « Door() ;
assertFalse(
-Op< n, ,|
}
public
TestCase
{
_< O v ( 1 i r I [)!-! 1 1 O'l
.--!<'
IIIKIO-H,.
(I I Ml,) t
,uot.isOpened());
\ mil t e s t 1 () {
' < , i \ < J i i an -1 r i " i i
Ion \\"J
pi <> iK i i
(If)
r t i a l -I t l-ni
t ' I l <> i p t n
i1 o l i b ii 2 t
Door uot = uov Door() ;
a s s e r t F a l s e ( -Oixm< ,1 r. I i
,uot.isOpened());
uot.PressOpenKey();
a s s e r t T r u e ( i - O p t m i 1 h i i IK ' , u o t . isOpened ( ) ) ;
uot.PressCloseKey();
assertFalse(
-()])< lud i - i 1 -< , u o t . isOpened ( ) ) ;
}
}
125
(lu'i(li'>o|nii.
i|oin< n
Appendix C
Code for Using J e n a A P I , OO
J D R E W A P I and POSL
generation
C.l
Reading an OWL File with Jena A P I
The code that demonstrates the use of Jena API for parsing OWL-DL files is shown
below.
public void process ( String modelOntoAddress ) {
tin- modelOntoAddress = modelOntoAddress,
11
I
i i o !i ]
m = ModelFactory createOntologyModel ( OntModelSpec 0WL_DL_HEM ) ,
m read ( input , null ) ,
String rdfns =
Mljj
<\> , i orp ]<*)'> 02 ^1
<1f -ii)M\ n-. ,
1 i < i~
' 1(>I
f
11
it • > i
i <
Extendedlterator classes = m listClasses ( ) filterDrop
p u b l u boolean accept ( Object o ) {
rr ( ui n ( (Resource) o ) isAnon ( ) ,
( nr>v Filter
} } ),
vlnU
( classes hasNext
'e
'<
\
OntClass
u
t
,
( ) ) {
il i ) I
thisClass =
1
i l l s
(OntClass) classes next
( ),
i i ' 11 i
i
Extendedlterator instances = thisClass listlnstances ( ) ,
»bil< ( instances hasNext ( ) ) {
Individual thislnstance = (Individual) instances next ( ) ,
\ i ] i
I r
i
1
,
Stmtlterator properties = thislnstance listProperties ( ) ,
String stateVariableMame =
,
booh in stateVariablelnitValue
=i,i]-\ mi( ( properties hasNext ( ) ) {
Statement thisProperty = (Statement) properties next ( ) ,
RDFNode object = thisProperty getObject ( ) ,
Property predicate = thisProperty getPredicate ( )
String predicateURI = predicate getURI ( ) ,
i I ( predicateURI startsWith ( rdfns ) ) {
i out i n ue ,
}
if
( ( object
H I M i n n of L i t e r a l
126
) )
<
In t
( ) {
String objValue = ( (Literal) object ) getValue ( ) toString
).
String datatypeURI = ( (Literal) object ) getDatatypeURI ( ) ,
String datatype =
,
tr\ {
datatype = datatypeURI split (
) [1 ] ,
}
<<il(h ( E x c e p t i o n e ) {
e printStackTrace ( ) ,
}
if
( l datatype equals (
> \ '
d 11
I
ll ( datatype equals ( ,ul
i 1-c il ( d a t a t y p e e q u a l s (
ol
(K(
i< ( (
1
) ) {
> <>
i lei
ipi '
l'<i
t v }x ^
r
) ) {
M i
p o< < -^^ <»l>n< }
) ) {
p < i <*<-
,
if ( d a t a t y p e
}
equals
(
moli ui
)) {
P< " ' - -
oi)ii
}
}
c I s<
if ( o b j e c t i n st an ft of R e s o u r c e
n ()\\1 pi i pc r
}
) {
pi f < -^
b(M
< li < i
}
}
11
i
- i I
Extendedlterator subclasses = thisClass listSubClasses ( ) ,
u. h ill ( subclasses hasNext ( ) ) {
pi<Ki
tltt-S jl«c! i-.~OntClass thisSubclass = (Outclass) subclasses next ( ) ,
}
}
11
i
ill
p i o p s I ( 11 s
Extendedlterator allOntProperties = m listAHOntProperties ( ) ,
tin ( , allOntProperties hasNext ( ) , ) {
OntProperty ontProperty = (OntProperty) allOntProperties next ( ) ,
if ( ontProperty isAnon ( ) ) { <niiumi( , }
1 ( ontProperty getDomain ( ) '= null ) {
OntResource domainRes = ( (OntResource) ( ontProperty getDomain ( ) )
).
il ( l domainRes isAnon ( ) ) {
}
I Nl'
{
it
IK 1
1 1
( domainRes
I
! h ! >U-
l
< I
i
isClass
( ) ) {
'
<o
u p p ri o« >< i (
><
I
< (If ->
Extendedlterator anonClassIter = null,
>I ( domainRes asClass ( ) isUnionClass ( ) ) {
anonClassIter = domainRes asClass ( ) asUnionClass ( ) <—»
listOperands ( ) ,
}
}
}
}
il ( ontProperty getRange ( ) '= null ) {
OntResource rangeRes = ( (OntResource) ( ontProperty getRange ( ) ) )
}i' ( ' rangeRes isAnon ( ) ) {
<
<{
( rangeRes isClass ( ) ) {
is
Extendedlterator anonClassIter = mil
|f ( rangeRes asClass ( ) isUnionClass ( ) ) {
anonClassIter = rangeRes asClass ( ) asUnionClass ( )
listOperands ( ) ,
}
(
127
pi
p( 1
Extendedlterator subproperties = ontProperty listSubProperties
for ( , subproperties hasNext ( ) , ) {
ur ><•
MI>P*OI)'I
OntProperty subProperty = (OntProperty) subproperties next
( line ) ,
( ),
}
C.2
Writing a POSL File
Some of the methods of the LPWriter class for writing the POSL file are shown
below.
U>
I In
public \oi«l addlndividual ( S t r i n g clss , S t r i n g md) {
w n t e ( c l s s toLowerCase () +
+ u d toLowerCase () +
}
ikl
I
I
id 1 < s i ) j)
\
i
<}
i
i
i
e I II i I
n ),
pr< p i t
i •
i os
* ii i
public \ 11111 addlndividualDataPropertyString (String predicate, String u d , String •
val) {
write(predicate toLowerCase() +
+ ind toLowerCase() +
+ val +
^l l in ,
i) ) ,
}
idtl n i n n , i pi >p i \
I j i u i ul i d>i< I
public \oid addlndividualDataPropertylnteger (String predicate, String ind,
val) {
write (predicate toLowerCase () +
+ m d toLowerCase () +
+ val +
1
<
i
>
"
ii <
String
) i
}
14 ,i no i i m i! i
Jn
f oi in
n 1 u1i i
public W I K I addlndividualDataPropertyBoolean (String predicate, String ind, String
val) {
write(predicate toLowerCase() +
< + ind toLowerCase() +
+ val +
o4o
» ).
}
44
n obi' f
> >' f
^ >' K >>\
i 1 ^n i i
public \ H K I addlndividualObjectProperty (String predicate, String m d , String <—>
object) {
write(predicate toLowerCase() +
+ m d toLowerCase() +
+ object +
i
).
}
• u4 - 11 I
i
i 1 i
pit )'tf
old addSubclass (String parentClass , String childClass) {
write(parentClass toLowerCase() +
'
+ childClass toLowerCase()
+
' ),
}
i I •> h i
'
i
pu tin
old addSubproperty ( S t x m g parentProp , String ciildProp) {
write(parentProp toLowerCase() +
>
+ childProp toLowerCase() +
).
}
128
|
C.3
Creating the OWL Test suite with J e n a A P I
Some of the OWLTestSuiteWriter that use Jena API for creating OWL Test Suite
and writing it to file are shown below
piivati \oul createTestsuite () {
m = ModelFactory createOntologyModel ( OntModelSpec 0WL_DL_MEM ) ,
Ontology ont = m createOntology ( ns ) ,
ont addlmport ( m createResource (
fill
I <i
in out
(
u < o
) ),
ont addlmport ( m createResource ( modelOntologyURI substring ( 0 , <—'
modelOntologyURI length ( )-l ) ) ) ,
ont addlmport ( m createResource (
lil<
I <- r n i I M < (
n < In i' > I )<
).
saveTestSuite
( ),
}
publii \ oid loadTestSuite ( ) {
m = ModelFactory createOntologyModel ( OntModelSpec 0WL_DL_MEM ) ,
Buf f eredReader testSuiteReader =
new Buf f eredReader ( nc \< FileReader
testSuiteOWLAddress ) ) ,
m read ( testSuiteReader , null ) ,
testSuiteReader close ( ) ,
( «—>
}
publu
Individual
createTestlndividual
( String name ) {
OntClass c = m getOntClass ( TestSuiteTBoxURI +
i
Individual 1 = m createlndividual ( ns + name , c ) ,
)
r( t b i n 1,
}
p u b h < Individual createStepIndividual ( String name, Individual
OntClass c = m getOntClass ( TestSuiteTBoxURI +
t )
),
Individual 1 = m createlndividual ( ns + name , c ) ,
test addProperty ( m getProperty ( TestSuiteTBoxURI +
h s
11
Hi
f>iibli<
i l l
M
I i
i( JI
1
i 1
id
ll ( 1
t l
irn
c > 1
I
I
'
l
f 1
I i
n i
l
Ii
'l j<-
| ii |
C
<
i
*-1
test
>
) {
) , 1 ) ,
I
i ( u( t
i1-
'
U
i
I I r |
I
l H
1 ,
Individual
createVariableValuelndividual
( String name, Individual
step*
){
OntClass c = m getOntClass ( TestSuiteTBoxURI +
i I
Individual l = m createlndividual ( ns + name
c )
Property p = m getProperty ( TestSuiteTBoxURI +
step addProperty ( p
i )
r I i n I
\
i
)
)
}
(
(ii
setStateVariable
( Individual
variableValue
String
variableName
{
variableValue addProperty ( m getProperty ( TestSuiteTBoxURI +
I i
i
<
) , m
getlndividual ( modelOntologyURI + variableName ) )
}
129
)<
p u b l i c void s e t S t a t e V a r i a b l e V a l u e ( I n d i v i d u a l v a r i a b l e V a l u e , b o o l e a n
{
variableValue.addLiteral ( m.getProperty ( TestSuiteTBoxURI +
lia-booU diiv .1 In' ) , value );
value
)
}
public void setNextStep ( Individual preStep , Individual nextStep ) {
preStep . addProperty ( m. getProperty ( TestSuiteTBoxURI + ' n<*l-~l.|>' ) , <—
nextStep );
}
publn void setTransition ( Individual step, String transitionName ) {
Individual i = m.getlndividual ( modelOntologyURI + transitionName );
step. addProperty ( m. getProperty ( TestSuiteTBoxURI +
h<<.-<< II ) , 1 ) ;
}
pubht \ oid saveTestSuite ( ) {
fop = ncv FileOutputStream ( myFile );
m write ( fop , 'RD! \MI U1BK1 \ );
fop . close ( ) ;
}
C.4
Using OO J D R E W for Reasoning
A code that demonstrates how OO JDREW is used for parsing a knowledge base
and a query, and reasoning is shown below.
POSLParser poslParser = new POSLParser ( );
BackwardReasoner
reasoner = now BackwardReasoner ( );
DefmiteClause
query;
P >r *-i
; in
<T
o
v
< o 'i
kin
t ' ' i
poslParser parseDCFile ( knowledgeBaseFileAddress );
P i -l
1 U)ilM,l
poslParser
•u
)<
kilo" 1' 11 '<
loadClauses
I "-
:
Ml
parseDefmiteClauses
lit
reasoner
'111
v
l|ll
-il(
(coverageCriterionRuleStnng
' i -1 < 'id i u \ i ' n;(
( poslParser.iterator
,
t-J
( .>
. 1 < ' ( i ' U
query = poslParser.parseQueryStnng
I
111
'>
I
l' v
If
< id 'ion
I
ij
n ii'ij ' !
( ) ) ;
' I
( queryString );
I
Iterator < BackwardReasoner GoalList > resultlter = reasoner
iterativeDepthFirstSolutionlterator ( query ) ,
' i i
I|I
);
i i -
111
ior ( , resultlter hasNext ( ) , )
{
BackwardReasoner.GoalList gl = resultlter next ( ) ,
[ • >
{
outr write (
i + gl toStnng ( ) ) ;
}
i iwli (IOException e)
{
e printStackTrace ( );
}
130
^
•• ' i,i' k>' i "' i b*'
\1
131
Appendix D
Traffic Light Example
D.l
Traffic Light State Machine OWL Ontology
ABox
<7xml v e r s i o n =
! 0 7>
< r d f RDF
xmlns rdf= I m p
M
i o n 1°'))
)2
22
m f
xmlns owl= 1u >
*v
»
' <> „ - 0 u 2 ( i - o x 1
M l
1 11
xmlns j 0 = 1 (
I -(
i pin - r'i
xmlns=
! 1 f.
I -< r
s n > i '< t \ ! I i 1 > 1 1
V
-v nt I
(
11-
v!
xmlns xsd= 1', .
xmlns r d f s =
11 i p
xml b a s e =
lili
I
u% \ i .>>*, 2 0 0 1 \ \ 1 1 V l < 11
i 1 •»( i ii i '
.i
{ or 21'Hi n
< i - m p m (i
il
' I 111 o \
>
n pu
r 1 f 1 c' J f j l t
<owl Ontology rdf about=
fn
1 -' <owl imports
rdf resource^
lilt
! -t 1
111') l l
-1 1
II) If
</owl
0ntology>
< j
0
<j
0 li.iiiH
s t a t e V a r i a b l e
rdf
rdf
ID=
datatype=
IOI.I
»b
h l ' p
ilk
>
)R
' J J lit
1 / >
>
w n
A 5 oi
\ is.
\
_0ii' \ M
1 ^< l l . 1 I W M 1 1 1 !
1
'
>road2Blmk</j
0 niniO
</j
0
stateVariable>
<j
0
Call
rdf
ID=
M I t i . i 12 >
< j 0 iidiiu r d f d a t a t y p e =
>senseRoad2</j
0 ndim >
</j
<j
<j
0
lulp
rdf
rdf
ID= on
datatype=
>longTimeInterval</j
</j
0 Call>
<j 0 State rdf
<J
0 0ut>
ID=
0
'
1 -< ll' 1 I t
11
^
{ oi
\
1 - ll. 1 *
M l '
2ufn
inirn>
>
o n i l 1)1 J I K - ' I
Transition
Action>
<3
<j
0
0
Behaviour rdf ID= - 1 .
>
Behaviour.desc
rdf
datatype=
ID=
l!
' >il
1 1 )'|T 1
I
>
1
>
\
1
- In
\
1 -
1
.1
I
r m , < / j 0 Behaviour_desc>
Behaviour>
Action>
<j
0
Event
<j
0
From
rdf
rdf
resource=
resource=
< j 0 To r d f r e s o u r c e =
</j
0 Transition>
i'
resource=
i
<j
resource=
!
Out r d f
i
1
1
t 1
'
'
<
1
1
1 i'
</j
0 0ut>
<j 0 Out r d f
0
>
i
v
0
0
0
rdf
i i 1 t
blip
<j
<j
</j
\
Call>
0 Call
0 l. UPC
>LTI=
</j
0
> Oi<_ 2 U 0
(
I |
i
' r-
I
t r-
1
f
til
i
M l '
I ] « l l OS i
I
f1 ( '1 It O I
132
1 i 1
II
f
'
d
I O U !
h
'
I
/>
1 ! 1 < 1 1
,
1
l l
s.
k 1 O
11 r. 1
I
1
M l ) "
I '< 1
• />
< />
1 k
/ >
... />
< j 0 I n r d f r e s o u r c e = 1 111
I ~i i
n pu
i n i f I ' i , li I o i l r u n
I li .i i ' < r o w1 l i n k
/>
<J 0 I n >
< j 0 T r a n s i t i o n r d f ID= I O H I I M I M I
>"• n l l h l n i
>
< j 0 To r d f r e s o u r c e = f i 1 <
I -in
n pu
< a t I u i ^]i t v\ 1 i o ill J b u n n-t u < / >
<j 0 Event>
< j 0 C a l l r d f ID= r o > 4 ' p< <U M i i > i >
< j 0 i.<imc r d f d a t a t y p e = In I p
v^
{ o r , i o n ] W I H I U I V -i n p
> r o a d l P e d e s t n a n < / j 0 iwiuO
</j 0 Call>
</j 0 Event>
< j 0 From>
<J 0 S t a t e r d f I D = r o i i l h ' f < r i - . l i ' <
>
< j 0 Out>
< j 0 T r a n s i t i o n r d f I D = r o i ( t ' » r < ( n h i r o i d * re ( n >
< j 0 To r d f r e s o u r c e = 1 i K
I •w I i pu
r 11 f i ( l li t o « 1 * o i a l s i u i
I < i />
< j 0 A c t i o n rdf r e s o u r c e = i I (
I < I - i tpi I I t 11 11 < 11 <> JI i i I - ( 1' l / >
< j 0 From r d f r e s o u r c e = I i I <
I MI» niml
t r <i I (i < 11 p li I A 1 . ( u l l ^ i ' i h ' i i / >
< j 0 E v e n t r d f r e s o u r c e ^ 'i ' <
( ^{ i
i pui
I "" t u ' i {Ji ( o 1 1 > i„t m i hi i i 11 / >
</j 0 Transition>
< / j 0 Out>
<J 0 I n >
< j 0 T r a n s i t i o n r d f I D = i i ) \2\ c l\< \ i o r > id 1i j c t n >
< j 0 From>
< j 0 S t a t e rdf ID= ro < > , ) n
t u >
< j 0 Out r d f r e s o u r c e = t •
1 ( i11>u i ' r v 1 >' I
it o I r o srl 2> t 11 o ' o i i I ,
u
/>
< j 0 Out r d f r e s o u r c e = f s
I m i i
i 1 hi
>\\ i i ti N < I 1 \\ [ o I II I
/>
< j 0 Out r d f r e s o u r c e = (i <
I - i
i . 1i I
it
I n i I ^ i 1 » I >r
IK.riiii / >
<J 0 I n >
< j 0 T r a n s i t i o n r d f ID= r , dl
rr
' o n i<12 <M<
>
< j 0 E v e n t r d f r e s o u r c e = 'i i
I <
n pu
i
l i t l ill o 1 i M < i - / >
< j 0 Gaurd>
< j 0 C o n d i t i o n r d f ID= l l i r o i i d i l i u
>
< j 0 G u a r d . d e s c r d f d a t a t y p e = li i p
>
' ol
'())[
V I v in i i u i i
>LTI=triu < / j 0 Guard_desc>
</j 0 Condition>
< / j 0 Gaurd>
< j 0 From>
< j 0 S t a t e rdf ID= n n d ^ u n t
< >
<j 0 In>
< j 0 T r a n s i t i o n r d f ID= i > m i
< < n *( r >id 2 I < u >
< j 0 Event rdf r e s o u r c e =
I I<
I Mr
i MI t
f
i f i
I
i 11 K I r
r
1 />
< j 0 A c t i o n rdf r e s o u r c e =
I<
I f
i i] i l 1 i i f < '
i'
I
/>
(
< j 0 To r d f r e s o u r c e =
i I'
!-.(-r p i
i i l « i It
* l
_ r
JM
( />
< 3 0 From r d f r e s o u r c e = 1 i 11
I -. j — < ii > j t I I << I 1 J i 11 li
\
I ' m 2< ' t u ! i < / >
</j 0 Transition>
<h
0 In>
< j 0 In rdf r e s o u r c e = r i l '
I -t
'f
i r ]j
o
< x!2( r< < n ! i
id ( i '
/>
<j 0 In>
< j 0 T r a n s i t i o n r d f ID= i
II i l l i i i
<l '
t<n >
<j 0 Action>
!
< j 0 B e h a v i o u r r d f ID=
n ^ i i
i >
< j 0 B e h a v i o u r _ d e s c rdf d a t a t y p e =
MM
^ ,.
I 111 \
1 --f ha t
> r o a d 2 G r e e n = t in
roadlYellow=f a l s e , roadlRed=M u < / j 0 Behaviour_desc>
</j 0 Behaviour>
</j 0 Action>
<3 0 To r d f r e s o u r c e =
il
' <
,
f
'
'
/>
< j 0 From>
< j 0 S t a t e r d f ID=
i
Ii
i
>
<j 0 In>
< j 0 T r a n s i t i o n r d f I D = i'
I) >
i
'
I i
>
< j 0 From r d f r e s o u r c e = I i I
i
n
i« i i
Ii !
li i
< K I
c II I < / >
<j 0 Event>
< j 0 C a l l r d f ID=
i II i > i I
>
< j 0 n.iirw r d f d a t a t y p e =
III
i
\> i < I
00 I \
I -v linn i -•
133
>senseRoadl </j 0 uimi>
</j 0 Call>
</j 0 Event>
- ( I I [) U
tT
t 1 I. ' I , i ] I t
l l !
l i U O i l )
J f 11
/ >
<j 0 Gaurd rdf resource= ilf
<j 0 Action>
< j 0 B e h a v i o u r r d f ID= o 1 I «. t I
>
<j 0 Behaviour_desc
1
rdf d a t a t y p e = I n i
i v
', i
ittu \ IS-'I ^I M I i
>LTI=f a l s e , r o a d l Y e l l o w = t l tt< , r o a d l G r e e n = f a l s e , < / j 0 B e h a v i o u r _ d e s c >
</j 0 Behaviour>
</j 0 Action>
< j 0 To r d f r e s o u r c e =
11 i
I ( r ii|ni
1 i f > 11 ' i lit
i I i • cl I v- c 11 i M I > / >
</j 0 Transition>
< / j 0 In>
< j 0 Out r d f r e s o u r c e = f <
I s( rs
i i i"l u 1 j I I oi 1 ro id I *i < l|o\ I < i f xl_< > (c n / >
</j 0 State>
< / j 0 From>
<j 0 Event>
< j 0 C a l l r d f ID= l i i . i u i i i n i i i m ll >
< j 0 narm r d f d a t a t y p e — h i t ]
u\\\ \5 ot
_0u I \ \11 '-•i hu 11~ ti i
>shortTimeInterval</j 0 n imo
</j 0 Call>
</j 0 Event>
</j 0 Transition>
</j 0 In>
<J 0 I n >
O <) _
K i l l
>
<j 0 Transition rdf ID= in
> >lmk
i » u > t o n I 11 t / >
f i f i li i
<j 0 To rdf resource= hl<
1 < < I> i
<j 0 From>
< j 0 S t a t e r d f IDs ro ilJhi n l M i n >
<j 0 In>
<j 0 T r a n s i t i o n rdf IDs
> d-b inkto c d 2 ih1 k >
< j 0 To r d f r e s o u r c e s
i i < I <'
ip i '
ff < i 1 i
\ ' r j nL'I 111
1 .t. / >
I < r
i tt p j t I I I i l I I I l i t
<j 0 From rdf resource= fil
<} -Ml nl t Hi / >
1 <
11 p n
(
It
<j 0 Event rdf resources 'i <
n
i i int. < i
r
j
/ >
<j 0 Action rdf resource^ l<
' M'
I null
I'
I If 1
i
/>
</j 0 Transition>
</j 0 In>
ik ( i i(l„> i i n k
< j 0 Out r d f r e s o u r c e = I j t
I t
iipi t i i t i ( I
it o I r<
<j
<j
/>
0 Out r d f
/>
0 Out rdf
resource=
resource=
!
<
I -t r
i pn
<
I Mt
utpt i
Ii
if I
i o I r
11
I
<i*-\
i
i
i <
11
i
(i
)
i
<l
i d 21 I i i'
/>
, i > \
"-( r<lpt I i I i t I i
< j 0 Out r d f r e s o u r c e s i <
- i 1 -.
1 1 t 11 I „ ' t
I i id2
< j 0 Out r d f r e s o u r c e = I ' i
<
t
(f C ] I t
u d2i
< j 0 In rdf r e s o u r c e s
ill
< j 0 In rdf r e s o u r c e s
j\>
I I
!
t i c 1 i
r
ii 'f
<J 0 I n >
< j 0 T r a n s i t i o n rdf ID= i < ''
i )i i I >l li
>
<j 0 Action>
< j 0 B e h a v i o u r r d f IDs n it i
i itl- >
<j 0 Behaviour_desc
rdf d a t a t y p e s ! t
<
N i l
i
v
>road2Blmks< i t , < / j 0 Behaviour_desc>
</j 0 Behaviour>
</j 0 Action>
< j 0 To rdf r e s o u r c e s
,
I
t i
I
i i
<j 0 Event>
< j 0 C a l l r d f ID=
2
t M II i >
< j 0 ii n i r d f d a t a t y p e s
I '\!
\
>
'in. i \ [ -n h< i \ i
> r o a d 2 P e d e s t n a n < / j 0 ninn>
</j 0 Call>
</j 0 Event>
< j 0 From r d f r e s o u r c e s I n
i < , ii>
- I it i 11 I > h t
</j 0 Transition>
134
I i ) < 21 i n l I
n 11 / >
m i s 1 ii ( t U rt t I / >
1 i 1 I r< u<21> i t k / >
r < I I ' J 2 sill
/>
> <l I
\
i
/>
L
/>
</j 0 In>
</j 0 State>
< / j 0 From>
<j 0 Event>
< j 0 C a l l rdf ID= )li k i i rn i t< r\ j I >
< j 0 n INK rdf d a t a t y p e = lit i p
vw \ i < I
> b l i n k T i i n e I n t e r v a l < / j 0 nirai>
</j 0 Call>
</j 0 Event>
<j 0 Action>
< j 0 B e h a v i o u r r d f ID= s ( pi H i l i h l n k >
<j 0 B e h a v i o u r . d e s c rdf d a t a t y p e =
Imp
\*
i y'liQ i \ I *>< hrim
Mil,
>road2Blmk=false,</j 0 Behaviour.desO
</j 0 Behaviour>
< / j 0 Action>
</j 0 Transition>
</j 0 In>
< j 0 Out r d f r e s o u r c e s f ] (
< j 0 Out r d f r e s o u r c e = [ l<
I (
l put
<j
/>
0 Out r d f
resources
I i<
I. <-( ' S
Hi | HI I
On I \ \ l l "->• lu T 11
1 <
M i l l
I
\ I
I 1 rl I ( I
i1
iii
t i l or o \<U\
t
> 1 ]
I >
< Ho
1 < ( I I O
Kli.
lit
/>
1 i > 1 h I I
1
W
(
it
/>
</j 0 State>
< / j 0 From>
< j 0 To rdf r e s o u r c e s l i l t
I ( •* it put i
( ' t i i t o\
rt 12 }U % I
<j 0 Action>
< j 0 B e h a v i o u r r d f I D s i () > \ < H o \ >
<j 0 Behaviour_desc rdf d a t a t y p e s
1)1 |
\ » i I ]
J)0 I \ 11 -• h( I ) i <- i i
> L T I s f a l s e , r o a d 2 Y e l l o w s ( i in t r o a d 2 G r e e n s f a l s e , < / j 0 B e h a v i o u r _ d e s c >
</j 0 Behaviour>
</j 0 Action>
</j 0 Transition>
< / j 0 In>
< j 0 In rdf r e s o u r c e s
il
I
ipi
I u i hl o '
( < i I >r
< />
'"><)*
/>
</j 0 State>
< / j 0 From>
< j 0 To r d f r e s o u r c e s 111,
1 11 u pn
t
(ru i h t
j | roii
I< < I i t />
< j 0 Event rdf r e s o u r c e s t i
I. -> i
i pu
t i t i ( 11 h i t 1 ' l i > t i i n i i1 i i
i />
<j 0 Action>
< j 0 B e h a v i o u r r d f ID=
><'1
I
i >
<j 0 B e h a v i o u r . d e s c rdf d a t a t y p e s it
j
>o(l \ I I M I n M i
s
> r o a d l G r e e n = l ' ut , r o a d 2 Y e l l o w s f a l s e , r o a d 2 R e d s t r u t < / j 0 B e h a v i o u r _ d e s c >
</j 0 Behaviour>
</j 0 Action>
</j 0 Transition>
</j 0 In>
< j 0 Out r d f r e s o u r c e s
I
1
t i l t
/>
t
< j 0 In rdf r e s o u r c e s
< j 0 Out r d f r e s o u r c e s
<j 0 In>
I ! )
it
< j 0 T r a n s i t i o n rdf ID== r
< j 0 Event r d f r e s o u r c e
< j 0 From rdf r e s o u r c e s
< j 0 To r d f r e s o u r c e s
<j 0 Action>
< j 0 B e h a v i o u r rdf I D s
pi)
i k >
i
<j 0 Behaviour_desc rdf d a t a t y p e s
> r o a d l B l m k = f a l s e < / j 0 Behaviour_desc>
</j 0 Behaviour>
</j 0 Action>
</j 0 Transition>
</j 0 In>
< j 0 Out r d f r e s o u r c e s
1
f
i
it
I
iI
I
<
I
i
t I
I
'
I
I
/>
l
>
> />
/>
f
135
( II 1
t 1
I 1 *) t K
/>
/>
/>
<j . 0:In>
< j . O T r a n s i t i o n r d f : ID=
t a n i <>< i>< •' , i ' « i i >
<j . 0' A c t i o n >
< j . 0 : B e h a v i o u r rdf:ID= ir.T >
< j . 0 : B e h a v i o u r . d e s c rdf : d a t a t y p e = J i t p
w\ \, t'3 or<t /U()i \ M I ^ 1 • .n .-- 1 i i n « *
>LTI=f a l s e ;
roadlGreen=1rue;road2Green=false;roadiYellow=false;road2Yellow=false
roadlBlink=f alse;road2Blink=false;</j . 0:Behaviour.deso
</j .0 : Behaviour>
</j .0: Action>
< j . 0: From>
< j . 0 ' S t a r t S t a t e r d f : ID= < t w t < - i , i t < >
< j . 0 . 0 u t rdf : r e s o u r c e = f . U
(-<!i.iihi b«i!( iu'lt-i in lnHojdKinoii
/>
</j .0 : StartState>
< / j . 0 : From>
<j .0- Event>
< j . 0 : C a l l r d f : ID= t w ">
< j . 0 • n a m i r d f : d a t a t y p e = I m p ' vo >• i i ( i j 2 0 0 1 \ \ i [ H l i ' i . u i ,
tiiii«,
>new</j . 0:naiiH>
</j .0:Call>
</j
0:Event>
< j . 0 : To r d f . r e s o u r c e = ' II ! <
1 -or •- i n|>ui I i a (f i < 11 < li I i m l . ' i i u i 1 » i i » i ) ' i , n
/>
</j .0: Transition>
</j .0:ln>
< j . 0 : Out r d f : r e s o u r c e = f i , < , I s r i s t i , i f t i t l . j i i i l O P 1 j r o < u l I G r u u T o K i <1 I ( . r ( < 'i / >
</j .0: State>
< / j . 0 : From>
<j
0:Action>
< j . 0 : B e h a v i o u r r d f : I D = • i a • t r o a d I l>l i n k >
< j . 0 : B e h a v i o u r . d e s c rdf : d a t a t y p e = h l i p
, \w v,i <>i$> _'()()' \ \ U Si 1 tm.i."- 1 r i n>>
>roadlBlink=l rup ; < / j . 0 :Behaviour.de s o
</j . 0.Behaviour>
< / j . 0- A c t i o n >
< / j . 0: T r a n s i t i o n >
</j.0:In>
</j .0: State>
< j 0 : s t a t e V a r i a b l e r d f : I D = i .x'd 1 < i c. * n >
< j . 0 : i i a m < r d f . d a t a t y p e = 111 I p
v " » *•.! o r
200 1 W I - C I K I I ' * - n n »
> r o a d l G r e e n < / j . 0 : naiitO
</j .0: stateVariable>
< j . O s t a t e V a r i a b l e r d f : ID= j o i o l b ! i>" >
< j . 0 : nam<> r d f : d a t a t y p e = l n i | i ' v \ w \ i iii ;• „'0ul \ \ 1 s, lici-ia- - n m >roadlBlink</j .0 : iwmO
</j .0 : stateVariable>
< j . 0 ' s t a t e V a r i a b l e r d f • I D = r o . d 2 . , i ( M, >
< j . 0 : i I n i rir' r d f d a t a t y p e = I m p
»', u< v\ t o . 20u l W i l ^ h n u . i'i,
> r o a d 2 G r e e n < / j . 0:hdii)<>>
</j 0: stateVariable>
<J . 0 : C a l l r d f ID= < M> U >
< j 0 iiaitu r d f : d a t a t y p e = I n l p
' v v •. > < c 2 n 0 ' \ * i -(In m j - M I I
> d e l e t e < / j . O nam<>
</j
0:Call>
<j .0.stateVariable rdf.ID= I I
>
< j 0-naiiM r d f • d a t a t y p e = I m p
>>
i ', '>r
/n>ii W h d m i i - r e v
> L T K / j . 0 ' narm>
</j 0: stateVariable>
< j . 0 : s t a t e V a r i a b l e r d f . I D = <>, 11 r ^ • >
< j 0 : n a i n r r d f d a t a t y p e = Im<>
*'• ^
oi<
*0i • \ ! "v a< <> — - > a
> r o a d l R e d < / j 0:naii><>
</j .0- s t a t e V a r i a b l e >
< j . 0 • S t a t e M a c h m e r d f • ID=
' ! ' K i i' >
< j 0 • S t a t e . V a r i a b l e s rdf r e s o u r c e = ',
'
*' t ' < 11 ' . •<
>-i<. M "
/>
< j 0 . S t a t e s r d f r e s o u r c e = > ](
l -< '
"> I I > i
<j
0.State_Variables>
< j 0 : s t a t e V a r i a b l e r d f : ID= it, K' J\ • II .'
>
< j 0 . ii.imo r d f d a t a t y p e = m t p
P>'\ v > <i
_noi
> r o a d 2 Y e l l o w < / 3 0'iiam<>
136
* I «1
>'
<< '
\ v 1 *•. In r , v ,
>
i.a
*' i >!
< '
/>
</j 0 stateVariable>
</j 0 State_Variables>
<j 0 State_Variables rdf
resources
(il<
t
i
in >ui
t i i !I1< 1 „ h t
/>
<j
<j
i up m 1 i
0 Transitions rdf resource= h U
I <'
r ) (II > i n I t or t id 1 bl i n I / >
0 S t a t e _ V a n a b l e s r d f r e s o u r c e s f i ] ( I n - input
I
ff
K
11 c h i
I r i fill
n 1 i o i 1 I <-,] t )
\\!
h
i I toulibl i
/>
<j
<j
0 S t a t e s r d f r e s o u r c e s 1 It
t ( i - i i pi I I r i ( I < I < n I o \ 1 s u I ^ I i l ( / >
0 Transitions rdf resources [ ^
I ,i
i pn
i f f j ( t it
1
r 1(12,.. i ' tit o r o H12,, i n / >
<j 0 States>
< j 0 F m a l S t a t e r d f I D s t u il
Hi >
<j 0 In>
< j 0 T r a n s i t i o n r d f ID= n u l 2 \ f I n n i r i ' i i il >
< j 0 F r o m r d f r e s o u r c e s ( i)
I sf t1. in p 11 t t l ff it 1 i h i o 1 o ' 2 ( 1 >\\ t
( />
<j 0 Event r d f r e s o u r c e s
i ,
\ d
i put
I >f' i ( 11 h i
I d< h t / >
< j 0 To r d f r e s o u r c e s c, } ,
( - j^ i u j
t, H e i g h t
(
( u i1 t I t < / >
</j 0 Transition>
</j 0 In>
lit
\ i o i< H I u l It n 11 / >
1 ( r
I i ft i 1
<j 0 I n r d f r e s o u r c e s I
hi t \
t o ) 1 l\ II n * I t fi i 11 / >
<j 0 In r d f r e s o u r c e s i
I
t t i I fie I
<j 0 In r d f r e s o u r c e s f
It
\ i « 1 < t ( u t fin 1 / >
I - ( <- t H i
<j 0 I n r d f r e s o u r c e s
hf ( v I
u 1 _ P i 11 1 f t i il / >
\ i.
t t t if I
<j 0 I n r d f r e s o u r c e s
] t O
r (11 H' ill 1 Of nil />
t -•< - t 111 (.
</j 0 FinalState>
</]
0 States>
<j 0 Transitions r d f resources I (
\
( 1
! pil
t i l
1 I
^
I
12 II
t i l
(<!)/>
I ~f I <. [ 1
1 f 1 1 1
<j 0 T r a n s i t i o n s r d f r e s o u r c e s h ' (
i t i l . f If u t ( I I / >
< j 0 S t a t e s r d f r e s o u r c e s t I,
1 (
i p 111
ri fi 1
\ I r > \ i 1 t si t ( / >
<j 0 S t a t e _ V a r i a b l e s >
<j 0 s t a t e V a r i a b l e r d f IDs r i j l \ t l l o \
>
< j 0 IHIIK r d f d a t a t y p e s I n t p
» l r 2n01 \ \ I I - H I K M I
ri
>roadlYellow</j 0 n tinO
</j 0 stateVariable>
</j 0 State_Variables>
<j 0 S t a t e _ V a n a b l e s rdf resources til
1 t
mint ti i ft 1
whit
/>
<j 0 Transitions rdf resources li t
1 (r
i p n t i i l if i „ t
I
r
I I i ( i ( c (1 U I y / >
<j 0 States r d f resources f 1
\ - i
input
tr
1(1
( I t <I2 t( < i
Mi
/ >
<j 0 Transitions r d f resources |
t
t u t
1 n '
n pti
if
1 jit n k n i J I < it / >
<j 0 S t a t e s r d f r e s o u r c e s t |<
I w r^. i i p i I t i
lift,,
'
<i I i ) I
< />
<j 0 Transitions rdf resources (
t -<
i pti
t H 1 „U t
V '
)
( 2 ih 11 1 ( id > r< ( II />
<j 0 States rdf resources
1
1t
ipi \
i
t
I 1 . 1
U( / >
11
<j 0 State_Variables rdf resources fiJ
(i
i
I I II
/>
<j 0 Transitions rdf resources 1 (
i
1(11
( id _ ( i />
<j 0 States rdf resources
i
I
<j 0 Transitions rdf resources |
r
(1 ) i 11
1 ' ink />
<j 0 Transitions rdf resources
i I I I i />
<j 0 Transitions rdf resources I
I
ft
1 I hi />
<j 0 Transitions rdf resources
i
o
i
i I
o
>
v
i (
11
I
p t r
i p
i
i
I
i i
(
I
i on
(
1 <
i> i
/>
<j 0 Transitions rdf resources
i
i i
It />
<j 0 Transitions rdf resources
i
I
i
I
<j 0 S t a t e _ V a n a b l e s >
<j 0 stateVariable rdf
/ >
IDs
tl
|_t< j >
137
( I
It
( I t
1
/>
< j . 0 : ij.iiiip rdf : d a t a t y p e = ' I n 11> / \\v,u v\ 5 oi;> 20U I, WiLSi hrm<i''<-1 i i n ',
>road2Red</j . 0:rumu>
< / j .0: stateVariable>
</j .0: State_Variables>
<j . 0 : State_Variables
rdf:resource=
file
{ -.( r^ i n p i n
t i ,i! 11< 1, ;>, h i '.\.l,- r oa<] 1 r ( d / >
</j .0: StateMachine>
</rdf :RDF>
<!
Created with Protege (with OWL Plugin 3.3.1, Build 430)
stanford.edu — >
D.2
http://protege. <-»
Test Objectives and Corresponding P a t h s for
t h e All Transition Coverage
Test Obj ective #1: [covertransition], [road2blinktoroad2blink]
Test #1: starttoroadlgreen — roadlgreentoroadlgreen — roadlgreentoroadiyellow *•
— roadlyellowtoroad2green — road2greentoroad2green — <—•>
road2greentoroad2blink — road2blinktoroad2blink — road2blinktoroad2green — <road2greentoroad2yellow — road2yellowtofinal
Test Obj ective #2: [covertransition], [roadlgreentoroadlblink]
Test #2: starttoroadlgreen — roadlgreentoroadlgreen — roadlgreentoroadlblink —<roadlblinktoroadigreen — roadlgreentoroadiyellow — *-J
roadlyellowtoroad2green — road2greentoroad2green — road2greentoroad2yellow<— road2yellowtofinal
Test Objective #3: [covertransition], [road2greentoroad2green] Already satisfied.
Test Objective #4: [covertransition], [road2greentoroad2blink] Already satisfied.
Test Objective #5: [covertransition], [road2yellowtoroadlgreen]
Test #3: starttoroadlgreen — roadlgreentoroadlgreen — roadlgreentoroadiyellow *
— roadlyellowtoroad2green — road2greentoroad2green — <—»
road2greentoroad2yellow — road2yellowtoroadlgreen — roadlgreentoroadlgreen*— roadlgreentoroadiyellow — roadlyellowtoroad2green — «-j
road2greentoroad2green — road2greentoroad2yellow — road2yellowtofinal
Test Obj ective #6: [covertransition], [roadlblinktoroadigreen]: Already satisfied,
Test Obj ective #7: [covertransition], [road2greentoroad2yellow]: Already satisfied*
Test
Test
Test
Test
Obj ectiv #8: [covertransition], [road2blinktoroad2green]: Already satisfied.
Obj ectiv #9: [covertransition], [road2yellowtofinal]: Already satisfied.
Obj ectiv e #10: [covertransition] [starttoroadlgreen]: Already satisfied.
Obj ectiv #11: [covertransition] [ roadlyellowtoroad2green ] : Already <->
sati sf ied
Test Obj ectiv e #12: [covertransition], [roadlblinktoroadlblink]
Test #4: starttoroadlgreen — roadlgreentoroadlblink — roadlblinktoroadlblink —*
road lblinktoroadlgreen — roadlgreentoroadiyellow — <—>
roadl yellowtoroad2green — road2greentoroad2green — road2greentoroad2yellow*
— ro ad2yellowtofinal
Test Obj ectiv #13' [covertransition], [roadlgreentoroadlgreen]: Already satisfied*
Test Objective #14: [covertransition]
satisfied
[roadlgreentoroadiyellow]. Already
138
D.3
Test Objectives and Corresponding P a t h s for
the All Transition Pair Coverage
Test Objective #1 [immediate], [road2blmktoroad2green , road2greentoroad2blmk ]
Test #1
starttoroadlgreen — roadlgreentoroadlgreen — roadlgreentoroadlyellow*
— roadlyellowtoroad2green — road2greentoroad2blink — <—>
road2blinktoroad2blink — road2blinktoroad2green — road2greentoroad2blink —*
road2blinktoroad2green — road2greentoroad2yellow — road2yellowtofinal
Test Objective # 2 [immediate], [roadlyellowtoroad2green, road2greentoroad2blink ] •
Already satisfied
Test Objective # 3 [immediate], [starttoroadlgreen , roadlgreentoroadlyellow] No
plan found
Test Objective # 4 [immediate], [roadlgreentoroadlgreen, roadlgreentoroadlblmk]
Test # 2 starttoroadlgreen - • roadlgreentoroadlgreen — roadlgreentoroadlblmk — <
roadlblmktoroadlgreen — roadlgreentoroadlyellow — <—*
roadlyellowtoroad2green — road2greentoroad2green — road2greentoroad2yellow*
— road2yellowtof m a l
Test Objective #5 [immediate], [roadlblmktoroadlblmk , roadlblmktoroadlgreen]
Test # 3 starttoroadlgreen — roadlgreentoroadlblmk — roadlblmktoroadlblmk — <
roadlblmktoroadlgreen — roadlgreentoroadlyellow — *-^>
roadlyellowtoroad2green — road2greentoroad2green — road2greentoroad2yellow*
— road2yellowtofinal
Test Objective # 6 [immediate], [road2greentoroad2yellow, road2yellowtoroadlgreen]
Test # 4 starttoroadlgreen — roadlgreentoroadlgreen — roadlgreentoroadlyellow «
— roadlyellowtoroad2green — road2greentoroad2green — «—»
road2greentoroad2yellow — road2yellowtoroadlgreen — roadlgreentoroadlgreen^
— roadlgreentoroadlyellow — roadlyellowtoroad2green — •->
road2greentoroad2green — road2greentoroad2yellow — road2yellowtofmal
Test Objective # 7 [immediate], [road2greentoroad2yellow , road2yellowtof m a l ] *-*
Already satisfied
Test Objective # 8 [immediate], [road2blinktoroad2green , road2greentoroad2green]
Test #5 starttoroadlgreen — roadlgreentoroadlgreen — roadlgreentoroadlyellow +
— roadlyellowtoroad2green — road2greentoroad2blink — <-»
road2blinktoroad2blmk — road2blmktoroad2green — road2greentoroad2green — *
road2greentoroad2yellow — road2yellowtofmal
Test Objective # 9 [immediate]. [starttoroadlgreen, roadlgreentoroadlblmk]
Already satisfied
Test Objective #10 [immediate] [ road2blmktoroad2green , road2greentoroad2yellow<
] Already satisfied
Test Objective #11 [immediate] [roadlgreentoroadlblmk , roadlblmktoroadlgreen]
Already satisfied
Test Objective #12 [immediate] [ r o a d l g r e e n t o r o a d l y e l l o w , roadlyellowtoroad2green*
] Already satisfied
Test Objective #13 [immediate] [ road2greentoroad2green , road2greentoroad2yellow<- J
] Already satisfied
Test Objective #14 [immediate] [ s t a r t t o r o a d l g r e e n , r o a d l g r e e n t o r o a d l g r e e n ] <—»
Already satisfied
Test Objective #15 [immediate] [ road2blmktoroad2blink , r o a d 2 b l m k t o r o a d 2 g r e e n ] <Already satisfied
Test Objective #16 [immediate], [roadlblmktoroadlgreen r o a d l g r e e n t o r o a d l b l m k ]
Test # 6 starttoroadlgreen — roadlgreentoroadlblmk - r o a d l b l m k t o r o a d l g r e e n
roadlgreentoroadlblmk — roadlblinktoroadlgreen — roadlgreentoroadlgreen «•
— roadlgreentoroadlyellow — roadlyellowtoroad2green — ^-^
road2greentoroad2green — road2greentoroad2yellow — road2yellowtofinal
Test Objective #17 [immediate] , [road2greentoroad2green
road2greentoroad2blmk]
Test # 7 starttoroadlgreen — roadlgreentoroadlgreen — roadlgreentoroadlyellow *
— roadlyellowtoroad2green — road2greentoroad2green — «—>
road2greentoroad2blmk — road2blmktoroad2green — road2greentoroad2yellow «•
— road2yellowtofinal
Test Objective #18 [immediate], [ roadlyellowtoroad2green , road2greentoroad2yellow<] No plan found
Test Objective #19 [immediate], [ r o a d l b l m k t o r o a d l g r e e n , r o a d l g r e e n t o r o a d l g r e e n ] *
Already s a t i s f i e d
139
Test Objective #20 [immediate], [road2yellowtoroadlgreen, roadlgreentoroadlblink]
Test # 8
starttoroadlgreen — roadlgreentoroadlgreen — roadlgreentoroadlyellow*
— roadlyellowtoroad2green — road2greentoroad2green — *->
road2greentoroad2yellow — road2yellowtoroadlgreen — roadlgreentoroadlblink*
— roadlblinktoroadlgreen — roadlgreentoroadlgreen — <—>
roadlgreentoroadlyellow — roadlyellowtoroad2green — road2greentoroad2green*
— road2greentoroad2yellow — road2yellowtofinal
Test Objective #21 [immediate] [road2greentoroad2blmk , road2blinktoroad2green ]
Already satisfied
Test Objective #22 [immediate] [road2yellowtoroadlgreen , roadlgreentoroadlgreen*
] Already satisfied
Test Objective #23 [immediate], [roadlgreentoroadlblink, roadlblmktoroadlblmk ]
Already satisfied
Test Objective #24 [immediate
[roadlblinktoroadlgreen, roadlgreentoroadlyellow*—
] Already satisfied
Test Objective #25 [immediate
[roadlgreentoroadlgreen, roadlgreentoroadlyellow*—
] Already satisfied
Test Objective #26 [immediate
[road2greentoroad2blink, road2blinktoroad2blmk ] *
Already satisfied
Test Objective #27 [immediate
[road2yellowtoroadlgreen, roadlgreentoroadlyellow*
] No plan found
Test Objective #28 [immediate
[ roadlyellowtoroad2green , road2greentoroad2green*-^
] Already satisfied
140
Appendix E
Unit Testing Coverage Criteria
from t h e Literature in POSL
1. All Transition Coverage
% All Transition
% Coverage Criteria
coverage([covertransition] ,[?tr]) — transition (?tr)
%Redundancy Rule (covertransition TR)
exist() —test( 7 t), hasstep (7t,7st6P ) , hascall(7t ,TR)
2. All Transition Pair Coverage
% All Transition Pair
% Coverage Criteria
coverage([immediate], [7tl, 7 t2]) — transition ( 7 tl), transition(7t2) , <—>
notEqual( 7 ti, 7 t2) , from(7tl,7state) , to( 7 t2, 7 state)
%Redundancy Rule (immediate,TR1,TR2)
exist() —test( 7 t), hasstep( 7 t, 7 stepl), hasstep( 7 t, 7 step2), hascall(7stepl,
TR1), hascall(7step2, TR2), next(7stepl, 7 step2)
3. Faulty Transition Pair
% Faulty Transition Pair
% Coverage Criteria
coverage([immediate], [7tl, 7 t2]) — transition ( ? tl), transition (7t2 ) , «—>
faulty( 7 t2), notEqual( 7 tl, 7 t2), from( 7 tl, 7 state) , to( 7 t2, 7 state)
%Redundancy Rule (immediate,TR1,FTR)
exist() —test( 7 t), hasstep(7t,7stepl ), hasstep (7t,7step2) hascall (7 stepl , <—>
TR1), hascall(7step2 , FTR), next (7stepl , 7 step2)
4 All Content Dependence Relationship
% All Content Dependence Relationship
% Coverage Criteria
'transition ( 7 tl ) , <—
coverage ([ after—bypass ], [7tl,7t2, 7 blist] )
transition ( 7 t2 ) , tdefine(7tl,7myvar) tuse (7t2 , 7 myvar) , bypass(7*
blist ,7tl , 7 myvar)
141
tdef ine (7t , 7 myvar) - action( 7 t, 7 b), def m e (7b , 7 myvar)
tuse (7t ,7myvar ) — action (7t , 7 b) , use (7b ,7myvar )
bypass (7blist ,7tl , 7 myvar) — transition (7 f irsttransition) , *->
hasf irsttransition (7 sm ,7 f irsttransition) , testalltobypass (7blist ,7tl , 7«-»
myvar,7firsttransition)
testalltobypass ([7 blist |7testtransition] ,7tl ,7myvar ,7testtransition) — «-»
notequal(7tl,7testtransition) , define (7testtransiti on ,7 myvar ) , checknext«-»
(7blist,7tl,7myvar,7testtransition)
testalltobypass (7blist ,7tl ,7myvar ,7 testtransition) — equal (7tl , ' ^
testtransition) , checknext(7blist,7tl,?myvar,7 testtransition)
testalltobypass (7blist ,7ti ,7 myvar ,7 testtransition) — not—define C+->
testtransition,7myvar) , checknext(7blist ,7tl,7myvar,7testtransition)
checknext ([] , 7 tl ,7myvar ,7 testtransition) — 7nexttransition (7 testtransition , +-»
nulltransition)
checknext (7blist ,7tl ,7myvar ,7 testtransition) — 7nexttransition(7<-J
testtransition , 7 nextt) , testalltobypass (7blist ,7ti ,7 myvar ,7 nextt)
false — define(7testtransition,7myvar) , not—define(7testtransition,'myvar)
%Redundancy Rule (after ,TR1,TR2)
exist - test( 7 t), hasstep (7t ,7 stepl) , hasstep (7t ,7 step2 ) , hascall (7 stepl , <->
TR1) , hascall(7step2,TR2) , after-bypass(7stepl ,?step2)
after—bypass(7stepl,7step2) - nextstep(7stepl,7step2)
after— bypass (7stepl ,7 step2) - nextstep (7 stepl ,7 step3) , after-bypass (7 stepS*-^
, 7 step2), hastransition(step3, 7 tr3) notin(7tr3, [Bl, B2, B3,
])
notm( 7 tr3, [7h|7x]) - notequal (7h ,7 tr3) , notm( 7 tr3, [7x])
notm( 7 tr3, [])
2-way Criterion
%2—way Criterion
% Coverage Criteria
7
coverage ([ immediate ] ,[7tl,t2l)
— transition ( 7 tl), transition (7t2) , actions
7
(7tl , 7 bl) , action( 7 t2,b2)
to( trl , stl) <_J
, from(trl,stl) , from(tr2 , stl) ,
to(tr2,stl), behaviour (7bl ) , behaviour(7b2), mdependant (7bl , 7 b2)
mdependant (7bl , 7 b2) — not dependant (7bl,7b2 ) , not dependant (7b2 , 7 bl)
dependant( 7 bl 7 b2) -define( 7 bl ,7myvar) ,use(7b2 ,7 myvar)
%Redundancy Rule (immediate ,TR1 ,TR2)
exist () —test( 7 t), hasstep( 7 t , 7 stepl) , hasste p( 7 t ,7step2) , hascall (7stepl <->
TR1), hascall(7step2, TR2),next( 7 stepl , 7 step2)
Session-Oriented Criterion
%Session—Oriented C r i t e r i o n
% Coverage C r i t e r i a
in ( 7 t r , [ 7 t | 7 t a i l j )
equal ( 7 t a i l , 7 t r )
in ( 7 t r , [ 7 t j 7 t a i l ] )
- in ( 7 t r , 7 t )
nonselftloop([7trs|7tr] , 7st) - out(7st,7tr) , to(7tr
, 7 s t 2 ) , not i n ( 7 t r , 7 t r s ) , n o n s e l f l o o p ( ' t r s , 7 s t )
n o n s e l f l o o p ([] , 7 s t )
selftloop ([?trs | 7 t r ] , 7 s t )
not m ( 7 t r , 7 t r s )
s e l f l o o p ([] , 7 s t )
-
7
s t 2 ) , not
s e l f loop ( 7 t , 7 s t ) , out ( 7 s t , 7 t r ) , t o
142
Equal(7st
(7tr,7st),
%updated variables by a set of tranitions
update (['trs | 7 tr] , [7vl|7v2]) - updatedByOne (7 tr ,7 vl) , update (7trs ,7 v2)
update ([] ,[])
updatedByOne (7tr ,[7v3 | 7 v4 ]) - define(7tr, 7 v4) , not m( 7 v4, 7 v3)
updatedByOne (7t , [] )
%Disjomtness of two sets of variables 7 vl and 7 v2
disjoint ([7vl | v 7 ] , 7 v2) - disjoint (7vl ,7 v2 ) , not m ( 7 v , 7 v 2 )
disjoint ([] , 7 v2)
coverage([immediate ,after], [ 7 tl, 7 t2, 7 t2, 7 t3]) - state( 7 st), nonselfloop(7<^
nstr, 7 st]), self loop(7str ,7 st) , update (7str , 7 vl) , update (7nstr ,7 v2) , <-^
disjoint (7vl , 7 v2) , m ( 7 t l , 7 str) , in(7t2 , 7 nstr) , in( 7 t3, 7 str)
%Redundancy Rule (immediate ,TR1,TR2, after,TR2,TR3)
exist () -test('t) , hasstep (7t ,7 stepl) , hasstep (7t ,7 step2 ) , hascall (7 stepi , <-^
TR1), hascall(7step2, TR2), next(7stepl, 7 step2),
hasstep( 7 t, 7 step3) , hascall(7step3,TR3) , after(7step2,7step3)
after(7stepl , 7 step2) — nextstep(7stepl,7step2)
after(7stepl , 7 step2) — nextstep(7stepl ,7step3) ,after(7step3 ,7step2)
143
7. Full Predicate Coverage
%Full Predicate Coverage
the predicate is in CNF but it is not efficient •—
but it is complete
coverage([FP],'params)
% 7Params is a list of a condition and true/false values
% [conditionl, predicateVal , clauselVal , clause2Val , clause3Val , ]
istestclause(clausel, 7 p) — isclause(7clausel), hasParent(7clausel,7p)
testclausevalue (true)
testclausevalue (false)
testcondition(7c) — condition(7c)
coverage([FP] , [7 cond|7params) —
testcondition(7cond) , istestclause(7thetestclause , 7 cond) , testclausevalue (7<—
tcvalue) ,
getpredicatevalue (7predicatevalue ,7thetestclause ,7tcvalue) , parentof clause (7<thetestclause , 7 andexp) ,
f lr st child (7fch,7andex )
setchildrentrue(7fch, 7clausevalues , ,7tcvalue, 7thetestclause)
getpredicatevalue(false,7ell,true) — isnegated(7ell)
getpredicatevalue(true,7ell,true) — not isnegated(7ell)
getpredicatevalue(true,7ell,false) — isnegated(7ell)
getpredicatevalue(false,7ell,false) — not isnegated(7ell)
getvalue(true,7theclause) —isnegated(7theclause)
getvalue(false,7theclause) —not isnegated(7theclause)
setchildrentrue(7andex, [ 7 tcvlaue| 7 val], ,7tcvalue) —
firstchild (7andex , 7 fch) , istestclause (7fch) , nextsiblmg( 7 fch ,7nch ,7 tcvalue ) <•
, setsiblingtrue(7nch,7val)
setchildrentrue(null, [], ,7tcvalue, 7thetestclause)
setchildrentrue(7ch, [ 7 tcvalue| 7 val], , 7 tcvalue, 7 ch) —
nextsiblmg (7 ch ,7 nch) , setchildrentrue (7nch ,7 val , 7 tcvalue ,7 ch)
Se t next siblmgt rue (7ch , [7 valh | 7 valr ] ,7 tcvalue ,7 thetestclause) —
not Equal(7ch, 7 thetestclause), nextsibling(7ch,7nch) ,
setchildrentrue (7nch,7valr,7 tcvalue,7thetestclause) , getvalue(7valh,7 ch)
%Pick a condition and one of its clauses as the test clause Set the values *
of the test clause to true or false and specify the value of the <—'
condition Set the value of the other clauses in that and—clause to true*lf they are not negated and to false if they are negated
%Because it is in CNF we are done'
%Redundancy Rule
exist ([FP] , [conditionl ,testconditionvalue ,clauselvalue,clause2value,
exist([FP], [7cond, 7predicateval | 7clausevalues])
checkconditionvalue(7 stepO,7 cond,7 predicateval) ,
firstchild(7cond , 7 f c h ) ,
checkclausevalues(stepO, 7 fch, 7clausevalues)
])
—
checkclausevalues(7thisstep, null , [])
cheekekausevalues(7thisstep,7fch,[7 valh,7 clausevalues])
checkvalue(7thisstep,7fch,7valh),nextsibling(7fch,7nch)
checkclausevalues(7thisstep, 7 nch, 7 clausevalues)
—
checkvalue (7 thisstep 7 clausel, 7 value) — hasdesc (7 clausel ,7 descl) , <—>
hasoperator (7descl , s) , hasoperandl (7descl ,7 svl) , hasoutcome (7 thisstep , 7<vvl), hasvalue(?vvl,7vail) , variable(7vvl,7svl) , hasoperand2(7descl ,7 <—>
sv2) , hasoutcome (7thisstep , 7 v v 2 ) , hasvalue (7vv2 ,7 val2 ) , variable (7vv2 , 7 <sv2) ,
Equal (7value,true) , smallerthan (7vail,7val2)
144
% Write rules for < > <= >= =
for 7value=true & 7value=false
checkvalue(7thisstep,7clausel , 7 value) — hasdesc(7clausel , 7 descl) , «—>
hasoperator (7descl , none) , hasliteral (7descl ,7 literall) , hasoutcome (7«—>
thisstep , 7 vv) , variable (7vv,7literall) , hasvalue(7vv,7literalvalue) , «—>
Equal (7value,7literalvalue)
%Take the list of the clauses of a condition and the step that comes before <-J
it and compare the value of the clauses with what it should be
%The negation of the clause is not considered here It is a part of the <—»
expression Check also whether the condition is true or false by «—>
investigating whether the transition , which bears the condition , has «—>
been fired or not
checkconditionvalue(7stepO,7cond,true) — test ( 7 t), hasstep( 7 t, 7 stepO) , <—>
nextstep(7stepO,7stepl) , hascall(7stepl ,7 trl) , hasGuard(7trl,7 cond)
checkconditionvalue (7prestep ,7 cond , false ) — test ( 7 t), hasstep (7t, 7 prestep )•<->
, hascall(7prestep , 7 trl) , to (7 trl ,7 stl) , out (7stl ,7 tr2 ) , hasGuard (7tr2 , 7 ^
cond), nextstep( 7 prestep, 7 otherstep),
has call(7otherstep,7tr2) , notEqual(7trl,7tr2)
145
Vita
Valeh Hosseinzadeh Nasser
Universities attended:
Amirkabir University of Technology, 2002-2007, B.Sc. in Computer Engineering
University of New Brunswick, 2007-2009
Publications:
1. Valeh.H Nasser, Weichang Du, Dawn Maclsaac: Knowledge-based Software
Test Generation, The 21st International Conference on Software Engineering
and Knowledge Engineering SEKE09, Boston, USA July 1 - July 3, 2009.
acceptance rate 38.0%
2. Valeh H. Nasser, Y. Biletskiy: A Comparison of Horn Logic and Description
Logic using the Leveled Criteria based Framework in Semantic Web Perspective, The Second Canadian Semantic Web Working Symposium CSWWS09,
Kelowna, BC, Canada 24 May 2009.
3. Valeh.H Nasser, Weichang Du, Dawn Maclsaac: Ontology-based Unit Testcase Generation, Proceedings of the Sixth Annual Research Exposition 2009
EXPO09, Fredericton, NB, Canada April 08, 2009
http://www.cs.unb.ca/itc/ResearchExpo/Expo2009-proceeding.pdf.
Документ
Категория
Без категории
Просмотров
0
Размер файла
5 932 Кб
Теги
sdewsdweddes
1/--страниц
Пожаловаться на содержимое документа