Supervisory control of time Petri nets using net unfoldings
BY
HAISHENG WANG
M.S., Monash University, Australia, 2005
B. S., Beijing Polytechnic University, China, 1999
THESIS
Submitted as partial fulfillment of the requirements
for the degree of Doctor of Philosophy in Computer Science
in the Graduate College of the
University of Illinois at Chicago, 2010
Chicago, Illinois
ACKNOWLEDGMENTS
I am heartily thankful to all who have aided in one way or another in the writing of this thesis, especially to my supervisor, Professor Ugo Buy, whose encouragement, supervision, and support from the preliminary to the concluding level enabled me to develop an understanding of the subject. This thesis has been improved greatly by the feedback of the committee members, Professor Tadao Murata, Professor Sol Shatz, Professor Prasad Sistla, and Professor Houshang Darabi. It is a pleasure to thank my parents, who gave me moral support. But most of all, a loving thank you to my wife, Ubonsri, and my son, Aiden, for their understanding and gracious support.

Lastly, I offer my regards and blessings to the teachers who ever taught me and the staff who ever assisted me in the Computer Science department. It would have been impossible to complete this thesis without their help.
HW
TABLE OF CONTENTS

CHAPTER
1 INTRODUCTION
2 LITERATURE SURVEY
  2.1 Verification
    2.1.1 Real-time models
  2.2 Supervisory Control
3 DEFINITIONS
  3.1 Petri Nets
  3.2 Time Petri nets
4 STUBBORN UNFOLDING
  4.1 Stubborn Sets
  4.2 Unfoldings
  4.3 Stubborn Unfoldings
    4.3.1 Algorithm Correctness
    4.3.2 Complexity
  4.4 Empirical Results for Stubborn Unfolding
5 TRANSITION LATENCIES
  5.1 Atlantis algorithm
    5.1.1 Correctness of Atlantis
  5.2 Discovery algorithm
  5.3 Complexity of Atlantis and Discovery
  5.4 Net latencies
6 SUPERVISORY CONTROLLER GENERATION
  6.1 Framework for supervisor generation
  6.2 Supervisor definition and application examples
  6.3 Challenger algorithm
    6.3.1 MEA Solver
    6.3.2 Parallelism Solver
  6.4 Simulator
7 CORRECTNESS OF SUPERVISORY CONTROLLER
8 EMPIRICAL RESULTS
9 CONCLUSIONS AND FUTURE RESEARCH
  9.1 Future work
CITED LITERATURE
VITA
LIST OF TABLES

TABLE
I Comparison on the buffer examples
II Data for stubborn unfoldings
III Data for supervisory control test on Corbett examples
IV Data for supervisory control test on Dining Philosophers examples
V Time complexity on Dining Philosophers examples
LIST OF FIGURES

FIGURE
1 Framework for supervisory controller generation
2 A Petri net
3 Petri net example along with its unreduced and reduced state spaces (70)
4 Unfolding of Petri net in Figure 2
5 Algorithm for constructing stubborn unfoldings
6 Stubborn unfolding of Petri net appearing in Figure 2
7 Example illustrating partial unfolding construction
8 Four buffer example in (21)
9 Four buffer unfolding comparison in Figure 8
10 Illustration of precedes-by-match relation
11 Pseudocode of Atlantis algorithm
12 Maximal Td closed subset
13 Pseudocode of Discovery algorithm
14 Example of a time Petri net (left-hand side) and its untimed unfolding (right-hand side)
15 Example of a time Petri net
16 Supervisor subnet for time Petri net in Figure 15
17 Valid cases satisfying the MEA
18 Three cases in which the MEA can be violated
19 Example Petri net (left-hand side) and its unfolding (right-hand side) for MEA solver and Parallelism solver
20 Pseudocode of simulator
21 Illustration of Lemma 9
22 Illustration of Theorem 10
23 Evaluation toolset
LIST OF ABBREVIATIONS

BFS      Breadth First Search
CtlTPN   Control Time Petri Net
CtlPN    Control Petri Net
DEDS     Discrete Event Dynamical Systems
DES      Discrete Event Systems
DFA      Deterministic Finite Automata
DFS      Depth First Search
FSA      Finite State Automata
FSM      Finite State Machine
FMS      Flexible Manufacturing System
LTL      Linear Temporal Logic
LUE      Limited Unfolding Exploration
MEA      Merge Exclusion Assumption
MMCPS    Maximally Marked Concurrent Place Set
MTL      Metric Temporal Logic
PECS     Parallelism Enable Control Set
PN       Petri Net
PNML     Petri Net Markup Language
PEP      Programming Environment based on Petri Nets
RT       Real-Time
SECS     Synchronization Enable Control Set
SU       Stubborn Unfolding
TPN      Time Petri Net
SUMMARY
We define a method for the automatic generation of supervisory controllers that force
a software system to perform a given computation by a given deadline. The computation must be executed within a prespecified delay with respect to the previous execution of the
computation. Although supervisor generation occurs off-line with respect to software system
execution, the resulting controllers automatically take into account variable task durations
in an effort to increase the flexibility of computation schedules in the controlled software
system.
We model the controlled software system as a time Petri net. Our new unfolding al-
gorithm then produces a so-called Petri net unfolding, in order to analyze explicitly the
causal relationships on the execution of transitions in the underlying net. Based on the
unfolding, we generate automatically a controlled model which extends the original net by
the addition of a supervisory controller. The controlled net is guaranteed to fire a target transition with the given frequency. In this setting, control supervisors are used to force a target transition td to fire within δ time units of the previous firing of td. Supervisor generation is based on the concept of transition latency, computed on the unfolding net. The latency of a Petri net transition t is the time interval during which t must be disabled in order for the target transition td to fire by its deadline.
We use two algorithms, Atlantis and Discovery, for computing the latency of each unfolding node. If the time left until the deadline falls below a transition t's latency, firing t may cause the deadline of the target transition to be missed. Therefore, the supervisory controller is defined to disable a transition when its latency is greater than the time left to fire the target transition by its deadline. Our limitation is that the supervisory control problem must conform to a so-called merge exclusion assumption. When this assumption does not hold, we sometimes apply Challenger to generate a dynamic controller. In addition, the dynamic controllers that Challenger generates can increase the amount of parallelism allowed in the supervised net. We have implemented a toolset for all these algorithms and report the empirical results obtained by simulating the execution of the controlled model.
CHAPTER 1
INTRODUCTION
Enforcing deadlines in real-time systems is a difficult problem because these systems
have to perform computations with strict time constraints and often have complex in-
teractions with their environments (44). Due to this environment dependency, a small unpredictable change in the environment can cause a missed deadline, which could lead to serious consequences, including loss of life, if the real-time system is used in safety-critical applications such as aircraft navigation, traffic control, manufacturing process control, and robotics automation. Since it is impossible to control the environment in which a real-time system operates, unpredictable system behavior is inevitable. This research explores the well-known problem of deadline compliance in real-time systems. The only solution is to ensure that, once an unpredictable event occurs, some mechanism is applied so that a severe consequence (e.g., a missed deadline) does not result. One
way to enhance the reliability of real-time systems is to use supervisory control theory. In
general, supervisory control theory is a method to automatically synthesize supervisors.
The supervisors can restrict the system behaviors in order to fulfill the given specifications.
In this particular problem, the critical specification required is the deadline on the execution
of a computation.
Several real-time applications can benefit from the use of supervisory controllers. For example, when designing medical devices with real-time constraints (e.g., life-supporting devices), missing deadlines can lead to loss of life. Some devices, such as hearing aids, allow an interval of acceptable delays (m to n), outside of which device malfunction may result in harm to human organs (56). In this case, the purpose of the supervisory controller is to prevent unacceptable delays from occurring.
Another important example is flight control and navigation systems. This real-time
application is very important because today the sky is quite busy with 50,000 flights each
day (42). The task of preventing these aircraft from colliding with each other (e.g., when
they approach an airport) is becoming more and more challenging (63). In order to prevent
an accident, the movement of the aircraft must be tracked and modeled accurately. If two aircraft are at risk of colliding, those aircraft need to change their direction or altitude to avoid the accident. Applying a controller in such a system means detecting the possibility of a collision and preventing an accident by notifying the aircraft to change their direction.
Supervisory controllers are also widely used for performance monitoring and real-time
control in flexible manufacturing systems (FMS) (30; 84), in which materials can be dynam-
ically routed through available work stations to build a product. Productive use of an FMS relies on real-time resource allocation. Jobs that compete for resources can lead to the situation called "deadlock". A deadlock situation is undesirable because work cannot progress due to a cyclic pattern of resource requests: none of the work stations can complete their tasks. In many cases, the controller or scheduler of these systems introduces deadlocks through incorrect control strategies. A supervisory controller must avoid introducing deadlocks in the controlled system.
Given that more and more critical applications rely heavily on the use of real-time
systems, their reliability is critical. Real-time software must be able to continue its function
even if an unexpected event occurs. If a system cannot continue operating because of an unpredicted order of behaviors, not only could a dangerous consequence occur, but a large amount of time and money would also be lost in restoring the system. Hence a high level of reliability is truly desirable for many real-time applications (9).
To date, the reliability of real-time software has been explored using a variety of approaches. Automatic verification can improve software reliability by checking whether a system satisfies given properties. In general, verification techniques not only face complexity issues, but they are also limited to detecting problems in later stages of software development, where errors cost more time and money to fix than if they were found in earlier stages (57; 78). The interesting question for many researchers is whether a less complex method can be built that enhances the reliability of real-time software at earlier stages of development. For this reason, supervisory control has gained increasing attention from researchers. Supervisory control does not seek to alert a software designer to a possible error; instead, it avoids the error by generating an appropriate supervisor. The most important aspect of generating the supervisory controller is that this can be done more efficiently by static analysis than by verification, which often runs into the state explosion problem.
Testing and debugging are also traditional methods used to improve software reliability.
But testing and debugging are more challenging for real-time software. Testing is difficult
because erroneous patterns may not repeat themselves during testing even if the system is fed the same set of inputs. This also causes trouble during the debugging stage, since it is hard to reproduce the same sequence of events that led to the error. Testing is also difficult due to the "probe effect", an unintended alteration in system behavior caused by the instrumentation needed to check the system. As a result, static analysis can enhance the reliability of real-time applications. With static analysis, the possibility of a deadline violation can be prevented before it happens; thus, in most cases, testing and debugging may not be necessary (18).
The supervisory control method seeks to ensure that a system complies with a correctness specification. In this dissertation, a supervisory controller is automatically generated that disables behaviors violating the following specification: a specific operation, namely a target transition, is required to be executed within a given time period, called a deadline. The system transitions that could cause the deadline to be missed are gradually disabled by the supervisory controller. Supervisor generation requires that
transitions be observable and controllable. A transition is observable if its occurrence can
be detected, based on knowledge of its external outputs. A transition is controllable if it
can be prevented from occurring by the controller.
In this work, we assume that system transitions are both controllable and observable.
This assumption is reasonable for software systems. In other words, the controller has full access to the state of the system being controlled. Thus, the supervisory controller works by
disabling certain controllable events in such a way that behaviors violating a specification
do not happen. Because of this characteristic, supervisory control can be used not only
to solve the scheduling problems in flexible manufacturing, but also to enhance software
reliability, which is considered as the probability that the given operation of a software
system will not fail for a given period of time in a given environment.
When designing a supervisory control algorithm, the real-time system needs to be modeled; then formal methods and techniques can be applied to the model. Discrete Event Systems (DES) are dynamic system models characterized by a discrete state and event-driven transitions (20). In contrast to models with continuous-state and time-driven characteristics, DES models reflect the fact that state changes depend completely on the occurrence of discrete events, asynchronously and over time. From the modeling point of view, time no longer causes state changes; instead, events drive such systems. Petri nets and Finite State Automata (FSA) are two common methods for modeling controlled DES. As formal models of systems, Petri Nets (PN) can efficiently represent concurrent, causal, and conflicting relationships, as well as event synchronization. One main advantage of Petri nets for modeling controlled systems is that Petri nets facilitate supervisor
generation algorithms that are sometimes more tractable than the corresponding methods
for FSAs. The lack of this property in FSAs can limit the potential for developing compu-
tationally efficient algorithms for analysis and supervisor synthesis. A second advantage of
Petri nets is that extensions to the timed domain have already been proposed and studied
extensively (12; 14; 47).
In this dissertation we introduce a method that reflects duration changes in execution
sequences defined off-line. This method provides a conservative exclusion of all execution
sequences that violate a predefined deadline. Our method models the controlled system
as a Time Petri Net (TPN) (53). Controllable software operations (functions, events, or
methods) are denoted by transitions and transition durations are modeled by time intervals.
The time interval of a given transition is a continuous, closed interval with known lower
and upper bounds. An interval is defined in such a way as to include all possible actual
durations of the corresponding operation. When the range of operation durations is known in advance, this information is captured in a Petri-net model, and all possible operations that violate the predefined deadline are disabled. Here we are targeting the class of real-time execution sequence problems in which actual operation durations are updated (as they become available) within a predefined range and the execution sequence algorithm has to modify the current schedule based on up-to-date information. By using time Petri nets to model the controlled system, we can capture all main kinds of system flow patterns (e.g., sequential, choice, and iteration). These two features (real-time execution sequences and general flow patterns) characterize one of the most difficult and computationally complex classes of execution sequence problems (62). To the best of our knowledge, this class of control problems has not yet been explored with Petri-net-based supervisory control theory.
Our method does not seek to define a specific feasible execution sequence; instead, we
seek to prevent all infeasible execution sequences from being selected. We define a feasible
execution sequence to be one that can definitely finish a specific operation within a given
time horizon (deadline). At any given point of time, given a deadline for a target transition,
all system execution sequences can be divided into three disjoint sets: (1) the set of execution
sequences that definitely violate the given deadline, (2) the set of execution sequences that
may possibly violate the deadline, and (3) the set of execution sequences that will definitely
meet the deadline. We view all schedules in the first two sets as infeasible. Although this separation of schedules is somewhat conservative, the framework can be modified so that execution sequences in the second set are permitted by the control supervisors.
The goal of our framework is to guarantee that a given operation t will be scheduled at most δ time units after the beginning of system operation, and at most δ time units after the previous execution of t thereafter. Given a time Petri net modeling system behavior, we synthesize a supervisory controller that enforces these conditions. The generated controller reacts to changes in system state by disabling and re-enabling system transitions, depending on the current state and the time left until the expiration of deadline δ.
The supervisory controllers are based on the notion of transition latency. Let net transition td model the task to be scheduled periodically. The latency of a net transition t ≠ td is an upper bound on the time required for td to fire after t fires. Evidently, a transition t should be allowed to fire only when the latency of t is no greater than the time left until the deadline on the firing of td expires. Thus, the supervisory controllers dynamically disable net transitions when their latency is greater than the time left until the expiration of the deadline on the firing of td. The method to compute the latency is based on a so-called unfolding of the Petri net underlying the controlled system. A net
unfolding is an acyclic Petri net satisfying two useful properties. First, the unfolded net
has the same set of reachable states as the original untimed net. Second, the unfolded net
shows explicitly causal relationships on transition firings that are implicit in the original
net.
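The disabling rule just described can be summarized with a minimal sketch; the class name, the method signature, and the double-valued time representation below are illustrative assumptions, not the thesis implementation.

    // Sketch of the latency-based disabling rule: transition t must be disabled
    // whenever its latency exceeds the time left until the deadline on the
    // target transition td expires.
    public class LatencyGuard {

        // latency:  upper bound on the time needed for td to fire after t fires
        // deadline: required maximum separation between consecutive firings of td
        // elapsed:  time elapsed since the previous firing of td (or since system start)
        static boolean mustDisable(double latency, double deadline, double elapsed) {
            double timeLeft = deadline - elapsed;
            return latency > timeLeft;
        }

        public static void main(String[] args) {
            // Hypothetical numbers: deadline of 10 time units, 6 units already elapsed.
            System.out.println(mustDisable(3.0, 10.0, 6.0));  // false: 3 <= 4, t may still fire
            System.out.println(mustDisable(5.0, 10.0, 6.0));  // true: 5 > 4, t must be disabled
        }
    }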
In general, our dissertation makes five major contributions to the generation of supervisory controllers.
1. We defined a new unfolding algorithm (Stubborn Unfolding) to generate the unfolding
net based on the original time Petri net.
2. We identified conditions that a Petri net must meet in order for supervisor generation
to be successful (the merge exclusion assumption).
3. We defined a new algorithm for computing latencies (Discovery).
4. We relaxed some of the conditions for applicability of the method with the Challenger
algorithm.
5. We implemented our method and conducted empirical evaluations.
The first contribution is important because we obtain a better (smaller) unfolding net for structural analysis and we detect potential deadlocks during unfolding construction. The stubborn unfolding method can generate an unfolding efficiently based on a partial exploration of a Petri net's unfolding (79). We call the resulting construction a stubborn unfolding because we exploit concepts that are part of the stubborn set method during unfolding exploration. The motivation for building a new unfolding algorithm is that
stubborn unfoldings are usually smaller (sometimes much smaller) than the corresponding
full unfoldings as defined by Esparza et al. (28), and all terminal transitions are cutoff transitions. The first point is critical for reducing the size of unfolding nets, because the latency algorithms that follow have linear complexity in the size of the unfolding. The second point is needed because we must discover all the cyclic behaviors in the original net, which the latency algorithms require. Moreover, stubborn unfoldings are never larger than the corresponding full unfoldings. The portion removed, compared with the full unfolding, is called the unfolding redundancy; it is a copy of a subnet of the existing unfolding. Both full unfoldings and the unfolding redundancy will be defined in Chapter 4.
In Figure 1 we summarize the six major steps of our framework for generating deadline-enforcing supervisory controllers.

Figure 1: Framework for supervisory controller generation.
1. Given a time Petri net model, our stubborn unfolding algorithm will unfold the (un-
timed) ordinary Petri net underlying the original time Petri net. This construction
yields an unmarked Petri net, called the unfolding.
2. The Atlantis algorithm identifies the key transitions for enforcing the deadline, and the
Merge Exclusion Assumption (MEA) is checked. In general, violations of the MEA
might lead to deadlocks in the supervised net, if the supervisory controller ignores
them. If the MEA holds, the Atlantis algorithm computes the so-called basic latency
of each node in the unfolding. This basic latency is the maximum delay between the
firing of a given transition and the firing of the target transition, over all the possible
acyclic paths leading to the target transition. If the MEA does not hold, we will
distinguish three types of possible MEA violations. As we will see, the Challenger
algorithm described below can manage case (a) MEA violations. If a case (b) or case (c) MEA violation occurs, supervisor generation is aborted and the analysis is inconclusive.
3. Basic latencies do not reflect so-called synchronization delays. These delays may
occur when an operation is ready to synchronize with another operation, but the
second operation is not ready yet. The Discovery algorithm computes the full latency
of relevant unfolding nodes. The full latency of a transition is the maximum delay,
including the synchronization delays, between the firing of the transition and the firing
of the target transition along firing sequences permitted by the supervisory controller.
4. A supervisory controller is generated that disables key net transitions whenever the
elapsing of time makes their latency (either basic latency or full latency) greater
than the time left until the deadline on the firing of target transition expires. The
supervisory controller is also modeled as a Petri net.
5. If the merge exclusion assumption has case (a) violations, the Challenger algorithm
modifies the supervisory controller in order to prevent deadlocks. In addition, Chal-
lenger modifies the controller in order to allow a greater degree of parallelism in the
supervised net. The supervisory controller is implemented in the Java language in
this case.
6. Eventually, the controlled Petri net is tested by the simulator to evaluate whether the deadline is violated.
Previously published reports (80) were subject to two assumptions: (1) the Merge Exclusion Assumption (MEA) and (2) the absence of certain kinds of livelocks. The MEA is the condition that the key transitions do not synchronize or conflict with other controlled branches in the unfolding. We discuss three concrete cases violating the MEA definition in Chapter 5. The latency calculated by the Atlantis algorithm correctly represents the maximum delays without synchronization delays. The Discovery latency algorithm is valid when a key transition merges with initial places, which do not form a controlled branch. Except for this type, all other kinds of synchronization with respect to key transitions are violations of the MEA. In our current methodology, however, violations of MEA case (a) are overcome by the Challenger algorithm. The second assumption concerns the possibility of livelock in the
unfolding. The correctness of our algorithms for latency definition relies primarily on the
existence of suitable unfoldings. We assume that the controlled TPNs are level-4 live and
free from certain kinds of livelocks (27). If a TPN under control is not live, a deadline on
target transition firing may not be enforceable.
We have tested our whole framework on a number of benchmarks. The unfolding algorithm is implemented in the C++ language and tested on Cygwin 1.7.5. In Table II we report the size of the resulting unfoldings for some of Corbett's (24) examples, and we compare our efficiency with the unfoldings of Esparza et al. (28). The data show substantial reductions in the size of stubborn unfoldings with respect to full unfoldings, and the correct results on deadlock detection are reported. The latency algorithms, supervisor generation, and simulation are implemented in the Petri Net Kernel tool (PNK) (61) running under Windows XP. In Table III we show a few cases from the benchmark examples in Corbett (24). Our assumption of level-4 liveness is violated in those cases, but the controlled net fired all the transitions that could be fired in the original net. From that point of view, our supervisory controller is successful in enforcing deadlines on the target transition without sacrificing a high degree of parallelism. In Table IV we show the valid cases under the above assumptions. The new method (Challenger) is evaluated on extendable examples, in which MEA case (a) and the parallelism issue are both evaluated. The results show that our methods have reached their goals.
This thesis is organized as follows. In Chapter 2 we survey related work. In Chapter 3 we introduce the required definitions. In Chapter 4 we explain the stubborn unfolding algorithm. In Chapter 5 we explain the latency algorithms Atlantis and Discovery. In Chapter 6 we define the supervisory controller in our framework, including the Challenger solvers and the simulator. In Chapter 7 we prove the correctness of the algorithms Discovery and Challenger with respect to the supervised net. In Chapter 8 we present the empirical results. Finally, in Chapter 9 we draw conclusions and present future research directions.
CHAPTER 2
LITERATURE SURVEY
Analysis and development of real-time systems are quite challenging. Such systems usually execute multiple tasks simultaneously, which increases the complexity of the analysis. Thus, it becomes critical to ensure compliance with time deadlines when the execution of other operations might cause extra delays. In order to satisfy the reliability requirement, formal methods are often applied during the software development cycle. We discuss two formal methods, verification and supervisory control, in the following sections.
2.1 Verification
The most-frequently used formal method is automatic verification. Formal verification
consists of using mathematical methods to prove or disprove the correctness of a concurrent
or real-time system with respect to a certain formal specification or property (35). There
are many verification methods, but we will discuss briefly three well-known verification
methods: Temporal logic model checking, finite automata, and Petri nets.
In temporal logic model checking, the system is checked against a specification expressed in a precise mathematical language to determine whether it meets the requirement. The complexity of this method often increases exponentially in the size of the program analyzed. As a result, verification with temporal logic typically faces the state space explosion problem (21).
A second verification method is based on automata. An automaton is an abstract machine based on the mathematical model called a finite state machine (FSM) (70). An FSM takes a symbol as input and transitions from one state to another. Given the current state and an input symbol, the FSM follows a transition table that indicates which state to enter next (41). A major advantage of automata is that they are easy to use and interpret. However, their use is limited to systems consisting of a sequence of discrete events (3).
Another verification method is based on Petri nets. A Petri net is a directed bipartite graph whose nodes represent transitions (bars) and places (circles). The directed arcs describe which places are pre- and/or post-conditions for which transitions (60). The graphical notation of Petri nets makes them easy to use for modeling concurrent systems. Their structural relationships, such as causality, concurrency, conflict, and synchronization, make them widely used for analyzing DESs, and their mathematical foundation allows correctness and liveness properties to be proved. Still, verification using Petri nets is complex.
2.1.1 Real-time models
A real-time computer system is a system that must perform its function and meet its specified deadlines. Such a real-time system normally has to interact with an environment in which the violation of any property (e.g., a time constraint) would lead to a serious consequence (41). As a result, software reliability is a mandatory property, and it can be improved by formal development via modeling and verification. Timed automata and time Petri nets are the two formal methods most commonly used to develop real-time systems.
Timed automata are automata with time constraints. They were first introduced by
Rajeev Alur (3). In these automata, the transition to another state is made based on the
current state, an input, and a time delay (clock variable). A transition to another state
is allowed only if its delay guard is satisfied by the current delay (77). Timed automata are very useful for modeling many concurrent, real-time systems because they capture the influence of time factors. Several timed automata can run in parallel and communicate with each other. A set of timed automata can be combined into a product of timed automata by allowing the transitions of the timed automata that do not correspond to a shared action to be interleaved, while letting the transitions labeled with shared actions synchronize (25).
Timed automata are useful for many applications. For example, timed automata are well suited to finding logical errors in communication protocols and asynchronous circuits. Timed automata can also be used to verify the safety and liveness properties of a real-time
system. They can also be extended to modeling hybrid systems, for instance, by modeling
discrete controllers embedded within continuous changing environments (3; 46; 69).
Real-time verification is much more challenging than verification under an untimed model. One successful approach is to use timed automata and temporal logic to represent real-time systems and specifications with a time feature. An extension of Linear Temporal Logic (LTL) called Metric Temporal Logic (MTL) is one well-known way to formalize such specifications (48). MTL can express a wide range of timed behaviors, but it is unsatisfactory for model checking over dense time (5), because this problem is undecidable (15). Decidability and tractability have since become driving concerns in real-time model checking. For exact time constraints, analysis of decidable fragments of MTL became a classic method in real-time model checking; unfortunately, these fragments (4; 38) are all intractable.
One interesting problem in real-time verification is scheduling periodic tasks in a hard real-time environment. Some recent works (2; 8) show that there are polynomial-time algorithms to handle this problem. However, these works have the following limitations. First, their results are approximability results and contain a certain level of error. Second, they consider only sporadic task systems, which restricts the whole problem domain to a special subset. Third, they are not model-based methods.
The static scheduling approach has the limitation that the scheduling of tasks restricts the freedom of system behaviors at each fixed time slot (7). In fact, formal methods using finite automata with temporal logic to solve the scheduling problem described above had not been explored. Weiss and Alur (81) recently worked on an implementation of such a real-time scheduling analysis, based on finite automata over infinite words. In their approach, each component is specified and analyzed individually. Eventually the system scheduling problem is reduced to computing the intersection of the languages recognized by a finite set of automata. If the result is an empty intersection, then there is no schedule that satisfies the requirements of all components. Their schedulability analysis can result in state explosion. Moreover, their method still needs to output all acceptable schedules for the system, and their scheduling constraints are limited to being specified using automata. One of the most interesting aspects is that they implemented a tool using Real-Time Java and provided interfaces for both control engineers and software designers (6).
Similar to the evolution of timed automata, Petri nets have also been extended with a time feature. Ramchandani (66) first extended basic Petri nets to timed Petri nets by adding a fixed duration to each transition. However, this type of net cannot represent the fact that the firing times of the tasks in a real-time system are not fixed, due to the state of the system and its environment. One way to characterize such a probabilistic distribution of a duration is stochastic Petri nets. They can be used to evaluate system features such as performance, throughput, service time, and so on. But this type of net cannot model compliance with hard, real-time deadlines.
The time Petri net was then invented by Merlin and Farber (58). In this model, the correctness of the system takes time as part of the requirements. A time Petri net introduces two more parameters, a minimum delay and a maximum delay, associated with each transition (10). In this way, the earliest firing time and the latest firing time can be modeled. There are two commonly used semantics for time Petri nets. In a clock approach, clocks are assigned to different components of the net (e.g., transitions, places, or processes). The other semantics is a firing-interval approach, which assigns a firing interval to each enabled transition; a transition has to fire within its specified time interval. There are other ways to integrate temporal intervals into Petri nets. The arcs between places and transitions are associated with time in (34). Colored Petri nets attach interval times to transitions in (1). In this type of model, tokens of different colors represent different kinds of resources or tasks, and timestamps are used to study the enabling and firing of transitions.
Compared with finite automata, Petri nets are more suitable for modeling concurrent systems because the Petri net representation is more compact and can handle multiple events that occur at the same time. One of the most important uses of a supervisory controller is to enforce liveness properties in the system and thereby improve system reliability. As a result, the development of supervisory controllers can reinforce the improvement of real-time systems (71; 80).
Time-critical systems are discussed broadly in software engineering. Unfortunately, systematic research on engineering time-critical systems is rare. Even though formal methods can be used for verifying the correctness of time-critical systems, there is still a large gap between formal development and run-time evaluation. An investigation of time-critical systems engineering has been discussed to combine formal technologies and time-critical systems (31). A special logic framework has been defined to cope with the life cycle of time-critical systems. Validation and analysis can be performed against the formal specification of the systems. The approach of this method is to report the violation and locate the error at run time. However, automatic verification can only detect, not correct, potential violations.
Currently, real-time systems are widely researched in many active areas, such as database analysis, electronic trading, network management, traffic control, robotics, and flexible manufacturing. In one example, Zhou et al. introduced a DES framework to synthesize supervisory controllers for real-time database systems (86). The problems of concurrency control and transaction scheduling in real-time database systems are considered by modeling Discrete Event Dynamical Systems (DEDS) as Deterministic Finite Automata (DFAs). Zhou et al. thus demonstrate the potential applicability of supervisory control theory to real-time database systems.
2.2 Supervisory Control
Another well-known use of formal methods to improve software reliability is supervisory control. Supervisory control is used to coordinate the sub-systems within a system to meet the system specification (43). The sub-systems are called controllers. The controllers are used to prevent the system from entering a forbidden state or, in other words, to ensure that the system does not produce any unexpected result (13). Therefore, supervisory control is an integrative framework for the control of DESs. Since it is based on formal languages, the designer can implement standard algorithms to solve the specified DES control problems.
Ramadge and Wonham (64; 65) were the first to introduce the supervisory control technique for discrete event systems. In their work, the supervisor is generated by analyzing the reachability graph based on an automaton model of the controlled system. The concept of timed DESs was later explored for use with real-time specifications. In the later framework (16), they define a supervisory controller based on the given model of the system and a specification of desired behaviors. The main feature of their approach is that the controlled system and the controller sub-systems are loosely coupled; therefore, the interactions between them are modeled explicitly. The reason to apply Petri nets for representing the controlled DESs is that there are established methods to model time in extended uncontrolled Petri nets (55).
Considering a system modeled as an automaton, a supervisor can be used to prevent the occurrence of controllable symbols in the model in order to enforce the specification in the closed-loop system. The classes of specifications in such systems can be divided into two categories (73): the forbidden state problem (64), where the control specifications are based on forbidden conditions that the system should not be allowed to enter; and the forbidden string problem (65), where the control specifications are based on sequences of operations that have to be avoided.
One more reason for using Petri nets is that efficient algorithms for computing controls can be derived from the analysis of the structural properties of Petri net models. Holloway et al. summarize two basic approaches for controller synthesis (39). One is based on state feedback and the other on event feedback. The first efficient technique using linear integer
programming can take advantage of the linear structure of the Petri net, which is based on
the state transition equation to analyze the feedback for controller generation. The other
one utilizes the path-based algorithms to take advantage of the graphical structure of Petri
net models.
This thesis will briefly discuss the development of feedback control time Petri nets (CtlTPNs). CtlTPNs are an extension of untimed Control Petri net (CtlPN) models that add real-time constraints on the firing times of enabled transitions.
The main difference between CtlPNs and CtlTPNs is that CtlPNs cannot force a transition to fire at a specified time, while in CtlTPNs transitions are forced to fire according to the time constraint (39). Many versions of CtlTPNs have been proposed. Sathaye and Krogh (67) present event-based sequential control. Their approach is designed for infinite-state plants with regular specifications, but it only allows singleton control inputs. Brave and Krogh (17) extended CtlTPNs to work with concurrent systems by allowing concurrent firing of transitions, which can take arbitrary subsets of controlled transitions as control inputs.
Ramadge and Wonham's earlier works used the state feedback control technique (64; 65). Based on previous works, Takai et al. (72) discussed necessary and sufficient conditions for obtaining a maximally permissive policy. When a unique maximally permissive controller exists for a time Petri net, they can express it formally. Brave and Krogh (17) introduce the idea of CtlTPNs with a marked graph structure (CtlTMGs). In their method, a collection of forbidden sets is presented, a clock is used to monitor the system state, and the controller is not allowed to schedule future events. Holloway (40) puts more emphasis on the time constraint by using the time delay as one of the system requirements. The system has to perform an assignment subject to the time constraints; if the system violates a time constraint, the control objective cannot be met. Therefore, the basic problem in most cases is to design a state feedback policy that keeps the system running in allowed states, or equivalently avoids the forbidden states (39).
In supervisory control, a legal sequential behavior can be used as a specification. The purpose of this control is to keep the system running within the legal behaviors. Such a control policy is called event feedback. Cofer and Garg (22; 23) discuss a max-plus algebra approach to explore the problem of controlling the timed behaviors of Petri nets. Cofer and Garg study two control problems: one is how to maximize the delay in the system while satisfying a minimal schedule; the other is how to force a minimal delay within the system so that specific events always have a minimum separation time between occurrences (39).
Boissel and Kantor (11) consider a solution using simulated annealing. The objective is to ensure the liveness property for the closed-loop system while avoiding forbidden states and minimizing certain delays (39).
In brief, most supervisory control designs focus on forbidden state avoidance. Some state feedback control works have to consider the whole state space, while some event feedback control works convert the scheduling problems into mathematical problems. The control problem of forcing a target operation to occur within a time constraint has not yet been addressed, and structural analysis of Petri nets has not been applied exhaustively. This is the problem addressed in this dissertation.
CHAPTER 3
DEFINITIONS
As the formal models we use in the generation of supervisory controllers, Petri nets are first defined in Section 3.1. We then define time Petri nets in Section 3.2.
3.1 Petri Nets
An ordinary Petri net is a four-tuple N = (P, T, F, M0), where P and T are the node sets and F the edges of a directed bipartite graph, and M0 : P → ℕ is called the initial marking of N, where ℕ denotes the set of nonnegative integers. In general, a marking or state of N assigns a nonnegative number of tokens to each p ∈ P.

Given a net N = (P, T, F, M0), we use the following notation for the sets of predecessors and successors of a node x ∈ P ∪ T:

•x = {y | (y, x) ∈ F} = the set of input nodes of x
x• = {y | (x, y) ∈ F} = the set of output nodes of x.

Clearly, for a place p ∈ P, •p and p• will be (possibly empty) transition subsets. Conversely, for t ∈ T, •t and t• will be place subsets. We extend this notation to place subsets and transition subsets. For instance, given a transition subset C ⊆ T, •C denotes the union of the predecessor sets of all transitions in C. Thus, we have •C = ∪_{ti ∈ C} •ti, and so on.
Figure 2: A Petri net.
A transition is enabled when all its input places have at least one token. When an
enabled transition t is fired, a token is removed from each input place of t and a token is
added to each output place; this gives a new marking (state).
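The enabling and firing rules above, together with the preset/postset notation, can be sketched in a few lines of Java; the string-based encoding of places and transitions is an assumption of this illustration, not code from the thesis.

    import java.util.*;

    // Minimal sketch of a safe Petri net: pre.get(t) plays the role of •t and
    // post.get(t) the role of t•; the marking is the set of currently marked places.
    public class SafePetriNet {
        private final Map<String, Set<String>> pre = new HashMap<>();
        private final Map<String, Set<String>> post = new HashMap<>();
        private final Set<String> marking = new HashSet<>();

        void addTransition(String t, Set<String> inputs, Set<String> outputs) {
            pre.put(t, inputs);
            post.put(t, outputs);
        }

        void mark(String place) { marking.add(place); }

        // A transition is enabled when every one of its input places holds a token.
        boolean isEnabled(String t) { return marking.containsAll(pre.get(t)); }

        // Firing removes a token from each input place and adds one to each output place.
        void fire(String t) {
            if (!isEnabled(t)) throw new IllegalStateException(t + " is not enabled");
            marking.removeAll(pre.get(t));
            marking.addAll(post.get(t));
        }

        public static void main(String[] args) {
            SafePetriNet net = new SafePetriNet();
            // A tiny hypothetical net, not the net of Figure 2: t1 consumes p1 and p2, produces p3.
            net.addTransition("t1", Set.of("p1", "p2"), Set.of("p3"));
            net.mark("p1");
            net.mark("p2");
            System.out.println(net.isEnabled("t1"));  // true
            net.fire("t1");
            System.out.println(net.marking);          // [p3]
        }
    }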
Two transitions t1 and t2 of N are said to be in direct conflict if they share at least one input place, that is, •t1 ∩ •t2 ≠ ∅. When conflicting transitions are simultaneously enabled, the firing of one such transition t will disable all transitions in direct conflict with t. The concept of direct conflict among transitions is generalized to the concept of conflict among unfolding nodes in Section 4.2.
A Petri net N = (P, T, F, M0) is safe if M0 : P → {0, 1} and if all markings reachable by legal sequences of transition firings from the initial marking have either 0 or 1 tokens in every place. Henceforth, we assume that our Petri nets are safe.
Figure 2 shows an example of a small Petri net. The place set consists of places p1 ... p5. The transition set consists of transitions t1 ... t4. Places p1, p2, and p3 are initially marked. In this marking, t1 and t2 are enabled. The firing of t1 removes a token from p1 and p2. A token is deposited back in p2 and another token goes to place p4. In this marking, transitions t2 and t3 are enabled. The firing of t3 returns the Petri net to its initial marking by moving a token from p4 to p1.

The firing of t2 when p2, p3, and p4 are marked moves a token from p3 to p5, whereas the token in p2 is returned to p2. Transition t4 is now enabled. Firing this transition removes a token from p5 and deposits a token in p3. After t3 and t4 are fired, the net is back to its initial marking. The net is safe because its initial marking and all reachable markings have at most one token in each net place. We chose this example because it results in an unusually large full unfolding.
3.2 Time Petri nets
A time Petri net Nt is a five-tuple (P, T, F, M0, S) where (P, T, F, M0) is an ordinary Petri net, and S associates a static (firing) interval I(t) = [a, b] with each transition t, where a and b are rationals in the range 0 ≤ a ≤ b ≤ +∞, with a ≠ ∞. See (10) and (53).

Delay intervals change the behavior of a time Petri net with respect to an ordinary Petri net in the following way. If transition t with static interval I(t) = [a, b] becomes enabled at time θ0, then transition t must fire in the time interval [θ0 + a, θ0 + b], unless it becomes disabled by the removal of tokens from some input place in the meantime. The static earliest firing time of transition t is a; the static latest firing time of t is b; the dynamic earliest firing time of t is θ0 + a; the dynamic latest firing time of t is θ0 + b; the dynamic firing interval of t is [θ0 + a, θ0 + b] if t became enabled at time θ0. Henceforth, we will denote the static earliest firing time of t by eft(t) and the static latest firing time of t by lft(t).
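As a small numerical illustration of these definitions (the interval and the enabling time below are hypothetical values, not taken from the thesis):

\[
I(t) = [2, 5], \qquad \theta_0 = 3 \;\Longrightarrow\; [\theta_0 + \mathrm{eft}(t),\; \theta_0 + \mathrm{lft}(t)] = [5, 8],
\]

so t must fire no earlier than time 5 and no later than time 8, unless it is disabled by the removal of tokens from an input place in the meantime.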
The state of a time Petri net is a triple (M, θ, D), where M is the marking of the underlying untimed Petri net, θ is the global time, and D is a vector containing the dynamic firing interval of each transition enabled by M. The initial state of a time Petri net consists of its initial marking, time 0, and a vector containing the static firing interval of each transition enabled by this marking.
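A possible in-memory encoding of such a state is sketched below; the field names and the use of double-valued time are assumptions of this sketch rather than the thesis data structures.

    import java.util.*;

    // Illustrative encoding of a time Petri net state (M, theta, D).
    public class TpnStateDemo {
        record TpnState(Set<String> marking,                       // M: marked places of the underlying safe net
                        double globalTime,                         // theta: global time
                        Map<String, double[]> dynamicIntervals) { } // D: [earliest, latest] per enabled transition

        public static void main(String[] args) {
            // Hypothetical initial state: p1 and p2 marked at time 0; the only enabled
            // transition t1 has static interval [2, 5], so its dynamic interval is also [2, 5].
            TpnState initial = new TpnState(Set.of("p1", "p2"), 0.0,
                    Map.of("t1", new double[]{2.0, 5.0}));
            System.out.println(initial.marking() + " at time " + initial.globalTime());
        }
    }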
A firing schedule for time Petri net Nt is a finite sequence of ordered pairs (ti, θi) such that transition t1 is fireable at time θ1 in the initial state of Nt, and transition ti is fireable at time θi from the state reached by firing the transitions tj, for 1 ≤ j < i, in the schedule at the given times from the initial state of Nt.

Henceforth, we will assume that the ordinary Petri nets underlying time Petri nets are
safe. In practice, we found that this assumption does not limit the systems that can be
analyzed automatically. (See, for instance, (26).) Moreover, this method can be easily
extended to bounded, rather than safe, Petri nets. In a bounded Petri net, the number
of tokens that can reside in any reachable net state is at most k, for some integer k > 1.
Methods for unfolding bounded Petri nets already exist (36). Moreover, our algorithms for
latency definition rely primarily on the existence of suitable unfoldings. We also assume
that uncontrolled time Petri nets are level-4 live and free from certain kinds of livelocks (27),
that we discuss in Section 5. If a Petri net is not live, a deadline on transition firing may not
be enforceable. Finally, we assume that time Petri nets do not contain zero-length cycles
(i.e., a non-Zeno assumption). This means that any non-empty firing schedule leading the
net from a given marking back to the same marking, must have a strictly positive total
duration.
CHAPTER 4
STUBBORN UNFOLDING
The main motivation of stubborn unfolding is to build a suitable unfolding for latency
generation. In this chapter, we discuss a new method for unfolding generation which combines two existing heuristics, namely stubborn sets and model unfolding in order to sidestep
the complexity of unfolding construction algorithms. On the one hand, both stubborn sets
and model unfoldings exploit independent transitions contained in a formal model of the
software under consideration. Examples of such models include communicating finite state
machines and Petri nets. In summary, two transitions contained in a model are independent
if the execution of one transition does not affect the execution of the other transition and
vice versa. Thus, the state reached by executing a set of independent transitions does not
depend on the order in which the transitions are executed.
On the other hand, stubborn sets and unfoldings differ in the way they exploit the notion
of independent transitions to achieve reductions in the resulting state space representations.
The goal of stubborn sets is reduced state space generation. This method produces a reachability graph that does not include all the reachable states while preserving concurrency
properties of interest in the reduced graph. Graph nodes represent reachable states; arcs
represent state transitions. Given a set of transitions that can be executed concurrently and
independently of each other, it is often sufficient to explore just one possible interleaving of
these transitions to check a property of interest. This is true, for instance, when only the
final state reached by executing all independent transitions is relevant to the property. Significant state space reductions are sometimes obtained by avoiding the explicit enumeration
of all intermediate states (76).
In contrast with stubborn sets, unfoldings do not represent states explicitly. Instead,
unfoldings are acyclic Petri nets capturing the possible execution sequences of the processes
contained in a concurrent program. An important property of unfoldings is the completeness of an unfolding's state space relative to the state space of the underlying model: an unfolding contains all and only the reachable states of the underlying model, once unfolding states are mapped to states of the model. Esparza et al. have shown that their unfoldings are never bigger than the state space of the underlying Petri net, modulo a fixed factor (28).
In general, unfoldings are much smaller than the corresponding state spaces.
The completeness property and the acyclic property of unfoldings allow direct application of various linear-algebra algorithms for supervisory controller generation. In addition, unfoldings also help solve such problems as deadlock detection and state
reachability. For instance, the existence of an integer solution to the marking equation is
a necessary and sufficient condition for the existence of a feasible firing sequence reaching
a given net state. The marking equation involves the incidence matrix representation of a
Petri net and a vector of variables denoting how many times each Petri net transition must
fire in order to reach a given marking. By contrast, the existence of an integer solution to
the marking equation is only a necessary, but not sufficient, condition in the case of a Petri
net containing cycles (52; 55).
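For reference, the marking equation mentioned above can be written in standard Petri net notation (this is the textbook formulation, not a formula quoted from the thesis):

\[
M = M_0 + C \cdot x, \qquad x \in \mathbb{N}^{|T|},
\]

where C is the incidence matrix of the net and component x_i counts how many times transition t_i fires. For an acyclic net such as an unfolding, the existence of a nonnegative integer solution x is both necessary and sufficient for marking M to be reachable from M_0; for a net containing cycles it is only a necessary condition.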
An additional advantage is that unfoldings show explicitly the causal relationships on the
execution of transitions in the underlying model. The unfolding of a model of a concurrent
system shows explicitly the basic execution sequences of each process contained in the
system. This property supports efficient analysis algorithms such as propositional solvers (27) and supervisory control of real-time systems (79). Finally, unfoldings are typically much
smaller than explicit state space representations of their underlying models.
A disadvantage of unfoldings is that they are sometimes unnecessarily large for the
property being checked. This happens because current unfolding definitions seek to map
the entire state space of the original model. Similar to the case of stubborn sets, some
properties (e.g., deadlock detection) do not require full state space exploration.
Here we define a new method for supervisory controller generation based on a partial
exploration of a Petri net's unfolding (79). We call the resulting construction a stubborn
unfolding because we exploit concepts that are part of the stubborn set method during
unfolding exploration. Stubborn unfoldings are usually smaller (sometimes much smaller)
than the corresponding full unfoldings as defined by Esparza et al. (28). Moreover, stubborn
unfoldings are never larger than the corresponding full unfoldings. We define the unfolding
redundancy to be the portion of a full unfolding not included in a stubborn unfolding. Full
unfoldings and stubborn unfoldings will be formally defined later. The stubborn unfolding
of an ordinary Petri net N may not be unique. Consequently, the unfolding redundancy of N may not be unique either.
The key advantage of stubborn unfoldings is that they combine the strengths of net
unfoldings and stubborn sets. By representing state spaces as unfoldings, we avoid explicit
state space enumeration. By incorporating stubborn sets in unfolding construction we avoid
the inclusion of unnecessary states in an unfolding. Previously we introduced the concept
of stubborn unfolding and we discussed an algorithm for detecting global deadlocks using
stubborn unfoldings (79). A disadvantage of that algorithm is that unfolding construction is
sometimes terminated as soon as an indefinitely repeating behavior is found, meaning that
a disconnected ordinary Petri net may not be fully explored. Here we present a modified
algorithm that continues unfolding exploration until every transition that is fireable at least
once in the original net is included in the stubborn unfolding. This approach will broaden
applicability of stubborn unfoldings. It will be used not only for deadlock detection, but
also for the generation of supervisory controllers to enforce deadlines in time Petri nets (80).
We also report empirical data obtained with the current version of stubborn unfoldings and
we compare the size of these unfoldings with full unfoldings obtained by the method of
Esparza et al. (28). Until we introduced stubborn unfoldings, these were the smallest known unfoldings of Petri nets.
This chapter is organized as follows. In Section 4.1 we summarize the stubborn set
method. In Section 4.2 we introduce unfoldings and we define related terms. In Section 4.3,
we discuss the construction of stubborn unfoldings. In Section 4.4 we illustrate empirical
results.
4.1 Stubborn Sets
The goal of stubborn sets is to reduce the number of states that are explicitly enumerated during state space exploration of a variable/transition system, such as a set of communicating finite state machines or a Petri net (76). State space reduction is achieved by identifying sets of transitions that are independent of each other when exploring a given state. Informally, the firing of a transition t in a given state s does not affect the fireability of transitions independent of t. Thus, the exploration of states reached by firing these independent transitions can be postponed to the state reached by firing t or to subsequent states.
Stubborn sets define formal conditions for transitions to be independent of each other. When exploring a given state s of a Petri net N, stubborn sets partition the transition set T of N into two subsets, namely the stubborn set and the nonstubborn set. Only enabled transitions contained in the stubborn set are considered for exploration of s. The exploration of enabled, non-stubborn transitions is postponed to the successors of s in N's state space. As a result, only a subset of this state space is explicitly enumerated.
Our definition of stubborn sets mirrors Valmari's original definition for Petri nets (74; 76). Given a reachable state s of a net N with transition set T, a stubborn set of T in s is a transition subset Ts ⊆ T subject to the following conditions:
1. Ts must contain at least one enabled transition.
2. If an enabled transition t is in Ts, all T transitions in direct conflict with t must also be in Ts, whether they are enabled or disabled.
Figure 3: Petri net example along with its unreduced and reduced state spaces (70).
3. If a disabled transition t is in Ts with •t ≠ ∅, then for some place p in •t, all transitions in •p must be in Ts.
Exploration of the state space of a Petri net with stubborn sets is conducted in the
following way. Starting from the initial net state, the net's transition set is partitioned into
a stubborn set and a non-stubborn set. Only enabled transitions belonging to the stubborn
set are included in the reduced state space. For each such transition the state, s, reached
by firing that transition is considered. If that state is already in the reduced state space,
no additional action needs to be performed. Otherwise, the state is added to the reduced
state space and to a queue of open states to be further expanded. The expansion of each
open state requires the computation of a stubborn set for the state. State space exploration
ends when all open states have been expanded.
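For illustration, the following Python sketch outlines this reduced exploration. The preset/postset dictionaries, the marking representation, and compute_stubborn (assumed to return a transition set satisfying the three conditions above) are assumptions made for this sketch, not the implementation described later in this thesis.

    from collections import deque

    def enabled(marking, preset):
        """Transitions whose every input place is marked (safe net)."""
        return {t for t, pre in preset.items() if pre <= marking}

    def fire(marking, t, preset, postset):
        """Successor marking obtained by firing transition t."""
        return frozenset((marking - preset[t]) | postset[t])

    def reduced_state_space(m0, preset, postset, compute_stubborn):
        """Explore only enabled, stubborn transitions in each open state."""
        m0 = frozenset(m0)
        seen, queue = {m0}, deque([m0])
        while queue:
            m = queue.popleft()
            stubborn = compute_stubborn(m, preset, postset)
            for t in stubborn & enabled(m, preset):
                m2 = fire(m, t, preset, postset)
                if m2 not in seen:      # skip states already in the reduced space
                    seen.add(m2)
                    queue.append(m2)    # open state to be expanded later
        return seen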
In general, a given net state can be associated with multiple stubborn sets (i.e., multiple
ways of partitioning the transition set into a stubborn and nonstubborn set). When this
happens, one stubborn set is chosen arbitrarily. The effectiveness of stubborn sets (i.e., the
extent of the reduction in state space size) depends on the choice of a stubborn set. A good
heuristic is to always choose a stubborn set with a minimal number of enabled transitions.
Consider, for instance, the net shown on the left-hand side of Figure 3. The figure shows
a simple Petri net, as well as its full and reduced state spaces. In the initial net marking the
singleton sets {t1}, {t2}, and {t3} are all stubborn because each set satisfies the definition
above. The middle portion of the figure shows the full state space of the net shown on the
left. The overall state space has a number of reachable states exponential in the number
of transitions in the original Petri net. The right-hand side shows a state space that could
be obtained by applying the stubborn set method. This state space has a number of states
linear in the number of transitions in the Petri net.
The most important strength of stubborn sets is that they can sometimes fruitfully
exploit the parallelism implicit in the model of a concurrent system. This happens, for
instance, when a system contains many asynchronous processes that interact sparsely with
each other. Stubborn sets sometimes fail to achieve meaningful reductions in centralized
systems in which a single process must synchronize and communicate with all other processes. (See, e.g., (26) for a discussion of empirical results obtained with stubborn sets.)
4.2 Unfoldings
The unfolding of an ordinary Petri net N is an unmarked Petri net U whose places and transitions are mapped to N's places and transitions by a homomorphic function h. A net unfolding provides two useful properties relative to its original net. First, the unfolding U has the same set of reachable states as the original net N after h is applied to the places in
U. Second, the unfolding shows explicitly causal relationships on transition firings that are implicit in the original net.
Net unfoldings have been defined in different ways, resulting in different unfolding sizes
for the same Petri nets. McMillan (51) originally defined unfoldings of ordinary Petri nets.
Subsequently, Esparza et al. (28) provided a new definition that sometimes results in smaller
unfolded nets than McMillan's unfoldings. Khomenko uses a similar definition to Esparza
et al. (45). Finally, Semenov and Yakovlev (68) defined unfoldings for time Petri nets (53),
which extend ordinary Petri nets by modeling explicitly the notion of time. Our definition
follows closely the definition of Esparza et al. (28).
Most of the definitions below are similar to existing terminology in the unfolding literature (28; 51). However, the concepts of matching conditions, tight fringe conditions
and loose fringe conditions are peculiar to our algorithm for building stubborn unfoldings.
Following most of the unfolding literature, when talking about nodes in an unfolding U we
use the terms condition and event instead of place and transition to avoid confusing these
nodes with nodes in the Petri net underlying U. (See, e.g., (28).)
Definition 1 (Node precedence) Consider nodes x and y in a Petri net. Recall that a node can be either a place (condition) or a transition (event). Node x precedes y, denoted by x < y, if there is a directed path from x to y in the Petri net.
For example, in the unmarked net appearing in Figure 4 condition b4 precedes event e3 and condition b'1.
Figure 4: Unfolding of Petri net in Figure 2.
Definition 2 (Node conflict) Nodes x and y are in conflict, denoted by x#y, if the Petri net (unfolding) contains two distinct paths originating at the same place p (condition b) that diverge immediately after p (or b) and lead to x and y. When x#x holds, we say that node x is in self-conflict.
In Figure 4, event e2 and its successor nodes are in conflict with event e'2 and its successor nodes because of condition b3.
Definition 3 (Concurrent nodes) Nodes x and y are concurrent if they are neither in conflict nor in a precedence relation with each other.
In Figure 4, condition b4 is concurrent with nodes e'2, e'4, b'2, b'''2, b3, and b'5.
Definition 4 (Net homomorphism) Given ordinary Petri nets N1 = (P1, T1, F1, μ0,1) and N2 = (P2, T2, F2, μ0,2), a homomorphism from N1 to N2 is a function h : (P1 ∪ T1) → (P2 ∪ T2) mapping N1 nodes into N2 nodes, subject to the following conditions:
1. Function h preserves node types (transitions and places). Thus, we have h(P1) ⊆ P2 and h(T1) ⊆ T2.
2. Function h preserves the input and output relations among nodes. Thus, for all x ∈ (P1 ∪ T1) and all y ∈ •x, we have h(y) ∈ •(h(x)). Likewise, for all z ∈ x•, we have h(z) ∈ (h(x))•. Conversely, for all x' ∈ (P2 ∪ T2) such that x' = h(x), if y' ∈ •x' then there exists y ∈ (P1 ∪ T1) such that y' = h(y) and y ∈ •x. Moreover, if z' ∈ x'• then there exists z ∈ (P1 ∪ T1) such that z' = h(z) and z ∈ x•.
3. Function h preserves the initial marking. Thus, for all p1 ∈ μ0,1 we have h(p1) ∈ μ0,2, and for all p2 ∈ μ0,2 there exists p ∈ μ0,1 with p2 = h(p).
For instance, unfolding events e1 and e'1 in Figure 4 are mapped to transition t1 of the Petri net appearing in Figure 2. Likewise, conditions b1 and b'1 are mapped to place p1.
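For readers who prefer code, the short Python sketch below checks the three homomorphism conditions on nets stored as dictionaries of presets, postsets, and initial markings. The data layout and names are illustrative assumptions, not part of our tool.

    def is_homomorphism(h, net1, net2):
        """Check that h preserves node types, presets/postsets, and the initial marking."""
        # Condition 1: node types are preserved.
        if not all(h[p] in net2["places"] for p in net1["places"]):
            return False
        if not all(h[t] in net2["transitions"] for t in net1["transitions"]):
            return False
        # Condition 2: presets and postsets are preserved in both directions,
        # which amounts to set equality of the mapped presets/postsets.
        for x in net1["places"] | net1["transitions"]:
            if {h[y] for y in net1["pre"][x]} != net2["pre"][h[x]]:
                return False
            if {h[z] for z in net1["post"][x]} != net2["post"][h[x]]:
                return False
        # Condition 3: the initial marking is preserved.
        return {h[p] for p in net1["m0"]} == net2["m0"]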
Definition 5 (Occurrence net) We define an occurrence net O = (B, E, F_O, μ_min) to be an unmarked Petri net having the properties below. In the sequel B denotes a finite set of conditions and E denotes a finite set of events. Conditions correspond to Petri net places and events correspond to transitions. Net O must satisfy the following:
1. |•b| ≤ 1 for all b ∈ B.
2. O is acyclic.
3. For all x ∈ B ∪ E, x#x does not hold. (No node x ∈ O is in self-conflict.)
Set μ_min ⊆ B denotes the initial conditions of the occurrence net. These conditions have empty input sets (i.e., no incoming arcs). All other O conditions have unit in-degrees.
Figure 4 shows an example of an occurrence net.
Definition 6 (Finite prefix) Given an ordinary Petri net N = (P, T, F, μ0), a finite prefix of the Petri net is a pair F = (O, h) consisting of an occurrence net O and a net homomorphism h that maps O nodes into N nodes. The initial conditions μ_min of O are isomorphic to μ0, the initially marked places of N. Thus, μ_min will contain exactly one condition for each initially marked place p ∈ μ0 of N and no other conditions.
Definition 7 (Configuration) Let F be a finite prefix of net N and O = (B, E, F_O, μ_min) be F's occurrence net. A set of events C ⊆ E is a configuration if the following statements hold:
1. If e ∈ C and e' ∈ E, then e' < e implies e' ∈ C. (C is backward closed.)
2. No pair of elements of C are in conflict with each other.
For instance, in Figure 4 the event set {e1, e'2, e3, e'4} forms a configuration.
Definition 8 (Cut of a configuration) The cut of a configuration C ⊆ E of occurrence net O is defined by removing all input conditions of events in C from the union of the initial conditions of occurrence net O with the output conditions of events in C:

Cut(C) = (μ_min ∪ C•) − •C
In Figure 4 the cut of configuration {e1, e'2, e3, e'4} is the condition set {b'1, b'''2, b''3}.
Definition 9 (Cause of an event) The cause [e] of an event e ∈ E (also called local configuration by McMillan (51)) is the set containing e and all events preceding e in O:

[e] = {e' ∈ E | e' ≤ e}

For instance, in Figure 4 the cause of event e'4 is the set containing events e1, e'2 and e'4. It is easy to show that an event's cause is also a configuration. (See, e.g., He and Lemmon's short proof (36).) In the sequel we refer to the "cut of the cause of an event e" as the "cut of event e" for brevity.
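To make these two operations concrete, here is a small Python sketch computing the cause of an event and the cut of a configuration over an occurrence net stored as preset/postset dictionaries; the data layout is an assumption made only for this illustration.

    def cause(e, pre_events):
        """[e]: e together with every event that precedes it.
        pre_events[e] lists the events producing e's input conditions."""
        closed, stack = set(), [e]
        while stack:
            ev = stack.pop()
            if ev not in closed:
                closed.add(ev)
                stack.extend(pre_events[ev])
        return closed

    def cut(config, mu_min, preset, postset):
        """Cut(C) = (mu_min ∪ C•) − •C for a configuration C of events."""
        produced = set().union(*(postset[e] for e in config)) if config else set()
        consumed = set().union(*(preset[e] for e in config)) if config else set()
        return (set(mu_min) | produced) - consumed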
In the definition below, we seek to establish a total ordering among configurations in an unfolding, similar to Esparza et al. (28). For a configuration C1, function φ(C1) returns a sorted list of the transitions T1 mapped by the events in C1 (i.e., T1 = h(C1)). Transitions are sorted by their index, that is, t_i will precede t_j in the sort if i < j. For instance, given configuration C = {e2, e'3, e1, e3}, we have φ(C) = (t1, t2, t3, t3), assuming that h maps both events e_i and e'_i to transition t_i. Also, function σ(C) maps an event configuration C to a sorted list of the events contained in C. Again, an event e_i will precede e_j in the list if i < j. For events with identical indices, we have that e_i will precede e'_i, which will precede e''_i, and so on. Finally, the symbol ≪ denotes lexicographic ordering between two transition lists or two event lists. Esparza et al. establish a total ordering among configurations using the following definition (28).
Definition 10 (Configuration precedence) Given two configurations C1 and C2 of finite prefix F, we say that C1 precedes C2, and we write C1 ≺ C2, if one of the following three conditions holds:
1. |C1| < |C2| (C1 is smaller than C2).
2. |C1| = |C2| and φ(C1) ≪ φ(C2) (C1 and C2 have equal size, but the transition list mapped by events in C1 lexicographically precedes the transition list mapped by events in C2).
3. |C1| = |C2|, φ(C1) = φ(C2), and σ(C1) ≪ σ(C2) (the transition lists mapped by events in C1 and C2 are identical, but C1 lexicographically precedes C2 after both event sets are sorted).
In Figure 4, the cut of e'1 contains conditions b''''2, b'4, and b5. The cut of e'2 contains conditions b'''2, b4, and b'5. Thus, we have that [e'2] ≺ [e'1] because b'''2 ≪ b''''2.
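The following Python comparator sketches this ordering. Events are assumed to be encoded as (index, primes) pairs so that e1 < e'1 < e''1, and h maps each event to its transition index; this encoding is an illustrative assumption, not the one used by our tool.

    def config_key(config, h):
        """Sort key realizing the ≺ order: size, then the sorted transition
        list phi(C), then the sorted event list sigma(C)."""
        phi = sorted(h[e] for e in config)      # transition indices
        sigma = sorted(config)                  # (index, primes) pairs
        return (len(config), phi, sigma)

    def precedes(c1, c2, h):
        """True if configuration c1 strictly precedes c2 (c1 ≺ c2)."""
        return config_key(c1, h) < config_key(c2, h)

    # Example: [e'2] versus [e'1] with h mapping e_i and e'_i to t_i.
    h = {(1, 0): 1, (2, 0): 2, (1, 1): 1, (2, 1): 2}
    c_e1p = {(2, 0), (1, 1)}    # cause of e'1: {e2, e'1}
    c_e2p = {(1, 0), (2, 1)}    # cause of e'2: {e1, e'2}
    print(precedes(c_e2p, c_e1p, h))   # True: [e'2] ≺ [e'1]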
Definition 11 (Cutoff event) An event e of finite prefix F is called a cutoff event if one of the following conditions holds:
1. The cut of the cause of event e is homomorphic to the initial condition set of the unfolding:

h(Cut([e])) = h(μ_min)

2. There exists another event e' ∈ F whose cause [e'] precedes e's cause according to the definition of configuration precedence above ([e'] ≺ [e]), and we have:

h(Cut([e])) = h(Cut([e']))

In Figure 4, e'1 is a cutoff event because its cut, which contains conditions b''''2, b'4, and b5, is homomorphic to the cut of e'2, which contains conditions b'''2, b4, and b'5. In addition, we know that [e'2] ≺ [e'1].
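In an implementation, the cutoff test can be reduced to a lookup keyed by the homomorphic image of an event's cut. The Python sketch below illustrates this idea under assumed data structures (unfolding.cut_of and the surrounding types are hypothetical; this is not the PEP code we modified).

    def is_cutoff(e, unfolding, h, mu0, seen_cuts):
        """Return True if e is a cutoff event.  seen_cuts maps the homomorphic
        image of a cut to the event that produced it first; since events are
        processed in ≺ order, that first event is the matching event."""
        image = frozenset(h[b] for b in unfolding.cut_of(e))   # h(Cut([e]))
        if image == frozenset(mu0):      # condition 1: back to the initial cut
            return True
        if image in seen_cuts:           # condition 2: some earlier cause has this cut
            return True
        seen_cuts[image] = e             # e may later serve as a matching event
        return False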
Definition 12 (Matching event) Let e ∈ E be a cutoff event in finite prefix F. If there is ē ∈ E such that h(Cut([ē])) = h(Cut([e])) and [ē] ≺ [e], we call ē the matching event of e.
In Figure 4 cutoff event e'4 is matched by event e1 because their cuts are homomorphic to each other. Matching events are sometimes called "companion" events (27) or "mirror" transitions (32).
Definition 13 (Fringe condition) Let e ∈ E be a cutoff event in finite prefix F. Any condition b ∈ B belonging to the cut of e's cause (i.e., b ∈ Cut([e])) is called a fringe condition for e.
Definition 14 (Matching condition) Let e ∈ E be a cutoff event and condition b ∈ B be contained in e's cut. Then there is a condition b̄ ∈ B such that b̄ is homomorphic to b and b̄ is contained in the cut matching the cut of e. It is possible for b and b̄ to be the same condition, in which case we say that b is its own match or b is a self-matching fringe condition.
In Figure 4 fringe condition b''3 is matched by b3 because these conditions belong to the cuts of matching events e'4 and e1.
Definition 15 (Tight fringe condition) Let b be a fringe condition for event e. We say that b is a tight fringe condition for e if b ∈ e•.
Definition 16 (Loose fringe condition) Let b be a fringe condition for event e. We say that b is a loose fringe condition for e if b ∉ e•.
If b is a loose fringe condition for e, then b must either be an output condition for some event e' with e' < e (i.e., e' precedes e in F) or we have b ∈ μ_min. For instance, in Figure 4 b'''2 is a loose fringe condition for event e'4, which is matched by event e1. Thus, b'''2 is an output condition for event e'2 with e'2 < e'4. In this case, condition b'''2 matches b'2.
Definition 17 (Unfolding) We define unfolding U = (O, h) to be a finite prefix that contains all and only the events whose cause does not include cutoff events. For each such event e of U, e's input and output sets, •e and e•, must also be in U.
The above definition establishes the terms for a finite prefix to be an unfolding. This
definition refers to existing unfoldings, not our stubborn unfoldings. When this is necessary,
we will use the term "full unfolding" to avoid confusion with our stubborn unfoldings.
Esparza et al. proved that full unfoldings complying with the above definition have all and only the reachable states of their underlying Petri nets (28).
Algorithms for unfolding construction start by adding conditions corresponding to initially marked places in the underlying Petri net to the unfolding (28). The transitions
enabled in the initial marking of the original Petri net are identified. Events corresponding
to those transitions are added to a priority queue of events to be included in the unfolding. Events in the queue are prioritized by the ≺ relation among the corresponding causes.
Next, the algorithm by Esparza et al. (28) repeats the following sequence of actions until the event queue is empty. The minimal event according to the ≺ relation is removed from
the queue and added to the unfolding under construction. Conditions mapping the output
places of the transition corresponding to this event are also added to the unfolding. Next,
the algorithm checks whether this event is a cutoff event. If this is a cutoff event, no further
action is necessary and the algorithm proceeds to the next event in the queue. Otherwise,
additional events that are enabled by the newly added conditions are identified and placed
in the queue. The algorithm ends when the queue becomes empty.
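A high-level Python sketch of this loop follows. The helper callables passed in as parameters (possible_extensions, add_event, is_cutoff, order_key) stand for operations described in the text; they are assumptions made for this sketch rather than the actual routines of any unfolding tool.

    import heapq, itertools

    def build_unfolding(init_prefix, possible_extensions, add_event, is_cutoff, order_key):
        """Full unfolding construction in the style of Esparza et al. (28):
        candidate events are processed in increasing ≺ order of their causes,
        and branches are not extended past cutoff events."""
        unf = init_prefix()                      # conditions for the initial marking
        tie = itertools.count()                  # tie-breaker for the heap
        queue = []                               # priority queue ordered by ≺

        def push_extensions():
            # possible_extensions is assumed to return only candidate events
            # that have not been queued before.
            for e in possible_extensions(unf):
                heapq.heappush(queue, (order_key(e), next(tie), e))

        push_extensions()
        while queue:
            _, _, e = heapq.heappop(queue)
            add_event(unf, e)                    # insert event and its output conditions
            if not is_cutoff(unf, e):            # cutoff events are not extended
                push_extensions()
        return unf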
Figure 4 shows the unfolding of the Petri net in Figure 2 obtained by the algorithm of Esparza et al. (28). Conditions b1, b2, b3 are initially added to the unfolding because they correspond to the initially marked Petri net places p1, p2 and p3. The configuration consisting of these conditions enables events e1 and e2, which are added to the event queue. Assume without loss of generality that event e1 is added to the unfolding first, along with its output conditions b4 and b'2. These two conditions are concurrent with b3. Condition b4 enables event e3, which is added to the queue. Conditions b3 and b'2 enable event e'2, which is also added to the queue. The addition of event e2 to the unfolding, along with output conditions b''2 and b5, produces similar results to the case of e1. Events e4 and e'1 are added to the queue. Event e3 is a cutoff event because its cut, consisting of conditions b'1, b'2 and b3, is homomorphic to the initial unfolding cut. Event e4 is similarly a cutoff event.
The cuts of events e'1 and e'2 in Figure 4 are homomorphic to each other. In this case, the ≺ relation determines which of the two events is a cutoff event. The cause, C2, of e'2 contains e'2 and e1. The cause, C1, of e'1 contains e'1 and e2. Therefore, in this case we have |C1| = |C2|. Also, we have φ(C1) = φ(C2) = (t1, t2). After sorting C1 and C2, we have that C2 ≺ C1 because e1 ≪ e2. Consequently, e'1 is the cutoff event with matching event e'2. The expansion of conditions in (e'2)• leads to the inclusion of event e'4 and its output condition, b''3. Given that e'4 is a cutoff event, the event queue is empty after e'4 is dequeued and unfolding construction is complete.
4.3 Stubborn Unfoldings
The stubborn unfolding of an ordinary Petri net is a subset of the full unfolding that we defined in the previous section. Although a stubborn unfolding is generally smaller than the full unfolding discussed earlier, it still enjoys the completeness property: the stubborn unfolding covers all fireable transitions in the original Petri net and has the same deadlock properties as the full unfolding. Therefore, it can be used to detect global deadlocks before generating a supervisory controller. We build stubborn unfoldings by running a partial state space exploration algorithm for the original Petri net during unfolding construction. In general, this partial state space is a version of the reduced state space in the stubborn set method. Every time a transition in the original net is explored, an occurrence of this transition is added to the stubborn unfolding. Unfolding exploration terminates when cutoff transitions are found.
are found.
46
Our algorithm for unfolding construction uses a stack of events to be added to the unfolding. Each stack item is a record containing the following triple:
1. A set of events E_i to be added to the unfolding.
2. A Petri net marking m_i.
3. An integer c.
Event set E_i and Petri net marking m_i have the following property. The set of Petri net transitions T_i mapped by E_i (i.e., T_i = h(E_i)) is enabled in marking m_i. Indeed, when the record containing T_i and m_i is pushed on the stack, T_i consists of the enabled transitions belonging to a given stubborn set for marking m_i. During the exploration of a stubborn unfolding, transitions are removed from the T_i set found on top of the stack, and corresponding events are added to the stubborn unfolding under construction. The stack is popped when the T_i set on top of the stack becomes empty. Integer c denotes the number of cutoff events that are currently in the stack. The significance of this value will be clear later on.
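In code, each stack record could be represented roughly as follows; this is a Python illustration with field names of our own choosing, not the data structure of the actual implementation.

    from dataclasses import dataclass

    @dataclass
    class StackRecord:
        """One item of the exploration stack used by StubbornSetUnfold()."""
        pending: set            # enabled transitions of a stubborn set for `marking`
        marking: frozenset      # Petri net marking in which those transitions are enabled
        cutoff_count: int = 0   # number of cutoff events currently on the stack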
An algorithm for unfolding construction appears in Figure 5. Function StubbornSetUnfold() takes as input a safe, ordinary Petri net N and produces a reduced unfolding U_S as well as an indication whether a global deadlock is possible in N. The algorithm uses the event stack that we introduced earlier. Initially, StubbornSetUnfold() initializes U_S to an empty unfolding and the stack to an empty stack. Next, the algorithm considers the initial marking, mu_0, of net N and assigns this marking to identifier current_marking. The set of initial conditions mu_min, with h(mu_min) = mu_0, is added to U_S.
StubbornSetUnfold(N) {
  U_S = emptyNet;
  stack = emptyStack;
  B = mu_min;
  current_marking = mu_0;
  stubborn_set = ComputeStubborn(current_marking);
  if (stubborn_set == emptyset) {
    ReportGlobalDeadlock();
    return U_S;
  }
  cutoff_count = 0;
  Push(Enabled(stubborn_set), current_marking, cutoff_count, stack);
  while (stack != emptyStack) {
    current_transition = SelectTransition(stack top);
    if (current_transition == no_transition) Pop(stack);
    else {
      Remove(current_transition, stack top);
      current_marking = SelectMarking(stack top);
      cutoff_count = SelectCount(stack top);
      InsertEvent(current_transition, U_S);
      if (IsCutoffTransition(current_transition)) {
        current_marking = UpdateSelfMatching(current_marking);
        stubborn_set = ComputeStubborn(current_marking);
        if (stubborn_set != emptyset) {
          Push(Enabled(stubborn_set), current_marking, cutoff_count + 1, stack);
        }
      }
      else {
        current_marking = PostMarking(current_transition, current_marking);
        stubborn_set = ComputeStubborn(current_marking);
        if (stubborn_set == emptyset) {
          if (cutoff_count == 0)
            ReportGlobalDeadlock();
          else
            continue;
        }
        else
          Push(Enabled(stubborn_set), current_marking, cutoff_count, stack);
      } // end else
    } // end if
  } // end while
  return U_S;
}
Figure 5: Algorithm for constructing stubborn unfoldings.
Function ComputeStubborn() computes a stubborn set of transitions for mu_0 according to the stubborn set definition in Section 4.1. When the current state admits multiple stubborn sets, function ComputeStubborn() uses a heuristic that prioritizes stubborn sets by two rules. The first rule prefers stubborn sets in which some input place of an enabled stubborn transition is explored for the first time. The second rule prefers stubborn sets containing a minimal number of enabled transitions. The stubborn set is assigned to identifier stubborn_set. If the stubborn set is empty, no transitions are enabled in mu_0, meaning that the state mu_0 is dead. In this case, a deadlock is reported and StubbornSetUnfold() returns immediately. Otherwise, the first record is pushed on the stack. This record consists of the initial net marking (i.e., current_marking), the enabled transitions contained in the computed stubborn set (i.e., Enabled(stubborn_set)), and a zero value for identifier cutoff_count.
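One possible way to realize this two-rule priority, assuming candidate stubborn sets have already been computed, is sketched below in Python; explored_places is a set of places seen so far, and the ranking function is our own illustration rather than the implemented heuristic.

    def pick_stubborn(candidates, marking, preset, explored_places):
        """Choose among candidate stubborn sets (each a set of transitions).
        Rule 1: prefer sets whose enabled transitions touch a not-yet-explored
        input place.  Rule 2: prefer sets with fewer enabled transitions."""
        def enabled_in(ts):
            return {t for t in ts if preset[t] <= marking}
        def rank(ts):
            en = enabled_in(ts)
            fresh = any(p not in explored_places for t in en for p in preset[t])
            return (0 if fresh else 1, len(en))   # smaller tuple = higher priority
        return min(candidates, key=rank)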
The purpose of identifier cutoff_count is to record the number of cutoff events being explored. When we add a cutoff event e_c to U_S, we conduct a limited unfolding exploration starting from the self-matching loose fringe conditions in Cut([e_c]). Initially, we set cutoff_count to zero. Whenever we add a cutoff event e_c to U_S, we look for events enabled by self-matching loose fringe conditions for e_c. If there are any such events, we increment cutoff_count and we push the corresponding record on the stack. When the goal of stubborn-unfolding construction is merely the detection of global deadlocks, we do not need to conduct the limited unfolding exploration after adding a cutoff event to the unfolding. For instance, our earlier report did not include this search (79). In this case, variable cutoff_count is
not needed. We subsequently added this capability because we extended analysis with stubborn unfoldings to the supervisory control of time Petri net models. (See, e.g., (80).) The limited search allows us to include behaviors parallel to and disjoint from the behaviors contained in the stubborn unfolding. We explain this concept later using Figure 7.
After pushing the initial record on the stack, the algorithm in Figure 5 begins its main loop. Each loop iteration performs the following actions. First, it checks the top record in the stack. If this record contains any enabled transitions, one such transition t_i is selected and removed from the record. Otherwise, the stack is popped and control returns to the beginning of the loop. If, however, a transition t_i was found, it is saved with identifier current_transition; the marking enabling t_i is saved with identifier current_marking; and identifier cutoff_count is assigned the corresponding value from the top item in the stack. Next, a new event e_i, with h(e_i) = t_i, is added to U_S along with e_i's output conditions. At this point the algorithm checks whether e_i is a cutoff event.
If e_i is a cutoff event, we must initiate a limited search starting from the self-matching, loose fringe conditions on the cut of e_i. Function UpdateSelfMatching() creates a new marking that assigns tokens only to places corresponding to these conditions. This marking is stored in current_marking. A stubborn set for this marking is computed. If the stubborn set is not empty, a new stack record is created, which consists of the computed stubborn set, current_marking, and the current value of cutoff_count incremented by one. Finally, the new record is pushed on the stack.
Figure 6: Stubborn unfolding of Petri net appearing in Figure 2.
If e_i is not a cutoff event, a new stack record is created. The marking of this new record is obtained from the marking enabling t_i, which is held in identifier current_marking, by applying the marking changes induced by the firing of t_i. Function PostMarking() removes a token from the places in •t_i and adds a token to the places in t_i•. The new marking is saved in current_marking. Function ComputeStubborn() computes a stubborn set for the new net marking. If this set is empty, no transitions are enabled in current_marking. If this is the case and if cutoff_count is zero, a global deadlock is detected and reported. If cutoff_count is greater than zero, no action is taken and control returns to the top of the loop. If a nonempty stubborn set was found, it is included in the new record with current_marking. This record is pushed on the stack. The value of cutoff_count in the new record is identical to the value currently held in the identifier.
The loop in Figure 5 is exited when the stack is empty. Stubborn unfolding U_S is returned.
Figure 6 shows a stubborn unfolding of the Petri net appearing in Figure 2. This unfolding is clearly smaller than the full unfolding shown in Figure 4. The algorithm in Figure 5 initializes the unfolding to contain conditions b1, b2, and b3, which function h maps to the initial places of the underlying Petri net. Next, a stubborn set is computed for the state in which p1, p2 and p3 are marked. This stubborn set contains both transitions t1 and t2. A record containing these two transitions, the initial Petri net marking, and a zero value for cutoff_count is pushed on the stack. Without loss of generality, suppose that the algorithm adds event e1 to the unfolding next. The output conditions of e1 (i.e., b4 and b'2) are added as well. The new Petri net marking assigns a token to places p2, p3 and p4. The singleton set containing t3 is a stubborn set for this marking. Thus, we push a record containing this transition and marking on the stack.
On the next iteration of the loop in Figure 5, we remove transition t3 from the top stack item and we add the corresponding event, e3, to the unfolding with its output condition b'1. This event is found to be a cutoff event, because its cut (consisting of conditions b'1, b'2 and b3) is homomorphic to the initial cut of the unfolding (i.e., b1, b2, and b3). Now we check self-matching loose fringe conditions in the cut of e3. There is one such condition, namely b3. When place p3 is marked, but no other place has a token, no transition is enabled in the Petri net underlying this unfolding. Therefore, we do not conduct a limited unfolding exploration in this case. Instead, we find that the top stack item has no transitions, and we pop the stack. The new stack top contains transition t2. Thus, we add event e2 along with output conditions b''2 and b5 to the unfolding. The new marking assigns a token to
p1, p2 and p5. An initial stubborn set for this marking contains the singleton transition t4. After we add the corresponding event e4 and its output condition b'3, we find that e4 is a cutoff event. This case is similar to the case of e3. Eventually, we find that the stack top has no transitions. Popping the stack results in an empty stack. The algorithm returns the unfolding shown in Figure 6. No global deadlock is detected.
The key advantage of the stubborn unfolding algorithm is that it recognizes situations in which the expansion of a set of concurrent conditions would simply replicate event sequences originating at preceding conditions. In contrast with the full unfolding shown in Figure 4, the reduced unfolding of Figure 6 does not expand events following conditions b'2 and b''2. The event sequences obtained by expanding either b'2 or b''2 would be sublists (modulo homomorphism) of the event sequences originating at condition b2, which matches both b'2 and b''2. Full unfolding construction requires the expansion of b'2 and b''2 in order to capture all the states contained in the original Petri net.
4.3.1 Algorithm Correctness
We will prove that all transitions fireable from the initial state of the original Petri net are included in the stubborn unfolding; this is also called the completeness property. This unfolding property is critical for our supervisory control method, because we need an unfolding to cover all the fireable transitions in the original net N, so that the latency algorithms can compute the time elapsed from each node to the target transition while following the cyclic behaviors. If any fireable transition is missing from our unfolding, the latency values may fail to capture the maximum delay before the target transition fires. Since latency algorithms
are linear in the size of unfoldings, we also aim at building smaller unfoldings to benefit the
latency algorithms. Lemma 2 will show that stubborn unfoldings have this property.
In addition, our stubborn unfoldings, like the stubborn set method, preserve the deadlock properties of the underlying Petri net N. This proof is tied to the state space exploration performed by stubborn sets (74; 76). This state space exploration is known to preserve the deadlock properties of the underlying net.
Our stubborn unfolding is not a simple combination of the stubborn set method and the unfolding method. Compared with stubborn sets, the stubborn unfolding conducts an additional computation called Limited Unfolding Exploration (LUE) in order to include the independent parallel subnets of a Petri net, such as disconnected net portions. Compared with the unfolding method, stubborn unfoldings remove the replicated portions of the full unfolding while keeping the completeness property of the unfolding net. Before proving the completeness property of stubborn unfoldings, we first define independent parallel subnets and then prove that these two special features of stubborn unfoldings hold; both are used in the proof of the completeness theorem.
Definition 18 (Independent parallel) We define two subnets R_i and R_j in a Petri net N to be independent parallel if the following two conditions hold:
1. All nodes in R_i are concurrent with nodes in R_j and vice versa.
2. The output nodes of any node in a subnet are also included in the same subnet.
We have the following two lemmas to prove.
Lemma 1 Limited unfolding exploration leads to the inclusion of the independent parallel subnets of Petri net N in its stubborn unfolding U_S.
Proof. In the stubborn unfolding construction, only two cases arise for independent parallel subnets. In the first case, Petri net N contains disconnected subnets (such as the example in Figure 3). We assume one subnet R_1 is disconnected from another subnet R_2. During unfolding construction, a cutoff event e_c is reached in one subnet, say R_1. We need to prove that the parallel portion R_2 will be included in the stubborn unfolding by the limited unfolding exploration. Assume R_2 originates at a set of initial places P_2. If any place of P_2 were in the cause of e_c, this would contradict the definition of independent parallel subnets for R_1 and R_2. So every place of P_2 must be excluded from the cause of e_c. By the definition of cut, the places in P_2 are all on the cut of e_c, and they are all self-matching. The algorithm in Figure 5 shows that the limited unfolding exploration keeps expanding the unfolding net based on the self-matching places. In the disconnected subnet R_2, the places of P_2 are all self-matching places for cutoff event e_c. Therefore subnet R_2 will be included in the stubborn unfolding when the limited unfolding exploration expands the unfolding from the self-matching places of P_2.
In the second case, we assume that there is a common predecessor event e_m leading to two subnets, R_1 and R_2, that are independent parallel to each other and whose nodes have no output nodes in the cause of e_m. We can easily see that R_1 and R_2 are effectively disconnected after the firing of e_m. If the stubborn unfolding first discovers a cutoff event e_c in subnet R_1, we need to show that R_2 is also included by the limited unfolding
Figure 7: Example illustrating partial unfolding construction.
exploration. We know that e_m is in the cause of e_c. Let p_1 be an output place of e_m in subnet R_1, and p_2 an output place of e_m in subnet R_2. The place p_2 must be a self-matching place for e_c, for the following two reasons. First, the place p_2 must be on the cut of e_c by the definition of cut: if p_2 were not on the cut of e_c, it would have to be in the cause of e_c, contradicting the fact that R_1 is concurrent with R_2. Second, p_2 must be self-matching: if it were not a self-matching place for e_c, there would have to be a matching place preceding e_m, contradicting the assumption that no output nodes of R_2 lie outside the subnet R_2, the second condition of the definition of independent parallel subnets. □
In summary, the limited unfolding exploration subsequent to the discovery of a cutoff event includes all independent parallel portions of a net in the stubborn unfolding. We have added this feature because it preserves the completeness of the unfolding. Furthermore, we plan to apply stubborn unfoldings to the verification of additional properties, such as partial deadlocks, in the near future.
This case is illustrated in Figure 7. The Petri net on the left-hand side has four places and four transitions. Suppose that events e1 and e3 are first added to the unfolding on the right-hand side. The event e3 is a cutoff event because its cut, consisting of conditions b'1 and b2, is homomorphic to the initial cut of the unfolding. If we did not conduct a limited unfolding exploration from self-matching fringe condition b2, events e2 and e4 would not be included in the unfolding. In this case, our algorithm would still correctly report that no global deadlock is possible. This is correct because the existence of a cutoff event shows that a net behavior can be repeated indefinitely.
The algorithm in Figure 5 will conduct a limited unfolding exploration from condition b2 after including event e3 and its output condition b'1 in the stubborn unfolding. This is so because b2 is a self-matching fringe condition for cutoff event e3. The limited exploration leads to the inclusion of events e2 and e4 in the stubborn unfolding.
Lemma 2 The exploration of the stubborn unfolding with the priority selection of a stubborn set T_s leads to an unfolding reduced from the full unfolding, and the removed portion is identical, modulo homomorphism, to an existing portion of the stubborn unfolding.
The function ComputeStubborn() in our algorithm first selects a stubborn set in which the enabled transitions contain an input place discovered for the first time. This priority rule causes a decrease in the unfolding size compared with the full unfolding. We define the portion of a full unfolding that is not contained in a stubborn unfolding as the unfolding redundancy.
Definition 19 (Unfolding redundancy) Unfolding redundancy is a portion of the full unfolding U originating only at the non-self-matching loose fringe condition set C' of a cutoff event e_c.
To prove that the unfolding redundancy is a replicated copy of an existing part of the unfolding and does not contain any new behavior, we need to distinguish two cases of expansion at the loose fringe conditions.
Assume that there is a cutoff event e' with its corresponding matching event e. C' is a loose fringe condition set for e', and C is the corresponding condition set matching C'. The expansion of the stubborn unfolding is subject to the first priority rule, which distinguishes the following two cases.
1. Keep expanding the successors of C' conditions when they merge with conditions concurrent with C'.
2. Stop expanding the successors of C' conditions when they are non-self-matching and they do not merge with conditions concurrent with C'.
In the first case, the new behavior (transition) v originates at a certain condition b concurrent with all loose fringe conditions in C'. Obviously, b must be in the cause of e'; otherwise b would be in C'. During the stubborn unfolding expansion, the output transitions of b are all in one stubborn set, including t_1 in the cause of e' and t_2 leading to v. The transition t_2 will be expanded after a stack pop, by the definition of the stubborn set method.
Figure 8: Four buffer example in (21)
The behavior v will consequently be expanded. In this case, the behavior v does not belong to the unfolding redundancy, and the stubborn unfolding includes v.
In the second case, C' is non-self-matching and does not merge with any condition b in the cause of e'. Then the behaviors originating only at C' are identical to the behaviors originating at the matching conditions in C. Whatever behaviors would be expanded from C', the same behaviors were already expanded from C, modulo homomorphism. This is the portion we have defined as unfolding redundancy. The priority rule ignores it if the sibling conditions of C' are first occurrences during the stubborn unfolding expansion.
To illustrate this lemma with a picture, we show a Petri net example from Esparza et al. (28) in Figure 8, and we compare the stubborn unfolding with the full unfolding in Figure 9. Above the dashed line is the stubborn unfolding, and below the dashed line is the unfolding redundancy contained in the full unfolding. This unfolding redundancy originates at conditions b'7, b'5 and b'3, which are all loose fringe conditions for the cutoff event e4. When the stubborn unfolding algorithm reaches the state {b'7, b6},
Figure 9: Unfolding comparison for the four-buffer example in Figure 8.
the priority rule will select e3 to expand instead of e'5. This is so because condition b6, the input of event e3, is a first occurrence, and condition b'7 is not.
Theorem 1 Every reachable transition in Petri net N is included in the stubborn unfolding U_S of N.
Proof. A stubborn unfolding terminates a branch expansion when a cutoff event is included, which is the same as for the full unfolding. So we have the following three facts.
1. The transitions in the cause of a cutoff event must be included in the stubborn unfolding.
2. Lemma 1 ensures that the stubborn unfolding always includes the independent parallel subnets.
3. Lemma 2 guarantees that the removed portion, compared with the full unfolding, is replicated from an existing part of the unfolding.
By contradiction, assume that there is a transition t reachable in Petri net N but not included in the stubborn unfolding U_S. We have the following three cases.
First, if an occurrence of t is a predecessor of a cutoff event e_c, then t must be in the stubborn unfolding. The first fact ensures this point.
Second, consider the case in which an occurrence of t is a successor of a loose fringe place p_i of a cutoff event e_c. If p_i is self-matching, Lemma 1 ensures that t will be included by the limited unfolding exploration. If p_i is non-self-matching, Lemma 2 ensures that there is another occurrence of t already included in the unfolding. Both cases contradict the assumption that t is not in the unfolding.
Third, consider the case in which an occurrence of t is a successor of a cutoff event e_c in the unfolding. There must be a set of places P_i leading to t, where P_i is a subset of the cut of e_c. Because e_c is a cutoff event, there is a matching event e in the unfolding, and P_j is a
subset of the cut of e, where the places in P_j match the places in P_i. Transition t can be reached from P_j in Petri net N, because the places in P_j are homomorphically the same as P_i. Originating at P_j, if an occurrence of t is in the cause of e_c, then it must be included, based on the first case. If there is no occurrence of t in the cause of e_c, an occurrence of t must be in conflict with e_c. There must be an event e_i in the cause of e_c, and an event e_j in the cause of the occurrence of t, where e_i shares a common input place p_l with e_j. Assume that a transition t_j on the path from P_j to t is the earliest transition not included in the unfolding. Transition t_j cannot precede e_j, because all predecessors of e_j are included in the unfolding. Transition t_j cannot be e_j homomorphically, because when the stubborn unfolding algorithm includes e_i, all the conflicting transitions, including e_j, are in a stubborn set, so e_j must also be included in the unfolding. Therefore, t_j can only follow e_j. We know that the only reason for terminating the unfolding on that path is that a cutoff event e_k is met on the path from P_j to t before reaching t_j. Then we obtain the same statement for t_j and e_k as for t and e_c, meaning that an occurrence of t_j is a successor of a cutoff event e_k. If this assumption holds, we keep finding such cutoff events e_c, e_k, and so on. Our Petri net N is finite, and the stubborn unfolding does not include transitions following cutoff events, so the number of cutoff events included in the stubborn unfolding U_S is finite. Because unfoldings are finite, this process must terminate at a cutoff transition. As a result, there must be an occurrence of t_j included in the unfolding.
Thus, any transition t reachable in Petri net N has an occurrence included in the stubborn unfolding U_S. □
problem     PN |P|   PN |T|   U |B|    U |E|   U_S |B|   U_S |E|
buffer 4      -        -       20       -        10        12
buffer n      -        -     O(n^2)     -        3n       n+1

Table I: Comparison on the buffer examples.
4.3.2 Complexity
The complexity of the algorithm in Figure 5 is dominated by the size of the resulting
stubborn unfolding. The addition of a new event to an unfolding is relatively straightforward. The first two actions are the removal of a transition t from the algorithm's stack, and
the insertion of the corresponding event with its output conditions to the unfolding. Both
actions take constant time. Next, the algorithm checks whether the newly added event is a cutoff
event. This check, performed by the function IsCutoffTransition(), can be done efficiently
by storing cuts of unfolding events in a hash table. Finally, the algorithm must compute
a new stubborn set for the state obtained by firing t. The cost of this step depends on the degree of interleaving in the net, which also affects stack growth. Since our stubborn unfolding
mimics the reduced space generation by the stubborn set method, the stack expansion is
substantially shrunk from the full state space exploration. Valmari has also defined various
efficient ways of computing stubborn sets for a given net state (74; 75; 76).
One disadvantage of our algorithm for the construction of stubborn unfoldings, compared
with the construction of the full unfolding, is that it sometimes attempts to insert the same
event multiple times in the unfolding under construction. Function InsertEvent() must therefore check whether an event (corresponding to a transition on the stack) has already been included in the unfolding. This is because different states can enable the same transition, which is then included in multiple stubborn sets. We plan to investigate ways in which this check can be simplified. However, when the full unfolding contains unfolding redundancy, our stubborn unfolding tends to be much smaller and faster to build. In Table I, we compare the buffer examples from (28). The stubborn unfolding shows linear growth for n-buffer models, while full unfoldings grow as O(n^2).
In general, the efficiency of the algorithm in Figure 5 is bounded by the number of events
to be inserted in the stubborn unfolding. We report empirical data of the unfolding size in
the next section.
4.4 Empirical Results for Stubborn Unfolding
We implemented the algorithm in Figure 5 and we conducted extensive empirical studies
with various well-known benchmarks of concurrent systems. Our implementation is based
on the PEP V2.0 model checker (59). We modified the core of PEP that performs event
insertion in the unfolding in order to implement our algorithm.
Our empirical studies are based on some of Corbett's benchmarks (24) of concurrent
systems including the dining philosopher example, the readers and writers example, the gas
station example, the token ring example, the elevator example, the furnace example, and a
highway overtaking protocol. For each example, we built both the full unfolding using the
problem      PN P(T)     U B(E)       U_S B(E)
dpd3         12(9)       33(15)       21(9)
dpd5         20(15)      95(45)       35(15)
dpd7         28(21)      189(91)      49(21)
dpn3         12(10)      39(16)       27(10)
dpn5         20(16)      105(46)      45(16)
dpn7         28(22)      203(92)      63(22)
ring3        39(33)      97(47)       77(37)
ring5        65(55)      339(167)     167(81)
ring7        91(77)      813(403)     295(144)
gasq1        28(21)      43(21)       43(21)
gasq2        78(97)      346(173)     340(170)
rw6          33(85)      806(397)     806(397)
over2        33(32)      83(41)       59(29)
over3        52(53)      369(187)     224(112)
over4        71(74)      -            712(362)
furnace1     27(37)      535(326)     404(235)
elevator1    62(99)      296(157)     292(155)
elevator2    146(299)    1562(827)    1556(824)

Table II: Data for stubborn unfoldings.
algorithm of Esparza et al. (28) and our stubborn unfoldings using the algorithm in Figure
5. A description of these examples can be found elsewhere (24).
Empirical results that we collected with and without limited unfolding exploration after
a cutoff event show that this feature rarely results in larger stubborn unfoldings. Larger
unfoldings will occur only when the original Petri net has indefinitely repeating, independent
behaviors, an unlikely occurrence. For this reason, here we only report data obtained with
the additional search.
Table II reports the empirical data that we obtained. The first column on the left
reports an abbreviated name of each example we considered. The second column reports the
number of places and transitions (in parenthesis) in the original Petri net of each example.
The third column reports the number of conditions and events in the full unfolding, as
reported by PEP. Finally, the fourth column reports the number of conditions and events
in our stubborn unfoldings. These data were collected on a Linux server with 4 Pentium 4
CPUs running at 3.0 GHz with a memory size of 2.6 GBytes.
The first three rows in Table II report data for the basic version of the dining philosophers
example with a deadlock. Data for 3, 5, and 7 philosophers are reported. In all three
cases, our algorithm correctly reports the possibility of a global deadlock. These data show
substantial reductions in the size of stubborn unfoldings with respect to full unfoldings. In
particular, the size of a stubborn unfolding is linear in the number of philosophers in the
example. The performance of stubborn unfoldings in this case also exceeds that of Valmari's
stubborn set method, which results in a quadratic growth in state space size (26).
Rows 4 through 6 in Table II report data for a version of the dining philosophers example
in which deadlock is removed by adding a new transition that allows philosophers to put
down their forks when each philosopher holds a fork. As with the previous version, the
number of events in our stubborn unfolding is linear in the number of philosophers. Rows
7 through 9 in Table II report data for a token ring example with 3, 5 and 7 nodes. The
data again shows substantial reductions in the size of stubborn unfoldings with respect to
full unfoldings. No global deadlocks are reported.
Rows 10 and 11 in Table II report data for versions of the gas station example with 1
customer and 2 customers. In this case, our algorithm fails to achieve meaningful reductions
in unfolding size. No reductions in unfolding size are achieved for a version of the readers
and writers example shown in Row 12.
Rows 13 through 15 in Table II report data for the highway overtaking protocol with
2, 3, and 4 cars. Stubborn unfoldings are smaller than full unfoldings in this case. In
particular, we could not complete the construction of the full unfolding for the version
with 4 cars because PEP ran out of memory. Our stubborn unfolding has 362 events
and 712 conditions. No deadlock is reported. Finally, Rows 16 through 18 report data
for the furnace and the elevator example. No significant reductions in unfolding size are
accomplished. Both versions of the elevator example contain a global deadlock, which is
correctly reported by our algorithm.
On the whole, the empirical data that we collected shows mixed results for stubborn
unfoldings. Some examples show significant reductions in unfolding size, whereas other
examples show small reductions or no reductions at all.
CHAPTER 5
TRANSITION LATENCIES
The latency l(t) of a transition t is the time at which the transition must be disabled, relative to the expiration of the deadline on target transition t_d, in order for t_d to fire by the deadline. We compute l(t) in two stages. First, the Atlantis algorithm computes a so-called basic latency b(t) for each transition in the unfolding U of the untimed net N underlying time Petri net N_T. Next, the Discovery algorithm computes the full latency l(t). Both algorithms compute delays that tokens may incur while traveling from arbitrary places in N_T to places in •t_d. The basic latency does not take into account synchronization delays. A synchronization delay occurs when a token has reached a transition's input place, but the transition is not enabled because another input place is not marked. Thus, the transition's clock is not running in this case.
5.1 Atlantis algorithm
The goal of Atlantis is to compute the basic latencies of nodes in the untimed unfolding of time Petri net N_T. The algorithm's input is given by the unfolding U = (P_U, T_U, F_U, h, μ_min) of the ordinary net N underlying N_T, by target transition t_d and its occurrences in U, and by the set of matching places produced during unfolding construction. The output of Atlantis consists of the set of basic latencies of nodes in P_U ∪ T_U, and additional information needed by Discovery.
Atlantis uses its first input, the unfolding, to identify paths that tokens can follow in order to enable the target transition. The second input, the target transition and its occurrences in the unfolding, determines the end points of these paths. The third input, pairs of matching places, allows Atlantis to identify recurring net behaviors, that is, transition firing sequences leading the net from a given state back to the same state. The ability to bring the net back to a previous state is sometimes necessary before the target transition can be fired. Consider the time Petri net and its unfolding in Figure 14. Suppose that transitions t1 and t2 are fired from the initial state. Also, suppose that transition t7 fires next, preventing t6 from firing. Now the net must return to a previous state in order for t6 to be able to fire. The firing of transitions t3, t8, and t11 will in fact bring the net back to a state from which t6 can be forced to fire (e.g., by disabling t7).
The unfolding captures the return of the net to a previous state through cutoff transitions. As we saw in Section 4.2, the cut of a cutoff transition contains the same places, modulo homomorphism, as a preceding transition's cut or the initial cut of the unfolding. When working with an unfolding, we glean future net behaviors after the firing of a cutoff transition by "jumping back" tokens to the places matching the places in the cut of that cutoff transition. Looking at the unfolding in the right-hand side of Figure 14, the cut of cutoff transition e11 contains places b1, b'2, and b'3. This cut is homomorphic to the initial cut: b1, b2, b3. After the transitions in e11's cause are fired, the net underlying the unfolding has the same marking as the initial net state. Therefore, after jumping tokens from unfolding places b'2 and b'3 back to b2 and b3 the unfolding will still capture the state of the underlying
Petri net after net transitions homomorphic to transitions in [e11] are fired. Atlantis uses the concept that tokens can be transferred in zero time from fringe places (e.g., b'2 and b'3) back to their matching places (e.g., b2 and b3) along paths from arbitrary unfolding nodes to target transition occurrences.
Atlantis computes the basic latencies of unfolding nodes by searching backwards from
target transition occurrences in an unfolding. In brief, the length (in terms of transition delays) of paths leading to target transition occurrences is accumulated for each node
encountered along such paths, until the initial unfolding places are found. We call this
process of searching backward in an unfolding a sweep. Places that were discovered during
this backward search and that match yet-unprocessed fringe places (i.e., places on the cut
of cutoff transitions) are recorded. These fringe places will be used as starting places for
the next sweep. The algorithm ends when a sweep does not discover any places matching
yet-undiscovered fringe places.
The node sets considered in each sweep are disjoint. All nodes discovered during a sweep
were not discovered during previous sweeps. Also, each sweep settles the basic latency value
of each node discovered during that sweep. Each sweep consists of two phases. First, we
discover and mark the nodes whose basic latency will be computed during the sweep. Next,
we compute the latency values for marked nodes. We use two phases because this makes
the run-time of Atlantis linear in the size of the unfolding.
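To convey the flavor of a single sweep, the Python sketch below computes, for every node of an acyclic unfolding, the maximum accumulated lft along directed paths to a set of target occurrences. It ignores the jump-back step between sweeps and all synchronization issues, so it is only a simplified illustration of the per-sweep computation under assumed data structures, not the Atlantis algorithm itself.

    import functools

    def sweep_latencies(succ, lft, targets):
        """For each node of an acyclic unfolding, the maximum sum of static
        latest firing times (lft) along directed paths to a target occurrence.
        Conditions and target occurrences contribute 0; nodes that cannot
        reach any target get None."""
        @functools.lru_cache(maxsize=None)
        def latency(node):
            if node in targets:
                return 0
            reachable = [latency(s) for s in succ.get(node, ())]
            reachable = [r for r in reachable if r is not None]
            if not reachable:
                return None
            return max(reachable) + lft.get(node, 0)   # 0 for conditions
        return {n: latency(n) for n in succ}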
Before we explain Atlantis in detail, we must define formally the concept of basic latency.
To this end, we require the following definitions. The full cause of an unfolding node, the
precedes-by-match relation, and extended-precedes relation are instrumental in defining the
concept of extended path. An extended path is essentially a directed unfolding path in which
it is possible to "jump" from a fringe place to a matching place in an effort to reach a target
transition occurrence.
Let U = (P_U, T_U, F_U, μ_min, h) be the unfolding of the untimed Petri net N underlying N_T. Let t_d denote the target transition representing the system event for which we want to enforce periodic deadline Δ, with Δ ∈ N+.
Definition 20 (Full cause of a node) Let u ∈ P_U ∪ T_U be a node in unfolding U. The full cause of u is the set of nodes preceding or equal to u: [[u]] = {u' ∈ P_U ∪ T_U | u' ≤ u}. In a similar manner, we define the full cause [[W]] of a set of nodes W to be the union of the full causes of all the nodes in W: ∪_{w∈W} [[w]].
The full cause of an unfolding node u contains both the transitions and places preceding u, whereas the cause of a transition t contains only transitions. For instance, in Figure 14 [[b10]] contains nodes b10, e6, b5, b6, e1, e2, b1, and b2. The full cause of an unfolding node generalizes the cause of a transition, which contains only transitions.
Definition 21 (Precedes-by-match) Let w ∈ P_U ∪ T_U be a node in unfolding U and
let b, b' ∈ P_U be places such that b' is a fringe place, b' ≈ b, and b ∈ [[w]]. For all w' ∈ P_U ∪ T_U
such that w' ∈ [[b']] − [[w]], we say that w' precedes-by-match w, and we denote this
by w' <_match w. In a similar manner, we say that w' precedes-by-match a set of nodes
N ⊆ P_U ∪ T_U if w' <_match w, ∀w ∈ N.
Figure 10: Illustration of the precedes-by-match relation: w' <_match w. The dotted lines
leading from node w' to place b' and from place b to node w signify plain unfolding paths
between those nodes. The line from b' to b signifies that a token may be returned from a
fringe place back to a matching place. We say that w' extended-precedes w.
The precedes-by-match relation can be informally explained using the following logic. If
a node w', with w' ∉ [[w]], precedes a fringe place b' that matches place b, and place b precedes
node w, then w' precedes-by-match w: w' ≤ b', b' ≈ b, b ≤ w implies w' <_match w. For
instance, in Figure 14 we have that b9 <_match e6 because b9 ≤ b'2, b'2 ≈ b2, and b2 ≤ e6.
Definition 22 (Extended-precedes) We define the extended-precedes relation to be the
transitive closure of the precedes-by-match relation. When node w' extended-precedes node
w, we write w' ≪ w. We say that w' extended-precedes a set of nodes N ⊆ P_U ∪ T_U if
w' ≪ w, ∀w ∈ N.
Figure 10 illustrates the precedes-by-match and the extended-precedes concepts. The
two horizontal lines, at the top and at the bottom of the picture, suggest the set of initial
places and the terminal places of unfolding U. Assume that nodes w' and b' are in sweep i,
whereas w and b are in sweep i − 1. Assume further that b and b' are matching places. If
we have that w' ≤ b' and b ≤ w, we say that w' ≪ w to signify that a token can move from
w' to w using the "jump-back" mechanism in addition to plain unfolding paths.
In Figure 14 we have that e9 ≪ e6 because e9 <_match e5 (by way of matching places b'4
and b4) and e5 <_match e6 (by way of matching places b'2 and b2).
Definition 23 (Extended path) We also say that an extended path σ_w exists between
two nodes w and w' if w and w' are in an extended-precedes relation and the following two
conditions hold:
1. σ_w contains at most one occurrence of each node in unfolding U.
2. Each node n in σ_w is followed either by a node discovered in the same Atlantis sweep
as n or in the previous sweep.
The second condition relies on the notion of an Atlantis sweep, which we explain in
the sequel. In brief, this condition stipulates that an extended path should be aimed at
reaching occurrences of t_d, possibly by connecting fringe places to their matching places.
The reasons for this condition will be clear later on. In Figure 14 the following is an extended
path between nodes e4 and e6: e4, b8, e9, b'4, b4, e5, b9, e10, b'1, b1, e1, b5, and e6.
Definition 24 (Basic latency) Let u be a node in unfolding U. We define the basic
latency b(u) of u to be a function mapping u into a nonnegative integer as follows:

b : P_U ∪ T_U → N,   b(u) = max { Σ_{e ∈ σ_u − {u}} lft(e) | σ_u = ⟨u, ..., e_di⟩ and e_di ∈ T_D }   (5.1)
In the above, σ_u denotes an extended path starting with node u and ending with some
occurrence e_di in U. Given a transition e, lft(e) denotes the static latest firing time of the net
transition t that corresponds to e (i.e., we have h(e) = t). Set T_D denotes all occurrences of the
target transition in U:

T_D = {e_di ∈ T_U | h(e_di) = t_d}

Thus, we define the basic latency of a given node u to be the maximum length of all
extended paths leading from u to an occurrence of t_d in U. In addition, we define the length
of one such path σ_u to be the sum of the latest firing times of all transitions in σ_u − {u},
the sequence obtained by removing u from σ_u.
If the first node u of σ_u is a transition, we deliberately exclude the latest firing time of
u from the computation of the length of σ_u. The static delay of the first node (if it is a
transition) is irrelevant to the definition of control supervisors.
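A small sketch of the path-length computation in Definition 24: only the latest firing times of transitions in the path contribute, and the first node of the path is excluded. The lft values and node names in the example are hypothetical and serve only to illustrate the rule; this is not the thesis implementation.

import java.util.*;

class PathLength {
    // Length of an extended path: sum of lft over all nodes after the first one; places do not
    // appear in the lft map and therefore contribute nothing.
    static int length(List<String> path, Map<String, Integer> lft) {
        int sum = 0;
        for (int i = 1; i < path.size(); i++) {
            sum += lft.getOrDefault(path.get(i), 0);
        }
        return sum;
    }

    public static void main(String[] args) {
        // Hypothetical path t_a, p, t_b, q, t_d with hypothetical latest firing times.
        Map<String, Integer> lft = Map.of("t_a", 4, "t_b", 2, "t_d", 3);
        System.out.println(length(List.of("t_a", "p", "t_b", "q", "t_d"), lft)); // prints 5: t_a excluded
    }
}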
Figure 11 shows pseudocode for Atlantis. We use a color scheme to capture the
status of each unfolding node while Atlantis is executing. White nodes have not been
processed; gray nodes are processed in the current sweep; and black nodes were processed
in previous sweeps. Initially, Atlantis sets the color of each unfolding node to white and
its basic latency to zero. Next, Atlantis stores unfolding events e_di that are mapped to the
target net transition t_d in variable start_sweep_nodes. The ensuing loop performs a sweep
starting from all nodes in this variable. Functions do_phase_1 and do_phase_2 perform
the first and second phase of each sweep. Function update_start_nodes computes the set
of starting nodes for the following sweep. When this set is empty, the loop is exited and
Atlantis terminates.
Function do_phase_1 searches for predecessors of starting sweep nodes whose color is
white. Each such node that is discovered while searching backwards is colored gray. The
search ends when either an initial unfolding place is found (which will have no predecessors),
or a black node (processed in a previous sweep) is found. Initial unfolding nodes discovered
in this sweep, and nodes whose direct predecessor is in the previous sweep, are called root
nodes for this sweep. When do_phase_1 returns, all nodes whose basic latency will be settled
in the current sweep are gray.
Before executing the second phase of a sweep, Atlantis invokes function check_join, whose
main purpose is to check that the merge exclusion assumption, which is defined below, is
respected. The code of this function, which is not shown in Figure 11, performs a forward
sweep from the root nodes of the sweep, as we explain below.
Function do_phase_2 considers all gray nodes, beginning from the starting nodes of the
sweep. The predecessors of each node n1 are considered. For each predecessor n2 of n1, the
arc from n2 to n1 is marked to indicate that this arc has been used by Atlantis. If the color
of n2 is gray, its basic latency is updated as follows. If n2 is a transition (in which case
n1 is a place), the basic latency of n2 is set temporarily to the greatest value between n2's
current latency and n1's basic latency. The current latency of n2 could be greater than n1's
value because n2 could have multiple successor nodes in the current sweep. Consequently,
n2's latency could have been set while following backward an arc to a successor of n2 other than n1.
function Atlantis(unfolding, t_d, matching_nodes) {
  for all nodes n in unfolding {
    color(n) = white ;
    basic_latency(n) = 0 ; }
  start_sweep_nodes = target_transition_images(unfolding, t_d) ;
  while (start_sweep_nodes != empty_set) do {
    do_phase_1(start_sweep_nodes) ;
    check_join(outOfSweepNodes) ;
    do_phase_2(start_sweep_nodes) ;
    start_sweep_nodes = update_start_nodes(start_sweep_nodes, matching_nodes) ; } }

function do_phase_1(start_sweep_nodes) {
  open_nodes = start_sweep_nodes ;
  while (open_nodes != empty_set) do {
    remove(n1, open_nodes) ;
    for each node n2 in predecessors(n1) {
      if (color(n2) == white) { color(n2) = gray ; add(n2, open_nodes) ; } } } }

function do_phase_2(start_sweep_nodes) {
  open_nodes = start_sweep_nodes ;
  while (open_nodes != empty_set) do {
    remove(n1, open_nodes) ;
    for each node n2 in predecessors(n1) {
      mark_arc(n2, n1) ;
      if (color(n2) == gray) {
        if (is_place(n2))
          basic_latency(n2) = max(basic_latency(n2), basic_latency(n1) + lft(n1)) ;
        else
          basic_latency(n2) = max(basic_latency(n2), basic_latency(n1)) ; }
      if (visit_complete(n2)) add(n2, open_nodes) ;
    } } }

function visit_complete(n2) {
  for each node n1 in successors(n2) {
    if (!is_marked_arc(n2, n1) && color(n1) == gray) return false ; }
  return true ;
}

function update_start_nodes(start_sweep_nodes, matching_nodes) {
  new_start_nodes = empty_set ;
  for each node n in start_sweep_nodes {
    color(n) = black ; }
  for each node n1 with color(n1) == gray {
    if (is_place(n1))
      for each node n2 in matching_nodes(n1) {
        if (color(n2) == white) add(n2, new_start_nodes) ;
      }
    color(n1) = black ; }
  for each node n in new_start_nodes {
    basic_latency(n) = max(basic_latency(matching_nodes(n))) ; }
  return new_start_nodes ;
}
Figure 11: Pseudocode of Atlantis algorithm.
Consider, for instance, the unfolding in Figure 14 and assume that e10 is a target transition
occurrence. While searching backward from e10, we would discover e1 both from b4 and b5.
The basic latency of e1 is the longest of the two backward paths from e10 to e1.
If n2 is a place (in which case n1 is a transition), the basic latency of n2 is set temporarily to the greatest value between n2's current latency and the sum of n1's basic latency and
n1's static latest firing time. The static delay of n1 is added to n1's basic latency because
this transition will delay the movement of tokens from input places (e.g., n2) to its output
places.
Function do_phase_2 invokes function visit_complete, which determines when the temporary latency value assigned to n2 is final. This will happen if all n2's out arcs to nodes
considered in this sweep are marked. In this case, n2's predecessors can be explored, if
there are any.
Finally, function update_start_nodes checks whether any of the (gray) unfolding places
considered during this sweep has a (white) matching place that Atlantis has not processed
yet. All such places, if there are any, are stored in local variable new_start_nodes, which is
returned. The basic latency of each such place p is set to the highest basic latency of all
places p_i matching p.
Consider the TPN in Figure 14 and assume that target transition t6 must be fired every
50 time units. In this case, we start the first sweep from unfolding transition e6, the only t6
occurrence in the unfolding on the right. During Phase 1 of the sweep, we gray unfolding
nodes whose basic latency will be settled in this sweep: e6, b5, e1, b1, b6, e2, and b2. During
Phase 2, while exploring back from e6, we first discover places b5 and b6. The basic latency
of these places is set to 3, the sum of e6's latency (zero) and latest firing time (three).
Exploring back from place b5, we discover transition e1, whose latency is also set to 3.
Although this transition has an additional out arc (to place b4), this latency value is final
because b4 is white, meaning it is not discovered in this sweep. Exploring back from e1, we
find place b1, whose latency value is set to four, the sum of e1's latency and latest firing
time. In a similar way, while exploring back from b6, we set e2's latency to 3 and b2's latency
to 5.
After Phase 2 of Sweep 1 is complete, function update_start_nodes will compute the
starting nodes for Sweep 2. Places b1 and b2, discovered in the first sweep, are matched by
white places b'1, b'2, and b''2, which become the starting places of the second sweep. During
the latter sweep, the following basic latencies are assigned: b'1 = 4, b'2 = 5, e10 = 5, b9 = 9,
b10 = 9, e5 = 9, b4 = 16, b''2 = 5, e11 = 5, b11 = 13, b12 = 13, e7 = 13, e8 = 13, b7 = 22,
e3 = 22, and b3 = 30. Places b3 and b4 are matched by white places b'3 and b'4. A third
sweep is necessary, which assigns the following latency values: b'3 = 30, b'4 = 16, e9 = 16,
b8 = 20, and e4 = 20. No additional sweeps are needed.
Before we conduct sweep i + 1, with i ≥ 2, we must first check whether the control
problem under consideration satisfies the following merge exclusion assumption. In the
following we use the term out-of-sweep node for sweep S_{i−1} to denote a node u discovered
in sweep S_i such that at least one node in •u was discovered in the previous sweep S_{i−1}.
Definition 25 (Merge Exclusion Assumption (MEA)) Consider unfolding paths originating at out-of-sweep nodes u1, ..., un for sweep S_{i−1}, such that at least one of the nodes
u1, ..., un is a transition. All such paths must satisfy two conditions:
1. The paths do not intersect each other in sweep S_i.
2. The paths do not intersect conflicting paths contained in sweep S_i. (Two unfolding
paths conflict if each contains at least one node in conflict with a node contained in
the other path.)
For more details about the MEA, Section 6.3 categorizes the valid cases of the MEA and
three kinds of MEA violations.
If the assumption is not met, our framework for enforcing deadlines in time Petri nets
may not preserve net liveness. However, the Challenger algorithm in Section 6.3 addresses one
of the three kinds of MEA violations. In the other two cases of MEA violation, our analysis is
inconclusive and we abandon the algorithm for supervisor generation. We assume also that
a controlled net does not allow an infinite behavior s in which a (non-target) transition t_i is
fired infinitely often, such that t_i is matched by an out-of-sweep transition e_i for sweep S_i,
subject to the following livelock condition. Sweep S_i contains a transition e_j that shares
an input place with e_i, such that t_j, the net transition homomorphic to e_j, is not enabled
infinitely often in s.
The code for checking the merge exclusion assumption, which is performed by function
check_join between the two phases of each Atlantis sweep, uses a kind of breadth-first
forward search from the root nodes of each sweep. As we noted above, the root nodes of
sweep i are the initial unfolding places discovered in sweep i and the out-of-sweep nodes for
sweep i − 1 (contained in sweep i). We do not check whether the controlled net is level-4
live and free of the livelock condition above.
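Since the code of check_join is not reproduced in Figure 11, the following Java sketch illustrates, under assumed names and data structures, the forward breadth-first search described above: every node of the current sweep records the root nodes from which it is reachable, and nodes reachable from two or more out-of-sweep transitions are reported as potential violations of the merge exclusion assumption (case (a) of Section 6.3). This is an illustration, not the thesis code.

import java.util.*;

class CheckJoin {
    static Set<String> findMergeViolations(Set<String> rootNodes,
                                           Set<String> outOfSweepTransitions,
                                           Map<String, Set<String>> successors,
                                           Set<String> sweepNodes) {
        Map<String, Set<String>> reachedFrom = new HashMap<>();
        for (String root : rootNodes) {
            Deque<String> open = new ArrayDeque<>(List.of(root));
            Set<String> seen = new HashSet<>(List.of(root));
            while (!open.isEmpty()) {
                String n = open.poll();
                reachedFrom.computeIfAbsent(n, k -> new HashSet<>()).add(root);
                for (String s : successors.getOrDefault(n, Set.of())) {
                    if (sweepNodes.contains(s) && seen.add(s)) {  // stay within the current sweep
                        open.add(s);
                    }
                }
            }
        }
        // A node reached from two or more out-of-sweep transitions signals a potential MEA violation.
        Set<String> violations = new TreeSet<>();
        for (Map.Entry<String, Set<String>> e : reachedFrom.entrySet()) {
            long merged = e.getValue().stream().filter(outOfSweepTransitions::contains).count();
            if (merged >= 2) violations.add(e.getKey());
        }
        return violations;
    }
}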
5.1.1 Correctness of Atlantis
To prove the correctness of Atlantis, we must show that the basic latencies associated
with each node u ∈ P_U ∪ T_U match the definition (Equation 5.1) above. The following theorems state that Atlantis always terminates and computes correct basic latency values (49).
Our proof of algorithm correctness follows the steps below:
1. If the set of nodes finished after the k-th sweep, ∪_{i=1..k} Φ_i, is backwards closed, then the
union of the restriction of sweep k + 1 with the finished nodes, R_{k+1} ∪ (∪_{i=1..k} Φ_i), is also
backwards closed.
2. Φ_k = R_k.
3. ∪_{i=1..k} Φ_i is backwards closed.
4. The set of finished nodes F is always T_D closed.
5. The final latency assigned to each node is a correct unfolding latency, according to
definition (Equation 5.1) above.
6. Every unfolding node is assigned a final latency during the Atlantis algorithm execution.
7. The Atlantis algorithm always terminates.
Theorem 2 If ∪_{i=1..k} Φ_i is backwards closed, then R_{k+1} ∪ (∪_{i=1..k} Φ_i) is backwards closed (49).
Proof: The breadth-first search computeRestriction method starts from set Σ_{k+1} and
follows backwards every reachable arc. The procedure is discontinued only when:
1. An initial node u ∈ B_min is reached, or
2. A finished node u ∈ ∪_{i=1..k} Φ_i is reached.
Since ∪_{i=1..k} Φ_i is backwards closed by hypothesis, it is easy to see that R_{k+1} = [[Σ_{k+1}]] −
∪_{i=1..k} Φ_i. Then, the union of the restriction with the finished nodes is R_{k+1} ∪ (∪_{i=1..k} Φ_i) =
([[Σ_{k+1}]] − ∪_{i=1..k} Φ_i) ∪ (∪_{i=1..k} Φ_i) = [[Σ_{k+1}]] ∪ (∪_{i=1..k} Φ_i). Since both [[Σ_{k+1}]] and ∪_{i=1..k} Φ_i are
backwards closed, their union also is backwards closed.
Theorem 3 Φ_k = R_k (the restriction of a sweep and the nodes finished in the same sweep
are equal) (49).
Proof: Let us assume, without loss of generality, that Φ_k ≠ R_k, but that Φ_i = R_i for 1 ≤
i ≤ k − 1. Thus, the restriction and the set of final nodes were equal in every sweep before
the k-th sweep. Recall that Φ_{k−1} is the set of final nodes discovered in the previous sweep
and Σ_k is computed as follows: Σ_k = {p ∈ P_U | ∃p' ∈ Φ_{k−1} : p ≈ p'}.
The recursive procedure computePrefixNodeLatency is called once for every node in Σ_k
to compute the latencies of the nodes of the unfolding. Thus, every starting node is assigned
a latency value (at least a temporary latency).
Since Φ_k ≠ R_k at the end of the k-th sweep, an unfinished node x ∈ R_k exists, having
unprocessed restriction arcs. Therefore, a node y exists such that the arc (x, y) was not
processed during Phase 2 of sweep k. Consequently, node y was also not discovered during
Phase 1 of the sweep. Similarly, at least one successor of y must also be undiscovered and
so on. Then, there exists a directed forward path x, ..., z from x to a starting node z in
R_k, such that every node in the path is undiscovered. This contradicts the fact that every
starting node is assigned a temporary latency value.
Corollary 1 Between sweeps, the set of finished nodes F = ∪_i Φ_i is backwards closed (49).
The set of finished nodes F is equal to ∪_{i=1..k} Φ_i after k sweeps. The corollary trivially
follows from Theorems 2 and 3. Since Φ_0 is empty, it is trivially backwards closed. Then,
R_1 ∪ Φ_0 is backwards closed and Φ_1 = R_1. By induction on the union operation, ∪_{i=1..k} Φ_i is
backwards closed for all k; thus, F is backwards closed before each sweep.
Theorem 4 The set containing all nodes finished at the end of each sweep is T_D closed (49).
Proof. Let Ψ_k denote the set of all nodes finished at the end of sweep k, Ψ_k = ∪_{i=1..k} Φ_i, where
Φ_k is the set of all nodes contained in sweep k. Let w ∈ Ψ_k be a node contained in Ψ_k.
The latency of a node w is final and the node is added to F when its latency has been
computed along all outgoing arcs from w that belong to the current restriction. Three types
of nodes are added to F during the algorithm execution:
1. Target transition occurrences, as starting nodes of the first sweep.
2. Nodes discovered following unfolding arcs backwards from finished nodes, during
sweeps.
3. Fringe places, as starting places of the k-th sweep (matched by places finished in the
(k − 1)-st sweep).
Recall that a subset of nodes N ⊆ P_U ∪ T_U is T_D closed if:

∀ t_d ∈ N ∩ T_D, ∀ w ∈ N : (w ≪ w') ∧ (w' ≪ t_d) → w' ∈ N

This means that a node w belongs to a T_D closed set if and only if every extended path
from w to every target transition in the set is included in the set.
The proof is by induction on the number of nodes in F. The induction base case is the
empty set of finished nodes F before the execution of the algorithm, which is trivially T_D
closed. Suppose F currently contains k nodes and it is T_D closed. Let the next node to be
added to F be w. We shall prove that F ∪ {w} is T_D closed.
Case 1. Node w is a target transition occurrence, w = t_d1.
Since every target transition occurrence is added to F in the first sweep, no fringe places
have yet been finished and every extended path in F is a simple directed path in F. Suppose
adding target transition occurrence t_dx to F does not preserve the closure property. Then, a
directed path p = w, u_1, u_2, ..., u_x, ..., t_dx connecting node w = t_d1 ∈ F and target transition
t_dx ∈ F exists, containing node u_x such that u_x ∉ F.
Before the first sweep, we computed the restriction containing all the arcs and nodes
in the full configuration of the set T_D. Since node w ∈ F, all its incoming arcs must have
been processed and removed from the restriction. Because each arc is removed from the
restriction only after the arc's source node has been processed, node u_1 must have been
finished before w, so u_1 ∈ F. By induction on the nodes of the sequence we can prove that
every node in path p belongs to F. Because u_x ∉ F we have reached a contradiction.
Case 2. Node w is added to F because it was discovered backwards along an arc.
Suppose that F is not T_D closed after w is added. Then, an extended path π =
w, u_1, u_2, ..., u_x, ..., t_dx exists, such that a node in π is not finished. The latency of w is
final because it was updated along all outgoing arcs of w that belonged to the restriction.
If u_1 is finished, then every path from u_1 to t_dx must be in F. Then, path π is finished,
contradicting our assumption. Thus, node u_1 must be unfinished. But u_1 belongs to the
current restriction, so it must be finished, otherwise node w could not be assigned a latency
along the arc (w, u_1). If u_1 does not belong to the current restriction, then path π is not a
valid extended path.
Case 3. Node w is a fringe place (added during the k-th sweep).
Recall that, before each sweep, we compute the restriction, containing every node that
is reachable backwards from the starting nodes and its outgoing arcs. Also, recall that all
terminal transitions in our unfoldings are cutoff transitions. The latency of a node is final
when its latency has been updated along all its outgoing arcs that belong to the restriction.
Because the restriction is computed before the sweep, a fringe place cannot be added to
F if incoming arcs that were not yet processed connect it to other nodes in the restriction.
Thus, when w is added to set F, all its incoming arcs in the restriction must have been
removed. Suppose an extended path π = w, u_1, ..., u_x, ..., t_dx exists such that u_x is not
finished. Nodes w and u_1 could be connected in two ways. First, an arc (w, u_1) exists. If
u_1 is finished, then u_x is also finished; in this case the contradiction proof is the same as
for Case 2 above. Second, u_1 is the matching place of fringe place w and u_1 was discovered
during the previous sweep. Since ∪_{i=1..k−1} Φ_i is backwards closed, u_1 must be finished,
contradicting our assumption that π contains an unfinished node.
∎

Figure 12: Maximal T_D closed subset.
Theorem 5 When the algorithm terminates, the set of finished nodes F contains all the
nodes in the connected component of the target transition in the original net graph (49).
Proof. The proof is by contradiction. We shall prove that, if any undiscovered node exists
in the unfolding after algorithm execution, the Petri net is not both safe and live. Let u_x be
a node in the unfolding such that its image in the original net belongs to the connected
net component containing the target transition. Suppose u_x has not been discovered along
backward paths from any node in the unfolding. Then, since u_x belongs to the connected
component of t_d in the original net, u_x must be reachable by following forward arcs from
nodes in the finished set of nodes F. Let s = u_1, u_2, ..., u_x be a forward path in the unfolding
such that u_1 is a finished node, but u_2 is an unfinished node.
Two possibilities exist, shown in Figure 12: either u_1 is a place and u_2 is a transition,
or vice versa. Let us first assume that u_1 is a place and that u_2 is a transition. Since u_2
was not discovered during algorithm execution, then u_2 does not reproduce any marking in
its cause (if u_2 did produce a repeated marking, then it would have been discovered in the
full configuration of some fringe place). Therefore, the firing of transition u_2 permanently
removes tokens from some of the places in its cause. Consequently, the original Petri net is
not live, contradicting our assumptions.
Let us assume that u_1 is a transition and that u_2 is a place. Because the original net
is live, transition u_1 could be fired an infinite number of times. Since place u_2 does
not belong to the full configuration of any fringe place, the firing sequence containing
transition u_1 does not need to remove the token in place u_2 in order to be fired. The original Petri
net is not safe because of token accumulation in u_2, contradicting our assumptions.
∎
Theorem 6 The latencies of the finished nodes F are correct unfolding latencies with
respect to definition (Equation 5.1) (49).
Proof. The proof is by contradiction. We already proved that adding any node u to F
preserves the T_D closure of F, that is, the nodes along every extended path from u to every
target transition are in F. Let us assume that every node in F is currently assigned a
correct latency and that node u is the next node to be added to F, but that the latency of
u is not a correct unfolding latency.
First, suppose u is a target transition occurrence in the unfolding. Obviously, the correct
latency is zero valued. Second, suppose u is a fringe place. We already argued that the
correct latency of u is the latency of its matching place. But the matching place of u was
previously added to F, so its unfolding latency is correct along the extended path formed.
Third, suppose u is not a fringe place. Then, the latency of u is equal to the maximum
latency computed along its outgoing arcs in the restriction. But all the nodes in the postfix
of u in the restriction were already added to F with correct latencies. Consequently, by
definition, the latency of u is also correct with respect to definition (Equation 5.1).
∎
Theorem 7 The Atlantis algorithm always terminates (49).
Proof. Let us assume that node u is discovered during the k-th sweep. The latency along
all its outgoing arcs in the restriction is computed and assigned during the same sweep
and the node is added to F. Suppose that node u is discovered again in a subsequent
sweep. This can only happen if we explore an outgoing arc of u that does not belong to
the restriction of the k-th sweep. This contradicts the operation of the procedure
computePrefixNodeLatency, which returns immediately when a finished node is reached.
Therefore, the final latency of each node is computed during a single sweep and nodes are
never revisited again. Since the unfolding graph is finite, the algorithm execution terminates.
Computing the unfolding latencies, which represent upper bounds on the durations of
extended paths, is a key aspect of the framework. Above, we proved that the values associated with the unfolding nodes when the algorithm terminates are correct unfolding latencies
with respect to definition (Equation 5.1).
∎
5.2 Discovery algorithm
Atlantis computes basic latencies as the longest paths that a token may travel from arbitrary unfolding nodes to occurrences of the target transition t_d. However, the basic latency
does not take into account synchronization delays. A token may reside in a place for a longer
time than the latest firing time of its output transition(s) because the transition(s) may not
be enabled when the token reaches the place. The synchronization delay is the additional
time spent by a token in a place while waiting for one of its output transitions to become
enabled. When the supervisory control problem satisfies the merge exclusion assumption,
the Discovery algorithm defines full latencies that take into account synchronization delays,
along firing schedules allowed by the supervisory controllers. In addition, the Discovery
algorithm also computes the common full latency of the out-of-sweep transitions when
they merge with each other, which is the first case of MEA violation. Challenger uses
their common full latency to solve this special case, which will be defined later.
Discovery does not seek to update the basic latency of all nodes in an unfolding; instead,
full latencies will be defined only for certain out-of-sweep nodes (both places and transitions)
identified by Atlantis. A key aspect of Discovery is identifying unfolding transitions with
multiple input places because synchronization delays may occur at these transitions. We
call such transitions merge-point transitions. As a result, Discovery increases the basic
latency values computed by Atlantis for out-of-sweep nodes that are backward reachable by
merge-point transitions subject to additional conditions described below.
Suppose that a merge-point transition t_m, discovered in sweep S_i, has multiple input
places p_1, p_2, ..., p_k. In this case, a token in one such place, p_m, could be delayed by the
absence of a token in a sibling place p_j ∈ •t_m. The basic latency of the following kinds
of nodes plays a key role in accounting for t_m's effects when we define the full latency of
out-of-sweep nodes for S_{i−1}:
1. Initial unfolding nodes discovered in sweep S_i, and
2. A place p_i, discovered in S_i, such that the unfolding contains a path that starts at p_i
and contains, in the following order: transition t_m, an out-of-sweep node for S_i, and
a starting place p'_i for sweep S_{i+1} matching p_i (i.e., p_i ≈ p'_i).
Definition 26 (Full latency) The full latency of out-of-sweep nodes for a sweep S_{i−1} is
defined as follows. Consider all merge-point transitions t_{m,i} in sweep S_i whose cause [[t_{m,i}]]
contains at least one node from either of the two above categories and one or more out-of-sweep nodes n_{o,i} for sweep S_{i−1}. For each t_{m,i}, we set the full latency of out-of-sweep nodes
n_{o,i} for sweep S_{i−1}, with n_{o,i} ∈ [[t_{m,i}]], to be the maximum of the following values:
1. The maximum of the basic latencies of all out-of-sweep nodes n_{o,i} ∈ [[t_{m,i}]];
2. The basic latency of initial unfolding nodes discovered in sweep S_i and contained in
[[t_{m,i}]]; and
3. The full latency of each out-of-sweep place p_o for sweep S_i, such that p_o leads to a
starting place p'_i for sweep S_{i+1} and p'_i is matched by a place p_i ∈ [[t_{m,i}]].
When an out-of-sweep node n_{o,i} for S_{i−1} is backward reachable from multiple merge-point
transitions in S_i, the highest computed full latency value for all such transitions defines the
full latency of n_{o,i}.
Definition 27 (Common full latency) The common full latency of out-of-sweep transitions for a sweep S_{i−1} is defined as the maximum of the full latencies of all such out-of-sweep
transitions, if they merge in sweep S_i.
Evidently, the full latency of out-of-sweep nodes for sweep S_i requires that the full
latency of out-of-sweep nodes for sweep S_{i+1} be known. For this reason, Discovery is
conducted in reverse order with respect to the sweeps identified by Atlantis. If Atlantis
defines k sweeps, Discovery starts from the last sweep S_k, by first computing the full latency
of out-of-sweep nodes for sweep S_{k−1}. The full latency of out-of-sweep nodes for sweep S_{k−2}
is computed next and so on. Discovery ends when the full latency of out-of-sweep nodes for
the first sweep, S_1, is computed.
Figure 13 shows the pseudocode of the Discovery algorithm. The algorithm takes as
input three arrays. The first array, rootNodes, has size k, the number of sweeps performed
by Atlantis. Each array entry is a set containing the root nodes of the corresponding sweep.
(Recall that root nodes are initial unfolding nodes discovered in the sweep or out-of-sweep
nodes for the previous sweep.) The second array, basicLatencies, has the basic latency
values assigned by Atlantis to each unfolding node. The third array, rootNodeSubsets, again
has an entry for each Atlantis sweep i. The i-th entry is a set of subsets of root nodes for
sweep i. The root nodes in each subset are related in that unfolding paths originating at
those root nodes intersect in sweep i.
function Discovery(rootNodes[k], basicLatencies[n], rootNodeSubsets[k]) {
  for (i = k; i > 1; i--) {
    for each subset S in rootNodeSubsets[i] {
      maxLatency = max(basicLatencies(nodes in S)) ;
      for each node u in S {
        fullLatency[u] = maxLatency ;
      }
    }
    for each node b in sweep i matched by starting node b' in sweep i+1 {
      find out-of-sweep node w on path from b to b' ;
      temp = fullLatency[w] ;
      for each node n in rootNodes[i] {
        if (precedes(n, b))
          fullLatency[n] = max(fullLatency[n], temp) ; } } } }
Figure 13: Pseudocode of Discovery algorithm.
For instance, in Figure 14, the rootNodeSubsets entry
for sweep S_2 would contain the following subsets: {b3, e7} and {b4, b10}. This is so because
b3, e7, b4, and b10 are root nodes for the sweep. Moreover, paths originating at b3 and e7
intersect at e11. Paths originating at b4 and b10 intersect at e10.
Starting from sweep k, the last sweep identified by Atlantis, Discovery iterates the following actions on each sweep i. For each subset S in the rootNodeSubsets entry corresponding
to sweep i, the full latency of all S nodes is set to the maximum basic latency value of all
such nodes. Next, we consider nodes discovered in sweep i that match starting places for
sweep i + 1. Let place b be matched by starting place b' for sweep i + 1. Let w be the
out-of-sweep node found on the unfolding path from b to b'. In this case, we update the full
latency value of each root node n preceding b to be at least as great as the full latency of
w. The processing of sweep i is now complete, and Discovery continues with sweep i − 1.
Consider, for instance, the Petri net in Figure 14. The third Atlantis sweep discovered
nodes b'4, e9, b8, and e4. This sweep contains only one out-of-sweep node, e4, and no initial
unfolding places. Thus, Discovery sets the full latency of e4 to the same value as its basic
latency, that is, 20. The second Atlantis sweep contains the following out-of-sweep nodes
(with basic latencies): b10 (9), b4 (16), and e7 (13). This sweep also contains initial unfolding
node b3 whose basic latency value is 30. Given that forward unfolding paths starting at b3
and e7 intersect, Discovery sets the full latency of both nodes to 30. Paths originating at
b10 and b4 similarly intersect, causing the full latency of both nodes to be 16.
In Chapter 7, we prove that the full latencies obtained by Discovery result in a live
controlled net in which the deadline on the firing of t_d is met, if the following two conditions
hold. First, the controlled net will be equipped with a supervisory controller that disables
out-of-sweep transitions when their full latency equals the time left until the expiration of
the deadline on the firing of t_d. Second, the full latencies of all out-of-sweep nodes in the
unfolding must be smaller than the value of δ.
5.3 Complexity of Atlantis and Discovery
The running time of the Atlantis and Discovery algorithms is linear in the size of an
unfolding, O(|P_U| + |T_U| + |F_U|). We argue the complexity of Atlantis first, based on the
following points.
First, the initialization of Atlantis, which includes also data structures not shown in
Figure 11, is performed in time linear in the size of the unfolding. Structures not shown
include the set of target transitions, the mapping of unfolding nodes to Petri net nodes, and
pairs of matching unfolding places. All these structures are initialized in linear time from
the unfolder's output.
Second, the sets of nodes and arcs considered during the various Atlantis sweeps are disjoint. The
first sweep discovers only nodes backward reachable from target-transition occurrences in
the unfolding. The basic latency value of all such nodes is settled permanently. The second
sweep is also a backward search starting from places matched by places discovered during
the first sweep. The search is discontinued along arcs leading back to nodes discovered
during the first sweep. Subsequent sweeps are similar.
Third, the run-time of each Atlantis sweep is linear in the number of nodes discovered
in the sweep. As shown in Figure 11, each sweep runs three functions, namely do_phase_1,
check_join, and do_phase_2. Function do_phase_1 conducts a breadth-first search (BFS)
backwards from the starting nodes of each sweep; the run time of BFS is known to be
linear in the sum of the nodes and the arcs in the graph being explored. Thus, the run-time
of do_phase_1 is also linear. Function check_join runs a straightforward BFS starting
from so-called "root nodes". These nodes are the initial unfolding nodes discovered by
do_phase_1 and the out-of-sweep nodes of the previous sweep. For each node n encountered
during the BFS, check_join updates a set of the root node(s) from which n was reached. Upon
completion of the BFS, all sets are checked for incompatible node types (e.g., two out-of-sweep transitions), also in linear time. Finally, the run-time of function do_phase_2 is clearly
linear in the number of nodes and arcs contained in the current sweep. The function keeps
nodes to be explored in a queue named open_nodes. Each node can be enqueued only once.
Moreover, each time a node is removed, each of its input arcs is processed in constant time.
The invocation of function visit_complete checks whether a node has been fully explored in
constant time. Moreover, once a node has been fully explored it never needs to be considered
again. Thus, the overall run-time of function Atlantis is linear in the size of the unfolding.
The Discovery algorithm uses structures produced by Atlantis and check_join in order
to produce the full latencies of out-of-sweep nodes in an unfolding. Only the root nodes
of each sweep are considered. One potential non-linearity of this function is given by
the array rootNodeSubsets, whose entries are sets of node subsets. However, in general
each sweep will have a small number of root nodes. In addition, node subsets in each
rootNodeSubsets entry are likely to be disjoint or else the problem may violate the merge
exclusion assumption. Thus, the combined size of all node subsets in each rootNodeSubsets
entry will be proportional to the number of root nodes of the corresponding Atlantis sweep.
In general, Discovery will process only a small subset of unfolding nodes. We conclude that
the overall run-time of Atlantis and Discovery is linear in the size of the unfolding (i.e., the
sum of the arcs and nodes in the unfolding).
5.4 Net latencies
Figure 14: Example of a time Petri net (left-hand side) and its untimed unfolding (right-hand side).

Consider, for instance, the time Petri net on the left-hand side of Figure 14. The Petri
net's unfolding is shown on the right-hand side. Suppose that target transition t6 must be
fired every δ = 50 time units. Given that the static latest firing time of t6 is 3 time units,
t6 must be enabled at least 3 time units before the deadline expires in order to meet the
deadline. Thus, we define the basic latency of transitions t1 and t2, which must fire before
t6, to be 3 time units. Similarly, the firing of transition t10 enables t1 and t2. The basic
latency of transition t10 is 5 time units, the sum of t6's latency and the maximum of the
latest firing times of t1 and t2 (2 time units for t2).
Defining the latency of transition t4 is more difficult because the firing of this transition
disables t5, thereby preventing t10 from firing. Aside from the initial state, t1 cannot fire
unless t10 is fired first. Thus, t4 must be disabled some time before the deadline on the
firing of t6 expires in order for t6 to meet the deadline. As we will see, the basic latency of
transition t4 is 20 time units, the maximum time needed for tokens to travel from place p8
to places p4, p9, p1, p2, p5, and p6. Target transition t6 is guaranteed to fire within 3 time
units after p5 and p6 are marked, if transition t7 is disabled. The firing of t7 prevents t6
from firing. Thus, t7 must be disabled some time before the expiration of the deadline on t6.
Atlantis sets the basic latency of t7 to 13 time units, the sum of the static latest firing times
of transitions t11, t2, and t6. This basic latency does not take into account that a token in
place p11 will reside in this place until a token is deposited in place p12 and transition t11
can fire. Thus, a token in p11 could be delayed longer than 8 time units, the lft(t11), by the
absence of a token in place p12. We refer to this situation as a synchronization delay at t11.
As we will see, Discovery sets the full latency of t7 to 27 time units, reflecting the delays
that transitions t3 and t8 may impose on the arrival of a token in place p12.
After computing the unfolding latencies for unfolding U, we assign latencies to the original Petri net transitions. This process, which is called folding, seeks to generate static
supervisors, that is, supervisors that use a unique latency value for each net transition,
regardless of net state. A transition t in the original Petri net N_T may have multiple
occurrences in the unfolding and these occurrences may have distinct latencies. This corresponds to a situation in which t should be disabled at different times, relative to the
expiration of δ, depending on the state of the time Petri net. In this case, we consider all
occurrences t_i of t in unfolding U, such that t_i is in direct conflict with at least one transition discovered in an earlier sweep. We then choose the highest latency max l(t_i)
of such t_i occurrences as the net latency of t. This policy for setting net latencies results in
a conservative control strategy, because we always disable a transition t as early as needed
in order to force target transition t_d to fire.
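The folding step can be sketched as follows. The occurrence filter (occurrences in direct conflict with a transition discovered in an earlier sweep) is passed in as a predicate because its computation is not shown here; all names are illustrative assumptions rather than the thesis code.

import java.util.*;
import java.util.function.Predicate;

class Folding {
    // h maps each unfolding occurrence to its net transition; latency gives the occurrence
    // latencies; relevant selects the occurrences that the folding policy takes into account.
    static Map<String, Integer> foldLatencies(Map<String, String> h,
                                              Map<String, Integer> latency,
                                              Predicate<String> relevant) {
        Map<String, Integer> netLatency = new HashMap<>();
        for (Map.Entry<String, String> e : h.entrySet()) {
            String occurrence = e.getKey();
            if (!relevant.test(occurrence)) continue;
            // The net latency of a transition is the maximum latency over its relevant occurrences.
            netLatency.merge(e.getValue(), latency.getOrDefault(occurrence, 0), Math::max);
        }
        return netLatency;  // one conservative latency value per net transition
    }
}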
It is possible to generate dynamic controllers, which can take into account the net state
when deciding whether to disable a transition. In future work, we will not be
required to merge the different latencies of multiple occurrences of a net transition t into
a single value. Instead, a dynamic controller can check the state of the net to decide which
latency value should be used. The benefit of this strategy will be greater permissiveness of
the resulting supervisors with respect to static controllers.
CHAPTER 6
SUPERVISORY CONTROLLER GENERATION
Supervisory controller generation relies on the latency computation discussed in the
previous chapters. This chapter is organized as follows. In Section 6.1, we briefly summarize the framework for supervisor generation. In Section 6.2, we define the supervisory
controller and present application examples. In Section 6.3, we discuss the Challenger algorithm,
which addresses certain cases of the MEA and increases the amount of concurrency in the
controlled nets. In Section 6.4, we illustrate the empirical data obtained with our algorithms.
6.1 Framework for supervisor generation
Given a time Petri net N_T = (P, T, F, M_0, S), define Σ to be the set of all firing schedules
of N_T. Given a schedule s ∈ Σ and a deadline δ on the firing of target transition t_d, we
say that s is deadline-compliant with respect to t_d if the following two conditions are met:
1. The first firing of t_d occurs at time θ_0 ≤ δ time units from the start of s.
2. Any subsequent firing of t_d in s follows the previous firing of t_d by at most δ time
units.
Otherwise, we say that schedule s is deadline-violating. We define Σ_C to be the subset
of Σ that contains all and only deadline-compliant schedules of N_T. Likewise, Σ_V is the
subset containing all deadline-violating schedules in Σ. Clearly, we have Σ_C ∪ Σ_V = Σ and
Σ_C ∩ Σ_V = ∅.
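The deadline-compliance conditions above translate into a simple check over the firing times of t_d in a schedule. The Java sketch below, which assumes a finite, sorted list of firing times, is an illustration of the definition rather than part of the framework.

import java.util.List;

class DeadlineCompliance {
    // A schedule is compliant if t_d fires within delta time units of the start and every
    // consecutive pair of t_d firings is at most delta time units apart.
    static boolean isCompliant(List<Double> tdFiringTimes, double delta) {
        if (tdFiringTimes.isEmpty()) return false;        // t_d never fires: treat as violating
        double previous = 0.0;                            // the schedule starts at time 0
        for (double firing : tdFiringTimes) {
            if (firing - previous > delta) return false;  // gap exceeds the deadline
            previous = firing;
        }
        return true;
    }
}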
Our paradigm for generating deadline-enforcing supervisory controllers consists of five
steps. Here we focus on the last two steps and on proving the correctness of the overall
framework in the next chapter.
1. We unfold the (untimed) ordinary Petri net N underlying TPN N_T. This construction
yields an unmarked Petri net U, called the unfolding of N.
2. The Atlantis algorithm computes the so-called basic latency of each node u in U.
3. The Discovery algorithm computes the full latency of relevant nodes in U. For each
such node u which is a transition t, the full latency l(t) is the maximum delay between
the firing of t and the firing of target transition t_d along firing sequences permitted
by the supervisory controller.
4. We define a supervisory controller that disables key net transitions t_i whenever the
elapsing of time makes their latency l(t_i) greater than the time left until the deadline
on the firing of t_d expires. The supervisory controller consists of places and transitions
that are added to the original net N_T.
5. We define the special parts of the supervisory controller, called Challenger solvers, when
unfolding U contains two limitations: the MEA case (a), and limited parallelism.
Figure 15: Example of a time Petri net.
6.2 Supervisor definition and application examples
The purpose of supervisory controllers is to disable transitions whose firing may lead
to a violation of the deadline on the firing of a target transition. Given a time Petri net
N_T = (P, T, F, M_0, S) and a deadline δ on the firing of some t_d ∈ T, assume that we have a set
of latencies l(t) for each t ∈ T. To avoid violating the deadline, our supervisors will disable
any transition t ∈ T whose latency l(t) is greater than the time left until the expiration of
δ. Transitions in N_T are enabled again immediately after t_d fires. In the sequel, we assume
that δ > l(t), ∀t ∈ T. If we had that l(t_s) ≥ δ for some transition t_s ∈ T, t_s would never be
enabled by the control supervisors. In this case, a portion of the net would never be enabled.
In the worst case, the target transition may not be fireable by the deadline.
The supervisory controller consists of a so-called clock subnet and a supervisor subnet.
The former subnet marks the elapsing of time, whereas the latter actually disables key net
transitions when their full latency exceeds the time left until the expiration of the deadline
on the target transition.
We define a supervisory controller that enforces deadline δ on the firing of transition
t_d of time Petri net N_T = (P, T, F, M_0, S) with its corresponding clock net C =
(P_C, T_C, F_C, M_0C, S_C). Let P_V = P_C − {p_d}. The supervisory control constraint is expressed by:

∀t ∈ T, disable t if p_v is marked with l(t) ≥ v    (6.1)

We recall that, by construction, for any marking of clock net C, all places in P_V combined
will always contain exactly one token. Unless t_d is fired, the token in P_V will always move
toward places with lower index values. Constraint (6.1) above states that a transition t ∈ N_T
is disabled whenever the token in P_V moves to a place p_v whose index v does not exceed t's
latency l(t). Therefore, control constraint (6.1) above will disable all transitions that might
delay the firing of t_d by more than v units, the index of the marked place of C. Moreover,
once t is disabled, t is not allowed to fire again until after target transition t_d has been fired.
As a result, all the transitions that may result in the violation of deadline δ on the firing
of t_d are disabled (19).
We apply the following procedure to each t ∈ T in order to implement constraint (Equation 6.1) above:
1. Given t ∈ T with l(t) = v, define two places q1 and q2 and one transition r.
2. Connect q1, q2, and r to the other components of N_T and C, using the following rules:
(a) •q1 = {t} ∪ {s ∈ T_R | ∃p_w ∈ (•s ∩ P_V) : w < v}.
(b) q1• = {t} ∪ {r}.
(c) •q2 = {t_w ∈ T_L | p_v ∈ t_w•}.
(d) q2• = {r}.
(e) •r = {q1, q2}.
(f) r• = ∅.
(g) Set the static delay interval of r to [0, 0], and put one token in q1 initially.
The procedure above constructs a disabling mechanism for every transition t ∈ T. Based
on this mechanism, one token remains in place q1 as long as q2 has not received any token.
Place q2 is marked when the T_L transition feeding place p_v with v ≤ l(t) is fired. As soon as
this happens, the drain transition r will fire and remove the tokens from q1 and q2. This action
will disable t. Finally, place q1 becomes marked again (meaning that t is now allowed to
fire) as soon as t_d fires. When this happens, place p_d receives a token, thus enabling one of
the transitions in set T_R. The firing of this transition drains the token from p_d and deposits
a token back into all q1 places that do not contain a token (19).
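A minimal sketch of the construction rules above, written against a hypothetical PetriNetBuilder interface (the thesis implementation is not shown here): for a transition t with latency l(t) = v, places q1 and q2 and the zero-delay drain transition r are created and wired to t, to the clock transitions feeding p_v, and to the clock reset transitions. The helper methods clockFeeders and resetTransitions are assumptions standing in for the T_L and T_R sets of the rules.

import java.util.List;

class DisablingMechanism {
    static void attach(PetriNetBuilder net, String t, int v) {
        String q1 = "q1_" + t, q2 = "q2_" + t, r = "r_" + t;
        net.addPlace(q1, 1);                   // q1 initially marked: t starts out enabled
        net.addPlace(q2, 0);
        net.addTransition(r, 0, 0);            // drain transition r with static interval [0, 0]
        net.addArc(q1, t);  net.addArc(t, q1); // q1 is both input and output of t (rules (a), (b))
        net.addArc(q1, r);  net.addArc(q2, r); // r drains q1 and q2, disabling t (rules (d), (e))
        for (String tl : net.clockFeeders(v)) net.addArc(tl, q2);      // T_L transition marks q2
        for (String tr : net.resetTransitions(v)) net.addArc(tr, q1);  // T_R reset refills q1 after t_d
    }
}

interface PetriNetBuilder {
    void addPlace(String name, int initialTokens);
    void addTransition(String name, int eft, int lft);
    void addArc(String from, String to);
    List<String> clockFeeders(int clockIndex);
    List<String> resetTransitions(int clockIndex);
}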
Here we simply report an example of a supervisory controller for the Petri net in Figure 15. Suppose that target transition t_d = t3 must be fired every 30 time units. The
right-hand side of Figure 16 shows the supervisory controller that enforces this requirement. In order to ensure that t3 is fired in a timely fashion, we must disable transition t5 22
time units before the expiration of the deadline on t3. The clock subnet consists of transitions t8 through t10 and places p_d, p30, and p20, along with arcs incident upon these places
and transitions.
Figure 16: Supervisor subnet for time Petri net in Figure 15.
A token in place p30 signifies that there are at least 20 time units, but
no more than 30 time units, until the deadline on t3 expires. A token in place p20 signifies
that there are no more than 20 time units left until the deadline's expiration. Place p30 is
initially marked. After 10 time units, transition t8 moves the token to place p20. The firing
of this transition will also disable transition t5, as we will see. Transitions t9 and t10 reset a
token in place p20 immediately after the firing of t3; these transitions are enabled by place
p_d, which is a new output of transition t3. Firing either t9 or t10 initiates a new 30-unit
deadline period.
Figure 16 also shows the supervisor subnet for the time Petri net appearing in Figure 15. Place q1 must be marked in order for transition t5 to fire; this place is initially
marked, meaning that transition t5 is initially enabled (e.g., because l(t5) < δ). When
clock transition t8 fires, place q2 receives a token, enabling transition r. When this transition fires, a token is removed from place q1; this action disables t5. This transition becomes
enabled again when target transition t3 and transition t10 are fired.
We have completed an implementation of Atlantis and Discovery in the Java language
and we have applied the resulting software to various examples of supervisory control problems, such as the following reliability and maintenance system. In this system, various
faulty units must be repaired by a team of two engineers. Each repair must be completed in
δ time units. Each unit consists of three components; the fault may reside in one or more of
these components. In order to repair a unit, the engineers first must test the components.
The test could have three possible outcomes. A component could be found to be working
properly or to be faulty. In addition, the test could be inconclusive, in which case it should
be repeated.
The first engineer works on the first component, while the second engineer works on
the second and third components in an arbitrary order. Once a component is found to be
faulty, it is replaced with a new component. A sequence of inconclusive component tests
may result in deadline δ being missed. In this case, an alternative strategy is to replace the
entire unit. Component testing, component replacement and unit replacement are assigned
variable task durations ranging within predefined intervals.
The purpose of supervisory control is to ensure that each unit be repaired either through
component replacement or unit replacement. We analyzed a Petri net model of the above
reliability and maintenance system containing 26 places and 31 transitions. The corresponding unfolding has 78 places and 43 transitions. Atlantis and Discovery complete their
analyses in less than a second of CPU time on a Pentium 4 processor with 1 GByte of
primary memory. We are currently analyzing various other examples in an effort to assess
the scalability of our approach.
6.3 Challenger algorithm
The purpose of the Challenger method is to overcome two limitations of the supervisory
control framework. The first limitation is the merge exclusion assumption (MEA). As we will
see, we identify three kinds of occurrences of this problem. Since our earlier publication (80),
we have extended our supervisory control method to handle one of the three kinds of
systems violating the MEA. The second problem is that our previous version (80) of the
method unnecessarily limits the amount of parallelism in a supervised net, in an effort
to enforce compliance with the deadline. The parallelism limitation will be discussed in
Section 6.3.2. The Challenger algorithm solves certain cases of the MEA violations and
increases concurrency substantially in the supervised net.
Figure 17: Valid cases satisfying the MEA.

Before we introduce the Challenger strategy, we first discuss some examples that comply
with the MEA in Figure 17. We define an unfolding that does not violate the MEA as a
valid unfolding. Case (a) illustrates a general case without synchronization delay. Cases (b),
(c), (d) and (e) show the valid cases with synchronization delay. Cases (b), (d) and (e) are
all valid cases, and the basic latency represents the maximum elapsed time that a token may
take to travel from arbitrary unfolding nodes to occurrences of the target transition. Case
(c) shows that the out-of-sweep transition t1 needs to be compensated by the synchronized
initial place. All these cases satisfy the MEA. The last two cases just show that the latency
of the out of sweep transition may be conservative, because it is the maximum sum of the
latest firing times, among all possible paths to the target transition. The out of sweep
transition synchronizes with the initial place in case (c), which was handled by Discovery
with the full latency definition.
6.3.1 MEA Solver
The synchronization delay problem does not occur only when the out-of-sweep transitions
merge with the initial places; it can also happen among the out-of-sweep nodes.
Figure 18: Three cases in which the MEA can be violated.
The MEA actually covers such kinds of synchronization delay, in which a key transition needs to
synchronize with other out-of-sweep nodes.
Figure 18 shows three possible ways in which the MEA could be violated.
1. Case (a) Paths starting at out-of-sweep transitions merge with each other in a given
sweep.
2. Case (b) Paths starting at out-of-sweep transitions and out of sweep places merge
with each other in a given sweep.
3. Case (c) Paths starting at out-of-sweep transitions merge with conflicting paths in a
given sweep.
In all three cases and their combinations, the supervised net may contain deadlocks,
even though the original net did not. Challenger handles case (a) of the MEA. The key
feature of Challenger is to identify the out-of-sweep transitions that merge at a common
transition t_m with paths originating at other out-of-sweep transitions. These out-of-sweep
transitions will be stored in a set for each such key transition. We define this set as
follows.
Definition 28 (SECS (Synchronization Enabled Control Set)) Let t_k be an out-of-sweep
transition for a given sweep S_k. Consider another out-of-sweep transition t_i for sweep
S_k such that t_k < t_m and t_i < t_m, where t_m is a merge transition in sweep S_k. Then the
SECS of t_k contains t_k and all such transitions t_i.
Each key transition t_k might hold such a set of transitions, if this transition is an out-of-sweep
transition in MEA case (a). In Figure 18, case (a), out-of-sweep transitions t1 and t2
merge at t_m. So the SECS of t1 will include transitions t1 and t2. The controller will disable
all key transitions in the SECS initially. Transitions in a SECS will be enabled together
when their input places from the previous sweep are all marked with a token. Then the
maximum full latency of all such transitions will be compared with the time left before the
expiration of the deadline. If the maximum full latency is still less than the time left, all the
transitions are selected to be fired. Here the maximum full latency of all transitions in the
SECS is also called the common full latency l_c, defined in Section 5.2.
The main steps of the Challenger algorithm for solving MEA case (a) are as follows.
1. Compute the SECS for each out-of-sweep transition defined in MEA case (a), where
each SECS contains the out-of-sweep transitions preceding a common merge transition
t_m. This step is executed offline during latency computation. The common full latency
l_c is also calculated as the maximum value of the full latencies in the SECS.
2. At run time, all out-of-sweep transitions in the SECS will be disabled initially.
3. When an enabled transition is ready to fire, Challenger detects whether all input
places from the previous sweep for the out-of-sweep transitions in the same SECS are
marked.
4. If the condition in Step 3 holds, assign l_c to all the out-of-sweep transitions; if it does
not, go back to Step 3.
5. If the condition in Step 3 holds, check the next condition. If the time left to the deadline
is less than l_c, jump to Step 6; if the time left to the deadline is not less than l_c, all the
out-of-sweep transitions can be fired at the same time. In case any one of them is
fired, disable the transitions in the previous sweep conflicting with the transitions in
this SECS. Then go back to Step 3.
6. The transitions in the previous sweep conflicting with transitions in the SECS will be
fired; then go back to Step 3.
The pseudocode of the MEA solver is shown in Figure 20.
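The run-time decision of Steps 3 to 6 can be sketched as a single test, shown below with illustrative names (this is an assumption-based sketch, not the Figure 20 pseudocode): the SECS transitions remain disabled until all of their input places from the previous sweep are marked, and are then enabled together only if the common full latency l_c still fits within the time left before the deadline.

import java.util.Set;

class MeaSolver {
    static boolean enableSecs(Set<String> secsInputPlaces,   // input places from the previous sweep
                              Set<String> markedPlaces,      // places currently holding a token
                              int commonFullLatency,         // l_c, computed offline in Step 1
                              int timeLeftToDeadline) {
        if (!markedPlaces.containsAll(secsInputPlaces)) {
            return false;            // Step 3: wait until every input place is marked
        }
        // Step 5: fire the SECS transitions together only if l_c still fits before the deadline;
        // otherwise they stay disabled and the conflicting transitions fire instead (Step 6).
        return timeLeftToDeadline >= commonFullLatency;
    }
}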
The right-hand side of Figure 19 shows an unfolding net for the Petri net in the left-hand side of the figure. Two dashed lines divide the unfolding net into three subnets, called
sweeps. The upper part, including target transition t2, is the first sweep. The middle part
is the second sweep, containing two out-of-sweep transitions, t1 and t3, from the first sweep.
The last sweep holds place p3, and a branch starting at transition t10, which is an out-of-sweep transition for the second sweep. The pair of out-of-sweep transitions, t1 and t3, merge
at a transition, t7. This is an example of MEA case (a) in the unfolding. Therefore, the
MEA solver will generate the SECS {t1, t3}. They also share the common full latency value
l_c, as Step 1 described. In the run-time strategy, t1 and t3 will be initially disabled, until
all their input places in the previous sweep are marked. When places p1 and p2 are marked,
the MEA solver will compare the time left before the expiration of the deadline with l_c. If
there is more time left than l_c, t1 and t3 are fired at the same time. For instance, when p1
and p2 are marked, t1 and t3 are enabled. At the same time, the transition t2 conflicting with
t1 and t3 is also enabled. The transition to be fired next is selected non-deterministically.
But if t1 is selected to fire, the other associated transition t3 has to be fired, which can be
done by disabling its conflicting transition, t2. This strategy avoids an uncontrolled
synchronization delay. In other words, the MEA solver of Challenger ensures that transitions
in the same SECS are fired together by disabling their conflicting transitions. If there is
enough time left for going through any one of the paths to t7, then eventually there is enough
time for firing the target transition t2. Otherwise, t1 and t3 will be disabled, which means
neither t1 nor t3 will be fired.
6.3.2 Parallelism Solver
In addition to solving case (a) of the merge exclusion assumption, Challenger also aims at overcoming the parallelism limitation, which affects the performance of the controlled system. Considering the parallelism issue in Figure 19, the target transition is t2 and the out-of-sweep transitions are t1, t3, and t10.
Figure 19: Example Petri net (left-hand-side) and its unfolding (right-hand-side) for MEA
solver and Parallelism solver.
These three out-of-sweep transitions must be disabled when the time left before the deadline falls below their full latencies, because the firing of the target transition t2 requires the tokens to return to the inputs of t2 within the limited time λ. However, if the system is running in a parallel mode during such a period, disabling certain key transitions by the controller will decrease the concurrency degree of the system. In fact, the parallel behaviors originating at disabled transitions sometimes have no effect on the firing of the target transition. In this example, the target transition t2 can be fired in a loop including t2. The key transition t10 is contained in a concurrent firing loop, {t10, t11}. These two loops can be executed in parallel with each other. Ideally, if the system executes such parallel loops independently, all transitions in the loops can be fired freely. We call this the parallel mode. Our previous version of the method (80) disables key transition t10 in the second loop, without allowing for parallelism.
For instance, transition t10 and its successor t11 should be free to fire while t2 is firing in its loop. There are no interleavings between these two loops. Our previous version will disable transition t10 at an earlier time, when the latency of t10 becomes greater than the time left before the expiration of the deadline. Recall that the latency of t10 is the time elapsed along a longest extended path from t10 to t2, which is t10, t11, t6, t7, t1 in this case. The parallelism limitation is caused by the supervisory controller restricting the firing of the loop {t10, t11} while the target transition t2 is firing in its own loop. Although this problem does not compromise the correctness of the supervisory control method, it might decrease the performance of the system when a high degree of concurrency is demanded.
The second goal of Challenger is to identify such potential parallelism in an unfolding by structural analysis. We call this Challenger function the Parallelism solver. Every key transition is associated with a special data structure, the PECS, defined as follows.
Definition 29 (PECS (Parallel Enabled Control Set)) An out-of-sweep transition ti has a PECS if its input place in its previous sweep Si-1 merges with an out-of-sweep transition tj for sweep Sj-1, where j < i. Then the out-of-sweep transition tj is included in the PECS of ti.
In Figure 19, transition t10 has a PECS containing only t3, based on the definition of PECS.
The Parallelism solver algorithm of Challenger has four steps.
1. Compute the PECS for each out-of-sweep transition, if there are independent loops executed in parallel with the firing of the target transition.
2. When the latency of an enabled transition t is greater than the time left for firing the target transition, Challenger checks whether any key transition in the PECS of t is disabled. If no such transition is found, the Parallelism solver exits. Otherwise we proceed to Step 3.
3. Enable the key transition t, ignoring the general latency strategy.
4. When the target transition is fired, go back to Step 2.
The pseudocode of the parallelism solver is shown in Figure 20.
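As an illustration of Step 1, the following Python sketch shows one way to compute the PECS of every out-of-sweep transition along the lines of Definition 29. The helpers (inputs, sweep_of, precedes, and the map of out-of-sweep transitions per sweep) are hypothetical accessors over the unfolding introduced only for this example.

def build_pecs(out_of_sweep_for, merge_transitions, inputs, sweep_of, precedes):
    # out_of_sweep_for[k]: out-of-sweep transitions for sweep S_k.
    # A transition t gets a PECS when an input place it takes from the sweep it
    # leaves also feeds a merge transition reached from an out-of-sweep
    # transition of an earlier sweep (Definition 29).
    pecs = {}
    for k, candidates in out_of_sweep_for.items():
        for t in candidates:
            places = {p for p in inputs(t) if sweep_of(p) == k}
            partners = set()
            for tm in merge_transitions:
                if places & set(inputs(tm)):
                    for j, earlier in out_of_sweep_for.items():
                        if j < k:
                            partners |= {u for u in earlier if precedes(u, tm)}
            if partners:
                pecs[t] = partners
    return pecs

On the example of Figure 19, a computation of this kind would yield a PECS for t10 containing only t3, matching the discussion below.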
In Figure 19, t10 is an out-of-sweep transition for Sweep 2. There is another out-of-sweep transition t3 for Sweep 1, leading to a merge transition t9 with place p5, which is an input of t10. When target transition t2 is not firing, its conflicting transition t3 is fired and the token from place p5 is needed to fire the merge transition t9. In this mode, it is necessary to disable transition t10 based on the general latency strategy. When target transition t2 is fired, its conflicting transition t3 and its successor t9 are not able to fire. Therefore, a token in place p5 is not needed for firing target transition t2. In this parallel mode, the firing sequence {t10, t11} is concurrent with the firing of t2. In this example, out-of-sweep transition t10 has a PECS containing t3. Disabling t3 will allow t10 and t11 to be fired freely.
6.4 Simulator
We now simulate the firing of a supervised Petri net model, given a Petri net, a target transition td, a deadline, the full latencies, and two hash maps containing the SECS and PECS sets. Compared with the original Petri net, the supervised net is a Petri net with additional data structures utilized by the supervisory controller. These data structures include the full latencies, SECS, and PECS. In addition, the simulator also needs to know the specified target transition and deadline.
Initially, the clock variable is set to the value of the deadline λ to simulate the countdown of the clock net. The possibles variable is computed by function compute_possible_firing to obtain the initially fireable transitions. At each step, we check whether possibles is empty or the clock is negative. If possibles is empty, the simulator has reached a state in which no fireable transition is found, that is, a deadlock state. If the clock becomes negative, the simulator ran out of time, and the deadline expired before the target transition was fired. Either case means the supervised TPN failed to enforce the deadline. If there is no violation of either constraint, the next firing transition is selected subject to the following three controller rules.
1. If a possible firing transition is not an out-of-sweep transition, it is selected randomly, because firing is non-deterministic.
2. If a fireable transition is an out-of-sweep (key) transition and its latency is smaller than the clock time, MEA_solver checks whether all transitions in its SECS are ready to be fired as well. If MEA_solver returns true and any transition in the SECS is fired, the transitions conflicting with the transitions in that SECS will be disabled.
function boolean supervised_simulator(net, t_d, deadline, SECS, PECS) {
    init();
    simulate();
}

function simulate() {
    possibles = compute_possible_firing(initial_marking);
    current_marking = initial_marking;
    clock = deadline;
    // loop while some transition is fireable and the deadline has not expired;
    // the clock counts down by one time unit per simulation step
    while (possibles is not empty and clock-- >= 0) {
        firing_transitions = empty set;
        for every transition t in possibles {
            if (t is not a key transition) {
                // non-key transitions are picked non-deterministically
                firing_transitions.add(random_pick(t));
            } else if (t.latency <= clock) {
                // enough time left: fire the SECS members together
                if (MEA_solver(t, SECS))
                    firing_transitions.add(random_pick(SECS(t)));
            } else {
                // not enough time left: fire t only if a PECS member is disabled
                if (Parallelism_solver(t, PECS))
                    firing_transitions.add(t);
            }
        }
        current_marking = fire(firing_transitions, current_marking);
        if (t_d in firing_transitions)
            clock = deadline;   // target transition fired: restart the countdown
        possibles = compute_possible_firing(current_marking);
    } // end of while
    if (clock < 0)
        Error("Deadline violation found!");
    if (possibles is empty)
        Error("Deadlock violation found!");
}

function boolean MEA_solver(key_transition, SECS) {
    // true if every transition in the SECS of key_transition is fireable
    all_included = true;
    for every transition t in SECS(key_transition) {
        if (t not in possibles)
            all_included = false;
    }
    return all_included;
}

function boolean Parallelism_solver(key_transition, PECS) {
    // true if some transition in the PECS of key_transition is disabled by the clock
    for every transition t in PECS(key_transition) {
        if (t.latency > clock)
            return true;
    }
    return false;
}
Figure 20: Pseudocode of simulator.
3. If a possible firing transition is an out-of-sweep transition and its latency is not smaller than the clock time, Parallelism_solver checks whether the latency of any transition in its PECS is greater than the clock time. If Parallelism_solver returns true, this transition can be fired.
After selecting the transitions to fire, the variables current_marking and possibles are updated based on the firing. If target transition td is fired, the clock is reset to the deadline λ.
The function MEA_solver is one piece of the Challenger pseudocode. It checks the SECS of a given key transition; if all transitions in its SECS are fireable because their input places in the previous sweep are marked, it returns true.
The function Parallelism_solver is another piece of the Challenger pseudocode. It checks the PECS of a given key transition; if any transition in its PECS is disabled by the clock, it returns true.
In general, this simulator emulates the firing of the supervised Petri net. The full latency, SECS, and PECS of each out-of-sweep transition are used to decide whether an enabled transition is ready to fire under the controller rules. These rules mimic the supervisory controller defined by our methods.
The run-time complexity of Challenger is determined by its two functions, MEA_solver and Parallelism_solver. In Figure 20, these two functions are linear in the sizes of the SECS and PECS, respectively. In general, either set is very small compared with the whole set of transitions, because only out-of-sweep transitions can belong to these sets, and out-of-sweep transitions are rare.
CHAPTER 7
CORRECTNESS OF SUPERVISORY CONTROLLER
We now prove the overall correctness of the framework for supervisor generation. We must specifically show that the controlled net Mc fires target transition td within λ time units of the beginning of net operation or of the previous firing of td. The controlled net is similar to the original net Mt, except that all out-of-sweep transitions are disabled when their latency is greater than the time left until td must fire.
To prove correctness of the control supervisors, we show that the firing of the target transition becomes inevitable as its deadline approaches, because all transitions that could delay its firing will be disabled. The proof is based on the following cornerstones. We assume that all full latencies assigned by Discovery to net nodes are strictly smaller than λ, the deadline on the firing of td.
1. Target transition td is fireable at least once before λ time units elapse from the initial net state. This is so because the first sweep is live in the initial net state, leading to the firing of a td occurrence. Also, we know that the latency of all initial net places discovered in the first sweep is smaller than λ. Thus, the firing of td must occur within λ time units.
2. The firing of a cutoff transition tc and of all transitions in [tc] (i.e., all transitions preceding tc in U) puts the unfolding in a state identical, modulo homomorphism, to either (1) the initial unfolding state, or (2) the unfolding state reached by firing a transition preceding tc in the unfolding.
3. Suppose that cutoff transition tc is discovered in sweep Sk. Following the firing of tc, the return of tokens from tc's cut to the matching cut brings the unfolding back to a state in which sweep Sk-1 is active again.
4. By disabling out-of-sweep transitions, we force a decreasing order of sweeps to become active until the first sweep is active again and transition td can fire.
5. The firing of td must occur before its deadline has expired.
6. The MEA solver and Parallelism solver always guarantee the firing of target transition td within λ time units of its previous firing.
First, we introduce the concept of Maximal Marked Concurrent Place Subset or MMCPS.
Definition 30 (MMCPS) Given an unfolding U = (Pu, Tu, Fu, µmin) of an ordinary Petri net N, a maximal marked concurrent place subset of U is a subset Pm ⊆ Pu such that (1) all places in Pm are concurrent with each other, and (2) each place pi ∈ Pu − Pm is either in a conflict or a precedence relationship with at least one place in Pm.
In brief, a set of places in U is an MMCPS if no other place of U could be added without
breaking the property that all places in the set be concurrent with each other. We have the
following well-known fact for MMCPSs (27).
Proposition 1 Consider the unfolding U of a safe Petri net N = (P, T, F, µ0) such that ∀t ∈ T, t• ≠ ∅. The sets of marked places in the reachable states of U are all and only the MMCPSs of U.
Lemma 3 All execution sequences in an unfolding U of a safe and live Petri net must lead to the firing of at least one terminal unfolding transition.
Proof. By the algorithm for unfolding construction, a transition is added to an unfolding
when it is enabled by a set of concurrent places contained in the unfolding. In addition,
by Proposition 1 we also have that all maximal sets of concurrent places contained in an
unfolding represent reachable markings. A transition is added to an unfolding only if it
is fireable once. Thus, all terminal transitions must be fireable from the initial cut of the
unfolding. In addition, a firing sequence in U cannot lead to a dead marking, that is, a
marking in which no U transition is enabled. If such a firing sequence existed in U, the
same firing sequence (modulo homomorphism) would lead to a dead marking in the net N underlying U. This would contradict our assumption that N be level-4 live.
□
Corollary 2 All execution sequences in an unfolding, U, of a safe and live Petri net must
end with one or more cutoff transitions.
Proof. All terminal transitions in our unfoldings are guaranteed to be cutoff transitions.
Since all firing sequences in U must end at a terminal transition by Lemma 3, each execution
sequence includes at least one cutoff transition, that is, the terminal transition. Transitions
different from terminal transitions can be cutoff transitions as well. If an execution sequence
contains a non-terminal cutoff transition tc, tc and all transitions following tc are guaranteed
to be cutoff transitions.
□
Lemma 4 Given a cutoff transition tc of unfolding U, suppose that all and only the transitions in [tc] are fired from the initial unfolding state. After firing tc, the ordinary Petri net N underlying U is in a state that is identical either to (1) the initial net state or (2) a state obtained by firing a transition tp ∈ [tc], with tp ≠ tc and tp < tc.
Proof. This lemma follows directly from the definition of a cutoff transition and the definition of transition cause.
□
We use Lemma 4 to capture cyclic net behavior in an unfolding. We specifically say
that an unfolding "jumps" to a previous state when we move all tokens back from fringe
places to the corresponding matching places after the firing of some cutoff transition tc.
Lemma 5 Consider an unfolding firing sequence σ, starting from the initial unfolding state and subject to the following conditions. First, σ ends with a cutoff transition tc. Second, σ contains all and only the transitions in the cause [tc] of tc. Third, we divide σ into two subsequences σb and σa, where σb ends with the firing of the transition that matches tc. (We have that σb is empty if the cut of tc matches the initial places of the unfolding.) In this case, sequence σa will be fireable again from the unfolding state reached after the tokens in the fringe places of tc are transferred back to their matching places.
Proof. We know that the subsequence σa leads the ordinary Petri net underlying U from a given state back to the same state. Given that no U transitions are fired in σ, except for the transitions in [tc], the U state obtained after σ is fired and after tokens are transferred from the fringe places of tc back to their matching places is identical to the unfolding state reached after firing σb. Thus, after the tokens are transferred, σa will again be fireable, resulting in tc being fired again.
□
Definition 31 (Connected sweep region of level k) A k-level connected sweep region Ck is a subset of an unfolding U that contains all nodes discovered in sweep k and previous sweeps of the unfolding.
Lemma 6 Consider a k-level connected sweep region Ck of an unfolding U. Unless an
out-of-sweep transition for Sk is fired from Ck, Ck must contain an MMCPS in the initial
cut of U and all states reachable from the initial cut of U.
Proof. The initial cut of the unfolding is clearly an MMCPS for Ck, as all nodes in µmin ∩ Ck are clearly concurrent with each other; these nodes also precede all other nodes in Ck. Firing an internal transition in Ck (i.e., a transition whose output places are all contained in Ck) will preserve the MMCPS property by Proposition 1. Firing a transition t in the portion of sweep Sk contained in Ck, such that one or more of t's output places are in sweep Sk+1, will also preserve the MMCPS property of region Ck because Ck cannot contain transitions preceded by nodes in sweep Sk+1 or higher. As a result, Ck transitions are enabled independently of the marking of places in Sk+1 or higher sweeps. Likewise, if a transition t contained in sweep Sl and region Ck, with l < k, is fired, and if t has one or more output places not in Ck, the subsequent firing of Ck transitions will not be affected by nodes contained in sweep Sl+1 and higher. Given that out-of-sweep transitions for the portion of sweep Sk contained in Ck cannot be fired, the MMCPS property of region Ck is preserved in all reachable unfolding states.
□
Lemma 7 Let Ck-1 be a connected sweep region of level k-1 in unfolding U. Suppose that a firing sequence σ in unfolding U contains Ck-1 transitions and exactly one occurrence, ti, of an out-of-sweep transition for sweep Sk-1. If no out-of-sweep transitions for Sk are fired, eventually the marking of one or more starting places for the connected portion of Sk containing ti will cause region Ck-1 to contain an MMCPS again, after the tokens in the starting places for Sk are returned to their matching places in Sk-1.
Proof. Let ti denote the out-of-sweep transition for Sk-1 at the end of σ. Prior to the firing of ti, the connected sweep region Ck-1 containing the Sk-1 place feeding ti must have contained an MMCPS. (Any connected sweep region must contain an MMCPS in the initial unfolding state, and the firing of transitions contained in the region must preserve this property of the region.) After ti fires, Ck-1 no longer contains an MMCPS; however, the connected sweep region Ck that contains ti must still contain an MMCPS by Lemma 6. (Note that in this case we have Ck-1 ⊂ Ck.) By the merge exclusion assumption, unfolding paths originating at ti and contained in Sk cannot merge with additional paths originating at sweep Sk-1. Thus, after ti fires, transitions following ti in Sk must be fireable. By Corollary 2, at least one starting place ps for Sk that is not followed by additional Sk starting places will be marked. Let tc be the cutoff transition for which ps is a fringe place. Consider a firing sequence ρ, starting from the unfolding state reached at the end of σ. Sequence ρ contains all Sk transitions in [tc], ending with tc. When tc is fired, the token in ps (and in other marked starting places for Sk, if any) is returned to its matching place. By Lemma 5, the transition sequence between the matching place and ps is live again if only transitions in [tc] were fired after ps became marked.
Before ti was fired in σ, Ck-1 contained an MMCPS. In other words, the firing of ti in σ is the only reason why Ck-1 does not contain an MMCPS after σ. By re-enabling ti, we have that Ck-1 contains an MMCPS again.
□
Lemma 8 Consider a firing sequence σ containing exclusively transitions from a (k-1)-level connected sweep region Ck-1, out-of-sweep transitions t1, . . . , to for Sk-1, and no other transitions from Sk or higher sweeps. The firing of the cutoff transitions following t1, . . . , to, and the subsequent transfer of tokens back to Ck-1, will eventually cause region Ck-1 to contain an MMCPS again.
Proof. This lemma is a generalized version of Lemma 7 above. In this case, we allow multiple out-of-sweep transitions for sweep Sk-1 to fire out of region Ck-1. Consider the firing sequences originating at the unfolding state reached after σ and containing only Sk transitions and transitions in higher sweeps reachable from Sk transitions by way of out-of-sweep places for Sk. By the merge exclusion assumption, these firing sequences cannot merge with each other or with conflicting paths in Sk. Since no out-of-sweep transition for Sk is fired, the region Ck consisting of Ck-1, transitions t1, . . . , to, and the connected portions of Sk reachable from t1, . . . , to must contain an MMCPS after firing σ. Thus, region Ck is live after σ. Under these conditions, by Lemma 7 above, live firing sequences starting from that state must end at cutoff transitions for U. When one of these cutoff transitions is fired, tokens will be returned to Sk-1 in such a way as to re-enable one of the transitions t1, . . . , to.
Consider now a firing sequence ρ, starting from that state and obtained by interleaving all firing sequences returning tokens to the predecessors of transitions t1, . . . , to in Sk-1. Because of the merge exclusion assumption, sequence ρ must also be live. When ρ is fired, all tokens enabling t1, . . . , to are returned to Sk-1 and region Ck-1 will again contain an MMCPS.
□
Lemma 9 Consider a k-level connected sweep region Ck and a (k-1)-level region Ck-1 contained in Ck. Consider any firing sequence σ of Ck-1 transitions that results in the marking of one or more out-of-sweep places po,1, . . . , po,n for Sk-1. If the MEA is satisfied, we have the following conclusions after the firing of σ is complete:
1. No transitions in sweeps Sk and higher are required to fire in order for target transition td to be fired.
2. If Ck contains an MMCPS after the transition t ∈ •po,k is fired, and if no out-of-sweep transitions for Sk are fired, eventually a set of concurrent starting places for Sk will be marked.
3. The firing of cutoff transitions following po,1, . . . , po,n, and the subsequent transfer of tokens back to Ck-1, will eventually cause region Ck-1 to contain an MMCPS again.
Proof. The first conclusion follows from Lemma 6. Since σ contains only Ck-1 transitions, Ck-1 will contain an MMCPS even if out-of-sweep places for the portion of Sk-1 contained in Ck-1 are marked by σ. Eventually, starting places for Sk-1 and lower sweeps will be marked, resulting in the return of tokens to the first sweep and the firing of an occurrence of td.
The second conclusion also follows from Lemma 6. If no out-of-sweep transitions for Ck are fired, Ck will continue to hold an MMCPS. This guarantees that Ck is still live and that a concurrent set of terminal places will eventually be marked. Thus, a set of concurrent starting places for Sk will eventually be marked.
To prove the third conclusion we must consider several possibilities, depending on whether the paths originating at the various places po,1, . . . , po,n join in sweep Sk or not. (Recall that the merge exclusion assumption precludes paths originating at out-of-sweep transitions from merging with paths originating at other out-of-sweep transitions; however, paths originating at multiple out-of-sweep places can merge with each other.) If the paths originating at po,1, . . . , po,n do not merge with each other, considerations similar to the proofs of Lemmas 7 and 8 above apply. If, however, these paths merge, different considerations apply.
Suppose, without loss of generality, that unfolding paths originating at two out-of-sweep places po,1 and po,2 for Sk-1 merge in sweep Sk at transition tm. This situation is illustrated in Figure 21. In order for tm to fire, both places po,1 and po,2 must first be marked. Clearly, these two places must be concurrent. (If we had po,1 # po,2, then tm # tm, a contradiction. Also, we could not have po,1 < po,2 or vice versa, because the input transitions of po,1 and po,2 are contained in sweep Sk-1.)
Suppose that po,1 and po,2 are marked and that tm is eventually fired. Since region Ck is live, the firing of tm leads to the marking of a set of concurrent starting nodes for Sk.
Figure 21: Illustration of Lemma 9.
Similar to the proofs of Lemmas 7 and 8, when the tokens in the starting places of Sk are returned to their matching places, Sk-1 will regain an MMCPS.
An issue may arise if one of po,1 and po,2 does not receive a token. Suppose, without loss of generality, that po,1 is marked and po,2 is not, for instance, because t1 fires but t2 is precluded from firing by a conflicting transition in Sk-1. In this case, a sequence of transitions following po,1 in Sk must be fireable after po,1 is marked, leading to a set of concurrent starting places for Sk. (If this were not the case, a token would be "stranded" in sweep Sk, either in place po,1 or in a place following po,1, against the assertion that region Ck is live.)
□
Theorem 8 If all basic and full latency values assigned by Atlantis and Discovery to nodes in unfolding U are not greater than λ, the deadline on transition td, then the execution sequences allowed by our supervisory controllers guarantee that the deadline on target transition td is always met in the supervised net.
Proof. Lemma 8 and Lemma 9 show that tokens lost out of a sweep Si to the next sweep Si+1 will eventually be returned to Si, if out-of-sweep transitions for Si and higher sweeps are disabled. This process will eventually cause the first sweep, S1, to contain an MMCPS, leading to the firing of td. We must now prove that this happens in time for td to meet its deadline.
Suppose that a token is lost from a sweep Sk to the next sweep Sk+1 by way of an out-of-sweep transition to. In order for this to happen, to must not be disabled by the supervisory controller. This means that the time left until the expiration of the deadline on td was greater than the full latency l(to) of to when to fired. Moreover, we know that l(to) ≥ b(to), the basic latency of to. By Theorem 6, b(to) is an upper bound on the length of all extended paths from to to td occurrences in the unfolding. In the absence of synchronization delays, tokens flowing to Sk+1 by way of to's firing will be returned to Sk on time for td to fire. However, we cannot rule out that the placement of tokens in Sk+1 at the time of to's firing could result in synchronization delays in Sk+1. In the worst case, the tokens in Sk+1 would be located in the root places of the sweep (or in the direct output places of root transitions) when to fired. We handle this case with Lemma 8. Suppose that the hypotheses of the lemma hold (i.e., the connected sweep region Ck+1 that contains both Sk and Sk+1 contains an MMCPS). In this case, the full latency l(to) of to accounts for all synchronization delays caused by the placement of tokens in Sk+1 when to fired. By induction, tokens lost through out-of-sweep transitions will always be returned to the first sweep and to occurrences of transition td on time for td to fire.
Suppose that a token is lost from a sweep Sk to the next sweep Sk+1 by way of an out-of-sweep place po. This case is slightly more complicated than the case of out-of-sweep transitions because controllers cannot prevent this token loss from a sweep to a higher sweep from occurring. The first conclusion in Lemma 9 says that tokens lost from a sweep Sk to the next sweep Sk+1 by way of an out-of-sweep place po will not affect the presence of an MMCPS in Sk. Thus, the return of tokens from Sk to previous sweeps, which is needed for the next firing of td, will not be affected by the tokens lost through po. Subsequent firings of td might be affected. However, the full latency value l(po) of po takes into account all extended paths from po back to td. Full latencies also take into account the possibility of tokens being passed further to higher sweeps through additional out-of-sweep places; see the third item in the definition of full latency in Section 5.2. Thus, the return of tokens on time for the next firings of td is guaranteed even when tokens flow out of a sweep through a place po.
We now consider the case of the first sweep S1. Suppose that no out-of-sweep transitions for S1 are fired and that no out-of-sweep place for S1 has been marked since the last firing of td. This situation also occurs in the initial net state, before any occurrence of td is fired. In this case, the full latencies of the initial starting places of unfolding U give an upper bound on the time needed for td to fire the first time. It is possible for multiple occurrences of td to be in a precedence relationship with each other in S1. In this case, the basic latencies assigned by Atlantis to td occurrences following other td occurrences still give an upper bound on the greatest delay between different firings of td.
Finally, we must show that the presence of an MMCPS in a level-k connected sweep region Ck leads to the liveness (i.e., the presence of an enabled transition) of Ck. We prove this fact by contradiction. Suppose that Ck contains an MMCPS, but no transitions in Ck are enabled. Consider the unfolding state in which this happens. Two possibilities exist. First, no unfolding transitions are enabled. If the last fired transition is a terminal unfolding transition ei, we return the tokens in the cut of [ei] to their matching places, which will re-enable an unfolding transition. If ei is not a terminal transition, this means that the unfolding has reached a state that does not enable any transitions. This means that the original net N similarly contains a state that does not enable any net transitions. However, this fact would contradict the assumption that the original net be level-4 live. Second, consider the case in which the unfolding contains at least one enabled transition but Ck contains no enabled transitions. In this case, at least one out-of-sweep transition ei for Sk, the highest-order sweep contained in Ck, must be enabled because the original net is level-4 live. However, this would mean that the Ck transition in direct conflict with ei is not enabled while ei is enabled. This situation contradicts our assumption that the original net be free from these kinds of livelocks. We conclude that the presence of an MMCPS in a connected sweep region Ck causes the sweep to be live.
□
Theorem 8 shows the correctness of the general supervisory control when no case of the merge exclusion assumption is violated. We now turn to proving the correctness of the two Challenger components, namely the MEA solver and the Parallelism solver. The following lemma first proves that firing all transitions in a SECS returns tokens to the previous sweep, which then contains an MMCPS again.
Lemma 10 Consider a firing sequence σ containing transitions from a (k-1)-level connected sweep region Ck-1 and out-of-sweep transitions t1, . . . , to for Sk-1 belonging to a SECS, and no other transitions from Sk or higher sweeps. The firing of the cutoff transitions following t1, . . . , to, and the subsequent transfer of tokens back to Ck-1, will eventually cause region Ck-1 to contain an MMCPS again.
Proof. Let SEi denote the SECS of a key transition in Sk at the end of σ. Prior to the firing of the transitions in SEi, the connected region Ck-1 containing the Sk-1 places feeding SEi must have contained an MMCPS. This property of the connected region Ck-1 is preserved from the initial unfolding state if no other out-of-sweep transitions for Sk-1 or higher sweeps are fired, as we saw for Lemma 7. Consider first the case in which only transitions in one SECS are fired. Next we generalize to the case of multiple SECSs.
After all transitions in SEi are fired, Ck-1 no longer contains an MMCPS; however, the connected sweep region Ck that contains SEi must still contain an MMCPS by Lemma 6. Suppose that the last merge transition is tm. After firing tm, there is no merge with additional paths originating at out-of-sweep nodes for sweep Sk-1. By contradiction, if there were a merge transition with a path originating at an out-of-sweep place, it would conflict with the assumption that MEA case (b) does not happen. If there were a merge transition with a path originating at an out-of-sweep transition ti, the set SEi would contain transition ti, and tm could not be the last merge transition. Therefore, after tm is fired, the transitions following tm in Sk must be fireable. By Corollary 2, a set of starting places Ps for Sk is marked. Let tc be the cutoff transition for which the places in Ps are fringe places. Consider a firing sequence ρ, starting from the unfolding state reached at the end of σ. Sequence ρ contains all Sk transitions in [tc], ending with tc. When tc is fired, the tokens in Ps are returned to their matching places. By Lemma 5, the transition sequence between the matching places and Ps is live again if only transitions in [tc] were fired after Ps became marked, where SEi is in the cause of tc. The firing sequence ρ must become fireable again after "jumping back", because no other transition in the cause of tc was fired. If the unfolding did not return to the same state as before firing any transition in SEi, or to an earlier state containing an MMCPS, then some of the transitions in SEi would not be fireable. This would violate our level-4 liveness assumption.
In summary, before the transitions in SEi were fired in σ, Ck-1 contained an MMCPS. In other words, the firing of all transitions of SEi in σ is the only cause why Ck-1 does not contain an MMCPS after σ. By re-enabling the transitions in SEi, we have that Ck-1 contains an MMCPS again.
□
Lemma 11 Consider a firing sequence σ containing exclusively transitions from a (k-1)-level connected sweep region Ck-1, out-of-sweep transitions t1, . . . , to (including a set of SECSs, SE1, . . . , SEo) for Sk-1, and no other transitions from Sk or higher sweeps. The firing of the cutoff transitions following t1, . . . , to, and the subsequent transfer of tokens back to Ck-1, will eventually cause region Ck-1 to contain an MMCPS again.
Proof. This lemma is a generalized version of Lemma 7 and Lemma 10 above. In this case, we allow multiple out-of-sweep transitions and transitions in MEA case (a) for sweep Sk-1 to fire out of region Ck-1. Consider the firing sequences originating at the unfolding state reached after σ and containing only Sk transitions and transitions in higher sweeps reachable from Sk transitions by way of out-of-sweep places for Sk. By MEA cases (b) and (c), these firing sequences cannot merge with out-of-sweep places or with conflicting paths in Sk. Since no out-of-sweep transition for Sk is fired, the region Ck consisting of Ck-1, transitions t1, . . . , to, and the connected portions of Sk reachable from t1, . . . , to must contain an MMCPS after firing σ. Thus, region Ck is live after σ. Under these conditions, by Lemma 7 and Lemma 10 above, live firing sequences starting from that state must end at cutoff transitions for U. When one of these cutoff transitions is fired, tokens will be returned to Sk-1 in such a way as to re-enable one of the transitions t1, . . . , to.
Consider now a firing sequence ρ, starting from that state and obtained by interleaving all firing sequences returning tokens to the predecessors of transitions t1, . . . , to in Sk-1. Because of MEA cases (b) and (c), sequence ρ must also be live. When ρ is fired, all tokens enabling t1, . . . , to are returned to Sk-1 and region Ck-1 will again contain an MMCPS.
□
Theorem 9 If the common full latency lc values assigned by the MEA solver of Challenger to nodes in unfolding U are not greater than λ, the deadline on transition td, then the execution sequences allowed by our MEA solver guarantee that the deadline on target transition td is always met in the supervised net.
Proof. The transitions in a SECS for Sk-1 can be in one of two states: they are either all enabled or all disabled. If they are all disabled, an MMCPS is contained in Sk-1, because no token is missing from the portion of the connected region Ck-1 that includes this SECS. Thus, the common supervisory control strategy is sufficient to enforce the deadline on target transition td by Theorem 8, because no case of the MEA occurs. If they are all enabled, the MEA solver only allows all transitions to fire when their common full latency lc is less than the time left until the expiration of td's deadline. If the transitions in a number of SECSs are all fired together, then the conditions of Lemma 11 are satisfied. As a result, tokens are eventually returned to Sk-1 and an MMCPS will be contained in Sk-1 again, provided that out-of-sweep transitions for Sk and higher sweeps are disabled. Moreover, we know that the common full latency lc values are all not less than the corresponding basic latency values. The same proof strategy as for Theorem 8 applies, by induction from sweep Sk-1 down to sweep S1.
□
Theorem 10 Consider an out-of-sweep transition ti for sweep Si-1. If any transition in its PECS has been disabled, enabling ti by the Parallelism solver of Challenger will not cause a deadline violation on target transition td in the supervised net.
Proof. Suppose that a disabled transition tj is in the PECS of ti, and that tj is an out-of-sweep transition for Sj-1, where j < i. By the definition of PECS, we know that transition ti is in a loop parallel with the loops in sweep Sj-1, where these loops are all contained in a connected region Ci. In Figure 22, transition tm is the merge transition in sweep Si-1 with inputs originating at tj and at the place pi, an input of ti. Since transition tj is disabled, transition tm will not be fireable before td is fired.
Figure 22: Illustration of Theorem 10.
Thus, transition tm is not in the cause of firing target transition td in the current state. Enabling transition ti would violate the common supervisory control strategy if the latency of ti is greater than the time left. But firing target transition td does not rely on firing the merge transition tm and tj. Remember that the latency of ti represents the longest time elapsed along the extended path from ti to target transition td. It is not necessary to disable ti for firing tm on time, because tm does not cause any delay of the firing of td in which tm is not involved. Therefore, the strategy of the Parallelism solver does not create additional delay for firing td.
□
CHAPTER 8
EMPIRICAL RESULTS
In order to evaluate our method empirically, we have implemented a tool-set consisting of the following components: (a) the unfolding algorithm, (b) the latency algorithms, and (c) the Challenger algorithm and simulator. In Figure 23, we show the tool-set component for each step of the overview diagram appearing in Figure 1.
The tool-set takes a basic Petri net file as an input, which is in a basic Petri net format
used by the PEP tool (Programming Environment based on Petri Nets) (59). PEP provides
an extensive set of modelling, compilation, simulation and verification functionality for
Petri nets. It has been implemented on Solaris 2.4, SunOS 4.1.3 and Linux. Our Unfolder
implements the Stubborn unfolding algorithm in Figure 5. The output file is an unfolding
net of the input Petri net. The next component, Converter, transforms the original Petri net and the generated unfolding net into PNML (Petri Net Markup Language) format, which is the input language of the PNK (Petri Net Kernel) system (61). Converter also randomly creates the time intervals for each transition in both nets. The original Petri net then becomes a timed Petri net with bounded intervals, and its unfolding net inherits these time intervals through the homomorphic mapping. The timed unfolding net is then input to our implementation of the latency algorithms, Atlantis in Figure 11 and Discovery in Figure 13. They assign the full latency values to each controlled transition in the unfolding net.
During the latency generation, the MEA (Merge Exclusion Assumption) is checked.
Figure 23: Evaluation toolset.
If there is no MEA violation, Controller generation maps the transition latency values into the original Petri net, which is transformed into a Controlled net with a few extended fields, including time intervals and latencies. If there are MEA violations, we check whether they all belong to MEA case (a). If these cases are only MEA case (a), our Challenger implementation generates the corresponding Controlled net with additional fields, including SECS (Synchronization Enabled Control Set) and PECS (Parallel Enabled Control Set). Finally, our Simulator, implementing the algorithm in Figure 20, takes as
input the Controlled net and outputs the execution traces, where the evaluation data can
be collected.
Our first set of evaluation examples is from Corbett (24). The empirical results that we have collected in Table III show data for supervisor generation in three examples, Gasi, Over2 and Ring3. Gasi is a self-service gas station model (37). One operator receives the customer's request and then activates a pump. The customer pays for gas and pumps gas. After he or she finishes pumping, the pump station reports the pumping information to the operator. There are some exceptions leading the system flow back to the initial state. Over2 is an automated highway system with an overtaking protocol (33). The protocol is designed to coordinate the overtaking of cars on a road. A set of cars is simulated in a queue, with the capability to communicate with adjacent cars. A driver who intends to overtake the car in front needs to initiate an overtake negotiation decided by the environment. In this example, there are 2 cars, 2 communication media for the negotiation and overtake phases, an environment, and a set of timers. Ring3 is a token ring mutual exclusion protocol, in which 3 users access a resource through 3 server tasks. Each server task takes a turn to get a token around a ring.
We have executed our algorithms in the tool-set above. The results are collected in Table III, whose columns list the three Corbett examples, Gasi, Over2 and Ring3. The first two rows of the table show the original net sizes (places and transitions). The Unfolder computes the corresponding stubborn unfolding of each case. The following two rows of data are the stubborn unfolding sizes (conditions and events). The assumptions in the following rows include livelock and MEA. All the cases we tested were livelock free, but violated MEA case (a). Therefore, Challenger is invoked to generate the extended controlled net, in which every key transition includes time intervals, latency, SECS and PECS. In the Gasi example, there are 4 key transitions to be controlled by the supervisory controller, and 2 of them merge with each other. These two merging transitions therefore each contain a SECS field. In the Over2 example, the numbers of key transitions and SECSs are 5 and 3, respectively. For Ring3, they are 4 and 3.
The result rows include three evaluation criteria: deadline violation, and transition coverage with and without the controller. The first is critical for assessing whether all the execution traces returned by the simulator satisfy the prespecified deadline of a given target transition. There are no deadline violations in any of the three examples. The two coverage criteria concern the restrictiveness of our supervisory control method; they measure whether our supervisory controllers over-restrict the firing freedom of the model. The coverage is the number of transitions fired during simulation compared with the total number of transitions in the original (uncontrolled) model.
                                    Gasi      Over2     Ring3
original net
  places                             28         39        33
  transitions                        21         33        32
stubborn unfolding
  conditions                         43         77        50
  events                             21         37        24
assumptions
  livelock violation                 No         No        No
  MEA violation                  case (a)   case (a)  case (a)
supervisory controller
  key transitions                     4          5         4
  SECS                                2          3         3
  PECS
results
  deadline violation                 No         No        No
  coverage with controller        85.7%      72.7%     81.3%
  coverage without controller     85.7%      72.7%     81.3%

Table III: Data for supervisory control test on Corbett examples.
We count the number of distinct fired transitions. As a termination condition of the simulation, we stop after a number of firings equal to 20 times the size of the transition set. The coverage data show 85.7%, 72.7%, and 81.3% for the three examples. For example, in the Gasi example there are 21 transitions in the original net and 21 events in the stubborn unfolding. Our supervisory controller selectively disables up to 4 key transitions, of which two are synchronized at a merge point, so both of these transitions contain a SECS. The simulator counts 420 (21 * 20) transition firings as its termination condition. The resulting coverage means that 85.7% of the transitions of the original net have been covered (fired) in the simulation.
                                 dph2      dph4      dph(n)    Challenger
original net
  places                           8         16        4n
  transitions                      6         12        3n
stubborn unfolding
  conditions                      14         28        7n
  events                           6         12        3n
assumptions
  livelock                        No         No        No         No
  MEA violation               case (a)   case (a)   case (a)   case (a)
supervisory controller
  key transitions                  2          4         n          3
  SECS                             2          2         2          2
  PECS                             0          2        n-2         1
results
  deadline violation              No         No        No         No
  coverage with controller      100%       100%      100%       100%

Table IV: Data for supervisory control test on Dining Philosophers examples.
Because these examples do not satisfy the liveness assumption, comparing the coverage with and without the supervisory controller is necessary to give a better picture of the real capability of our supervisory controller. After we simulate the firing without the supervisory controller, the coverage result is the same as under control. Therefore, the integrated supervisory controller does not compromise the freedom of the modeled system, while the deadline is enforced successfully. Moreover, level-4 liveness appears to be a loose assumption, in the sense that our methods still work successfully on examples containing non-looping behaviors. In addition, our livelock assumption is not checked before generating the supervisory controller, but if there were a livelock violation in a model, the simulated results should reach a deadlock in a long run. In our assessment results, we did not find a livelock.
One of the advantages of our methods is computational efficiency, because we use the unfolding technique and static analysis of the unfolding structure. In order to evaluate this point, we have applied the tool-set to the set of examples in Table IV, which satisfy all the assumptions, such as level-4 liveness and freedom from livelock. The first test case is the classic two dining philosophers example. In this model, one of the philosophers picks up the left-hand fork followed by the right-hand fork, while the other reverses this picking order to avoid deadlock. In the first step, our stubborn unfolding builds the unfolding net containing 14 conditions and 6 events. In the second step, the latency algorithms verify that this example does not violate MEA cases (b) and (c), and identify that there are 2 key transitions and 2 SECSs. In the third step, our simulator emulates the firing of this model, controlled by our supervisory controller rules in Section 6.4. The output shows no deadline violations and 100% coverage, so both of our evaluation goals are met.
The second example is the 4 dining philosophers Petri net model. It also goes through the three computation steps. It generates a stubborn unfolding of double the size, and 4 key transitions with 2 SECSs and 2 PECSs. This test also shows that our methods succeed in enforcing the deadline and covering all transitions of the original net.
The following column is a generalized version of the dining philosophers examples. It clearly shows the efficiency of our methods. When the size of the original Petri net grows, the size of our stubborn unfolding increases linearly, which is a great benefit for our latency algorithms. Recall that our latency algorithms are linear in the size of the unfolding, as mentioned in Section 5.3. The number of controlled key transitions in the supervised net increases linearly with the number of philosophers. The number of SECSs is always 2, because only two out-of-sweep transitions merge. They represent the two fork-picking events of the last philosopher in the last sweep, which require synchronization control with each other. To increase parallelism and avoid the "hunger" of the non-target philosophers, n-2 PECSs are generated. They leave the non-target philosophers free to pick up forks, without the restriction imposed as the time approaches the target philosopher's deadline. In the end, the testing results always show that our supervisory controllers succeed in enforcing the target philosopher's deadline and avoiding the "hunger" issue of the other philosophers.
The last column is the test case from the example in Figure 19. There are 3 key transitions, t1, t3 and t10, to be controlled. The SECSs of t1 and t3 both contain t1 and t3. The only transition with a PECS is t10, whose PECS contains transition t3. The simulation result of this example is also successful with respect to our two testing goals.
Furthermore, we have tested even larger dining philosophers examples: 8 dining philosophers (dph8), 10 dining philosophers (dph10), 12 dining philosophers (dph12), and 14 dining philosophers (dph14), reported in Table V. We measured the time costs of the two major steps of our methods, stubborn unfolding and supervisory controller generation. As we can see, the time complexity of stubborn unfolding appears to be polynomial, T(n^a), as the number of dining philosophers increases. In these four adjacent cases, the time costs for stubborn unfolding are 0.020, 0.046, 0.109, and 0.312 seconds. The trend is roughly upper bounded by n^4.
                                 dph8     dph10    dph12    dph14    dph(n)
original net
  places                          32       40       48       56       4n
  transitions                     24       30       36       42       3n
stubborn unfolding
  conditions                      56       70       84       98       7n
  events                          24       30       36       42       3n
  time (sec.)                   0.020    0.046    0.109    0.312    T(n^a)
assumptions
  livelock                        No       No       No       No       No
  MEA violation               case (a) case (a) case (a) case (a) case (a)
supervisory controller
  key transitions                  8       10       12       14        n
  SECS                             2        2        2        2        2
  PECS                             6        8       10       12      n-2
  time (sec.)                   3.422    5.239    7.662    9.808     T(n)
results
  deadline violation              No       No       No       No       No
  coverage with controller      100%     100%     100%     100%     100%

Table V: Time complexity on Dining Philosophers examples.
In the other part of our methods, supervisory controller generation costs T(n), growing linearly as the number of dining philosophers rises.
In summary, we have presented a set of dining philosophers examples satisfying all the assumptions required by our methods. The simulation results show success in enforcing the deadline of the specified target transition, while all non-target transitions are covered. This full coverage is an important feature of our supervisory controller, because most systems demand a high degree of concurrency. Our Parallelism solver has succeeded in overcoming this limitation and increasing the parallelism. In addition, we have evaluated the time costs for the scalable example set. The results show that the time complexities of the two major parts of our methods are polynomial and linear, respectively, which is comparable to what the best methods in their fields achieve. We also tested a set of benchmark cases from Corbett (24) at the beginning. These tests confirm that our supervisory controller succeeds in enforcing the deadline of the target transition, and that the coverage is the same as in the counterpart tests without the supervisory controller.
CHAPTER 9
CONCLUSIONS AND FUTURE RESEARCH
In this dissertation we presented a novel supervisory control method for enforcing a
periodic deadline on the execution of an event in a modeled system. An advantage of
this method is that it can deal with varying task durations and dynamically change a task schedule depending on the actual times that tasks take to execute. Even though the method runs off-line with respect to the controlled system, the resulting controllers can automatically adjust the control policies as the durations of system operations vary within
predefined ranges. Another advantage is that we tie supervisor generation to an untimed
model, namely the unfolding of the ordinary Petri net underlying the time Petri net modeling
the system. We deliberately avoid state space exploration (whether timed or untimed).
In our view, our method breaks new ground in enforcing the reliability of software systems by utilizing unfolding analysis and supervisory controller synthesis. First, we define the formal model bases for a concurrent software system, which are Petri nets and time Petri nets. Petri nets capture explicitly concurrent, causal, and conflicting relationships. Time Petri nets provide the time extension to represent the timing features of the underlying modeled system. Based on these models, we conduct the structural analysis which helps define the feasibility of our method. More importantly, we discover that synchronization and livelock properties in a net can possibly cause a deadlock in the supervised net that did not happen in the original model. Consequently, we are able to carefully define the assumptions for our method.
Second, we apply the unfolding technique to obtain surprising benefits. Our stubborn
unfolding algorithm is able to not only generate a complete unfolding, but also detect
the global deadlocks of the underlying ordinary Petri nets modeling concurrent systems.
This algorithm computes the stubborn unfoldings, which combine stubborn sets and model
unfolding, two existing methods for sidestepping the complexity of state space exploration.
Our empirical data show that reductions in unfolding size are possible, depending on the concurrent system under consideration. This strategy also benefits the computational tractability of the method and the size of the unfolding.
Third, we create two efficient latency algorithms based on the idea of the extended path. The Atlantis algorithm divides an unfolding net into a set of contiguous sweeps based on the target transition, working backward through the extended paths. Taking the latest firing times of the transitions along the extended paths, Atlantis calculates the basic latency value for each node. The Discovery algorithm first checks whether a synchronization appears in a given sweep, and solves a particular synchronization case, where the key transition only merges with initial places. As a result, Discovery outputs the full latency for this special case and reports whether the merge exclusion assumption is satisfied. The significant success of our latency algorithms is that they identify the structural peculiarities of each key transition (out-of-sweep transition) and compute latencies in time linear in the size of an unfolding.
Fourth, we have made some progress toward sidestepping the merge exclusion assumption and the parallelism limitations in the controlled nets. The Challenger solvers generate the extended fields for each key transition to associate it with other key transitions, with respect to their synchronization relationships. On the one hand, the MEA solver successfully handles one case that violates the MEA. Additionally, it makes a breakthrough on the synchronization delay problem. On the other hand, the Parallelism solver fruitfully increases the degree of concurrency when the firing of the target transition is independent of the parallel behaviors. Moreover, the Challenger algorithm engenders a new paradigm to dynamically control certain key transitions, depending on the firing of their corresponding transitions.
Fifth, we have already conducted empirical studies aimed at assessing the scalability of
this method. The empirical data reported in Chapter 8 illustrate the success of our toolset
for automatically generating supervisory controllers and enforcing the deadline on the target
transition. More significantly, the computations of our major algorithms are highly efficient.
In particular, we show the applicability of our methods in the Corbett examples, even in
cases that do not satisfy the level-4 liveness assumption.
9.1 Future work
We are exploring several additional research directions. Regarding the MEA, we have identified synchronization delay among the controlled transitions, and Challenger can
handle one case of possible violations; two other cases remain unsolved. One benefit of Challenger is that it provides a way to identify these
synchronization points and suggests how to overcome the resulting synchronization delay. Consequently, we plan to analyze the differences between the solved and unsolved MEA
cases and to design similar solutions for the other two cases, based on a dynamic controller that can disable and enable a set of corresponding key transitions depending on
the current state of the controlled net.
Currently, our supervisory controller is conservative when the time approaches the deadline. Considering the whole set of possible behaviors, the controller may overly
restrict the firing of some of them. This is so for two reasons. First, the basic latency of a common input place of multiple conflicting transitions is assigned the maximal
latency value among those transitions. Second, the full latency of the controlled key transition is compensated by the largest latency value in its compensation set, because
of the synchronization delay on the merge transition. Both choices make the control strategy conservative.
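Schematically, and using notation and an additive form of compensation that are our own assumptions rather than definitions drawn from earlier chapters, the two maximizations can be written as

\[
\ell(p) \;=\; \max_{t \in p^{\bullet}} \ell(t),
\qquad
\ell_{\mathrm{full}}(t_k) \;=\; \ell_{\mathrm{basic}}(t_k) + \max_{t \in \mathrm{Comp}(t_k)} \ell(t),
\]

where \(p\) is an input place shared by the conflicting transitions in \(p^{\bullet}\), \(t_k\) is a controlled key transition, and \(\mathrm{Comp}(t_k)\) denotes its compensation set. Both maxima over-approximate the actual delay whenever a cheaper alternative would in fact fire, which is precisely the source of conservatism noted above.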
To improve the current control strategy, we would have to identify the special structure of the MEA cases and compute the conflicting relationships in every sweep; a more
detailed dynamic controller could then be used for each of these cases. Maximal permissiveness is an important feature of supervisory controllers. Other approaches achieve
this property by searching the whole state space (64; 65; 72). We could make our supervisory controller more permissive while still relying on structural analysis, which
keeps the computation efficient.
Our problem domain considers a given target transition, which represents a critical event in the modeled system and is required to execute periodically. A similar problem
can benefit from our methodology: given a pair of events,
the modeled system requires a prespecified delay between their executions. We could use our latency algorithms, which are based on our definition of extended path, to compute
all possible execution sequences between the two events. Scheduling problems of this kind have been addressed by other authors (82; 83; 85). These works use dynamic
programming and different ways of modeling the real-time system; however, compared with our framework, their computational complexity remains a practical concern.
In addition, we plan to expand the set of timing properties that we can enforce with our
supervisory controllers. For instance, we will investigate ways to combine maximum delays on
transition firings with minimum delays. Finally, we will explore the integration of real-time
supervisory control with methods for enforcing mutual exclusion properties (see, e.g., (54))
and liveness properties (see, e.g., (29; 50)).
CITED LITERATURE
1. Aalst, W. V. D.: Interval timed coloured petri nets and their analysis. In Application
and Theory of Petri Nets 1993, ed. M. A. Marsan, volume 691 of Lecture
Notes in Computer Science, pages 453-472. Springer Berlin / Heidelberg, 1993.
10.1007/3-540-56863-8-61.
2. Albers, K. and Slomka, F.: An event stream driven approximation for the analysis
of real-time systems. In Real-Time Systems, 2004. ECRTS 2004. Proceedings.
16th Euromicro Conference on, pages 187-195, 2004.
3. Alur, R.: Timed automata. Theoretical Computer Science, 126:183-235, 1999.
4. Alur, R., Feder, T., and Henzinger, T. A.: The benefits of relaxing punctuality. Journal
of the ACM, 43(1):116-146, 1996.
5. Alur, R. and Henzinger, T.: Logics and models of real time: A survey. In Real-Time:
Theory in Practice, eds, J. de Bakker, C. Huizing, W. de Roever, and G. Rozenberg, volume 600 of Lecture Notes in Computer Science, pages 74-106. Springer
Berlin / Heidelberg, 1992. 10.1007/BFb0031988.
6. Alur, R. and Weiss, G.: RTComposer: a framework for real-time components with
scheduling interfaces. In Proceedings of the 7th ACM international conference
on Embedded Software (EMSOFT '08), page 159, Atlanta, GA, USA, 2008.
7. Audsley, N., Tindell, K., and Burns, A.: The end of the line for static cyclic schedul-
ing?
In Real-Time Systems, 1993. Proceedings., Fifth Euromicro Workshop
on, pages 36-41, 1993.
8. Baruah, S. and Baker, T.: Global EDF schedulability analysis of arbitrary sporadic
task systems. In Real-Time Systems, 2008. ECRTS '08. Euromicro Conference
on, pages 3-12, 2008.
9. Behforooz, A. and Hudson, F.: Software engineering for real-time high reliability appli-
cations. In Engineering of Computer-Based Systems, 1999. Proceedings. ECBS
'99. IEEE Conference and Workshop on, pages 247-253, 1999.
10. Berthomieu, B. and Diaz, M.: Modeling and verification of time dependent systems
using time petri nets. IEEE Trans. Softw. Eng., 17(3):259-273, 1991.
11. Boissel, O. and Kantor, J.: Optimal feedback control design for discrete-event systems
using simulated annealing. Computers & Chemical Engineering, 19(3):253-266,
March 1995.
12. Bonhomme, P.: Constraints graph based approach for the control of time critical systems. In Emerging Technologies and Factory Automation, 2008. ETFA 2008.
IEEE International Conference on, pages 113-120, 2008.
13. Bonhomme, P.: Towards a supervisory control policy for time critical systems. In
Control Applications (CCA) & Intelligent Control (ISIC), 2009 IEEE, pages
1649-1654, 2009.
14. Bonhomme, P., Berthelot, G., Aygalinc, P., and Calvez, S.: Verification technique for
time petri nets. In Systems, Man and Cybernetics, 2004 IEEE International
Conference on, volume 5, pages 4278-4283 vol.5, 2004.
15. Bouyer, P., Markey, N., Ouaknine, J., and Worrell, J.: On expressiveness and complexity in Real-Time model checking. In Automata, Languages and Programming,
eds, L. Aceto, I. Damgård, L. Goldberg, M. Halldórsson, A. Ingólfsdóttir, and I.
Walukiewicz, volume 5126 of Lecture Notes in Computer Science, pages 124-
135. Springer Berlin / Heidelberg, 2008. 10.1007/978-3-540-70583-3.11.
16. Brandin, B. and Wonham, W.: Supervisory control of timed discrete-event systems.
Automatic Control, IEEE Transactions on, 39(1):329-342, 1994.
17. Brave, Y. and Krogh, B. H.: Maximally permissive policies for controlled time marked
graphs. In Proc. of 12th IFAC World Congress, pages 263-266, Sydney, Australia, 1993.
18. Buy, U. and Sloan, R.:
A petri-net-based approach to real-time program anal-
ysis. In Software Specification and Design, 1993., Proceedings of the Seventh
International Workshop on, pages 56-60, 1993.
19. Buy, U. and Darabi, H.:
Deadline-enforcing supervisory control for time petri
nets. In CESA2003 IMACS Multiconference on Computational Engineering in
Systems Applications, 2003.
20. Cassandras, C. G. and Lafortune, S.:
Introduction to Discrete Event Systems.
Springer, 2nd edition, November 2007.
21. Clarke, E. M., Emerson, E. A., and Sistla, A. P.: Automatic verification of finite-state
concurrent systems using temporal logic specifications. ACM Trans. Program. Lang. Syst., 8(2):244-263, 1986.
22. Cofer, D. and Garg, V.: Supervisory control of real-time discrete event systems using
lattice theory. In Proceedings of 1994 33rd IEEE Conference on Decision and
Control, pages 978-983, Lake Buena Vista, FL, USA, 1994.
23. Cofer, D. and Garg, V.: Control of event separation times in discrete event systems. In Proceedings of 1995 34th IEEE Conference on Decision and Control,
pages 2005-2010, New Orleans, LA, USA, 1995.
24. Corbett, J.: Evaluating deadlock detection methods for concurrent software. Software
Engineering, IEEE Transactions on, 22(3):161-180, 1996.
25. Dima, C.: Real-time automata. J. Autom. Lang. Comb., 6(1):3-23, 2001.
26. Duri, S., Buy, U., Devarapalli, R., and Shatz, S. M.: Application and experimental
evaluation of state space reduction methods for deadlock analysis in ada. ACM
Trans. Softw. Eng. Methodol., 3(4):340-380, 1994.
27. Esparza, J. and Heljanko, K.:
Unfoldings: A Partial-Order Approach to Model
Checking. Springer, 1 edition, April 2008.
28. Esparza, J., Römer, S., and Vogler, W.:
An improvement of McMillan's unfolding
algorithm. Formal Methods in System Design, 20:87-106, 1996.
29. Fanti, M. and Zhou, M.: Deadlock control methods in automated manufacturing
systems. Systems, Man and Cybernetics, Part A: Systems and Humans, IEEE
Transactions on, 34(1):5-22, 2004.
30. Garg, D., Fath, A., and Martinez, A.: Real-time open-platform-based control of cooperating industrial robotic manipulators. In Intelligent Control, 2002. Proceedings
of the 2002 IEEE International Symposium on, pages 428-433, 2002.
31. Ghosh, A.: Modeling and analysis of Real-Time database systems in the framework of
discrete event systems, http://drum.lib.umd.edu/handle/1903/5704, 1995.
32. Giua, A. and Xie, X.: Control of safe ordinary petri nets using unfolding. Discrete
Event Dynamic Systems, 15(4):349-373, 2005.
33. Graf, S. and Loiseaux, C.: A tool for symbolic program verification and abstraction. In Proceedings of the 5th International Conference on Computer Aided
Verification, pages 71-84. Springer-Verlag, 1993.
34. Hanisch, H.: Analysis of place/transition nets with timed arcs and its application
to batch process control. In Application and Theory of Petri Nets 1993, ed.
M. A. Marsan, volume 691 of Lecture Notes in Computer Science, pages 282-
299. Springer Berlin / Heidelberg, 1993. 10.1007/3-540-56863-8_52.
35. Harrison, J.: Formal verification at Intel. In Proceedings of the 18th Annual IEEE
Symposium on Logic in Computer Science, page 45. IEEE Computer Society,
2003.
36. He, K. and Lemmon, M.: Liveness-enforcing supervision of bounded ordinary petri
nets using partial order methods. Automatic Control, IEEE Transactions on,
47(7):1042-1055, 2002.
37. Helmbold, D. and Luckham, D.: Debugging Ada tasking programs. IEEE Softw., 2(2):47-57, 1985.
38. Hirshfeld, Y. and Rabinovich, A.: Timer formulas and decidable metric temporal logic.
Information and Computation, 198(2):148-178, 2005.
39. Holloway, L. E., Krogh, B. H., and Giua, A.: A survey of petri net methods for
controlled discrete event systems. Discrete Event Dynamic Systems, 7(2):151-190, 1997.
40. Holloway, L. E.: Time measures and state maintainability for a class of composed
systems. In WODES'96, pages 24-30, 1996.
41. Hopcroft, J. E., Motwani, R., and Ullman, J. D.: Introduction to Automata Theory,
Languages, and Computation. Addison Wesley, 3 edition, July 2006.
42. Introduction to air traffic control. http://www.airways.co.nz/about_airways/ans.intro.asp#l,
2010.
43. Iordache, M. and Antsaklis, P. J.: Supervisory Control of Concurrent Systems: A Petri
Net Structural Approach. Birkhäuser Boston, 1 edition, June 2006.
44. Kemmerer, R. and Kolano, P.: Formally specifying and verifying real-time systems.
In Formal Engineering Methods., 1997. Proceedings., First IEEE International
Conference on, pages 112-120, 1997.
45. Khomenko, V.: Model checking based on prefixes of petri net unfoldings, 2003.
46. Koltuksuz, A., Kulahcioglu, B., and Ozkan, M.: Utilization of timed automata as
a verification tool for security protocols. In Secure Software Integration and
Reliability Improvement Companion (SSIRI-C), 2010 Fourth International
Conference on, pages 86-93, 2010.
47. Kovacs, A. and Hudak, S.: Time semantics in time basic nets. In Applied Machine
Intelligence and Informatics (SAMI), 2010 IEEE 8th International Symposium
on, pages 315-319, 2010.
48. Koymans, R.: Specifying real-time properties with metric temporal logic. Real-Time
Systems, 2(4):255-299, 1990.
49. Lehene, M. D.: Atlantis: Enforcing deadlines in time petri nets using net supervision.
thesis, 2005.
50. Li, Z. and Zhou, M.: Elementary siphons of petri nets and their application to deadlock
prevention in flexible manufacturing systems. Systems, Man and Cybernetics,
Part A: Systems and Humans, IEEE Transactions on, 34(1):38-51, 2004.
51. McMillan, K. L.: A technique of state space search based on unfolding. Formal Methods
in System Design, 6(1):45-65, 1995.
52. Melzer, S. and Römer, S.: Deadlock checking using net unfoldings. In Proceedings of the 9th International Conference on Computer Aided Verification (CAV '97), 1254:352-363, 1997.
53. Merlin, P. and Färber, D.: Recoverability of communication Protocols-Implications of
a theoretical study. Communications, IEEE Transactions on, 24(9):1036-1043,
1976.
54. Moody, J., Yamalidou, K., Lemmon, M., and Antsaklis, P.: Feedback control of petri
nets based on place invariants. In Decision and Control, 1994., Proceedings of
the 33rd IEEE Conference on, volume 3, pages 3104-3109 vol.3, 1994.
55. Murata, T.: Petri nets: Properties, analysis and applications. Proceedings of the IEEE,
77(4):541-580, 1989.
56. Ölveczky, P.: Towards formal modeling and analysis of networks of embedded medical
devices in Real-Time Maude. In Software Engineering, Artificial Intelligence,
Networking, and Parallel/Distributed Computing, 2008. SNPD '08. Ninth
ACIS International Conference on, pages 241-248, 2008.
57. Pekilis, B. and Seviora, R.: Detection of response time failures of real-time software.
In Proceedings of the Eighth International Symposium on Software Reliability
Engineering, pages 38-47, 1997.
58. Penczek, W. and Półrola, A.:
Advances in Verification of Time Petri Nets and
Timed Automata: A Temporal Logic Approach (Studies in Computational
Intelligence). Springer-Verlag New York, Inc., 2006.
59. PEP homepage, http://theoretica.informatik.uni-oldenburg.de/~pep/, 2010.
60. Petri, C. and Reisig, W.: Petri net. Scholarpedia, 3(4):6477, 2008.
61. Petri net kernel, http://www2.informatik.hu-berlin.de/top/pnk/, 2010.
62. Pinedo, M.: Scheduling: Theory, Algorithms, and Systems. Prentice Hall, 2 edition,
August 2001.
63. Prandini, M., Hu, J., Lygeros, J., and Sastry, S.: A probabilistic approach to aircraft
conflict detection. Intelligent Transportation Systems, IEEE Transactions on,
1(4):199-220, 2000.
64. Ramadge, P. J. and Wonham, W. M.: Modular feedback logic for discrete event systems.
SIAM J. Control Optim., 25(5):1202-1218, 1987.
65. Ramadge, P. J. and Wonham, W. M.: Supervisory control of a class of discrete event
processes. SIAM J. Control Optim., 25(1):206-230, 1987.
66. Ramchandani, C.: Analysis of asynchronous concurrent systems by timed petri nets. Technical report, Massachusetts Institute of Technology, 1974.
67. Sathaye, A. and Krogh, B.: Synthesis of real-time supervisors for controlled time petri
nets. In Proceedings of 32nd IEEE Conference on Decision and Control, pages
235-236, San Antonio, TX, USA, 1993.
68. Semenov, A. and Yakovlev, A.: Verification of asynchronous circuits using time petri
net unfolding. In Design Automation Conference Proceedings 1996, 33rd, pages
59-62, 1996.
69. Seshia, S., Bryant, R., and Stevens, K.:
Modeling and verifying circuits using generalized relative timing. In Asynchronous Circuits and Systems, 2005.
ASYNC 2005. Proceedings. 11th IEEE International Symposium on, pages 98-108, 2005.
70. Sifakis, J.: Automatic Verification Methods for Finite State Systems: International
Workshop, Grenoble, France. June 12-14, 1989. Proceedings. Springer, 1 edition, February 1990.
71. Suh, I., Yeo, H., Kim, J., Ryoo, J., Oh, S., Lee, C., and Lee, B.: Design of a supervisory control system for multiple robotic systems. In Intelligent Robots and Systems '96, IROS 96, Proceedings of the 1996 IEEE/RSJ International Conference on, volume 1, pages 332-339 vol.1, 1996.
72. Takae, A., Takai, S., Ushio, T., Kumagai, S., and Kodama, S.: Maximally permissive controllers for controlled time petri nets. In Decision and Control, 1994.,
Proceedings of the 33rd IEEE Conference on, volume 2, pages 1058-1059 vol.2,
1994.
73. Uzam, M., Jones, H., and Yücel, I.: Using a Petri-net-based approach for the real-time supervisory control of an experimental manufacturing system. The International Journal of Advanced Manufacturing Technology,
16:498-515, 2000. 10.1007/s001700070058.
74. Valmari, A.: Error detection by reduced reachability graph generation. In Proc. of
the Ninth European Workshop on the Application and Theory of Petri Nets,
pages 95-112, 1988.
75. Valmari, A.: Heuristics for lazy state space generation speeds up analysis of concurrent
systems. In Proc. of the Finnish Artificial Intelligence Symposium, 2:640-650,
1988.
76. Valmari, A.: A stubborn attack on state explosion. Form. Methods Syst. Des., 1(4):297-322, 1992.
77. Verwer, S., de Weerdt, M., and Witteveen, C.: An algorithm for learning real-time automata. In Proceedings of the Sixteenth Annual Machine Learning Conference
of Belgium and the Netherlands (Benelearn) , eds, M. van Someren, S. Katrenko, and P. Adriaans, pages 128-135, 2007.
78. Wan, Y., Xiao, L., and Wu, C.: Fuzzy Synthesis-Decision model of software optimum
design. In Image and Signal Processing, 2009. CISP '09. 2nd International
Congress on, pages 1-4, 2009.
79. Wang, H. and Buy, U.: Deadlock detection with stubborn unfoldings. In Proc. 9th IASTED International Conference on Software Engineering and Applications (SEA 2008), pages 27-32, 2008.
80. Wang, H., Grigore, L., Buy, U., and Darabi, H.: Enforcing transition deadlines in time
petri nets. In Emerging Technologies and Factory Automation, 2007. ETFA.
IEEE Conference on, pages 604-611, 2007.
81. Weiss, G. and Alur, R.: Automata based interfaces for control and scheduling. In
Hybrid Systems: Computation and Control, eds, A. Bemporad, A. Bicchi, and
G. Buttazzo, volume 4416 of Lecture Notes in Computer Science, pages 601-
613. Springer Berlin / Heidelberg, 2007. 10.1007/978-3-540-71493-4_46.
82. Werner, M.: A timed petri net framework to find optimal IRIS schedules. Polish Academy of Sciences, 35(3):703, 2006.
83. Yee, S.-T. and Ventura, J. A.: A petri net model to determine optimal assembly sequences with assembly operation constraints. Journal of Manufacturing
Systems, 18(3):203-213, 1999.
84. Yuniarto, M. and Labib, A.: Designing an online and real time simulation, control and monitoring of disturbances in an intelligent manufacturing system.
In Industrial Informatics, 2003. INDIN 2003. Proceedings. IEEE International
Conference on, pages 273-278, 2003.
85. Zhang, W., Freiheit, T., and Yang, H.: Dynamic scheduling in flexible assembly
system based on timed petri nets model. Robotics and Computer-Integrated
Manufacturing, 21(6):550-558, 2005.
86. Zhou, S., Zedan, H., and Cau, A.: Run-time analysis of time-critical systems. J. Syst.
Archit., 51(5):331-345, 2005.
VITA
NAME:
Haisheng Wang
EDUCATION:
B. S., Computer Science, Beijing Polytechnic University, China, 1999
M.S., Network Computing, Monash University, Australia, 2005
Ph.D., Computer Science, University of Illinois at Chicago, Chicago, 2010
PUBLICATIONS: Wang, H., Grigore, L., Buy, U., and Darabi, H. Enforcing Transition
Deadlines in Time Petri Nets. Proc. IEEE Int. Conf. on Emerging
Technologies and Factory Automation, 2007 (ETFA 2007),
pages 604-611, 2007.
Wang, H. and Buy, U. Deadlock Detection with Stubborn Unfoldings.
Proc. 9th IASTED International Conference on Software Engineering
and Applications (SEA 2008), pages 27-32, 2008.
Wang, H., Grigore, L., Buy, U., Lehene, M., and Darabi, H.
Supervisory Control of Time Petri Nets Using Net Unfoldings.
IEEE Transactions on Systems, Man, and Cybernetics Part A, in press.