Int. J. Nuclear Knowledge Management, Vol. 7, No. 1, 2017
Influence of cognitive human factor on nuclear
reactor safety: a simple decision support system
for operators in emergency conditions
Mauro Cappelli*
ENEA Casaccia Research Center,
via Anguillarese 301, Rome, Italy
Email: mauro.cappelli@enea.it
*Corresponding author
Adam Maria Gadomski
ECONA Interuniversity Center for Research,
via dei Marsi 78, Rome, Italy
Email: amg.home@email.it
Fabrizio Memmi and Massimo Sepielli
ENEA Casaccia Research Center,
via Anguillarese 301, Rome, Italy
Email: fabrizio.memmi@gmail.com
Email: massimo.sepielli@enea.it
Marta Weronika Wronikowska
Psychology Department,
Sapienza University of Rome,
via dei Marsi 78, 00-185 Rome, Italy
Email: marta.wronikowska@uniroma1.it
Abstract: In this paper, a prototype Decision Support System for the TRIGA research reactor is proposed. Its framework is built on three models: the Universal Management Paradigm, the Information-Preference-Knowledge model from the TOGA meta-theory, and the Finite State Machine model. The application uses data from the instrumentation and control system and, according to the human-factor state, recognises the type of operative situation and suggests a decision action to the operator as output. The tool can be used to help the operator in accident conditions. Here, for illustrative purposes only, an accidental variation of the control rod position has been simulated and the results from the proposed tool are analysed.
Keywords: human factor; cognitive decision-making; decision support system;
nuclear reactor safety; TOGA meta-theory IPK model; top-down object-based
goal-oriented approach; finite-state machine.
Copyright © 2017 Inderscience Enterprises Ltd.
Reference to this paper should be made as follows: Cappelli, M., Gadomski,
A.M., Memmi, F., Sepielli, M., and Wronikowska, M.W. (2017) ‘Influence of
cognitive human factor on nuclear reactor safety: a simple decision support
system for operators in emergency conditions’, Int. J. Nuclear Knowledge
Management, Vol. 7, No. 1, pp.72–89.
Biographical notes: Mauro Cappelli is a Researcher at ENEA, Frascati
Research Center in Rome and the former Head of the Design and Experimental
Engineering Laboratory at ENEA – UTFISST, Casaccia Research Center in
Rome (2013–2015). He received a Laurea degree from the University of Perugia and a PhD degree from La Sapienza University of Rome, both in Electrical Engineering. He also holds a Master's degree in Nuclear Safety Engineering from the University of Pisa and a Master's degree in Fusion Science and Engineering from the University of Rome Tor Vergata. In 2011, he was a Visiting Researcher at IRSN in Paris. His main research fields are I&C for nuclear applications, radio frequency and microwaves, and control systems.
Adam Maria Gadomski is a member of the Scientific Board of ECONA (Inter-University Center for Research on Cognitive Processing in Natural and Artificial Systems) at Rome “Sapienza” University. He received an MSc in nuclear physics from Warsaw University. He is a member of international scientific boards and a referee for scientific journals related to intelligent agent technologies, systems, cognitive sciences and risk management. He was also Assistant Professor and Head of the Identification and Diagnostic Lab at the Institute of Atomic Energy in Poland, and coordinator of the Computer-System Project for the Polish National Centre of Oncology. He is the author of more than 140 scientific papers, in which he contributed in particular to complex system modelling, managerial decision-support tools for high-risk human-technology systems, and intelligence-based socio-cognitive systems. From 1984 to 2009 he was with ENEA as the leader of socio-cognitive engineering and systemic design, where he founded the High-Intelligence and Decision Research Group (HID) at ENEA.
Fabrizio Memmi graduated in Electronics Engineering at the University of Roma Tre in Rome in 2011. He was a Research Fellow at the University of L’Aquila and at ENEA (the Italian National Agency for New Technologies, Energy and Sustainable Economic Development) from 2012 to 2014, working in the field of programmable devices and hardware-in-the-loop simulations for nuclear applications. In 2015 he received an MSc in Engineering of Infrastructure and Railway Systems.
Massimo Sepielli graduated in Nuclear Engineering and completed a post-doc in “Safety of Nuclear Plants and Radiation Protection”. He has been working for ENEA since 1983 and from 2010 to 2015 was the head of the ENEA Technical Unit for Fission Technologies and Facilities and Nuclear Material Management (UTFISST). In his current position in ENEA’s nuclear department, he is involved in the study, design, development, qualification and certification of methodologies, processes, components and systems for power and research nuclear fission plants of present and future generations, including the nuclear fuel cycle, and radioactive waste management and disposal.
Marta Weronika Wronikowska is a PhD student in psychology and cognitive science at the University of Rome Sapienza, Italy. In general, her research interest is focused on the modelling of cognitive decisional errors in complex human and artificial mental activities, such as inference, decision-making and the influence of emotions on their efficacy, as well as mental attitude, communication methods, risk perception and cognitive organisational vulnerabilities.
This paper is a revised and expanded version of two papers entitled ‘A decision
support system prototype including human factors based on the TOGA
meta-theory approach’ and ‘Cognitive decision errors and organization
vulnerabilities in nuclear power plant safety management: modelling using the
TOGA meta-theory framework’ presented at the ICAPP 12, Chicago, USA,
24–28 June 2012.
1 Introduction
The safe operation of nuclear power plants (NPPs) is based on simultaneous interactions of human, organisational and technological factors, and excellence in operational and human performance has been strongly promoted, for example by the International Nuclear Safety Advisory Group (INSAG) (IAEA, 1999). Human performance, as a fundamental issue in safety culture, is analysed by different specialists and in different domains of activity.
The negative side of human performance is human error, which can be defined in different ways (Watson et al., 2006; Ludtke and Pfeifer, 2007; Wang et al., 2015) but always plays a major role. Human error resulting from incorrect human actions has been an important lesson of major technological accidents, as underlined after the Three Mile Island nuclear accident (Kemeny, 1979; Ruiz-Sanchez and Nelson, 2010; Devolpi, 2012). More recently, the effect of human factors has been clearly seen in the main nuclear accident of the last decade: “Individuals and organizational and technology issues can be seen to have contributed to the accident at the Fukushima Daiichi nuclear power plant” (IAEA, 2013).
Typically, the problem is analysed by concentrating on the single event that provoked
a chain of unexpected situations leading to the plant incident or accident. What is needed
is a general framework able to include as many parameters/variables as possible, i.e. both
technological and human factors. This has recently been recognised by experts, who affirm that “the systemic approach to safety addresses the whole system by
considering the dynamic interactions within and among all relevant factors of the system
– individual factors (e.g. knowledge, thoughts, decisions, actions), technical factors (e.g.
technology, tools, equipment), and organizational factors (e.g. management system,
organizational structure, governance, resources)” (IAEA, 2013).
A general model is therefore required, one that can anticipate omission or commission errors before they happen or, alternatively, suggest preferred actions to counteract the effect of a human error before it becomes critical.
The possibility of unexpected NPP accidents, often due to errors by the nuclear plant staff, makes the contribution of Socio-Cognitive Engineering (SCE) to nuclear safety studies increasingly important. Engineering and social safety requirements need to enlarge their domain of interest so as to include all possible loss-generating events that could be consequences of an abnormal state of an NPP, i.e. the so-called super-safety approach.
Identifying the NPP safety condition understood in this way requires discriminating between its two basic components: the human organisation and the technological nuclear plant system. Their interaction depends on the plant status and on the decisions of plant operators and managers.
This paper is based on methodological results on the possible cognitive vulnerability of human organisations using the TOGA (Top-down Object-based Goal-oriented Approach) meta-theory proposed by one of the authors over the years (Gadomski, 1988, 1994, 1997, 1998). Socio-cognitive modelling of Integrated Nuclear Safety Management (INSM) using the TOGA meta-theory has been discussed in Cappelli et al. (2011, 2012a, 2012b).
In this paper, the different types of cognitive decision-making, which are independent of the decision-maker’s physical capacities and his/her influence on the organisation’s vulnerability (Wronikowska, 2014), are considered. In an NPP, the super-safety approach is also discussed, by taking unexpected events into account and managing them from a systemic perspective. More detailed aspects of the ontology of cognitive decision-making make it possible to develop taxonomy frameworks for the identification of possible human errors and organisational vulnerabilities. This is essential for the quality of the methodology of cooperating nuclear plant operators and managers, and for the building of networks of Intelligent Decision Support Systems (IDSSs) (Gadomski, 1998; Hsieh et al., 2012). It requires the development of rules, the formalisation of a computational taxonomy of Cognitive Decision Making (CDM) functions, and the description of a specific ontology of cognitive decisional errors. Here, the focus is on the relation between the TOGA-based cognitive decision-making model and the possible types of operators’ and managers’ cognitive decision-making errors. The physical/sensorial functions of CDM are not taken into consideration in this work.
Recently, it has been recognised (Gadomski, 2009) that the interaction between technical aspects and human and organisational factors should be of primary importance from both the design and the safety perspectives. This interaction involves all the fields
traditionally belonging to NPP engineering. In fact, “human and organizational factors
are those elements that influence the performance of people operating or maintaining
equipment and systems, or undertaking projects or programs; they include behavioural,
medical, operational, task-load, management, machine interface and work environment
factors” (Gadomski and Zimny, 2009).
In this paper, as a simple illustration of the general framework already proposed by
the authors, a possible application of such an IDSS to the occurrence of a typical error is
presented. By means of the simulation of a postulated error, e.g. the unchecked extraction
of control rods during a power variation manoeuvre, it is here shown how the effect of
human errors can affect the so-called ‘performance function’, thus giving rise to different
countermeasures which could call different professional nuclear plant figures into play,
potentially not envisaged in the standard procedure. Section 2 recalls the theoretical
framework of the TOGA meta-theory. Section 3 illustrates the design of a DSS together
with a possible application of the DSS in case of a typical operational error. Section 4
contains the main conclusions of this work and a discussion on the open problems.
2 Theoretical framework
In this section the theoretical framework of the TOGA meta-theory (Gadomski, 1988,
1994, 1997), as a methodology for modelling complex human-technology systems, such
as nuclear plants, is briefly recalled (for more details, the interested reader is referred to
Gadomski (1988; 1994; 1998)). TOGA (Top-down Object-based Goal-oriented Approach)
is a meta-theory of top-down goal-oriented ordering of available complex knowledge,
originally proposed for the functional modelling of cognitive decision making of an
intelligent agent (IA). TOGA assumes that every real-world identification or design problem requires taking into consideration the properties of the corresponding intelligent problem solver. Therefore, it includes an integrated generic computational
model of abstract intelligent agent/entity (AIA) and its domain of activity.
In the nuclear reactor safety domain, one of the main roles is played by cognitive decision-making. It is defined as a complex decision-making pattern of the reasoning functions of a human mind with its biological constraints, based on the current information, knowledge and preferences of the decision maker. In the TOGA meta-theory, the intelligent agent, called a personoid (Gadomski, 1997), is based on the repetitive IPK (Information, Preferences, Knowledge) elementary unit, and the IPK cell is founded on three key concepts: Information (I) – data that have meaning, i.e. represent a specific property of a preselected domain of a human or artificial agent’s activity; Preferences (P) – ordered relations among two states of the domain of activity of the agent; and Knowledge (K) – every abstract system which is able to transform information into other information, knowledge or preferences. We assume that the generic decision-making frame is based on the application of given criteria to alternatives, where the final choice produces a decision (Gadomski, 1998). Every instance of Cognitive Decision Making (CDM) by an intelligent agent is also based on an ontology (a personal ontology), which constitutes a set of concepts, and relations among those concepts, necessary to describe a selected domain for a given AIA goal. The choice of the initial ontology strongly influences the final decision-making and may be an essential cause of operators’ and managers’ communication errors. Every operator during an emergency event has to make the proper decisions. Sometimes such a situation requires the analysis of many alternatives and criteria according to the available information and given knowledge and preferences. Hence, according to Wronikowska (2013), CDM is described on two conceptualisation modelling layers: one based on the IPK model and the second on the criteria or alternatives in the simple decision-making model (Figure 1).
Figure 1 Influence of the assumed ontology on the components of CDM in its second phase (Wronikowska, 2013) (see online version for colours)
In other words, the ontology/language used strongly influences the decisions of an
intelligent agent, which plays a key role in cognitive decision-making.
The phases of CDM can be divided according to the sequence (state → operation → state …), as follows:
•	Initial state: perceived signals.
•	Operation: conceptualisation of the signals (association of signals with concepts) on the basis of the assumed plant ontology (according to the norms of the IAEA and the nuclear plant provider).
•	Operation: analysis of the information (using model-knowledge) for the recognition of the plant situation.
•	State: the new information is the contribution to the description of the new situation.
•	Operation: the new situation is inserted in the preferences graph (in the IPK model) in order to indicate the state with the highest preference.
•	State: such a state is considered as the intervention goal of the plant operator (an intelligent agent).
•	Operation: in the simplest IA reasoning situation, one goal enables the choice of one piece of operational knowledge, which transforms the current situation into the goal situation. This is a deterministic case.
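The phase sequence above can be sketched as a simple pipeline; the function names and the toy plant values are illustrative assumptions, not the paper's code:

```python
# Sketch of the CDM phase sequence (state → operation → state …).
# Functions and the toy plant values are illustrative assumptions.

def cdm_pipeline(signals, conceptualise, analyse, prefer, plan):
    info = conceptualise(signals)   # operation: signals → concepts (plant ontology)
    situation = analyse(info)       # operation: recognise the plant situation
    goal = prefer(situation)        # operation: state with the highest preference
    return plan(situation, goal)    # operation: operational knowledge → action

action = cdm_pipeline(
    {"power": 1.2},  # initial state: perceived signals (hypothetical, MW)
    conceptualise=lambda s: {"power_MW": s["power"]},
    analyse=lambda i: "over_power" if i["power_MW"] > 1.0 else "nominal",
    prefer=lambda situation: "nominal",
    plan=lambda situation, goal: "insert_rods" if situation != goal else "hold",
)
# action → 'insert_rods'
```

In the deterministic case described above, each operation yields exactly one output, so the composition of the four operations maps the perceived signals straight to one action.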
Figure 2 A general intervention task scenario (here applied to the research reactor used as a reference plant) (see online version for colours)
In Figure 2 a general intervention task scenario is shown, with the application to the
research reactor used here as a reference plant, as illustrated later.
In real-world decisions, the preference system may yield more than one state able to satisfy the goal requirements. Operators analysing a great deal of information under stress and time pressure may choose a wrong alternative. The influence of the human factor on CDM is widely discussed in the literature (Dietrich, 2010; Wickens, 2014; Jacobs and Gaver, 1998).
Reactor operators are able to make decisions which may either lead to jeopardy in unexpected situations or prevent the achievement of a previously defined goal. This means that any human error may initiate highly risky situations or the immediate generation of large losses. The probability of human errors can be reduced by actions such as: simplifying human operations and supporting them by computer automation; reducing safety margins and increasing their precision in plant supervision tasks through training; and constructing protection procedures in the form of automatic barriers or intelligent decision support systems. It should be noted that the search for the cause of a cognitive decisional error (CDE) is based on the assumed domain-dependent CDM model and backward inference (Figure 3) (Gadomski et al., 2003).
Figure 3 An example of the IPK model application for the goals conflict and its symptoms (see online version for colours)
The cognitive behaviour of the subject depends on what the neural network has learned and on which rules the IPK bases contain, which are derived from learning, communication and experience. The operator-designer interaction framework after the recognition of an operator error is illustrated in Figure 4 (Gadomski et al., 2003).
On the other hand, invisible human organisation vulnerability can cause cognitive decision-making errors by operators and their supervisors, as well as incorrect operation of the entire NPP system, which can in turn increase the organisation's vulnerabilities. This is a closed system in which all components should function properly.
Accordingly, the vulnerability of the organisation depends on the following factors: the quality of the technology; the completeness and congruence of the rules and procedures of the technology used; the completeness and congruence of the roles and competences of employees; and the cognitive states of employees. The use of the technology depends on the definition and distribution of the precisely designated or defined roles of humans in the organisation. Especially in emergency situations, operators sometimes may not have sufficient time to reflect deeply on what the best approach would be; therefore, easy access to ready scenarios for the different possible situations is crucial, as well as contact with experts when information is insufficient. In other words, there is a lack of knowledge or preferences, and the operator must turn to decision-making support. A software tool giving the operator all the information required for coping with an unforeseen event may help prevent undesired human behaviour.
Figure 4 Operator-designer interaction framework: an example of the modification framework of the mental IPK bases of operators after cognitive decision-making error (see online version for colours)
3 The decision support system
A decision support system may support the nuclear staff’s cognitive decision-making before and during an emergency event. In what follows, we present a possible DSS based on the methodological framework explained above.
The DSS simulator we propose has been realised using the LabVIEW© environment produced by National Instruments© (NI©). LabVIEW© is a graphical programming language that uses icons instead of lines of code to create a specific application. It is a dataflow-based language, in which the flow of data determines program execution. In the LabVIEW© environment it is possible to implement a user interface using objects and tools; this interface is known as the front panel. Code is added by adopting a graphical representation of the different functions that manage the front panel objects.
LabVIEW© programs are called Virtual Instruments©. They contain three fundamental elements: a front panel, a block diagram and an icon with the connector box. The front panel has indicators, controls, buttons and other elements. Indicators are
graphs, LEDs, etc. Controls simulate instrument input and provide data to the block
diagram. Indicators simulate device output and display data that the block diagram
generates and acquires.
The software application is based on a Finite State Machine (FSM) model (Figure 5). This is a mathematical abstraction used to define the finite number of states and transitions into which a system can be subdivided. A state is a set of measured or observed attributes that describes an object/system/domain at a preselected instant; transitions are the rules or transfer functions that move the system from one state to another, i.e. elaborate the process parameters. Any kind of system can be managed, and a possible decisional process built, using these two elements.
Figure 5 Example of FSM model (see online version for colours)
Studying different typologies of systems, the fundamental variables can be classified as:
•	Data input: the carrier of information issued by the instrumentation, outlining the general state of the system under test.
•	Performance function: a parameter describing the level of satisfaction reached in the implemented elaborations, i.e. how far the simulated state is from the desired target/goal/required state.
•	Output variables: represent the final issues and describe a set of operations that fulfil the requirements, and the professional figures tied to this process.
Based on the theory above described, an implementation of the model usable for coping
with real applications is presented in Figure 6.
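As a hedged illustration of the performance function described above — the linear form and the tolerance parameter are assumptions of this sketch, since the paper does not specify the functional form — a normalised measure of how far the simulated state is from the target could look like this:

```python
# Hedged sketch of a performance function: closeness of the simulated state
# to the target, normalised to [0, 1]. The linear form and the tolerance
# parameter are assumptions; the paper does not give the functional form.

def performance(measured, target, tolerance):
    # 1.0 at measured == target, falling linearly to 0.0 when the
    # deviation reaches the tolerance.
    deviation = abs(measured - target)
    return max(0.0, 1.0 - deviation / tolerance)

performance(1.0, 1.0, 0.5)   # → 1.0 (on target)
performance(2.0, 1.0, 0.5)   # → 0.0 (outside tolerance)
```

Any monotone mapping from deviation to [0, 1] would serve the same role: it turns the data input into a single satisfaction level that the DSS can compare against a threshold.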
Figure 6
General scheme of the Decision Support System (DSS), with the definition of different
optimisation thresholds (L = low, M = medium, H = high) related to management
choices. PF is the Professional Figure (Operator/Supervisor/Director), FoP is the
Function of Performance (see online version for colours)
The result of the implementation is a software tool able to give the operator all the
information required for coping with an unforeseen event or undesired behaviour. The
tool can work by just implementing well-scheduled procedures taken from the plant
decision manual, or by introducing the effect of the human factor, whose modelling is
based on the TOGA theory recalled in the preceding paragraphs. The effect of the human factor is taken into account by activating the related button (Figure 7); it is here modelled, for the sake of generality, as random noise causing a decrease in the performance function.
The operator is then required to indicate the grade of performance demanded of the Decision Support System (DSS). This function (Figure 8) gives an indication of the requested grade of satisfaction in terms of the responsibilities, competences, knowledge and experience of the involved professional figures. The higher the performance function, the higher the level of competence required.
Figure 7 Human factor set button (see online version for colours)
Figure 8 Performance function visualisation (see online version for colours)
On the basis of the plant organisation chart and the event involved, an investigation will be conducted in order to decrease the human factor influence, obtaining a value for the performance function higher than a determined threshold. This function is defined in a normalised [0, 1] range of variability. Indeed, a significant feature of the human factor parameter is its dependence on the level of expertise the DSS identifies: in the proposed application, this value is halved for each professional figure called in and subtracted from the original performance function.
Finally, the threshold value with which the performance function is compared is intended to be user-set, depending on the required optimisation level and on the resources available to solve the issue (Figure 9). Three values identifying low, medium and high levels of final contentment have been implemented.
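The halving-and-thresholding rule can be sketched as follows; the numeric values and the exact point at which the halving is applied are illustrative assumptions, not the paper's implementation:

```python
# Sketch of the escalation rule described above: the human-factor penalty
# is halved for each professional figure called in and subtracted from the
# original performance function, until the threshold is exceeded.
# All numeric values and the exact halving point are assumptions.

FIGURES = ["operator", "supervisor", "director"]

def escalate(base_pf, human_factor, threshold):
    penalty = human_factor
    for figure in FIGURES:
        pf = base_pf - penalty
        if pf > threshold:
            return figure, pf       # this figure's competence suffices
        penalty /= 2.0              # wider experience halves the influence
    return FIGURES[-1], pf          # the director decides in any case

# Medium optimisation level (threshold 0.5): the operator's 0.30 fails,
# the supervisor's 0.60 passes.
escalate(base_pf=0.9, human_factor=0.6, threshold=0.5)
```

With these assumed numbers the operator alone cannot reach the threshold, so the DSS escalates to the supervisor, whose halved human-factor penalty lifts the performance function above 0.5.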
Figure 9 Requested optimisation level selector (see online version for colours)
During the execution of the application, a real-time animation of the previously described IPK triangle is shown while each professional figure (i.e. operator, supervisor, director) elaborates its decision, together with the corresponding performance function calculation.
At this stage, the main purpose is to provide the operation that optimises the system performance, together with the decision maker, so as to identify the responsibility level which the DSS suggests resorting to. Furthermore, the organisation chart of the studied process is always visible on the front panel, as well as the procedural operations foreseen for the considered event. The UMP model is implemented in the application (Figure 10).
In Figure 11, the reactor state indicators for decision-making are summarised for the presented model.
Figure 10 UMP model implementation (see online version for colours)
Figure 11 Reactor state indicators for decision-making (see online version for colours)
4 The TOGA-based decision support system: a case study on a TRIGA research reactor
In order to give an example of a possible employment of the proposed DSS, a specific customisation has been made for the TRIGA research reactor installed at the ENEA Casaccia Research Center.
4.1 TRIGA RC-1: basic description
The TRIGA RC-1 is a pool thermal reactor having a core contained in an aluminium
vessel and placed inside a cylindrical graphite reflector, bounded with lead shielding
(Allison 1997; Bologna et al., 2003). The biological shield is provided by concrete with
mean thickness of 2.2 m. Demineralised water, filling the vessel, serves as neutron moderator, cooling medium and first biological shield. Four rods ensure reactor control: two shim rods, one safety fuel-follower rod and one regulation rod. The thermal power produced is removed by natural water circulation through a suitable thermo-hydraulic loop including heat exchangers and cooling towers. The core, surrounded by a graphite reflector, consists of a lattice of fuel elements, graphite dummy elements and control and regulation rods. There are 127 channels divided into seven concentric rings (from 1 to 36 channels per ring). The channels are loaded with fuel rods, graphite
dummies and regulation and control rods depending on the power level required. One
channel houses the start-up Am-Be source, while two fixed channels (the central one and
a peripheral) are available for irradiation or experiments. The fuel is a cylinder of a
ternary alloy uranium-zirconium-hydrogen (H-to-Zr atom ratio is 1.7 to 1; the uranium,
enriched to 20% in 235U, makes up 8.5% of the mixture by weight: the total uranium
content of a rod is 190.4 g, of which 37.7 g is fissile) with a metallic zirconium rod
inside. There are two graphite cylinders at the top and bottom of the fuel rod. The
regulation rod has the same morphological aspect as the fuel rod: the only difference is that, instead of the ternary uranium-zirconium-hydrogen alloy, it contains the absorber (graphite with powdered boron carbide). The control and safety rods are ‘fuel followed’: their geometry is similar to that of the regulation rod but with a fuel element at the bottom. Figure 12 shows a horizontal section of the reactor.
The parameters used in order to perform the reactor monitoring can be classified into
three large groups: power monitoring, process monitoring and radiological monitoring.
The reactor power is monitored by means of one starting channel (up to 1 W), two wide
range linear channels (0.5–1 MW) and one safety channel (10 kW to 1.1 MW). The
process monitoring includes six temperatures (fuel elements, primary and secondary
loops, cooling towers), flow rates (primary and secondary loops, water cleaning system,
reactor hall air), liquid levels (reactor pool, shielding tank) and conductivities (primary
loop, shielding tank loop). The radiological control is carried out by monitoring water
activity (primary and secondary loops), air activity (reactor hall and experimental
channels) and environmental radiation levels (reactor hall, control room and experimental
channels) (Memmi et al., 2016).
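For illustration, the power-monitoring channels listed above can be held in a small lookup table; the data layout is an assumption, not the plant's I&C format, and the range of the wide-range linear channels is read literally as 0.5–1 MW:

```python
# The TRIGA RC-1 power-monitoring channels listed above, in watts.
# The dictionary layout is an illustrative assumption, not the plant's
# I&C data format; "0.5–1 MW" is read literally as 0.5 MW to 1 MW.

POWER_CHANNELS = {
    "start_up": (0.0, 1.0),       # starting channel, up to 1 W
    "linear_1": (0.5e6, 1.0e6),   # wide-range linear channel
    "linear_2": (0.5e6, 1.0e6),   # wide-range linear channel
    "safety":   (1.0e4, 1.1e6),   # safety channel, 10 kW to 1.1 MW
}

def channels_covering(power_w):
    """Channels whose measuring range covers the given power (in watts)."""
    return [name for name, (lo, hi) in POWER_CHANNELS.items()
            if lo <= power_w <= hi]

channels_covering(5.0e5)   # → ['linear_1', 'linear_2', 'safety']
channels_covering(0.5)     # → ['start_up']
```

Such a table is the natural data-input layer for the DSS: given a current power reading, it tells which instrumentation channels are expected to report it.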
Figure 12 Horizontal section of TRIGA RC-1 reactor
4.2 TOGA-based DSS: the TRIGA RC-1 case study
Let us consider the postulated event of an unchecked withdrawal of control rods as an example of an accident condition (Figure 13). It must be observed that for the described reactor this is just an ‘academic case’, as its practical realisation is highly unlikely given the inherent safety design of the research reactor. As a consequence, the reference to a real reactor is considered here only for the sake of argument, in order to provide a simple example which can easily be extended to a power plant and to more realistic incidents or accidents.
From a theoretical perspective, the postulated event leads to a reactivity step, which could take place in two situations: a start-up error with a subcritical reactor (low-power accident), or an accident due to power variation (1 MW power accident). In our applicative example the second case will be studied.
It should be underlined that this kind of accident would happen only in conjunction with a fault in the control and safety systems together with manoeuvre errors by the plant operator. The principal features of the circuits dedicated to activating the control systems have been chosen to satisfy the following operational conditions:
1	The safety rod has to be totally extracted before the shim rods rise. This condition is ensured by an ‘interlock’ system linked to limit-stop micro-switches, which shuts the power off and prevents the rods from operating if the safety rod is not in the upper-limit stop position.
2	The rods are activated one by one. Rod selection is managed using a selector, which powers only the selected rod motor.
3	Two-way movement or rod stop is determined by lever activation.
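The first two operational conditions can be sketched as a simple interlock check; the function and argument names are hypothetical, not taken from the plant's control circuitry:

```python
# Illustrative check of the interlock conditions listed above
# (function and argument names are hypothetical).

def rods_may_move(safety_rod_at_upper_limit, selected_rods):
    # Condition 1: the interlock cuts power unless the safety rod is
    # fully extracted (upper-limit stop position).
    if not safety_rod_at_upper_limit:
        return False
    # Condition 2: the selector powers only one rod motor at a time.
    return len(selected_rods) == 1

rods_may_move(True, ["shim1"])            # → True
rods_may_move(False, ["shim1"])           # → False (interlock)
rods_may_move(True, ["shim1", "shim2"])   # → False (one rod at a time)
```

The postulated accident below corresponds exactly to this check failing silently: the interlock does not cut power even though the safety rod is not at its upper-limit stop.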
If the defined interlock system fails, it is conceivable to reach 1 MW power through a complete extraction of the shim rods with the safety rod totally inserted. Let us suppose that an operator subsequently withdraws the safety rod completely. In this condition, the state of ‘prompt critical’ is never achieved, since the temperature coefficient effects tend to balance the reactivity introduced by the rods, but it is always a good rule to follow the correct extraction procedure in order to maintain continuous and safe control.
Figure 13 Scheme of rods displacement (labels: Regulation Rod, Safety Rod, Shim Rods) (see online version for colours)
All the simulations are differentiated on the basis of the desired level of the performance function, which is a direct indication of the level of resource employment. In the example shown below, a medium level of satisfaction has been defined, corresponding to a value of 0.5 for the performance function. This means, for example, that the DSS should provide an answer optimised to this level of requirement, i.e. not using all the available resources. This is, for example, the case where the organisational chart does not foresee redundancy or the presence of all team figures for all activities at the same time.
After switching the human factor button on (meaning that the human factor is included in the model, thus affecting the response of the DSS), a reactivity surplus appears (observable from the analogue indicator in the full-range position), and a specific value of the performance function is assigned to the operator action, which represents the lowest step of the process hierarchy.
At this stage, the assigned quantity is compared with the threshold value of the performance function; if this condition is not satisfied, the next higher professional figure is consulted, thus reducing the influence of the human factor thanks to their wider experience and competence. The elaboration process continues along these lines until the threshold value is exceeded. Finally, the resolving operation is displayed together with the decision maker and the final value of the performance function.
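The escalation logic just described can be sketched as follows. The professional figures, performance values and function names are illustrative assumptions, not values taken from the paper's implementation; in the real DSS the performance assigned to an action is derived from the I&C data and the human-factor model.

```python
# Minimal sketch of the threshold-driven escalation described above
# (hypothetical figures and performance values).

# Professional figures ordered from the lowest step of the process hierarchy
# upward; each later figure has wider experience, hence a smaller human-factor
# penalty and a higher performance value for the same resolving action.
HIERARCHY = [("operator", 0.35), ("shift supervisor", 0.62), ("plant manager", 0.80)]

def decide(threshold, hierarchy=HIERARCHY):
    """Escalate until the performance assigned to the action meets the threshold."""
    for figure, performance in hierarchy:
        if performance >= threshold:
            return figure, performance      # decision maker and final performance
    return hierarchy[-1]                    # top of the hierarchy decides anyway

print(decide(0.5))   # medium satisfaction level -> ('shift supervisor', 0.62)
print(decide(0.75))  # stricter requirement -> ('plant manager', 0.8)
```

Raising the threshold forces more iterations through the hierarchy before a decision maker is found, which is exactly the resource-cost trade-off discussed for the simulation results.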
Figure 14 Synoptic of the rod control system (see online version for colours)
In the considered simulation, the accident is resolved by lowering all the rods until criticality is obtained. In this situation the supervisor is the decision maker: presumably, an operator alone could not realise that the shim2 rod had been excessively extracted because of a possible malfunction of the movement lever. Once the problem has been detected, the shim2 rod is lowered. The (dynamic, real-time) synoptic simulation shows the lowering of this rod and, consequently, of the analogue indicator as well (Figure 14).
Obviously, the higher the desired level of efficiency (i.e. the higher the threshold value), the greater the number of iterations and elaborations, and therefore the larger the amount of resources spent in the process.
In Figure 15, the whole synoptic is shown, allowing the operator to interface easily with the mimic.
Figure 15 Synoptic of the whole Decision Support System (operator front view) (see online version for colours)
5 Conclusions
This work proposed the study and development of an ontological framework for cognitive decision-making errors, an approach to their recognition, and a conceptualisation tool for the diagnostics of unexpected symptoms of emotional cognition. For the proposed methodological approach to be applied, the developed models have to be carefully tested and validated experimentally. On the other hand, the formalisation of the problem enables the obtained results to be used in various computer simulations in cognitive engineering, as well as in intelligent agent technologies for the development and testing of new architectures of Intelligent Decision Support Systems (IDSS) and of future intelligent autonomous robots.
The applicability of the presented model to the study of cognitive abnormalities, and especially to the description of cognitive error-generating scenarios, is the object of the authors’ future research work. In this work, the model has been implemented by realising a DSS that can be used as a basis for the diagnosis of NPP operator and management errors during normal operations. The proposed tool can be easily extended to include a wider organisational structure (more professional figures) or a larger number of operational and emergency procedures. A more complex model of the human factor could also be introduced on the basis of more advanced human reliability analysis methods. To the authors’ knowledge, the proposed DSS, although simple, is one of the few tools presented in the literature able to implement a theoretical framework for modelling complex human-technology systems such as nuclear reactors.
Future work will deal with the application of the developed models to NPP educational or engineering simulators to be used for operator training. The presented work makes it possible to describe and analyse a particular ontology of complex systems in a reliable and modular way. Owing to the wide variety of variables that must be included to account for such a complex environment, this can be considered only the first step of a complete DSS implementation.
As a future development, an accurate refinement of the TOGA model could provide a better analysis and simulation of more realistic cognitive decision-making scenarios for nuclear plant operators, through the development of top-down, goal-oriented, more specialised ontological layers for domain information, the agent’s preferences and the professional knowledge needed.
A further specification of the model’s main features and an enhancement of the local task-dependent paradigms of the applied conceptualisation are a possible approach to an improved model. In order to describe how these elements may affect human decisions (outputs), the human factors need to be modelled by introducing measurable cognitive parameters into the decisional model of the plant’s human operators and managers, and the agent modelling must be improved to implement a learning process for coping with untested experiences.
References
Allison, G. (1997) The Essence of Decision, Scott, Foresman & Co, Glenview IL.
Bologna, S., Balducelli, C., Dipoppa, G. and Vicoli, G. (2003) ‘Dependability and survivability of large complex critical infrastructures’, in Anderson, S., Felici, M. and Littlewood, B. (Eds): Computer Safety, Reliability, and Security. Lecture Notes in Computer Science, Vol. 2788, Springer, Berlin, Heidelberg, pp.342–353.
Cappelli, M., Gadomski, A.M. and Sepielli, M. (2011) ‘Human factors in nuclear power plant safety management: A socio-cognitive modeling approach using TOGA meta-theory’, Proceedings of the International Congress on Advances in Nuclear Power Plants 2011, Nice, France, 2–5 May.
Cappelli, M., Gadomski, A.M., Sepielli, M. and Wronikowska, M.W. (2012b) ‘Cognitive decision
errors and organization vulnerabilities in nuclear power plant safety management: modelling
using the TOGA meta-theory framework’, Proceedings of the International Congress on
Advances in NPPs, ICAPP 12, Chicago, USA, 24–28 June.
Cappelli, M., Memmi, F., Gadomski A.M. and Sepielli M. (2012a) ‘A decision support system
prototype including human factors based on the TOGA meta-theory approach’, Proceedings
of the International Congress on Advances in NPPs, ICAPP 12, Chicago, USA, 24–28 June.
Devolpi, A. (2012) Nuclear Power Safety: Lessons from Three Mile Island and the Fukushima Reactor Accidents. Available online at: http://fas.org/pubs/pir/2012summer/Summer2012_NuclearPowerSafety.pdf
Dietrich, C. (2010) ‘Decision making: Factors that influence decision making, heuristics used, and
decision outcomes’. Student Pulse, Vol. 2, No. 02.
Gadomski, A.M. (1988) ‘Application of System-Process-Goal Approach for description of TRIGA
RC1 System’, Proceedings of the 9th. European TRIGA Users Conference, October 1986,
Roma. Printed by GA Technologies, TOC-19, USA, 1987, also the ENEA Report RT
TIB/88/2, Rome, Italy.
Gadomski, A.M. (1994) ‘TOGA: A Methodological and Conceptual Pattern for modeling of
Abstract Intelligent Agent’, Proceedings of the First International Round-Table on Abstract
Intelligent Agent. Gadomski, A.M. (Ed.) Feb 1993, published by ENEA, Rome, Italy.
Gadomski, A.M. (1997) ‘Personoids organizations: An approach to highly autonomous software architectures’, Proceedings of the 11th International Conference on Mathematical and Computer Modeling and Scientific Computing: Concurrent Engineering Based on Agent-Oriented and Knowledge-Oriented Approaches, Georgetown University Conference Center, Washington, USA.
Gadomski, A.M. (1998) ‘Risk based reasoning in decision-making for emergency management’,
SRA Europe Annual Conference Risk Analysis: Opening the Process, 11–14 Oct., Paris,
France.
Gadomski, A.M. (2009) ‘Human organization socio-cognitive vulnerability: the TOGA meta-theory approach to the modeling methodology’, International Journal of Critical Infrastructures (IJCIS), Vol. 5, Nos. 1/2, Special Issue on Critical Infrastructures as Complex Systems.
Gadomski, A.M. and Zimny, T.A. (2009) ‘Application of IPK (information, preferences,
knowledge) paradigm for the modelling of precautionary principle based decision-making’,
Proceedings of Critical Information Infrastructure Security. Springer-Verlag.
Gadomski, A.M., Salvatore, A. and Di Giulio, A. (2003) ‘Case study analysis of disturbs in spatial
cognition: Unified TOGA approach’, Proceedings of the 2nd International Conference on
Spatial Cognition, Rome, Italy.
IAEA (1999) Basic Safety Principles for Nuclear Power Plants, 75-INSAG-3 Rev.1, a report by
the International Nuclear Safety Advisory Group, IAEA.
IAEA (2013) Report on Human and Organizational Factors in Nuclear Safety in the Light of the
Accident at the Fukushima Daiichi Nuclear Power Plant, International Experts Meeting
21–24 May 2013, Vienna, Austria.
Jacobs, P.A. and Gaver, D.P. (1998) Human Factors Influencing Decision Making, Naval Postgraduate School, Monterey, California, NPS-OR-98-003.
Kemeny, J.G. (1979) The President’s Commission On The Accident at Three Mile Island.
Available online at: http://www.threemileisland.org/downloads/188.pdf
Ludtke, A. and Pfeifer, L. (2007) ‘Human error analysis based on a semantically defined cognitive pilot model’, in Saglietti, F. and Oster, N. (Eds): Computer Safety, Reliability and Security. Lecture Notes in Computer Science, Vol. 4680, pp.134–147.
Memmi, F., Falconi, L., Cappelli, M., Palomba, M. and Sepielli, M. (2016) ‘A user-friendly, digital
console for the control room parameters supervision in old-generation nuclear plants’, Nuclear
Engineering and Design, Vol. 302, pp.12–19.
Ruiz-Sanchez, T. and Nelson, P.F. (2010) ‘Use of ATHENA to review emergency operating
procedures at nuclear power plants’, in Reliability, Risk and Safety: Theory and Applications,
Taylor & Francis Group, London.
Wang, Ch., Zhang, J., Yu, H. and Dai, S. (2015) ‘Marine nuclear power plant human error analysis and protective measures’, Proceedings of the 14th International Conference on Man-Machine-Environment System Engineering, Lecture Notes in Electrical Engineering, Vol. 318, pp.33–42.
Watson, M.C., Bond, C.M., Johnston M. and Mearns, K. (2006) ‘Using human error theory to
explore the supply of non-prescription medicines from community pharmacies’, Quality and
Safety in Health Care, Vol. 15, No. 4, pp.244–250.
Wickens, C.D. (2014) ‘Effort in human factors performance and decision making’, Human
Factors, Vol. 56, No. 8, pp.1329–1336.
Wronikowska, M.W. (2013) ‘Coping with the complexity of cognitive decision-making: the TOGA
meta-theory approach’, Proceedings of the European Conference on Complex Systems 2012,
Springer Proceedings in Complexity, pp.427–433.