Special Issue
Conceptual Frameworks to Guide Design
Philip J. Smith, The Ohio State University
This is a response providing some thoughts triggered
by the paper “Issues in Human–Automation Interaction
Modeling: Presumptive Aspects of Frameworks of Types
and Levels of Automation,” by David Kaber. The key
theme is that in order to debate the relative merits
of different conceptual frameworks to guide human–
automation interaction design efforts, we need a richer
understanding of the psychology of design. We need to
better understand how contributions by the field of cognitive engineering really affect the efforts of system designers.
Keywords: levels of automation, psychology of
designer, interaction design
Address correspondence to Philip J. Smith, Department of Integrated Systems Engineering, The Ohio State University, 210 Baker Engineering, 1971 Neil Ave., Columbus, OH 43210, USA, smith.131@osu.edu.

Journal of Cognitive Engineering and Decision Making, 2017, pp. 1–3. DOI: 10.1177/1555343417732239. Copyright © 2017, Human Factors and Ergonomics Society.

Introduction

Defining the Research Goal

The underlying question addressed in the paper by Kaber (2017 [this issue]) is, How can we help designers generate more effective system designs that take advantage of increasingly powerful machine capabilities (“automation”)? In contrasting two alternative conceptual approaches to design, the paper notes,

Dekker and Woods (2002) contended that the objective of determining “who” (human vs. machine) does “what” (function) in complex systems control did not serve to advance [human–automation interaction] design but rather the most critical design need is to focus on facilitating human and automation coordination (i.e., “how do we make them [the human and machine] get along”).

This underlying question is fundamentally about the psychology of the designer, as the conceptual framework used to guide design efforts will strongly influence the cognitive processes of the designer (Rouse & Boff, 1987; Rouse, Cody, & Boff, 1991). Thus, we could contrast the ideas generated by a designer who is thinking in terms of “what level of automation [LOA] is appropriate” with the ideas generated by a designer who focuses on “alternative ways in which the operator and technology could interact” and how the operator will be influenced to adapt through these interactions. In short, as a field we need to develop a richer understanding of how to influence designers.

When approaching design framed as interaction design, for instance, the following questions arise at many levels of detail in the design (from semantic walk-throughs to consideration of detailed interface design features):

• What is the range of scenarios that needs to be considered?
• What interactions could be supported (normative models from the designer’s perspective)?
• What actual interactions should be anticipated for a given interaction design for different classes of scenarios?
• How might these actual interactions influence the cognitive processes of the operator(s) and the emergent behaviors of the human–machine system?
• How will the operator(s) perform in scenarios that exceed the competence limits of the technology?
Consider, for example, a designer interested in
developing software to support some diagnosis
task who approaches it from a technology-centric perspective. He or she starts by considering the different classes of technologies that
could be used and focuses on developing either
a knowledge-based system or a design based on
a neural network. The designer’s initial inclination is to proceed with the implementation of a
neural network, as he or she believes that the
collection of a representative set of training
cases will require less effort than the knowledge acquisition activities required to develop a
knowledge-based system.
Case 1. The designer is familiar with some of
the research focused on LOAs, triggering him or
her to explicitly ask the question, “What LOA is
appropriate?” Assuming a designer who is not
overconfident, he or she concludes that there will
likely be scenarios that have not been covered in
the training cases and further decides that the
impacts of misdiagnoses would be highly undesirable when such scenarios arise. He or she therefore concludes that the appropriate “design” is to
indicate when marketing this tool that the role of
software is to assist an expert human, who has
final responsibility for arriving at a diagnosis. That
way, he or she concludes, the anomalous cases
that are beyond the competence of the software
will be detected by the responsible person—ignoring the substantial cognitive engineering literature
that cautions against such a naive assumption as
summarized in Smith (2017). On the basis of this
consideration, the designer proceeds to develop a
neural network.
Case 2. This designer also initially favors
the use of a neural network for the same reasons
but, before making a choice, thinks about different human–automation interaction designs. As
part of this assessment, he or she considers the
different types of interactions embedded in four
different designs.
•• The software is assigned the role of initial problem
solver, looking at the available data and requesting
that additional tests be run and then indicating its
diagnosis—perhaps with some associated level of
confidence. The human expert is then expected
to critique this diagnosis by reviewing displays
of the available data in order to arrive at the final
diagnosis (which may or may not agree with the
software’s conclusion). (Both the knowledge-based system and the neural network could support this design.)
•• The software is assigned the role of initial problem
solver, looking at the available data and requesting
that additional tests be run and then indicating its
diagnosis—perhaps with some associated level of
confidence. Instead of providing just access to the
available data, the software provides an explanation.
(The design based on knowledge-based systems
technology is more amenable to providing this form
of interaction.)
•• The software is designed as a critiquing system
(Miller, 1986; Silverman, 1992). The human
completes the diagnosis independently and then
submits his or her answer to the software, which
then critiques that answer by comparing it with the
result generated by its expert model. The expert
model also is used to generate an explanation.
(The design based on knowledge-based systems
technology again is more amenable to providing
this form of interaction, although sans the explanation, a neural network could also adopt this role.)
•• The software is designed as an interactive critiquing system (Guerlain et al., 1996, 1999; Smith
et al., 2012) that unobtrusively monitors intermediate steps taken by the human during the process
of arriving at a diagnosis and provides immediate,
context-sensitive feedback as soon as it detects a
difference between the human’s steps and the steps
prescribed by its expert model. The software also
makes use of metaknowledge to detect scenarios
that appear to challenge its level of competence
and cautions the user when it detects such cases.
(The design based on knowledge-based systems
technology once again is more amenable to providing this form of interaction.)
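To make the fourth of these designs concrete, the following sketch outlines the kind of interaction protocol an interactive critiquing system implies: the software watches each intermediate step, offers context-sensitive feedback only when the human’s step diverges from the step prescribed by its expert model, and uses metaknowledge to caution the user on cases it judges to be beyond its competence. This is a minimal illustration under assumed names; the classes (ExpertModel, InteractiveCritiquer), the toy rule table, and the blood-banking step labels are hypothetical and are not drawn from the paper or from the cited systems.

from dataclasses import dataclass
from typing import Dict, Optional, Set, Tuple


@dataclass
class ExpertModel:
    """Toy stand-in for the knowledge-based expert model."""
    prescribed: Dict[Tuple[str, int], str]  # (case id, step index) -> prescribed step
    in_scope: Set[str]                      # metaknowledge: cases the model believes it can handle

    def next_step(self, case_id: str, step_index: int) -> Optional[str]:
        return self.prescribed.get((case_id, step_index))

    def within_competence(self, case_id: str) -> bool:
        return case_id in self.in_scope


class InteractiveCritiquer:
    """Unobtrusively monitors the human's intermediate steps and reacts immediately."""

    def __init__(self, model: ExpertModel) -> None:
        self.model = model
        self.step_index = 0

    def observe(self, case_id: str, human_step: str) -> Optional[str]:
        # Metaknowledge check: caution the user on cases that appear to exceed
        # the model's competence rather than critiquing them.
        if not self.model.within_competence(case_id):
            return "Caution: this case may be outside the system's competence."
        expected = self.model.next_step(case_id, self.step_index)
        self.step_index += 1
        # Context-sensitive feedback only when the human's step diverges from
        # the step prescribed by the expert model; otherwise remain silent.
        if expected is not None and human_step != expected:
            return (f"You entered '{human_step}'; the expert model prescribes "
                    f"'{expected}' at this point.")
        return None


# Illustrative use: silence on a matching step, immediate feedback on a divergent one.
model = ExpertModel(
    prescribed={("case-17", 0): "order an antibody screen",
                ("case-17", 1): "run an antibody identification panel"},
    in_scope={"case-17"},
)
critiquer = InteractiveCritiquer(model)
print(critiquer.observe("case-17", "order an antibody screen"))     # -> None
print(critiquer.observe("case-17", "select units for crossmatch"))  # -> discrepancy message

The first three designs differ mainly in when such feedback arrives (after the software’s own diagnosis or after the human’s final answer); the fourth interleaves it with the human’s intermediate work, which is what places the heavier demands on the expert model and its metaknowledge.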
After careful consideration, the designer elects
to develop the interactive critiquing system
using knowledge-based systems technology
(but considers embedding a component that further cross-checks the final decision using a neural network to provide converging evidence).
This decision has major implications regarding
the ways in which the operator and technology
would interact.
Note that both of these cases have embedded
somewhat simplistic caricatures of the influences
that the literatures on LOAs and on human–automation teaming could have. It is largely still a
research challenge to develop and validate descriptive cognitive task analyses that reveal how
designers exposed to these different research
thrusts are actually influenced by such awareness.
However, these two cases help to make the point
that, as a field, one of our major challenges is to
develop a psychology of designers and that
debates about what conceptual frameworks are
useful should focus on such models of designers.
Determining the Usefulness of Intermediate Constructs, Process Models, and Taxonomies
Kaber (2017) notes the need for better
descriptive and predictive models. He notes, for
instance, that meta-analyses by Wickens, Li,
Santamaria, Sebok, and Sarter (2010) and Onnasch,
Wickens, Li, and Manzey (2014) supported “the
intuition that greater human dependence on automation leads to greater performance problems
upon return to manual control under automation
failures” and suggests such results indicate that
the use of taxonomies of LOAs can help to “better
account for actual human behaviors in use of automation.” Kaber also discusses the formal modeling
approach developed by Bolton and Bass (2011)
and suggests that this approach using an operator
function model “could be used as a tool for verification of the implications of specific LOAs on
human performance in various applications and
task conditions.” In both cases, the relevant question is, Does the introduction of the construct of
“LOAs” add insight when a designer attempts to
apply findings, such as those by Wickens et al. and
Onnasch et al., and/or applies a given process-modeling approach, such as that suggested by Bolton
and Bass? The same question applies to findings
about human–automation teamwork.
Implications
The brief discussion above highlights two
considerations. First, it is not enough for our
field to produce descriptive models of human–
automation interaction or to propose conceptual
frameworks or modeling techniques. We also
need to understand their effectiveness in actually influencing designers.
Second, we need to determine what defining
attributes are useful in characterizing alternative
designs. Is it, for instance, useful for a designer to
think in terms of alternative designs based on different LOAs in order to consider their impacts on
the degree of “human dependence on automation,” leading to “greater performance problems
upon return to manual control under automation”?
Similarly, is it useful to think in terms of how different forms of interaction with the automation
could influence the cognitive processes of the
operator and resultant system performance in different scenario contexts? Note that these latter
influences are not captured adequately by labels
like dependence but rather require a much deeper
understanding of how the introduction of cognitive tools influences operator behavior.
References
Bolton, M. L., & Bass, E. J. (2011). Evaluating human–automation interaction using task analytic behavior models, strategic
knowledge-based erroneous human behavior generation, and
model checking. In 2011 IEEE Conference on Systems, Man &
Cybernetics (pp. 1788–1794). Piscataway, NJ: IEEE.
Guerlain, S., Smith, P. J., Obradovich, J., Rudmann, S., Smith, J.
W., & Svirbely, J. (1996). Dealing with brittleness in the design
of expert systems for immunohematology. Immunohematology, 12, 101–107.
Guerlain, S., Smith, P. J., Obradovich, J. H., Rudmann, S., Strohm,
P., Smith, J. W., Svirbely, J., & Sachs, L. (1999). Interactive
critiquing as a form of decision support: An empirical evaluation. Human Factors, 41, 72–89.
Kaber, D. (2017). Issues in human–automation interaction modeling: Presumptive aspects of frameworks of types and levels of
automation. Journal of Cognitive Engineering and Decision
Making, XX, XX–XX.
Miller, P. (1986). Expert critiquing systems: Practice-based medical consultation by computer. New York, NY: Springer-Verlag.
Onnasch, L., Wickens, C. D., Li, H., & Manzey, D. (2014). Human
performance consequences of stages and levels of automation:
An integrated meta-analysis. Human Factors, 56, 476–488.
Rouse, W. B., & Boff, K. R. (1987). System design: Behavioral
perspectives on designers, tools, and organizations. Amsterdam, Netherlands: North-Holland.
Rouse, W. B., Cody, W. R., & Boff, K. R. (1991). The human factors of system design: Understanding and enhancing the role
of human factors engineering. International Journal of Human
Factors in Manufacturing, 1, 87–104.
Silverman, B. G. (1992). Survey of expert critiquing systems: Practical and theoretical frontiers. Communications of the ACM, 35,
106–128.
Smith, P. J. (2017). Making brittle technologies useful. In P. J. Smith &
R. R. Hoffman (Eds.), Cognitive systems engineering: The future
for a changing world. Boca Raton, FL: Taylor & Francis.
Smith, P. J., Beatty, R., Hayes, C., Larson, A., Geddes, N., &
Dorneich, M. (2012). Human-centered design of decision-support systems. In J. Jacko (Ed.), The human–computer interaction
handbook: Fundamentals, evolving technologies, and emerging
applications (3rd ed., pp. 589–621). Boca Raton, FL: CRC Press.
Wickens, C. D., Li, H., Santamaria, A., Sebok, A., & Sarter, N. B.
(2010). Stages and levels of automation: An integrated meta-analysis. In Proceedings of the Human Factors and Ergonomics Society 54th Annual Meeting (pp. 389–393). Santa Monica,
CA: Human Factors and Ergonomics Society.
Philip J. Smith is a professor in the Department of Integrated Systems Engineering at The Ohio State University and a Fellow of the Human Factors and Ergonomics Society. He is recognized as a leader in air traffic flow management, air traffic control, airline operations control, flight deck design, human–automation interaction, collaborative decision making (CDM), and the design of distributed work systems in the National Airspace System. His expertise encompasses cognitive systems engineering, human factors engineering, artificial intelligence, and human–automation interaction.