Proceedings of the Human Factors and Ergonomics Society 2017 Annual Meeting
The impact of user biases toward a virtual human’s skin tone on triage errors
within a virtual world for emergency management training
Sarah A. Zipp, Tyler Krause, & Scotty D. Craig*
Human Systems Engineering, Arizona State University // szipp@asu.edu // scotty.craig@asu.edu
Biases influence the decisions people make in everyday life, even if they are unaware of it. This behavior
transfers into social interactions in virtual environments. These systems are becoming an increasingly common
platform for training, so it is critical to understand how biases will impact them. The present study investigates
the effect of the ethnicity bias on error behaviors within a virtual world for medical triage training. Two
between subjects variables, participant skin tone (light, dark) and avatar skin tone (light, dark), and one within
subjects variable, agent/patient skin tone (light, dark), were manipulated to create a 2 X 2 X 2 mixed design
with four conditions. Effects were observed on errors made while helping patients (agents). Participants
made considerably more errors while triaging dark-skinned agents which increased the amount of time spent on
them, in comparison to light-skinned agents. Within a virtual world for training, people apply general ethnic
biases against dark-skinned individuals, which is important to consider when designing such systems because
the biases could impact the effectiveness of the training.
Keywords
ethnicity bias; virtual humans; virtual worlds; emergency response training
Copyright 2017 by Human Factors and Ergonomics Society. DOI 10.1177/1541931213601998
INTRODUCTION
Virtual worlds provide a unique platform for
learning and training by simulating situations that
cannot be easily or feasibly replicated in the real
world for training purposes (Patterson, Pierce, Bell,
Andrews, & Winterbottom, 2009; Alison, et al.,
2013). One example of this is simulating trauma
victims in critical condition in order to train
emergency department team members from around
the world to manage such rare cases (Heinrichs,
Youngblood, Harter, & Dev, 2008). However, as
users interact with one another within these virtual
environments, they apply their implicit biases and
beliefs, just as they would in the real world (Nass &
Moon, 2000). So, it is important to investigate how
these biases and beliefs influence users’ behaviors
within a virtual world.
Ethnicity Biases
The implicit biases we hold, in real and virtual
worlds, may stem from intergroup dynamics, such as
the tendency to favor members perceived to be part of
one’s own group, called the ingroup (Gerard & Hoyt,
1974). Ethnicity bias is a complicated issue that is not
always exclusively directed outside of the group.
Sometimes biases can be strong enough to hold both
across and within groups. These biases across groups
have been shown in both real world and virtual world
settings. For example, physicians have been shown to
be less likely to diagnose harmful conditions in
darker-skinned individuals (Stepanikova, 2012).
Using a first person shooter-based video game,
Correll, Park, Judd, and Wittenbrink (2002) placed
individuals of different ethnicities in a fast-paced
decision-making situation in which they had to decide
to shoot a person encountered within the game. This
study found that the ethnicity of the agent in the game
mattered, with dark-skinned agents being shot more
often. However, the ethnicity of the participant did
not matter. It is argued that dark-skinned agents were
shot more often because of the application of negative
threat schemas based on race (Devine & Elliot, 1995).
Virtual Worlds with Virtual Humans
Virtual worlds have been used extensively for
training (Andrews & Craig, 2015). The situations
created in these systems are usually difficult to
simulate in the real world, even for training purposes.
These systems can be more effective than traditional
training methods. Conradi et al. (2009) evaluated a
virtual world for training paramedic students as an
alternative to typical paper-based methods, in which a
patient scenario is verbally described. The system
employed problem-based learning, presenting the
paramedics with virtual patient scenes, such as a
motorcycle accident, and requiring them to work as a
team to treat the patient. In terms of decision-making,
the paramedic students preferred the virtual world
over the static paper-based method because the
patient’s condition progressed based on their choices.
They were able to practice a variety of treatments,
without harming the patient, and correct their
mistakes.
When humans interact in virtual environments, they
tend to apply expectations and attributions based on
their real-world experience (Reeves & Nass, 1996).
Social interactions are one of the major affordances of
virtual worlds. As more social interactions are added
to the virtual worlds, social reactions are
automatically triggered (Nass & Moon, 2000).
Virtual humans are digital representations that look
similar to humans and are an ideal method for
facilitating social interactions (Schroeder, Romine, &
Craig, 2017; Moreno, Mayer, Spires, & Lester, 2001).
They are either avatars, controlled by users, or agents,
controlled by a computer algorithm (Craig et al.,
2015; Fox, Bailenson, & Tricase, 2013). The avatar in
which a user is embodied has been shown to have
immediate effects on their behavior in the virtual
environment (Yee & Bailenson, 2007).
McCall, Blascovich, Young, and Persky (2009)
used virtual reality technology to identify a predictive
relationship between proxemic behaviors
(interpersonal distance and head movements) and
aggression during encounters with Black and White
agents. Participants were embodied in avatars
intended to match their own gender and skin color, to
support the transfer of their implicit ethnic biases
into their behavior in the virtual world. As a result,
avoidant proxemic behaviors towards Black agents
led to aggression in
a violent context. Therefore, when participants are
embodied by an avatar highly similar to themselves in
a virtual world, their ethnic biases and subsequent
behaviors are nearly the same as they would be in the
real world.
Current Study
Based on the previous research (Yee & Bailenson,
2007; McCall et al., 2009; Peck, Seinfeld, Aglioti, &
Slater, 2013), it can be hypothesized that embodiment
in an avatar with a different skin tone could lead
participants to apply helping behaviors equally to all
patients/agents, whereas embodiment in an avatar
with the same skin tone would not alter biases and
therefore would not change helping behaviors toward
patients/agents.
VCAEST
The current study uses a virtual world-based
training system, Virtual Civilian Aeromedical
Evacuation Sustainment Training (VCAEST).
VCAEST (See Figure 1) teaches healthcare providers
triaging for mass-casualty disasters, or the sorting of
victims for treatment based on a set of priorities
(Shubeck, Craig, & Hu, 2016).
The system was based on CAEST, a live
simulation designed to train civilian medical
practitioners and public health professionals on triage
and aeromedical evacuation standards.
To address these needs, VCAEST provided a
highly interactive virtual world that simulated the
live interactive exercises, pairing natural language
intelligent tutoring with vicarious-learning-based
modeling of behaviors (Craig, Gholson,
Brittingham, Williams, & Shubeck, 2012; Gholson &
Craig, 2006; Twyford & Craig, 2017) to present a more
dynamic version of the didactic information. In
addition, the virtual platform of VCAEST allowed for
the inclusion of features that would not be possible to
implement in a live simulation. VCAEST provides
natural language guidance and feedback to users via
AutoTutor LITE (Learning for Interactive Training
Environments), an Intelligent Tutoring System (Hu,
Cai, Han, Craig, Wang, & Graesser, 2009; Sullins,
Craig, & Hu, 2015).
Figure 1. Screenshot of VCAEST Interface
Upon accessing the VCAEST interface, the user
can choose one of six pre-existing avatars. There are
two avatars, one male and one female, of each skin
tone (light/Caucasian, medium/Asian, dark/African-American).
The user is also asked to type in a name
that is displayed above their avatar. The user moves
their avatar through the interface in order to identify
six patients (agents) in need of assistance. When
initially encountering a patient, the user assigns a
treatment priority based on physical observations. If
the user inputs incorrect information, AutoTutor
LITE provides guidance and feedback. The responses
may be a brief lecture or a leading question that will
help the user understand their mistake and correct it.
METHOD
Participants
Seventy-six undergraduate psychology students (55
male, 21 female) took part in the study. They were
between the ages of 18 and 44 (M = 20.8, SD = 4.7).
They were offered partial course credit in return for
their participation.
Design
In order to explore the hypotheses, three
independent variables were manipulated: two
between-subjects variables, participant skin tone
(light, dark) and avatar skin tone (light, dark), and
one within-subjects variable, agent/patient skin tone
(light, dark). The skin tone of the participants was
determined by the experimenter based on the hue of
their skin. Using the Fitzpatrick phototyping scale
(Fitzpatrick, 1975), participants with Types I-IV (pale
white, olive, and light brown) were considered light
skin tones, whereas Types V and VI (moderate to dark
brown) were considered dark skin tones. The mixed
2 x 2 x 2 design created four conditions, based on the
between-subjects variables, which were organized
into two avatar-match conditions. Specifically, two
match/control conditions, light-skinned participant in
a light-skinned avatar and dark-skinned participant in
a dark-skinned avatar, as well as two mismatch
conditions, light-skinned participant in a dark-skinned
avatar and dark-skinned participant in a light-skinned
avatar. In all conditions, the avatar had the same
gender as the participant. Based on random
assignment to the match or mismatch group, the
participant’s avatar was chosen by the experimenter
to accommodate the condition.
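As a rough sketch, the categorization and assignment logic described above might be expressed as follows. The function names and structure are hypothetical illustrations, not taken from the study's materials; only the Fitzpatrick cutoffs and match/mismatch rule come from the text.

```python
import random

# Hypothetical sketch of the condition-assignment logic; names are
# illustrative, not from the VCAEST implementation.

def skin_tone_category(fitzpatrick_type: int) -> str:
    """Types I-IV are treated as 'light', Types V-VI as 'dark' (Fitzpatrick, 1975)."""
    if not 1 <= fitzpatrick_type <= 6:
        raise ValueError("Fitzpatrick phototype must be between I (1) and VI (6)")
    return "light" if fitzpatrick_type <= 4 else "dark"

def assign_avatar(participant_tone: str, group: str) -> str:
    """Match/control group gets an avatar of the same tone; mismatch, the opposite."""
    opposite = {"light": "dark", "dark": "light"}
    return participant_tone if group == "match" else opposite[participant_tone]

# Example: a Type II (light-skinned) participant randomly assigned to a group;
# avatar gender would additionally always match the participant's.
group = random.choice(["match", "mismatch"])
tone = skin_tone_category(2)
avatar_tone = assign_avatar(tone, group)
```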
Figure 2. VCAEST Avatars
The within-subjects variable, patient/agent skin
tone, was implemented within the virtual world.
Regardless of condition, all participants were required
to triage a light-skinned and a dark-skinned patient.
This allowed for the comparison of behaviors
between the agents across avatar-match conditions.
Subsequently, this provided for exploration of the
alternative hypothesis that biases towards dark-skinned
individuals would be consistent across
participants, regardless of their own skin tone.
MATERIALS
VCAEST.
The previously described VCAEST interface was
utilized in this study. Because the participants were
not required to have any previous medical training,
the original tutoring modules were revised to include
layperson terms, rather than medical jargon.
Additionally, the modules were condensed to focus
primarily on triage in order to shorten and simplify
the training. In this process, the modules were used as
a pre-training and AutoTutor LITE was turned off
during the VCAEST interaction. This was done to
provide a pure measure of participants’ behavior
without the influence of the agent’s interactive
feedback from the AutoTutor LITE system.
The AutoTutor LITE triage pre-training
emphasized the color tagging system. The five color
tags, black (deceased), grey (expectant), red
(immediate), yellow (delayed), and green (minimal)
indicate the level of treatment or priority for treatment
based on the patients’ condition. After a tag is
assigned, the patient is considered to be triaged.
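The five-color tagging scheme above can be sketched as a simple lookup. The treatment ordering encoded here is an assumption for illustration (immediate before delayed before minimal, with expectant and deceased receiving no active treatment), not drawn from the VCAEST implementation.

```python
# Illustrative encoding of the five-color triage tagging system;
# the priority ordering is an assumption for this sketch.
TRIAGE_TAGS = {
    "black":  "deceased",
    "grey":   "expectant",
    "red":    "immediate",
    "yellow": "delayed",
    "green":  "minimal",
}

# Assumed treatment order: red patients are seen first, then yellow,
# then green; grey and black patients receive no active treatment.
TREATMENT_ORDER = ["red", "yellow", "green", "grey", "black"]

def triage_priority(tag: str) -> int:
    """Return the treatment rank of a tag; lower means treated sooner."""
    return TREATMENT_ORDER.index(tag)
```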
Interface Training.
In order to teach participants how to use the
VCAEST interface, a three-minute instructional video
was created. The video is a screencast of the interface
with an experimenter providing verbal instructions as
well as demonstrations with the mouse. The video
walks through the entire process of triaging a patient
in the VCAEST interface, including focusing on a
patient, measuring vital signs using the provided
tools, entering and validating values on the triage tag,
selecting a tag color, and error recovery.
Behavioral Assessments.
The quality of assistance was measured by the number of
errors made while helping a patient. Following
completion of the experiment, participants’ screen
captures were coded to identify these behaviors. For
every participant, these behavioral measures were
taken during their interaction with a light-skinned and
a dark-skinned patient. These two agents were in the
same general section of the virtual world and were
clearly visible in the environment without any
occluding objects (see Figure 3).
Figure 3. Dark-skinned and Light-skinned Patients
Participant errors. The number of errors made
while helping a patient was recorded. These errors
were coded when the participant applied the wrong
information during assessment of the virtual patient
or skipped important steps. Examples of this would
be attempting to choose a triage category prior to
inputting all vital sign values or mixing up vital sign
values.
Procedure
The experiment lasted approximately one hour and
was initially presented as an investigation of the
ability of nonmedical personnel to learn and apply
mass-casualty triage knowledge. At the start of the
study, the participant was seated at a computer and
directed to a consent form. Next, they viewed two
videos: the initial triage training followed by the
interface training video and were allowed to ask
questions to clarify interface usage.
Next, the virtual world interaction was set up by the
experimenter. The entire interaction was recorded
using screencast software, so that the screen would be
recorded, but not the participant’s face or voice. This
was done so that data about behaviors in the virtual
world could be recorded later.
Each participant used a separate instance of the
interface, in order to ensure that participants would
not be influenced by other participants’ behaviors.
The virtual world was loaded and the experimenter
selected the participant’s avatar, depending on the
randomized avatar-match manipulation condition
(match/control, mismatch) and the participant’s
gender, which was always matched (see Figure 2).
The experimenter entered the avatar name, which was
the participant’s individual subject number. After the
virtual world and avatar were ready, the participant
was instructed to apply their triage knowledge to help
patients/agents throughout the virtual world. They
were told to help all six patients as quickly as
possible. During the interaction, the participants were
not allowed to ask questions. After successfully
triaging all patients, the interaction was finished. The
participant was then instructed to answer a brief set of
demographics questions. Upon completion, they were
debriefed on the actual purpose of the study.
RESULTS AND DISCUSSION
Error Analysis
A mixed ANOVA was performed on the within
subjects factor, number of errors made while helping
a patient (agent: dark-skinned, light-skinned) by the
between subjects factors, participant skin tone (light,
dark) and avatar skin tone (light, dark). There was a
significant main effect observed for number of errors
made while helping patients between agent skin
tones, F(1, 72)=7.663, p=.007, ηp2=.096 with
significantly more errors made for the dark-skinned
agent (M = 2.55) than for the light-skinned agent (M
= 1.71). Means and standard deviations for all
conditions are provided in Table 1.
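The reported within-subjects contrast can be illustrated with a small simulation. The error counts below are fabricated for the sketch and roughly centered on the reported means; a paired t statistic on the dark-light difference does not reproduce the paper's full 2 x 2 x 2 mixed ANOVA, it only illustrates the direction of the main effect.

```python
import random
import statistics as st

# Fabricated per-participant error counts, loosely centered on the
# reported means (light-skinned agent M = 1.71, dark-skinned M = 2.55).
random.seed(0)
n = 76  # sample size reported in the paper
errors_light = [random.randint(0, 3) for _ in range(n)]
errors_dark = [e + random.randint(0, 2) for e in errors_light]

# Paired t statistic for the dark - light difference in errors.
diffs = [d - l for d, l in zip(errors_dark, errors_light)]
t = st.mean(diffs) / (st.stdev(diffs) / n ** 0.5)
print(f"mean dark = {st.mean(errors_dark):.2f}, "
      f"mean light = {st.mean(errors_light):.2f}")
print(f"paired t({n - 1}) = {t:.2f}")
```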
The present research extends previous research on
the impact that virtual humans within a virtual world
can have on behavior (Yee & Bailenson, 2007;
McCall et al., 2009; Peck et al., 2013). This study
specifically examined the effect of bias on the user’s
helping behaviors. Participants’ helping behaviors
within VCAEST were influenced by the skin tone of
the patient/agent being assisted.
Participants made more errors while helping dark-skinned
agents, compared to light-skinned agents.
This indicates a negative bias towards dark-skinned
individuals, regardless of the user’s own skin tone or
the skin tone of their avatar.
Previous research suggests that behavior in virtual
environments is impacted by avatars. Embodiment in
an avatar can almost instantly influence a user’s
behavior (Yee & Bailenson, 2007). Users are often
embodied in avatars of their same skin tone and
gender to provide a realistic experience. Under these
conditions, in the virtual environment users may
demonstrate negative social behaviors towards dark-skinned
agents, as they would in the real world
(McCall et al., 2009). Further, Peck et al. (2013)
demonstrated that embodiment in an avatar that is
different from oneself, specifically light-skinned
users in dark-skinned avatars, decreased implicit bias
against dark-skinned people. The current findings do
not indicate that a person’s biases can be modified by
their avatar with a short intervention.
This difference is likely due to differences between
the experimental settings: high-fidelity (3D)
immersive environments have shown the effect, while
lower-fidelity (2D) environments have not. Previous studies by
Yee and Bailenson (2007) and McCall et al. (2009)
that found behavioral changes with avatars were with
total immersion systems in which participants wore
head-mounted displays and could move around the
environment. It could be that this greater level of
fidelity is needed in order to produce the effect.
The absence of such a finding provides support for
the claim that a person’s more general biases have a
stronger impact on behavior (Correll et al., 2002;
Devine & Elliot, 1995; Steele & Aronson, 1995;
Stepanikova, 2012). If an individual feels
disconnected from their avatar due to the low fidelity
of the virtual environment, they may automatically
apply biases to others as they would in the real world.
The current finding is important because it
indicates that bias can impact the effectiveness of
training within virtual worlds. Participants in the
current study applied negative ethnic biases toward
dark skin virtual humans. This impacted their helping
behaviors towards virtual humans with darker skin
tones and increased errors. However, the participants
were undergraduates, so it is unclear whether the
findings would transfer to a medical population. Such
transfer is
possible given Stepanikova’s findings (2012) that
these biases are observable in medical practitioners.
If it holds, this finding is critical in the context of
training. When designing training systems with
virtual humans, it is important to identify these
patterns. Further research is needed to identify
responses that could effectively decrease biases
during training. If left uncorrected, bias could cause
changes in learners’ behaviors, which could impact
the efficacy of the training environment. Due to this,
it is important that future research on virtual world
training take into consideration the user’s biases.
ACKNOWLEDGEMENTS
This research was partially supported by the
Department of Defense [U.S. Army Medical Research
Acquisition Activity] under award number
(W81XWH-11-2-0171). Views and opinions of, and
endorsements by the author(s) do not reflect those of
the US Army or the Department of Defense. This
research was also partially funded by the Fulton
Undergraduate Research Initiative at Arizona State
University (http://more.engineering.asu.edu/furi/).
REFERENCES
Andrews, D. H. & Craig, S. D. (Eds.). (2015). Readings in
Training and Simulation (Vol. 2): Research articles from 2000 to
2014. Santa Monica, CA: Human Factors and Ergonomics Society.
Backlund, P., Engström, H., Hammar, C., Johannessen, M., &
Lebram, M. (2007, July). Sidh-a game based firefighter training
simulation. In Information Visualization, 2007. IV’07. 11th
International Conference (pp. 899-907). IEEE.
Brewer, M. B. (1999). The psychology of prejudice: ingroup
love or outgroup hate? Journal of Social Issues, 55(3), 429-444.
Conradi, E., Kavia, S., Burden, D., Rice, A., Woodham, L.,
Beaumont, C., Savin-Baden, M., & Poulton, T. (2009). Virtual
patients in a virtual world: training paramedic students for practice.
Medical Teacher, 31(8), 713-720.
Correll, J., Park, B., Judd, C. M., & Wittenbrink, B. (2002). The
police officer's dilemma: using ethnicity to disambiguate
potentially threatening individuals. Journal of Personality and
Social Psychology, 83(6), 1314-1329.
Craig. S. D., Gholson, B., Brittingham, J. K., Williams, J., &
Shubeck, K. T. (2012). Promoting vicarious learning of physics
using deep questions with explanations. Computers & Education,
58, 1042-1048.
Craig, S. D., Twyford, J., Irigoyen, N., & Zipp S. (2015). A Test
of Spatial Contiguity for Virtual Human’s Gestures in Multimedia
Learning Environments. Journal of Educational Computing
Research, 53, 3-14.
Devine, P. G., & Elliot, A. J. (1995). Are racial stereotypes
really fading? The Princeton trilogy revisited. Personality and
Social Psychology Bulletin, 21, 1139–1150.
Fitzpatrick, T. B. (1975). Soleil et peau. Journal de Médecine
Esthétique, 2, 33-34.
Fox, J., Bailenson, J. N., & Tricase, L. (2013). The embodiment
of sexualized virtual selves: the proteus effect and experiences of
self-objectification via avatars. Computers in Human Behavior, 29,
930-938.
Gerard, H. B., & Hoyt, M. F. (1974). Distinctiveness of social
categorization and attitude toward ingroup members. Journal of
Personality and Social Psychology, 29(6), 836-842.
Gholson, B., & Craig, S. D. (2006). Promoting constructive
activities that support vicarious learning during computer-based
instruction. Educational Psychology Review, 18, 119–139.
Heinrichs, W. L., Youngblood, P., Harter, P.M., & Dev, P.
(2008). Simulation for team training and assessment: case studies
of online training with virtual worlds. World Journal of Surgery,
32, 161-170.
Hu, X., Cai, Z., Han, L., Craig, S. D., Wang, T., & Graesser, A.
C. (2009). AutoTutor Lite. In V. Dimitrova, R. Mizoguchi, B. du
Boulay, & A. C. Graesser (Eds.), Artificial Intelligence in
Education, Building Learning Systems That Care: From
Knowledge Representation to Affective Modeling (p. 802).
Washington, DC: IOS Press.
Levy, M., Koch, R. W., & Royne, M. B. (2013). Self-reported
training needs of emergency responders in disasters requiring
military interface. Journal of Emergency Management, 11(2), 143-150.
McCall, C., Blascovich, J., Young, A., & Persky, S. (2009).
Proxemic behaviors as predictors of aggression towards Black (but
not White) males in an immersive virtual environment. Social
Influence, 4(1), 1-17.
Moreno, R., Mayer, R. E., Spires, H. A., & Lester, J. C. (2001).
The case for social agency in computer-based teaching: Do
students learn more deeply when they interact with animated
pedagogical agents?. Cognition and instruction, 19(2), 177-213.
Nass, C., & Moon, Y. (2000). Machines and mindlessness:
social responses to computers. Journal of Social Issues, 56(1), 81-103.
Patterson, R., Pierce, B., Bell, H. H., Andrews, D., &
Winterbottom, M. (2009). Training robust decision making in
immersive environments. Journal of Cognitive Engineering and
Decision Making, 3(4), 331-361.
Peck, T. C., Seinfeld, S., Aglioti, S. M., & Slater, M. (2013).
Putting yourself in the skin of a black avatar reduces implicit racial
bias. Consciousness and Cognition, 22, 779-787.
Reeves, B., & Nass, C. (1996). The Media Equation: How
people treat computers, television, and new media like real people
and places. New York, NY: Cambridge University Press.
Schroeder, N. L., Romine, W. L., & Craig, S. D. (2017).
Measuring pedagogical agent persona and the influence of agent
persona on learning. Computers & Education, 109, 176-186.
Shubeck, K. T., Craig, S. D., & Hu, X. (2016). Live-action
mass-casualty training and virtual world training: A comparison.
Proceedings of the Human Factors & Ergonomics Society Annual
Meeting (pp. 2103-2107). Los Angeles: SAGE.
Steele, C. M., & Aronson, J. (1995). Stereotype threat and the
intellectual test performance of African Americans. Journal of
Personality and Social Psychology, 69, 797–811.
Stepanikova, I. (2012). Racial-ethnic biases, time pressures, and
medical decisions. Journal of Health and Social Behavior, 53(3),
329-343.
Sullins, J., Craig, S. D., & Hu, X. (2015). Exploring the
Effectiveness of a Novel Feedback Mechanism within an
Intelligent Tutoring System. International Journal of Learning
Technology, 10, 220-236.
Twyford, J. & Craig, S. D. (2017). Modeling goal setting within
a multimedia environment on complex physics content. Journal of
Educational Computing Research, 55 (3), 374-394.
Yee, N., & Bailenson, J. (2007). The proteus effect: the effect of
transformed self-representation on behavior. Human
Communication Research, 33, 271-290.