Publications

Computer Science Publications

 

 

Teaming Up With Virtual Humans: How Other People Change Our Perceptions of and Behavior with Virtual Teammates

 

Andrew Robb, Andrew Cordar, Samsun Lampotang, Casey White, Adam Wendling, Benjamin Lok (2015) Teaming Up With Virtual Humans: How Other People Change Our Perceptions of and Behavior with Virtual Teammates, 21(4), p. 511-519

In this paper we present a study exploring whether the physical presence of another human changes how people perceive and behave with virtual teammates. We conducted a study (n = 69) in which nurses worked with a simulated health care team to prepare a patient for surgery. The agency of participants’ teammates was varied between conditions; participants either worked with a virtual surgeon and a virtual anesthesiologist, a human confederate playing a surgeon and a virtual anesthesiologist, or a virtual surgeon and a human confederate playing an anesthesiologist. While participants perceived the human confederates to have more social presence (p < 0.01), participants did not preferentially agree with their human team members. We also observed an interaction effect between agency and behavioral realism. Participants experienced less social presence from the virtual anesthesiologist, whose behavior was less in line with participants’ expectations, when a human surgeon was present.

A comparison of speaking up behavior during conflict with real and virtual humans

 

Andrew Robb, Casey White, Andrew Cordar, Adam Wendling, Samsun Lampotang, Benjamin Lok (2015) A comparison of speaking up behavior during conflict with real and virtual humans 52, p. 12-21, Elsevier Ltd

Breakdowns in team communication are a common source of error. Unfortunately, even when errors are identified, team members may not speak up about the error, often out of fear of confrontation. We propose that virtual humans may be used to help prepare people to speak up. To this end, we conducted a between-subjects study examining speaking up behavior with real and virtual humans. Forty-eight nurses participated in a team training exercise that gave them an opportunity to speak up to a surgeon (either virtual or human, depending on condition) in order to protect a patient’s safety. Our results suggest that speaking up behavior with the virtual surgeon closely approximated behavior with the human surgeon: no significant differences were found in participants’ use of influence tactics (p > 0.149) or in the outcomes obtained (p = 0.788). However, participants were significantly more likely to ask the virtual anesthesiologist for input when working with the human surgeon (p < 0.05). Our findings suggest that participants found speaking up to the real and virtual surgeon to be of comparable difficulty. This is an important prerequisite before virtual humans can be used to prepare people to speak up about errors.

 

Understanding empathy training with virtual patients

Andrea Kleinsmith, Diego Rivera-Gutierrez, Glen Finney, Juan Cendan, Benjamin Lok (2015) Understanding empathy training with virtual patients, 52, p. 151-158, Elsevier Ltd

 

While the use of virtual characters in medical education is becoming more and more commonplace, an understanding of the role they can play in empathetic communication skills training is still lacking. This paper presents a study aimed at building this understanding by determining if students can respond to a virtual patient’s statement of concern with an empathetic response. A user study was conducted at the University of Florida College of Medicine in which early stage medical students interacted with virtual patients in one session and real humans trained to portray real patients (i.e., standardized patients) in a separate session about a week apart. During the interactions, the virtual and ‘real’ patients presented the students with empathetic opportunities which were later rated by outside observers. The results of pairwise comparisons indicate that empathetic responses made to virtual patients were rated as significantly more empathetic than responses made to standardized patients. Even though virtual patients may be perceived as artificial, the educational benefit of employing them for training medical students’ empathetic communication skills is that virtual patients offer a low-pressure interaction which allows students to reflect on their responses.

Utilizing Real-time Human-Assisted Virtual Humans to Increase Real-world Interaction Empathy

 

Michael Borish, Andrew Cordar, Adriana Foster, Thomas Kim, James Murphy, Benjamin Lok (2014) Utilizing Real-time Human-Assisted Virtual Humans to Increase Real-world Interaction Empathy

Empathy is an important aspect of interpersonal communication skills, and these skills are emphasized in medical education. The standard source of training is interviews with standardized patients: trained actors who evaluate students on the effectiveness of their interviews and diagnosis. One source of additional training is interviews with virtual humans, which can be used in conjunction with standardized patients to help train medical students in empathy. In this case, empathy training took place as part of an interaction with a virtual human representing a patient suffering from depression. However, computers cannot accurately rate empathy, so we propose a hybrid virtual human approach in which hidden workers assist the virtual human. The hidden workers provide real-time empathetic feedback that is shown to the students after their interaction with the virtual human. The students then interview a standardized patient. All empathetic feedback and ratings are based on the Empathic Communication and Coding System (ECCS) as developed for medical student interviews. Fifty-two students took part in the study. The results suggest that students who received feedback after their virtual patient interview provided more empathetic statements, were more likely to develop good rapport, and acted warmer and more caring compared to the control group that did not receive feedback.

Social presence in mixed agency interactions

 

Andrew Robb, Benjamin Lok (2014) Social presence in mixed agency interactions, p. 111-112, IEEE

A Qualitative Evaluation of Behavior During Conflict with an Authoritative Virtual Human

 

Andrew Robb, Casey White, Andrew Cordar, Adam Wendling, Samsun Lampotang, Benjamin Lok (2014) A Qualitative Evaluation of Behavior During Conflict with an Authoritative Virtual Human, p. 1-13

Building Virtual Humans with Back Stories: Training Interpersonal Communication Skills in Medical Students

Andrew Cordar, Michael Borish, Adriana Foster, Benjamin Lok (2014) Building Virtual Humans with Back Stories: Training Interpersonal Communication Skills in Medical Students, Springer

We conducted a study which investigated if we could overcome challenges associated with interpersonal communication skills training by building a virtual human with a back story. Eighteen students interacted with a virtual human who provided back story, and seventeen students interacted with the same virtual human who did not provide back story. Back story was achieved through the use of cutscenes which played throughout the virtual human interaction. Cutscenes were created with The Sims 3 and depicted short moments that occurred in the virtual human’s life. We found that medical students who interacted with the virtual human with a back story were perceived, in a later interaction with a standardized patient, as more empathetic than students who interacted with the virtual human without a back story. The results have practical implications for building virtual human experiences to train interpersonal skills. Providing back story appears to be an effective method to overcome challenges associated with training interpersonal skills with virtual humans.

Exploring Gender Biases with Virtual Patients for High Stakes Interpersonal Skills Training

Diego Rivera-Gutierrez, Regis Kopper, Andrea Kleinsmith, Juan Cendan, Glen Finney, Benjamin Lok (2014) Exploring Gender Biases with Virtual Patients for High Stakes Interpersonal Skills Training, Timothy Bickmore, Stacy Marsella, Candace Sidner (ed.), Boston: Springer International Publishing

The use of virtual characters in a variety of research areas is widespread. One such area is healthcare. The study presented in this paper leveraged virtual patients to examine whether virtual patients are more likely to be correctly diagnosed due to gender and skin tone. Medical students at the University of Florida College of Medicine interacted with six virtual patients across two sessions. The six virtual patients comprised various combinations of gender and skin tone. Each virtual patient presented with a different cranial nerve injury. The results indicate a significant difference in correct diagnosis according to patient gender for one of the cases. In that case, female patients were correctly diagnosed more frequently than their male counterpart. The description of that case required that the virtual patient present with a visible bruise on the forehead. We hypothesize the results obtained could be due to a transfer of a real-world gender bias.

Towards a Reflective Practicum of Embodied Conversational Agent Experiences

Diego Rivera-Gutierrez, Andrea Kleinsmith, Teresa Johnson, Rebecca Lyons, Juan Cendan, Benjamin Lok (2014) Towards a Reflective Practicum of Embodied Conversational Agent Experiences, Athens, Greece: IEEE

A reflective practicum is a low-pressure, low-risk learning environment in which a learner is educated in a professional practice and in how to use reflection in the setting of that practice. An example of such a learning environment is the use of embodied conversational agents (ECAs) in medicine to provide training for interviewing and diagnostic skills. However, such ECA experiences have not been used to teach how to use reflection in the setting of a professional practice. In this paper we present a framework that supports explicit reflective learning for ECA experiences; using this framework, ECA experiences become a reflective practicum. This framework was applied to an ECA experience called the Neurological Examination Rehearsal Virtual Environment (NERVE), creating a sample experience called the NERVE Reflective Practicum (NERVE-RP). We conducted a user study in which second-year medical students (n = 76) used NERVE-RP and engaged in reflection based on the experience. The results of the user study show that students engage in valuable reflections during the experience, including instances of critical reflection.

Getting the Point Across: Exploring the Effects of Dynamic Virtual Humans in an Interactive Museum Exhibit on User Perceptions

Diego Rivera-Gutierrez, Rick Ferdig, Jian Li, Benjamin Lok (2014) Getting the Point Across: Exploring the Effects of Dynamic Virtual Humans in an Interactive Museum Exhibit on User Perceptions, 20(4), p. 636-643

We have created “You, M.D.”, an interactive museum exhibit in which users learn about topics in public health literacy while interacting with virtual humans. You, M.D. is equipped with a weight sensor, a height sensor, and a Microsoft Kinect that gather basic user information. Conceptually, You, M.D. could use this information to dynamically select the appearance of the virtual humans in the interaction, attempting to improve learning outcomes and user perception for each particular user. For this concept to be possible, a better understanding of how different elements of the visual appearance of a virtual human affect user perceptions is required. In this paper, we present the results of an initial user study with a large sample size (n = 333) run using You, M.D. The study measured users’ reactions based on the user’s gender and body-mass index (BMI) when facing virtual humans with BMI either concordant or discordant with the user’s BMI. The results of the study indicate that concordance between the users’ BMI and the virtual human’s BMI affects male and female users differently. The results also show that female users rate virtual humans as more knowledgeable than male users rate the same virtual humans.

Virtual Agent Constructionism: Experiences from Health Professions Students Creating Virtual Conversational Agent Representations of Patients

Shivashankar Halan, Isaac Sla, Michael Crary, Benjamin Lok (2014) Virtual Agent Constructionism: Experiences from Health Professions Students Creating Virtual Conversational Agent Representations of Patients

This paper reports on applying constructionism with virtual agents in an educational setting. We introduce a methodology – Virtual Agent Constructionism – which involves health professions students creating virtual conversational agent representations of patients as part of coursework. The proposed methodology was implemented as an exercise in an educational course for three consecutive academic years. The aim of this paper is threefold: (i) to demonstrate the feasibility of health professions students creating virtual agents in an educational setting as part of coursework, (ii) to report feedback from the students about the experience of creating virtual agents, and (iii) to report on initial trends suggesting that creating virtual agents helps health professions students improve their interviewing and interpersonal skills. In addition to these three contributions, we also present the virtual agents created as educational artifacts that can be used to train future students in their interpersonal skills.

Exploring agent physicality and social presence for medical team training

Joon Hao Chuah, Andrew Robb, Casey White (2013) Exploring agent physicality and social presence for medical team training, 22(2), p. 141-170

Leveraging Virtual Humans to Effectively Prepare Learners for Stressful Interpersonal Experiences

Andrew Robb, Regis Kopper, Ravi Ambani, Farda Qayyum, David Lind, Li-ming Su, Benjamin Lok (2013) Leveraging Virtual Humans to Effectively Prepare Learners for Stressful Interpersonal Experiences, 19(4), p. 662-670

Stressful interpersonal experiences can be difficult to prepare for. Virtual humans may be leveraged to allow learners to safely gain exposure to stressful interpersonal experiences. In this paper we present a between-subjects study exploring how the presence of a virtual human affected learners while practicing a stressful interpersonal experience. Twenty-six fourth-year medical students practiced performing a prostate exam on a prostate exam simulator. Participants in the experimental condition examined a simulator augmented with a virtual human; other participants examined a standard unaugmented simulator. Participants’ reactions were assessed using self-reported, behavioral, and physiological metrics. Participants who examined the virtual human experienced significantly more stress, measured via skin conductance. Participants’ stress was correlated with previous experience performing real prostate exams; participants who had performed more real prostate exams were more likely to experience stress while examining the virtual human. Participants who examined the virtual human showed signs of greater engagement: non-stressed participants performed better prostate exams, while stressed participants treated the virtual human more realistically. Results indicated that stress evoked by virtual humans is linked to similar previous real-world stressful experiences, implying that learners’ real-world experience must be taken into account when using virtual humans to prepare them for stressful interpersonal experiences.

Constructionism of virtual humans to improve perceptions of conversational partners

Shivashankar Halan, Brent Rossen, Michael Crary, Benjamin Lok (2012) Constructionism of virtual humans to improve perceptions of conversational partners, p. 2387-2392

We propose a methodology to help people improve the accuracy of their mental model of a conversational partner by creating a virtual human representation of the partner. By creating a virtual human, the users will be able to transfer their mental model of the partner to a virtual human representation. Other people can then interact with the virtual human and provide feedback. The feedback will help the creator reduce the gap between their mental model of a partner and the actual qualities of the partner. Reducing this gap in perception is important in learning interpersonal skills. We implemented this methodology in a health professions course using Virtual People Factory, an online application for creating and interacting with virtual humans. The applicability of the methodology to reduce gaps in perception models was investigated through a user study with health professions students (n=32). The results indicate that students can reduce gaps in perceptions of conversational partners by creating virtual humans.

Increasing agent physicality to raise social presence and elicit realistic behavior

Joon Hao Chuah, Andrew Robb, Casey White, Adam Wendling, Samsun Lampotang, Regis Kopper, Benjamin Lok (2012) Increasing agent physicality to raise social presence and elicit realistic behavior, p. 19-22, IEEE

Shader lamps virtual patients: the physical manifestation of virtual patients.

Diego Rivera-Gutierrez, Greg Welch, Peter Lincoln, Mary Whitton, Juan Cendan, David A. Chesnutt, Henry Fuchs, Benjamin Lok (2012) Shader lamps virtual patients: the physical manifestation of virtual patients, p. 372-378, Newport Beach, CA: IOS Press

We introduce the notion of Shader Lamps Virtual Patients (SLVP) – the combination of projector-based Shader Lamps Avatars and interactive virtual humans. This paradigm uses Shader Lamps Avatars technology to give a 3D physical presence to conversational virtual humans, improving their social interactivity and enabling them to share the physical space with the user. The paradigm scales naturally to multiple viewers, allowing for scenarios where an instructor and multiple students are involved in the training. We have developed a physical-virtual patient for medical students to conduct ophthalmic exams, in an interactive training experience. In this experience, the trainee practices multiple skills simultaneously, including using a surrogate optical instrument in front of a physical head, conversing with the patient about his fears, observing realistic head motion, and practicing patient safety. Here we present a prototype system and results from a preliminary formative evaluation of the system.

A crowdsourcing method to develop virtual human conversational agents

Brent Rossen, Benjamin Lok (2012) A crowdsourcing method to develop virtual human conversational agents, 70(4), p. 301-319

Educators in medicine, psychology, and the military want to provide their students with interpersonal skills practice. Virtual humans offer structured learning of interview skills, can facilitate learning about unusual conditions, and are always available. However, the creation of virtual humans with the ability to understand and respond to natural language requires costly engineering by conversation knowledge engineers (generally computer scientists), and incurs logistical cost for acquiring domain knowledge from domain experts (educators). We address these problems using a novel crowdsourcing method entitled Human-centered Distributed Conversational Modeling. This method facilitates collaborative development of virtual humans by two groups of end-users: domain experts (educators) and domain novices (students). We implemented this method in a web-based authoring tool called Virtual People Factory. Using Virtual People Factory, medical and pharmacy educators are now creating natural language virtual patient interactions on their own. This article presents the theoretical background for Human-centered Distributed Conversational Modeling, the implementation of the Virtual People Factory authoring tool, and five case studies showing that Human-centered Distributed Conversational Modeling has addressed the logistical cost for acquiring knowledge.

Physical Manifestations of Virtual Patients

Gregory Welch, Diego Rivera-Gutierrez, Peter Lincoln, Mary Whitton, Juan Cendan, David Chesnutt, Henry Fuchs, Benjamin Lok, Richard Skarbez (2011) Physical Manifestations of Virtual Patients, p. 1025

The impact of a mixed reality display configuration on user behavior with a virtual human

Kyle Johnsen, Diane Beck (2010) The impact of a mixed reality display configuration on user behavior with a virtual human, p. 1-7

Understanding the human-computer interface factors that influence users’ behavior with virtual humans will enable more effective human-virtual human encounters. This paper presents experimental evidence that using a mixed reality display configuration can result in significantly different behavior with a virtual human along important social dimensions. The social dimensions we focused on were engagement, empathy, pleasantness, and naturalness. To understand how these social constructs could be influenced by display configuration, we video recorded the verbal and non-verbal response behavior to stimuli from a virtual human under two fundamentally different display configurations. One configuration presented the virtual human at life-size and was embedded into the environment, and the other presented the virtual human using a typical desktop configuration. We took multiple independent measures of participant response behavior using a video coding instrument. Analysis of these measures demonstrate that display configuration was a statistically significant multivariate factor along all dimensions.

Using virtual humans to bootstrap the creation of other virtual humans

Brent Rossen, Juan Cendan (2010) Using virtual humans to bootstrap the creation of other virtual humans, p. 392-398

Virtual human (VH) experiences are increasingly used for training interpersonal skills such as military leadership, classroom education, and doctor-patient interviews. These diverse applications of conversational VHs have a common and unexplored thread – a significant additional population would be afforded interpersonal skills training if VHs were available to simulate either interaction partner. We propose a computer-assisted approach to generate a virtual medical student from hundreds of interactions between a virtual patient and real medical students. This virtual medical student is then used to train standardized patients – human actors who roleplay the part of patients in practice doctor-patient encounters. Practice with a virtual medical student is expected to lead to greater standardization of roleplay encounters, and more accurate evaluation of medical student competency. We discuss the method for generating VHs from an existing corpus of human-VH interactions and present observations from a pilot experiment to determine the utility of the virtual medical student for training.

High Score! - Motivation Strategies for User Participation in Virtual Human Development

Shivashankar Halan, Brent Rossen, Juan Cendan, Benjamin Lok (2010) High Score! - Motivation Strategies for User Participation in Virtual Human Development, p. 482-488

Conversational modeling requires an extended time commitment, and the difficulty associated with capturing the wide range of conversational stimuli necessitates extended user participation. We propose the use of leaderboards, narratives and deadlines as motivation strategies to persuade user participation in the conversational modeling for virtual humans. We evaluate the applicability of leaderboards, narratives and deadlines through a user study conducted with medical students (n = 20) for modeling the conversational corpus of a virtual patient character. Leaderboards, narratives and deadlines were observed to be effective in improving user participation. Incorporating these strategies had the additional effect of making user responses less reflective of real world conversations.

 

Mixed reality humans: evaluating behavior, usability, and acceptability

Aaron Kotranza, Benjamin Lok, Adeline Deladisma, Carla M Pugh, D Scott Lind (2009) Mixed reality humans: evaluating behavior, usability, and acceptability, 15(3), p. 369-382

This paper presents Mixed Reality Humans (MRHs), a new type of embodied agent enabling touch-driven communication. Affording touch between human and agent allows MRHs to simulate interpersonal scenarios in which touch is crucial. Two studies provide initial evaluation of user behavior with an MRH patient and the usability and acceptability of an MRH patient for practice and evaluation of medical students' clinical skills. In Study I (n = 8) it was observed that students treated MRHs as social actors more than students in prior interactions with virtual human patients (n = 27), and used interpersonal touch to comfort and reassure the MRH patient similarly to prior interactions with human patients (n = 76). In the within-subjects Study II (n = 11), medical students performed a clinical breast exam on both an MRH and a human patient. Participants performed equivalent exams with the MRH and human patients, demonstrating the usability of MRHs for evaluating students' exam skills. The acceptability of the MRH patient for practicing exam skills was high, as students rated the experience as believable and educationally beneficial. Acceptability improved from Study I to Study II due to an increase in the MRH's visual realism, demonstrating that visual realism is critical for simulation of specific interpersonal scenarios.

Human-centered distributed conversational modeling: Efficient modeling of robust virtual human conversations

Brent Rossen, Scott Lind (2009) Human-centered distributed conversational modeling: Efficient modeling of robust virtual human conversations, p. 474-481

Currently, applications that focus on providing conversations with virtual humans require extensive work to create robust conversational models. We present a new approach called Human-centered Distributed Conversational Modeling. Using this approach, users create conversational models in a distributed manner. To do this, end-users interact with virtual humans to provide new stimuli (questions and statements), and domain-specific experts (e.g. medical/psychology educators) provide new virtual human responses. Using this process, users become the primary developers of conversational models. We tested our approach by creating an example application, Virtual People Factory. Using Virtual People Factory, a pharmacy instructor and 186 pharmacy students were able to create a robust conversational model in 15 hours. This is approximately 10% of the time typical in current approaches and results in more comprehensive coverage of the conversational space. In addition, surveys demonstrate the acceptability of this approach by both educators and students.

Real-time in-situ visual feedback of task performance in mixed environments for learning joint psychomotor-cognitive tasks

Aaron Kotranza, D Scott Lind (2009) Real-time in-situ visual feedback of task performance in mixed environments for learning joint psychomotor-cognitive tasks, p. 125-134

This paper proposes an approach to mixed environment training of manual tasks requiring concurrent use of psychomotor and cognitive skills. To train concurrent use of both skill sets, the learner is provided real-time generated, in-situ presented visual feedback of her performance. This feedback provides reinforcement and correction of psychomotor skills concurrently with guidance in developing cognitive models of the task. The general approach is presented: 1) Sensors placed in the physical environment detect in real-time a learner’s manipulation of physical objects. 2) Sensor data is input to models of task performance which output quantitative measures of the learner’s performance. 3) Pre-defined rules are applied to transform the learner’s performance data into visual feedback presented in real-time and in-situ with the physical objects being manipulated. With guidance from medical education experts, we have applied this approach to a mixed environment for learning clinical breast exams (CBEs). CBE belongs to a class of tasks that require learning multiple cognitive elements and task-specific psychomotor skills. Traditional approaches to learning CBEs and other joint psychomotor-cognitive tasks rely on extensive one-on-one training with an expert providing subjective feedback. By integrating real-time visual feedback of learners’ quantitatively measured CBE performance, a mixed environment for learning CBEs provides on-demand learning opportunities with more objective, detailed feedback than available with expert observation. The proposed approach applied to learning CBEs was informally evaluated by four expert medical educators and six novice medical students. This evaluation highlights that receiving real-time in-situ visual feedback of their performance provides students an advantage, over traditional approaches to learning CBEs, in developing correct psychomotor and cognitive skills.

Virtual Experiences for Social Perspective-Taking

Andrew Raij, Aaron Kotranza, D. Scott Lind, Benjamin Lok (2009) Virtual Experiences for Social Perspective-Taking, p. 99-102, IEEE

This paper proposes virtual social perspective-taking (VSP). In VSP, users are immersed in an experience of another person to aid in understanding the person’s perspective. Users are immersed by 1) providing input to user senses from logs of the target person’s senses, 2) instructing users to act and interact like the target, and 3) reminding users that they are playing the role of the target. These guidelines are applied to a scenario where taking the perspective of others is crucial – the medical interview. A pilot study (n = 16) using this scenario indicates VSP elicits reflection on the perspectives of others and changes behavior in future, similar social interactions. By encouraging reflection and change, VSP advances the state-of-the-art in training social interactions with virtual experiences.

Virtual multi-tools for hand and tool-based interaction with life-size virtual human agents

Aaron Kotranza, Kyle Johnsen, Juan Cendan, Bayard Miller, D. Scott Lind, Benjamin Lok (2009) Virtual multi-tools for hand and tool-based interaction with life-size virtual human agents, p. 23-30, IEEE

A common approach when simulating face-to-face interpersonal scenarios with virtual humans is to afford users only verbal interaction while providing rich verbal and non-verbal interaction from the virtual human. This is due to the difficulty in providing robust recognition of user non-verbal behavior and interpretation of these behaviors within the context of the verbal interaction between user and virtual human. To afford robust hand and tool-based non-verbal interaction with life-sized virtual humans, we propose virtual multi-tools. A single hand-held, tracked interaction device acts as a surrogate for the virtual multi-tools: the user's hand, multiple tools, and other objects. By combining six degree-of-freedom, high update rate tracking with extra degrees of freedom provided by buttons and triggers, a commodity device, the Nintendo Wii Remote, provides the kinesthetic and haptic feedback necessary to provide a high-fidelity estimation of the natural, unencumbered interaction provided by one’s hands and physical hand-held tools. These qualities allow virtual multi-tools to be a less error-prone interface to social and task-oriented non-verbal interaction with a life-sized virtual human. This paper discusses the implementation of virtual multi-tools for hand and tool-based interaction with life-sized virtual humans, and provides an initial evaluation of the usability of virtual multi-tools in the medical education scenario of conducting a neurological exam of a virtual human.

Virtual Humans That Touch Back: Enhancing Nonverbal Communication with Virtual Humans through Bidirectional Touch

Aaron Kotranza, Benjamin Lok, Carla M. Pugh, D. Scott Lind (2009) Virtual Humans That Touch Back: Enhancing Nonverbal Communication with Virtual Humans through Bidirectional Touch, p. 175-178, IEEE

Touch is a powerful component of human communication, yet has been largely absent in communication between humans and virtual humans (VHs). This paper expands on recent work which allowed unidirectional touch from human to VH, by evaluating bidirectional touch as a new channel for nonverbal communication. A VH augmented with a haptic interface is able to touch her interaction partner using a pseudo-haptic touch or an active-haptic touch from a co-located mechanical arm. Within the context of a simulated doctor-patient interaction, two user studies (n = 54) investigate how touch can be used by both human and VH to communicate. Results show that human-to-VH touch is used for the same communication purposes as human-to-human touch, and that VH-to-human touch (pseudo-haptic and active-haptic) allows the VH to communicate with its human interaction partner. The enhanced nonverbal communication provided by bidirectional touch has the potential to solve difficult problems in VH research, such as disambiguating user speech, enforcing social norms, and achieving rapport with VHs.

Audio analysis of human/virtual-human interaction

Harold Rodriguez, Diane Beck, David Lind (2008) Audio analysis of human/virtual-human interaction

The audio of the spoken dialogue between a human and a virtual human (VH) is analyzed to explore the impact of H-VH interaction. The goal is to determine if conversing with a VH can elicit detectable and systematic vocal changes. To study this topic, we examined the H-VH scenario of pharmacy students speaking with immersive VHs playing the role of patients. The audio analysis focused on the students’ reaction to scripted empathetic challenges. Empathetic challenges are VH-initiated dialogue designed to generate an unrehearsed affective response from the human. The responses were analyzed with software developed to analyze vocal patterns during H-VH conversations. The analysis shows that although some of the vocal changes are undetectable by the human ear, they do occur, are detectable by digital signal processing, and are consistent across participant groups. Further, these changes are correlated with known H-H conversation patterns.

Collocated AAR: Augmenting After Action Review with Mixed Reality

John Quarles, S Lampotang, Ira Fischler (2008) Collocated AAR: Augmenting After Action Review with Mixed Reality

This paper proposes collocated After Action Review (AAR) of training experiences. Through Mixed Reality (MR), collocated AAR allows users to review past training experiences in situ with the user’s current, real-world experience. MR enables a user- controlled egocentric viewpoint, a visual overlay of virtual information, and playback of recorded training experiences collocated with the user’s current experience. Collocated AAR presents novel challenges for MR, such as collocating time, interactions, and visualizations of previous and current experiences. We created a collocated AAR system for anesthesia education, the Augmented Anesthesia Machine Visualization and Interactive Debriefing system (AAMVID). The system was evaluated in two studies by students (n=19) and educators (n=3). The results demonstrate how collocated AAR systems such as AAMVID can: (1) effectively direct student attention and interaction during AAR and (2) provide novel visualizations of aggregate student performance and insight into student understanding for educators.

Virtual human+ tangible interface= mixed reality human an initial exploration with a virtual breast exam patient

Aaron Kotranza (2008) Virtual human+ tangible interface= mixed reality human an initial exploration with a virtual breast exam patient

Virtual human (VH) experiences are receiving increased attention for training real-world interpersonal scenarios. Communication in interpersonal scenarios consists of not only speech and gestures, but also relies heavily on haptic interaction – interpersonal touch. By adding haptic interaction to VH experiences, the bandwidth of human-VH communication can be increased to approach that of human-human communication. To afford haptic interaction, a new species of embodied agent is proposed – mixed reality humans (MRHs). A MRH is a virtual human embodied by a tangible interface that shares the same registered space. The tangible interface affords the haptic interaction that is critical to effective simulation of interpersonal scenarios. We applied MRHs to simulate a virtual patient requiring a breast cancer screening (medical interview and physical exam). The design of the MRH patient is presented. This paper also presents the results of a pilot study in which eight (n = 8) physician-assistant students performed a clinical breast exam on the MRH patient. Results show that when afforded haptic interaction with a MRH patient, users demonstrated interpersonal touch and social engagement similarly to interacting with a human patient.

Virtual humans elicit skin-tone bias consistent with real-world skin-tone biases

Brent Rossen, Kyle Johnsen, Adeline Deladisma, Scott Lind (2008) Virtual humans elicit skin-tone bias consistent with real-world skin-tone biases

In this paper, we present results from a study showing that a dark skin-tone VH agent elicits user behavior consistent with real-world skin-tone biases. Results from a study with medical students (n=21) show that participant empathy towards a dark skin-tone VH patient was predicted by their measured bias towards African-Americans. Real-world bias was measured using a validated psychological instrument called the implicit association test (IAT). Scores on the IAT were significantly correlated to coders' ratings of participant empathy. This result indicates that VHs elicit realistic responses and could become an important component in cultural diversity training.

A mixed reality approach for merging abstract and concrete knowledge

John Quarles, S. Lampotang, Ira Fischler (2008) A mixed reality approach for merging abstract and concrete knowledge (1), p. 27-34

Mixed reality’s (MR) ability to merge real and virtual spaces is applied to merging different knowledge types, such as abstract and concrete knowledge. To evaluate whether the merging of knowledge types can benefit learning, MR was applied to an interesting problem in anesthesia machine education. The Virtual Anesthesia Machine (VAM) is an interactive, abstract 2D transparent reality [14] simulation of the internal components and invisible gas flows of an anesthesia machine. It is widely used in anesthesia education. However, when presented with an anesthesia machine, some students have difficulty transferring abstract VAM knowledge to the concrete real device. This paper presents the Augmented Anesthesia Machine (AAM). The AAM applies a magic-lens approach to combine the VAM simulation and a real anesthesia machine. The AAM allows students to interact with the real anesthesia machine while visualizing how these interactions affect the internal components and invisible gas flows in the real-world context. To evaluate the AAM’s learning benefits, a user study was conducted. Twenty participants were divided into either the VAM (abstract only) or AAM (concrete+abstract) conditions. The results of the study show that MR can help users bridge their abstract and concrete knowledge, thereby improving their knowledge transfer into real-world domains.

Tangible user interfaces compensate for low spatial cognition

John Quarles, S. Lampotang, Ira Fischler (2008) Tangible user interfaces compensate for low spatial cognition, p. 11-18

This research investigates how interacting with Tangible User Interfaces (TUIs) affects spatial cognition. To study the impact of TUIs, a between subjects study was conducted (n=60) in which students learned about the operation of an anesthesia machine. A TUI was compared to two other interfaces commonly used in anesthesia education: (1) a Graphical User Interface (a 2D abstract simulation model of an anesthesia machine) and (2) a Physical User Interface (a real world anesthesia machine). Overall, the TUI was found to significantly compensate for low user spatial cognition in the domain of anesthesia machine training.

 

IPSViz: An After-Action Review Tool for Human-Virtual Human Experiences

Andrew B. Raij, Benjamin C. Lok (2008) IPSViz: An After-Action Review Tool for Human-Virtual Human Experiences, p. 91-98, IEEE

This paper proposes after-action review (AAR) with human-virtual human (H-VH) experiences. H-VH experiences are seeing increased use in training for real-world, H-H experiences. To improve training, the users of H-VH experiences need to review, evaluate, and get feedback on them. AAR enables users to review their H-VH interaction, evaluate their actions, and receive feedback on how to improve future real-world, H-H experiences. The Interpersonal Scenario Visualizer (IPSViz), an AAR tool for H-VH experiences, is also presented. IPSViz allows medical students to review their interactions with VH patients. To enable review, IPSViz generates spatial, temporal, and social visualizations of H-VH interactions. Visualizations are generated by treating the interaction as a set of signals. Interaction signals are captured, logged, and processed to generate visualizations for review, evaluation and feedback. The results of a user study (N=27) show that reviewing the visualizations helps students become more self-aware of their actions with a virtual human and gain insight into how to improve interactions with real humans.

An Evaluation of Immersive Displays for Virtual Human Experiences

Kyle Johnsen, Benjamin Lok (2008) An Evaluation of Immersive Displays for Virtual Human Experiences, p. 133-136, IEEE

This paper compares a large-screen display to a non-stereo head-mounted display (HMD) for a virtual human (VH) experience. As VH experiences are increasingly being applied to training, it is important to understand the effect of immersive displays on user interaction with VHs. Results are reported from a user study (n=27) of 10 minute human-VH interactions in a VH experience which allows medical students to practice communication skills with VH patients. Results showed that student self-ratings of empathy, a critical doctor-patient communication skill, were significantly higher in the HMD; however, when compared to observations of student behavior, students using the large-screen display were able to more accurately reflect on their use of empathy. More work is necessary to understand why the HMD inhibits students’ ability to self-reflect on their use of empathy.

The validity of a virtual human experience for interpersonal skills education

K. Johnsen, A. Raij, A. Stevens, D.S. Lind, B. Lok (2007) The validity of a virtual human experience for interpersonal skills education, p. 1049–1058, ACM

Any new tool introduced for education needs to be validated. We developed a virtual human experience called the Virtual Objective Structured Clinical Examination (VOSCE). In the VOSCE, a medical student examines a life-size virtual human who is presenting symptoms of an illness. The student is then graded on interview skills. As part of a medical school class requirement, thirty-three second-year medical students participated in a user study designed to determine the validity of the VOSCE for testing interview skills. In the study, participant performance in the VOSCE is compared to participant performance in the OSCE, an interview with a trained actor. There was a significant correlation (r(33)=.49, p<.005) between overall score in the VOSCE and overall score in the OSCE. This means that the interaction skills used with a virtual human translate to the interaction skills used with a real human. Comparing the experience of virtual human interaction to real human interaction is the critical validation step towards using virtual humans for interpersonal skills education.

Comparing Interpersonal Interactions with a Virtual Human to Those with a Real Human

Andrew B. Raij, Kyle Johnsen, Robert F. Dickerson, Benjamin C. Lok, Marc S. Cohen, Margaret Duerson, Rebecca Rainer Pauly, Amy O. Stevens, Peggy Wagner, D. Scott Lind (2007) Comparing Interpersonal Interactions with a Virtual Human to Those with a Real Human 13(3), p. 443-457

This paper provides key insights into the construction and evaluation of interpersonal simulators - systems that enable interpersonal interaction with virtual humans. Using an interpersonal simulator, two studies were conducted that compare interactions with a virtual human to interactions with a similar real human. The specific interpersonal scenario employed was that of a medical interview. Medical students interacted with either a virtual human simulating appendicitis or a real human pretending to have the same symptoms. In study I (n=24), medical students elicited the same information from the virtual and real human, indicating that the content of the virtual and real interactions were similar. However, participants appeared less engaged and insincere with the virtual human. These behavioral differences likely stemmed from the virtual human's limited expressive behavior. Study II (n=58) explored participant behavior using new measures. Nonverbal behavior appeared to communicate lower interest and a poorer attitude toward the virtual human. Some subjective measures of participant behavior yielded contradictory results, highlighting the need for objective, physically-based measures in future studies.

Rapidly Incorporating Real Objects for Evaluation of Engineering Designs in a Mixed Reality Environment

Xiyong Wang, Aaron Kotranza, John Quarles, Benjamin Lok, B Danette Allen (2005) Rapidly Incorporating Real Objects for Evaluation of Engineering Designs in a Mixed Reality Environment

We explore using a Mixed Environment (ME) system to rapidly incorporate and visualize real objects into virtual environments (VEs). In this paper we discuss some preliminary results of a system that enables users to manipulate scanned, articulated virtual representations of real objects, such as tools, parts, and physical correlates to complex computer-aided design (CAD) models. Engineers and designers are capable of effectively conducting assembly design verification, as the ME can simulate these tasks at a high degree of fidelity. We have partnered with NASA Langley Research Center, and aim to use MEs to aid in creating assembly documents for upcoming payloads from CAD model designs.

A pipeline for rapidly incorporating real objects into a mixed environment

X. Wang, A. Kotranza, J. Quarles, B. Lok, B.D. Allen (2005) A pipeline for rapidly incorporating real objects into a mixed environment, p. 170-173, IEEE

A method is presented to rapidly incorporate real objects into virtual environments using laser scanned 3D models with color-based marker tracking. Both the real objects and their geometric models are put into a Mixed Environment (ME). In the ME, users can manipulate the scanned, articulated real objects, such as tools, parts, and physical correlates to complex computer-aided design (CAD) models. Our aim is to allow engineering teams to effectively conduct hands-on assembly design verification. This task would be simulated at a high degree of fidelity, and would benefit from the natural interaction afforded by a ME with many specific real objects.

Evaluating a script-based approach for simulating patient-doctor interaction

R Dickerson, K Johnsen, A Raij, B Lok (2005) Evaluating a script-based approach for simulating patient-doctor interaction, p. 79-84

Experiences in Using Immersive Virtual Characters to Educate Medical Communication Skills

K. Johnsen, R. Dickerson, A. Raij, B. Lok, J. Jackson, M. Shin, J. Hernandez, A. Stevens, D.S. Lind (2005) Experiences in Using Immersive Virtual Characters to Educate Medical Communication Skills, p. 179-186, IEEE

This paper presents a system which allows medical students to experience the interaction between a patient and a medical doctor using natural methods of interaction with a high level of immersion. We also present our experiences with a pilot group of medical and physician assistant students at various levels of training. They interacted with projector-based life-sized virtual characters using gestures and speech. We believe that natural interaction and a high level of immersion facilitate the education of communication skills. We present the system details as well as the participants’ performance and opinions. The study confirmed that the level of immersion contributed significantly to the experience, and participants reported that the system is a powerful tool for teaching and training. Applying the system to formal communication skills evaluation and further scenario development will be the focus of future research and refinement.

The Virtual Experiences Research Group
