29 March 2011

Modeling the Dynamics of Social Interactions with Kinect = Better Healthcare

During Health and Wellness Innovation 2011, the Microsoft Kinect was also used by Jin Joo Lee (Personal Robots Group at the MIT Media Lab) to advance her research in modeling the dynamics of social interaction. Her goal is to better understand the subtle cues in human-human communication in order to improve human-robot interaction. Her project applies machine learning and gesture recognition algorithms to motion capture data from the Kinect to detect nonverbal cues, including mimicry and synchronous movement.
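To make the idea concrete, here is a minimal sketch, not Lee's actual implementation, of one way synchronous movement might be quantified from two Kinect skeleton streams: compute each person's per-frame movement speed from their joint positions, then take the peak lagged correlation between the two signals. All function names, array shapes, and thresholds below are illustrative assumptions.

```python
import numpy as np

def joint_speeds(skeleton_frames, fps=30.0):
    """Per-frame movement speed from a (T, J, 3) array of Kinect joint
    positions (T frames, J joints, xyz in meters)."""
    diffs = np.diff(skeleton_frames, axis=0)          # (T-1, J, 3) frame-to-frame motion
    per_joint = np.linalg.norm(diffs, axis=2) * fps   # speed of each joint in m/s
    return per_joint.mean(axis=1)                     # average speed per frame

def lagged_corr(a, b, lag):
    """Correlation of two z-scored signals at a given frame lag."""
    if lag < 0:
        a, b = a[-lag:], b[:lag]
    elif lag > 0:
        a, b = a[:-lag], b[lag:]
    return float(np.mean(a * b))

def movement_synchrony(skel_a, skel_b, max_lag=15, fps=30.0):
    """Peak lagged correlation between two people's movement signals;
    values near 1 within a small lag window suggest synchronous movement."""
    a = joint_speeds(skel_a, fps)
    b = joint_speeds(skel_b, fps)
    a = (a - a.mean()) / (a.std() + 1e-8)   # z-score so correlation is scale-free
    b = (b - b.mean()) / (b.std() + 1e-8)
    return max(lagged_corr(a, b, lag) for lag in range(-max_lag, max_lag + 1))
```

In practice a measure like this would be only one feature among many fed to machine learning models trained on labeled interaction data, but it illustrates the kind of signal the Kinect's skeleton stream makes available.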

Improved understanding and tracking of the dynamics of social interaction will have a significant impact on the future of healthcare delivery. Remote medical care is becoming a reality: many healthcare institutions have telemonitoring programs, and a number of solutions, like American Well, are becoming available online. The issue is that the fidelity of social interactions is currently blunted by the affordances of the technology being used, namely simple webcams. Important social cues are dropped; even so, the efficiency and cost-effectiveness of new models of care delivery will likely prove their benefit despite occasional failures due to miscommunication. New technologies for tracking social cues, however, will improve fidelity and allow even greater advances in care. Body language and facial expression tracking will alert clinicians to unnoticed patient needs and will help to bolster rapport. This technology may even prove helpful in face-to-face interactions.

Automated on-screen agents and physical robots will also have a significant role in the future of healthcare delivery. Tim Bickmore of Northeastern University, a graduate of the Affective Computing group, has shown overwhelmingly positive patient responses to relational agents, especially in a recent study of the hospital discharge process. Cory Kidd, a graduate of the Personal Robots Group and the founder of Intuitive Automata, produced a substantial research study on the use of a robot as a home weight-loss coach. Their research has been successful, even though current on-screen agents and physical robots cannot develop rapport as rich as a human's, because it leverages the advantages of the technology. An on-screen agent has infinite patience; it does not interrupt or keep its hand on the doorknob during conversations. A robot can be there for the patient 24 hours a day, can log information carefully, and can provide meaningful feedback for self-reflection. Improved tracking of the dynamics of social interaction will only increase the potential of these tools, making them more capable of engaging patients in their care and motivating them toward positive health-related behavior change.

Narration Transcript:

Much like the personal computer, robots can become a technology that is part of our daily lives.

And as robots begin to interact with us, they need to be capable of perceiving and understanding our social nuances to effectively communicate with us.

Researchers in the Personal Robots Group are exploring nonverbal cues to design robots capable of “socially synching.”

Research in human social psychology has found that mimicry and synchronous movement behavior are building blocks in fostering trust and rapport between people.

To model such behaviors, we need a full-body digital perception of how people move in a social interaction.

By using motion capture technology like the Xbox Kinect, we can track the body movements of people. And through machine learning and gesture recognition algorithms, we can detect what nonverbal cues are being communicated.

We also want to have a close look at people’s faces to detect facial cues like gaze, nodding, and smiling.

By having this head-to-toe representation, we can model the dynamics of social interactions between people in order to design for more effective human-robot interactions.
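As a companion to the narration's mention of facial cues like nodding, here is a similarly hedged sketch, again not the group's actual pipeline, of how a head nod might be flagged from a head-pitch time series produced by any face or head-pose tracker. The function name, window size, and thresholds are hypothetical.

```python
import numpy as np

def detect_nod(pitch_deg, fps=30.0, window_s=1.0,
               min_amplitude_deg=5.0, min_direction_changes=2):
    """Flag head nods in a 1-D head-pitch time series (degrees).

    Heuristic: within a sliding window, a nod shows several up/down
    direction reversals and enough vertical amplitude. Returns a boolean
    array with one entry per window start.
    """
    pitch = np.asarray(pitch_deg, dtype=float)
    win = int(window_s * fps)
    flags = []
    for start in range(0, len(pitch) - win):
        seg = pitch[start:start + win]
        amplitude = seg.max() - seg.min()          # vertical range within the window
        vel = np.diff(seg)                         # frame-to-frame pitch change
        direction_changes = np.sum(np.diff(np.sign(vel)) != 0)  # up/down reversals
        flags.append(amplitude >= min_amplitude_deg and
                     direction_changes >= min_direction_changes)
    return np.array(flags, dtype=bool)
```

A heuristic like this would likely serve only as a baseline or as a feature for a learned model, but it shows how the facial-cue half of the head-to-toe representation can be reduced to simple time-series signals.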