About


The eMotion and eCognition lab pursues two distinct avenues toward understanding the human mind and human communication. First, how do we understand and interact with each other? Second, how should current and future technologies develop to interact with us in human-machine collaborations?

eMotion 

We began from the cognitive and connectionist foundations for investigating the recognition of static human faces and objects, and then worked to incorporate the information that real-world dynamic changes offer to human observers (and to human actors and interactors). On the eMotion side of the lab, movement of the human body and face is recorded using magnetic motion capture and advanced video capture equipment to track posture, gesture, and facial expression during a variety of experimental tasks. The lab uses an Ascension Technologies MotionStar system with 16 sensors, which can track full-body skeletal motion as participants converse or learn and perform motor tasks (see the example on our Photos/Videos page).

Dyadic conversations are conducted over a "video-phone": participants sit in separate rooms, each in a soundproof video booth where they see the rear-projected image of their fellow conversant. Using cutting-edge video and audio processing, we can manipulate each conversant's image and voice in real time in order to understand the dynamics of coordination between individuals during conversation.

All of these experiments are run from a computerized media control room overlooking the two experiment rooms, from which it is sound-isolated. Special non-ferrous construction and wooden platforms were used to reduce interference with the magnetic fields used for motion capture. A conference room with internet connectivity is used for video links to labs at other universities during lab meetings.

eCognition 

Humans interact with and rely on technology to an ever-increasing degree; even our vacuum cleaners are becoming intelligent and autonomous. On the eCognition side, lab research attempts to understand how humans interact with technology and with new technological or artificial entities. For example: Do humans regard digital objects differently than tangible objects? Is it somehow more acceptable to take a digital music file you don't own than to take a bicycle you don't own? Is it easier to plagiarize something on the web because it is digital? Do we respond to others more openly and honestly via email than face to face? Does the apparent race or gender of a machine or device affect how we interact with it? How should machines best present themselves to humans to encourage effective interactions? How and why do we anthropomorphize robots or other artificial entities? How do humans interact with objects and avatars in virtual reality?

Drawing on what we have learned from analyses of facial expressions and postures, as well as from studies using our autonomous moving and speaking robots, Rudy and Nao, research on the eCognition side of the lab aims to foster a better psychological understanding and acceptance of human-machine interactions and collaborations.

eMoteCog . . .

[eMoteCog animation]

is the abbreviated version of our lab name. It also sounds like some kind of weird entity. So we wondered, "What might an eMoteCog look like?" Here is our best guess (courtesy of Dr. Michael Mangini).