About TEILab


TAMU Embodied Interaction Lab (TEILab)


Director: Francis Quek

The TEILab is dedicated to interaction research grounded in the proposition that the human mind is embodied. The focus of the lab revolves around the theme of embodied interaction and processes. If the mind is indeed embodied, in the sense that it is 'designed' to function within the physical, temporal, and social world, how do we understand thinking, abstraction, learning, and creativity? For HCI, the question is how we might build systems that support learning, creating, knowing, sensemaking, collaborating, and communicating -- in essence, how we might enhance experience in all of these across all human populations. This is a broad vision. Our current focus is on systems that support human activity in three general areas: learning, support for individuals with disabilities (blindness and severe visual impairment), and general experience.

The TEILab was 'started' in 2013 when Francis Quek moved to Texas A&M University, but its history dates back to 1993 as the Vision Interfaces and Systems Lab, or VISLab. The name change reflects an evolution in thinking that took us from a focus on 'how' (in engineering terms) we approach Human-Computer Interaction to 'what', in essence, we investigate in HCI.

The VISLab was dedicated to research at the intersection of Human-Computer Interaction and Computer Vision. Although Computer Vision remains within the research repertoire of TEILab, we have expanded our research to include aspects of Physical Computing and Making.

Early research in the lab focused on the study of gesture, speech, and gaze in real human-to-human multimodal discourse. The theoretical foundations of this research largely arise from Quek's strong collaboration with David McNeill (Psychology, U. of Chicago), which began in 1992. The study of human multimodal language ultimately requires an understanding of the production of multimodal behavioral packages that include visible behavioral displays and vocal utterances. The underlying hypothesis is that joint speech and embodied behavior spring from the same source. This insight -- that even language, which one might consider the pinnacle of symbolic expression, comes from human embodiment -- fuels the perspective toward which TEILab has evolved.

A major portion of Quek's and TEILab's research revolves around the theme of embodied interaction. TEILab also has a legacy of a more computationally intensive research arc in computer vision and image/video understanding. Most current research in this area focuses on understanding multimodal human communication across multiple media streams. Earlier work included a focus on medical image processing.

On this site, we group our research under three headings: Human-Computer Interaction, Multimodal Communication Analysis, and Medical Imaging and Computer Vision.

Examples of projects encompassed by our research include:

  •  Understanding of human multimodal discourse in dyadic and meeting settings
  •  Processing and analysis of temporal events to understand multimodal behavior
  •  Agent-based video analysis of body, head, and hand movements
  •  Support for multimodal instructional discourse for teaching mathematics and science to individuals who are blind or visually impaired
  •  Spatial reading for individuals who are blind or visually impaired using the multimedia and touch capabilities of iPad-type devices with tactile enhancements
  •  Understanding of the role of the technological medium, especially in the domain of multimedia storytelling, in the creative process in children
  •  Mathematics learning in children using tangible media like tangrams
  •  How devices, handhelds, computers, surfaces, and environmental displays function in technology ecologies, especially to support learning
  •  How distance touch may enhance affective communication and a sense of connectedness between individuals
  •  How social physical proximity may be used to index cotemporally accessed information for re-finding and to extend the meaningfulness of the information
  •  Crowd simulation through emergent group behaviors arising from the need for micro- and macro-coordination activities that maintain group ‘common ground’
  •  How medical images of blood vessels may be processed to extract the vessel and artery trees
  •  How 3D medical images may be processed to extract various surface manifolds and their curvature characteristics

Besides embodied interaction and research in human multimodal communication, TEILab continues to have an interest in computer vision research. Our lab website revolves around the ideas and projects that give the laboratory its intellectual life.

TEILab Projects by Domain

Projects in the lab span a wide variety of domains and subfields, including Education, Media, Mediated Communication, Multimodal Analysis, Health, Social Interaction, Entertainment, Interaction Technology, Universal Access, and Methodology. The table below lists the lab's projects and indicates each project's status (Current, Recent, or Dormant):

Project                              Status
Avatar for Tutoring                  Dormant
Crowd Modeling                       Recent
Drummer Game                         Current
Finger Walking In Place              Recent
Grounded Creativity                  Current
Radical Design in HCI                Current
SocialOrb                            Recent
STAAR                                Current
TanTab                               Recent
Technology Ecologies for Learning    Current
Remote Social Touch                  Current
Math for Blind                       Recent
Narratives for Elderly               Recent
Physical Computing                   Current
Agent Tracking                       Recent
KABAMM                               Recent
MacVisTA                             Recent
Meeting Analysis                     Recent
MirrorTrack                          Dormant
Vision-GPU                           Recent
