Attentionally-Based Interaction

From TEILab

Project Team

  • Dr. Francis Quek
  • Dr. Fady Charbel - Neurosurgeon
  • Cemil Kirbas - Ph.D. Student (graduated)

Note: This is a past project that is currently dormant

Brief Description

Figure: Basic model of Attentionally-Based Interaction

In this research, we investigated attention manipulation as the conduit of communication between the human and computer to support radiologists and cartographers. By controlling what to look for and where to look, the user can drive underlying machine vision processes to extract the appropriate entities from medical images and maps.

The fundamental insight is that humans communicate by modeling each other's conceptual and spatial attention. The former allows us to define a conceptual abstraction hierarchy of increasing specificity (or decreasing abstraction). Machine vision algorithms designed to extract the specified abstraction are run on the image at interactively specified locations to locate and extract the appropriate entity. For example, in the figure, the communicator tells the communicant (the machine) to find the middle cerebral artery (MCA) in an angiogram. The machine complies if it can, highlighting the object for extraction. If, for example, the image is too noisy to extract the MCA automatically, the user may request a context shift to the artery-boundary abstraction. The system will then try to find one edge of the artery, and then the other, as directed by the user. Finally, the user may request a pixel-level abstraction and manually specify the location of each boundary point in the image.
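The context-shift behavior described above can be sketched as a fallback chain over the abstraction hierarchy. This is an illustrative sketch only, not the original AIM implementation; the extractor functions and their names are hypothetical stubs.

```python
# Hypothetical sketch of AIM's conceptual abstraction hierarchy:
# extraction falls back to a less abstract (more manual) level when
# the current level fails. All extractors here are illustrative stubs.

def extract_mca(image, location):
    """Try to extract the whole middle cerebral artery (stub)."""
    return None  # pretend the image is too noisy for this level

def extract_artery_boundary(image, location):
    """Try to extract one edge of the artery (stub)."""
    return None  # pretend this level fails as well

def extract_pixel(image, location):
    """Pixel-level abstraction: accept the user's point directly."""
    return [location]

# Ordered from most to least abstract; in AIM a failure prompts the
# user to request a context shift down to the next level.
HIERARCHY = [extract_mca, extract_artery_boundary, extract_pixel]

def extract_with_context_shift(image, location):
    """Walk down the hierarchy until some extractor succeeds."""
    for extractor in HIERARCHY:
        result = extractor(image, location)
        if result is not None:
            return extractor.__name__, result
    return None, None

# With both higher levels failing, control falls through to the
# pixel level, where the user's point is taken verbatim.
print(extract_with_context_shift(None, (120, 84)))
```

The essential design point is that each level trades more user effort for more robustness, so the chain always terminates successfully at the fully manual pixel level.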

The spatial attention mechanism can be manipulated as well. In a very clean image, the system can use a large search window that follows the cursor as the user moves it across the image. When the system detects an object matching the requested context (e.g., the MCA), it highlights the object, and the user can simply accept the recognition and extraction. With increasing image noise, the user can request a smaller search window. In essence, this reduces the attentional region that must be processed, trading human effort against computational demands.
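The spatial attention window amounts to a cursor-centered region whose size the user adjusts to the image quality. A minimal sketch, assuming simple (row, col) coordinates and a clipped square window (the function name and signature are illustrative, not from the original system):

```python
# Illustrative sketch of the spatial attention mechanism: a search
# window centered on the cursor, clipped to the image bounds. The
# user shrinks window_size as image noise increases.

def attention_window(cursor, window_size, image_shape):
    """Return (top, bottom, left, right) bounds of the search window."""
    cy, cx = cursor
    h, w = image_shape
    half = window_size // 2
    top, left = max(0, cy - half), max(0, cx - half)
    bottom, right = min(h, cy + half + 1), min(w, cx + half + 1)
    return top, bottom, left, right

# Clean image: large window, little user effort.
print(attention_window((100, 100), 81, (512, 512)))  # (60, 141, 60, 141)
# Noisy image: a smaller window, trading user effort for reliability.
print(attention_window((100, 100), 11, (512, 512)))  # (95, 106, 95, 106)
```

Only pixels inside the returned bounds would be handed to the machine vision extractor, which is why a smaller window directly reduces the computation per cursor movement.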

The Attentionally-Based Interaction Model (AIM) thus describes a conversational stream through which the communicator (the user) manipulates the dual contextual and spatial foci to facilitate 'fail-safe' image interpretation.


Publications

  • Quek, F.K.H., Kirbas, C., and Charbel, F., “AIM: An Attentionally-Based System for the Interpretation of Angiography,” Proceedings of the IEEE Medical Imaging and Augmented Reality 2001 Conference, Hong Kong, June 10-12, 2001, pp. 168-173.
  • Quek, F., Kirbas, C., and Charbel, F., “Neurovascular Image Interpretation with an Attentionally-Based Interaction Model (AIM),” Computer Assisted Radiology and Surgery (CARS), San Francisco, CA, June 28-July 1, 2000, p. 1010.
  • Quek, F., Kirbas, C., and Charbel, F., “AIM: Attentionally-based interaction model for the interpretation of vascular angiography,” IEEE Transactions on Information Technology in Biomedicine, vol. 3, no. 2, June 1999, pp. 139-150.
  • Quek, F., and Petro, M., “Human-Machine Perceptual Cooperation,” Proceedings of the International Conference on Computer-Human Interaction INTERCHI '93: Human Factors in Computing Systems, Amsterdam, The Netherlands, April 24-29, 1993, pp. 123-130.


This research was supported by the Whitaker Foundation: “Extraction and Registration of the Neurovascular Scaffold in Multimodal Images,” Whitaker-96-0458.