Segmentation is the process of identifying features within an image or volumetric data set and is a necessary precursor to constructing a high-level description of structures. Segmentation is usually performed by non-interactive, global processing of an image or volume. The Immersive Segmentation system implements a different interaction paradigm, characterized by the local application of computationally expensive algorithms under the direct control of the user. This paradigm allows users to apply their expert knowledge in interpreting and guiding the segmentation process.

Capabilities & Requirements

The Immersive Segmentation system was originally implemented at the NASA Ames Biocomputation Center using Fakespace's Immersive Workbench technology. As part of a joint project with the SUMMIT group of the Stanford Medical School, funded by the National Library of Medicine, the system was adapted to operate as a client/server application over next-generation internet connections.

The system is capable of producing multiple visualization streams, allowing remotely separated users to collaborate on the segmentation of structures. Each stream can be transported either over unicast UDP or as a multicast stream. Depending on the mode of operation, each stream can require up to 40 Mbps of bandwidth.
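
A minimal sketch of the two transport options, in Python, is shown below. The payload, port number, addresses, and TTL are illustrative placeholders, not values used by the actual system.

    import socket

    PACKET = b"\x00" * 1024   # stand-in for one packet of a visualization stream
    PORT = 5004               # hypothetical stream port

    # Unicast UDP: each packet is addressed to one specific client.
    uni = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    uni.sendto(PACKET, ("192.0.2.10", PORT))        # placeholder client address

    # Multicast: the packet is sent once to a group address; every client
    # subscribed to that group receives a copy.
    multi = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    multi.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)  # limit hop count
    multi.sendto(PACKET, ("239.1.2.3", PORT))       # placeholder group address

Multicast lets the server pay the bandwidth cost of a stream once, no matter how many collaborators subscribe to it.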

The client workstation consists of a stereo-capable computer and a 3D input device. A popup menu, displayed within the stereo visualization, allows the user to choose between operating modes. The system supports two basic segmentation algorithms, both of which analyze the data within a local neighborhood of the user's probe to determine and adapt the parameters that control the algorithm's operation.
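
One simple form such a neighborhood analysis could take is sketched below, assuming the volume is a numpy array: an intensity acceptance interval is derived from the statistics of the voxels around the probe. The radius and the k multiplier are illustrative assumptions; the system's actual analysis is not detailed here.

    import numpy as np

    def neighborhood_params(volume, probe, radius=4, k=2.0):
        """Derive an intensity acceptance interval from voxels near the probe."""
        # Assumes the probe is at least `radius` voxels from the volume boundary.
        x, y, z = probe
        cube = volume[x - radius:x + radius + 1,
                      y - radius:y + radius + 1,
                      z - radius:z + radius + 1]
        mu, sigma = cube.mean(), cube.std()
        return mu - k * sigma, mu + k * sigma   # accept values within k std. devs.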

The first method is a region-filling algorithm. Visually, the user appears to inject an opaque stain into the data set that flows through regions with similar characteristics. The second uses the initial analysis of the local neighborhood to produce a category description of the data values of primary interest to the user. As the user moves through the volume, data elements within this category are rendered opaque.
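
Continuing the hypothetical neighborhood_params sketch, the first method can be pictured as a breadth-first flood fill seeded at the probe, with the stain spreading to face-adjacent voxels whose values fall inside the derived interval:

    from collections import deque
    import numpy as np

    def region_fill(volume, seed, lo, hi):
        """Mark voxels connected to seed whose values lie in [lo, hi]."""
        mask = np.zeros(volume.shape, dtype=bool)
        queue = deque([seed])
        while queue:
            x, y, z = queue.popleft()
            if mask[x, y, z] or not (lo <= volume[x, y, z] <= hi):
                continue
            mask[x, y, z] = True              # this voxel joins the opaque "stain"
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nx, ny, nz = x + dx, y + dy, z + dz
                if (0 <= nx < volume.shape[0] and 0 <= ny < volume.shape[1]
                        and 0 <= nz < volume.shape[2]):
                    queue.append((nx, ny, nz))
        return mask

The second method amounts to the same interval test applied pointwise, without the connectivity constraint: any voxel near the moving probe whose value falls in the category interval is rendered opaque.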

Haptics

In addition to the visualization stream, the server also computes, and streams to the client, a description of the local neighborhood of the probe. This description consists of a continuously updated random sample of visually opaque structures in the neighborhood of the user's position. Using this sample, the client produces a contact force when the user's probe touches segmented structures. This sense of touch is used in two ways. First, it allows the user to position the probe more accurately relative to segmented structures and to initiate new segmentation actions. Second, the force information, and the user's response to the force, are communicated back to the server, where they can be used as an additional control input in guiding the course of segmentation actions.
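
One plausible client-side force model built on this sample is a penalty scheme: each sampled opaque point within the probe's radius contributes a spring-like repulsion. The sketch below assumes numpy arrays for positions; the radius and stiffness values are illustrative, not the system's parameters.

    import numpy as np

    def contact_force(probe_pos, sample_points, radius=0.01, stiffness=400.0):
        """Sum spring-like repulsions from sampled points the probe penetrates."""
        force = np.zeros(3)
        for p in sample_points:          # random sample streamed from the server
            offset = probe_pos - p
            dist = np.linalg.norm(offset)
            if 0.0 < dist < radius:      # probe is inside the contact radius
                force += stiffness * (radius - dist) * (offset / dist)
        return force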

Movies

This cross-eyed stereo pair movie (Streaming Quicktime, MPEG-4 Video, 4'3", 49MB) shows a segmentation of the Visible Human Male data set. The top and bottom halves of the frame are separate visualization streams that would normally be viewed at different workstations. They have been combined in this movie to show the coordination between the streams.

This cross-eyed stereo pair movie (Streaming Quicktime, MPEG-4 Video, 23", 9.5MB) shows the user's probe moving across a segmented rib in the left half of the frame. The right half of the frame shows a visualization of the haptic force as it is computed from the server generated random sample of the probes neighborhood.

Publications & Presentations
  • "An Immersive Environment for the Visualization and Segmentation of Large Volumetric Data Sets", Medicine Meets Virtual Reality VI, San Diego, Jan. 1998.
  • "User Directed Segmentation of the Visible Human Data Sets in an Immersive Environment" Proceedings of the Second Visible Human Conference, NIH, Bethesda Maryland, 1998.
  • "Visualizing and Segmenting Large Volumetric Data Sets", IEEE Computer Graphics and Applications, Vol. 19, No. 3, pp. 32-37, 1999.
  • "Harnessing Remote Computation to Visualize and Segment the Visible Human Over the Next Generation Internet", Proceedings of the Third Visible Human Conference, NIH, Bethesda Maryland, 2000.
  • "Stereoscopic Visualization and Immersive Segmentation of Volumetric Data Sets", Medicine Meets Virtual Reality, Newport Beach, CA. Jan 2001.
  • "Haptic Feedback to Facilitate Interactive Segmentation of Volumetric Data Sets", Medicine Meets Virtual Reality, Newport Beach, CA, Jan. 2002.
  • Demonstration, Internet2 Member Meeting, Los Angeles, Oct. 2002.
  • Demonstration, Radiological Society of North America, Chicago, Nov. 2002.
  • "Collaborative Segmentation of Volumetric Data Over a Next Generation Internet", Medicine Meets Virtual Reality, Newport Beach, CA, Jan. 2003.
  • Demonstration, Board of Regents, National Library of Medicine, Bethesda, Feb. 2003.
  • Invited Presentation, Hewlett Packard Research Labs, March 2003.
  • Demonstration, Radiological Society of North America, Chicago, Nov 2003.
  • Demonstration, Internet2 Member Meeting, Austin, TX, Sept. 2004.
  • Demonstration, Radiological Society of North America, Chicago, Nov. 2004.
  • "Integrating Haptics into an Immersive Environment for the Segmentation and Visualization of Volumetric Data, Proceedings of the First Joint Eurohaptics Conference and Symposium on Haptic Interfaces for Virtual Environments. WHC 2005, Pisa Iltay, IEEE Computer Society, pp 487-490.
  • "Visualizing Volumetric Data Sets Using A Wireless Handheld Computer. Proceedings of Medicine Meets Virtual Reality 13, Long Beach CA, IOS Press, 2005, pp 447-450.
  • Demonstration, Internet2 Member Meeting, Philadelphia PA, Sept. 2005.
  • Invited Presentation, Bradley University, Peoria, IL, Oct. 2005.
  • Demonstration with Internet2, Supercomputing 05, Seattle.
  • Demonstration, Radiological Society of North America, Chicago, Nov. 2005.

Support

This material is based upon work supported by the National Science Foundation under Grant No. 0222519. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

