A comprehensive survey of technological developments in Virtual Reality for use in medical education and simulated procedures
Medicine and the biological sciences have long relied on visualizations to illustrate the relationship between anatomic structure and biologic function. The new multidimensional imaging modalities are powerful counterparts to traditional forms of observation: surgery, postmortem examination, or extensive mental reconstruction. VR technologies have reached unimagined levels of sophistication and utility, giving physicians and students new avenues for planning and practicing surgery and diagnostics.
The two volumes of Information Technologies in Medicine thoroughly explore the use of VR technology in three-dimensional visualization techniques, realistic surgical training prior to patient contact, and actual procedures in rehabilitation and treatment, including telemedicine and telesurgery. Editors Akay and Marsh have brought together all the available information on the subject of VR technologies in medicine and medical training to create the first comprehensive guide to the state of the art in medicine for use by students, doctors, and researchers.
Volume I is devoted to the fundamentals of these new information technologies and their many applications in medical education and practice, especially in the area of medical and surgical simulations. Coverage includes:
* Virtual environment technologies
* The future of VR technologies in the twenty-first century
* Perceptualization of biomedical data
* Visualization in teaching anatomy
RICHARD A. ROBB, PH.D.
Director, Biomedical Imaging Resource
Mayo Foundation
Rochester, Minnesota
1.1.1 Hardware Platforms
1.1.2 Network Strategies
1.2 Methods
1.2.1 Image Processing and Segmentation
1.2.2 Tiling Strategies
1.2.3 Kohonen Shrinking Network
1.2.4 Growing Network
1.2.5 Deformable Adaptive Modeling
1.3 Applications
1.3.1 Prostate Microvessels
1.3.2 Trabecular Tissue
1.3.3 Corneal Cells
1.3.4 Neurons
1.3.5 Surgical Planning
1.3.6 Virtual Endoscopy
1.3.7 4-D Image-Guided Ablation Therapy
1.3.8 Anesthesiology Simulator
1.4 Summary
Acknowledgments
References
The practice of medicine and major segments of the biologic sciences have always relied on visualizations of the relationship of anatomic structure to biologic function. Traditionally, these visualizations either have been direct, via vivisection and postmortem examination, or have required extensive mental reconstruction, as in the microscopic examination of serial histologic sections. The revolutionary capabilities of new three-dimensional (3-D) and four-dimensional (4-D) imaging modalities and the new 3-D scanning microscope technologies underscore the vital importance of spatial visualization to these sciences. Computer reconstruction and rendering of multidimensional medical and histologic image data obviate the taxing need for mental reconstruction and provide a powerful new visualization tool for biologists and physicians. Voxel-based computer visualization has a number of important uses in basic research, clinical diagnosis, and treatment or surgery planning; but it is limited by relatively long rendering times and minimal possibilities for image object manipulation.
The use of virtual reality (VR) technology opens new realms in the teaching and practice of medicine and biology by allowing the visualizations to be manipulated with intuitive immediacy similar to that of real objects; by allowing the viewer to enter the visualizations, taking any viewpoint; by allowing the objects to be dynamic, either in response to viewer actions or to illustrate normal or abnormal motion; and by engaging other senses, such as touch and hearing (or even smell) to enrich the visualization. Biologic applications extend across a range of scale from investigating the structure of individual cells through the organization of cells in a tissue to the representation of organs and organ systems, including functional attributes such as electrophysiologic signal distribution on the surface of an organ. They are of use as instructional aids as well as basic science research tools. Medical applications include basic anatomy instruction, surgical simulation for instruction, visualization for diagnosis, and surgical simulation for treatment planning and rehearsal.
Although the greatest potential for revolutionary innovation in the teaching and practice of medicine and biology lies in dynamic, fully immersive, multi-sensory fusion of real and virtual information data streams, this technology is still under development and not yet generally available to the medical researcher. There are, however, a great many practical applications that require different levels of interactivity and immersion, that can be delivered now, and that will have an immediate effect on medicine and biology. In developing these applications, both hardware and software infrastructure must be adaptable to many different applications operating at different levels of complexity. Interfaces to shared resources must be designed flexibly from the outset and creatively reused to extend the life of each technology and to realize satisfactory return on the investment.
Crucial to all these applications is the facile transformation between an image space organized as a rectilinear N-dimensional grid of multivalued voxels and a model space organized as surfaces approximated by multiple planar tiles. The required degree of integration between these realms ranges between purely educational or instructional applications, which may be best served by a small library of static "normal" anatomical models, and individualized procedure planning, which requires routine rapid conversion of patient image data into possibly dynamic models. The most complex and challenging applications, those that show the greatest promise of significantly changing the practice of medical research or treatment, require an intimate and immediate union of image and model with real-world, real-time data. It may well be that the ultimate value of VR in medicine will derive more from the sensory enhancement of real experience than from the simulation of normally sensed reality.
Virtual reality deals with the science of perception. A successful virtual environment is one that engages the user, encouraging a willing suspension of disbelief and evoking a feeling of presence and the illusion of reality. Although arcade graphics and helmeted, gloved, and cable-laden users form the popular view of VR, it should not be defined by the tools it uses but rather by the functionality it provides. VR provides the opportunity to create synthetic realities for which there are no real antecedents and brings an intimacy to the data by separating the user from traditional computer interfaces and real-world constraints, allowing the user to interact with the data in a natural fashion.
Interactivity is key. To produce a feeling of immersion or presence (a feeling of being physically present within the synthetic environment) the simulation must be capable of real-time interactivity; technically, a minimum visual update rate of 30 frames per second and a maximum total computational lag time of 100 ms are required.
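These two criteria can be checked programmatically. The function below is an illustrative sketch (not from the chapter); only the 30 frames/s and 100 ms figures come from the text above.

```python
# Real-time interactivity criteria quoted above.
FRAME_RATE_MIN = 30    # minimum visual update rate (frames per second)
LAG_MAX_MS = 100.0     # maximum total computational lag (milliseconds)

def meets_realtime_criteria(frame_times_ms, lag_ms):
    """True when every frame fits the per-frame budget (1000/30 = ~33.3 ms)
    and the total tracker-to-display lag stays under the 100 ms ceiling."""
    budget_ms = 1000.0 / FRAME_RATE_MIN
    return max(frame_times_ms) <= budget_ms and lag_ms <= LAG_MAX_MS

# A run of 25-30 ms frames with 60 ms of lag satisfies both criteria.
print(meets_realtime_criteria([25.0, 28.0, 30.0], 60.0))  # True
```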
1.1.1 Hardware Platforms
My group's work in VR is done primarily on Silicon Graphics workstations, specifically an Onyx/Reality Engine and an Onyx2/Infinite Reality system. Together with Performer, these systems allow us to design visualization software that uses coarse-grained multiprocessing, reduces computational lag time, and improves the visual update rate. These systems were chosen primarily for their graphics performance and our familiarity with other Silicon Graphics hardware.
We support "fish-tank" immersion through the use of Crystal Eyes stereo glasses and fully immersive displays via Cybereye head-mounted displays (HMDs). By displaying interlaced stereo pairs directly on the computer monitor, the stereo glasses provide an inexpensive high-resolution stereo display that can be easily shared by multiple users. Unfortunately, there is a noticeable lack of presence and little separation from the traditional computer interface with this type of display. The HMD provides more intimacy with the data and improves the sense of presence. We chose the Cybereye HMD for our initial work with fully immersive environments on a cost/performance basis. Although it has served adequately for the initial explorations, its lack of resolution and restricted field of view limit its usefulness in serious applications. We are currently evaluating other HMDs and display systems to improve display quality, including the Immersive Workbench (FakeSpace) and the Proview HMD (Kaiser Electro-Optics).
Our primary three-space tracking systems are electromagnetic 6 degree of freedom (DOF) systems. Initially, three-space tracking was done using Polhemus systems, but we are now using an Ascension MotionStar system to reduce the noise generated by computer monitors and fixed concentrations of ferrous material. In addition to electromagnetic tracking, we support ultrasonic and mechanical tracking systems.
Owing to the nature of many of our simulations, we incorporate haptic feedback using a SensAble Technologies PHANToM. This provides force feedback in 3 degrees of freedom, which we find adequate for simulating most puncture, cutting, and pulling operations.
1.1.2 Network Strategies
VR simulations can run the gamut from single-user static displays to complex dynamic multiuser environments. To accommodate the various levels of complexity while maintaining a suitable degree of interactivity, our simulation infrastructure is based on a series of independent agents spread over a local area network (LAN). Presently, the infrastructure consists of an avatar agent running on one of the primary VR workstations and a series of device daemons running on other workstations on the network. The avatar manages the display tasks for a single user, and the daemon processes manage the various VR input/output (I/O) devices. The agents communicate via an IP multicasting protocol. IP multicasting is a means of transmitting IP datagrams to an unlimited number of hosts without duplication. Because each host can receive or ignore these packets at the hardware level simply by informing the network card which multicast channels to access, little additional computational load is placed on the receiving system. This scheme is scalable, allows efficient use of available resources, and offloads the secondary tasks of tracking and user feedback from the primary display systems.
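The device-daemon side of such a scheme can be sketched in a few lines. The multicast group, port, and 6-DOF pose layout below are hypothetical placeholders, not details from the chapter.

```python
import socket
import struct

# Hypothetical multicast channel for 6-DOF tracker samples (illustrative values).
MCAST_GROUP, MCAST_PORT = "239.0.0.1", 5007
POSE_FORMAT = "!6f"  # x, y, z, azimuth, elevation, roll as network-order floats

def pack_pose(x, y, z, az, el, roll):
    """Serialize one 6-DOF tracker sample into a UDP datagram payload."""
    return struct.pack(POSE_FORMAT, x, y, z, az, el, roll)

def unpack_pose(datagram):
    """Recover the 6-DOF sample on the receiving avatar."""
    return struct.unpack(POSE_FORMAT, datagram)

def make_sender():
    """UDP socket configured for LAN-scope multicast (TTL=1 keeps it local)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    return s

# A daemon's main loop would repeat:
#   sender.sendto(pack_pose(...), (MCAST_GROUP, MCAST_PORT))
```

Any host on the LAN can then join or ignore the group at the network-card level, which is what keeps the load on receivers low.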
1.2.1 Image Processing and Segmentation
Image sources for the applications discussed here include spiral CT, both conventional MRI and magnetic resonance (MR) angiography, the digitized macrophotographs of whole-body cryosections supplied by the National Library of Medicine (NLM) Visible Human project, serial stained microscope slides, and confocal microscope volume images. Many of these images are monochromatic, but others are digitized in full color and have a significant amount of information encoded in the color of individual voxels.
In all cases, the image data must be segmented, i.e., the voxels making up an object of interest must be separated from those not making up the object. Segmentation methods span a continuum between manual editing of serial sections and complete automated segmentation of polychromatic data by a combination of statistical color analysis and shape-based spatial modification.
All methods of automated segmentation that use image voxel values or their higher derivatives to make boundary decisions are negatively affected by spatial inhomogeneity caused by the imaging modality. Preprocessing to correct such inhomogeneity is often crucial to the accuracy of automated segmentation. General linear and nonlinear image filters are often employed to control noise, enhance detail, or smooth object surfaces.
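A generic linear filter of the kind mentioned can be illustrated with a simple 3x3x3 mean filter for noise control; this is a sketch of the general idea, not the authors' specific preprocessing pipeline.

```python
import numpy as np

def mean_filter3d(vol):
    """3x3x3 mean filter over a volume image: each output voxel is the
    average of its 27-voxel neighborhood (edges replicated at the border).
    An illustrative linear noise-control filter, not a specific product's."""
    vol = np.asarray(vol, dtype=float)
    padded = np.pad(vol, 1, mode="edge")
    out = np.zeros_like(vol)
    for dz in range(3):          # accumulate all 27 shifted copies
        for dy in range(3):
            for dx in range(3):
                out += padded[dz:dz + vol.shape[0],
                              dy:dy + vol.shape[1],
                              dx:dx + vol.shape[2]]
    return out / 27.0
```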
Serial section microscope images must generally be reregistered section to section before segmentation, a process that must be automated as much as possible, although some manual correction is usually required for the best results. In trivial cases (such as the segmentation of bony structures from CT data), the structure of interest may be readily segmented simply by selecting an appropriate grayscale threshold, but such basic automation can at best define a uniform tissue type, and the structure of interest usually consists of only a portion of all similar tissue in the image field. Indeed, most organs have at least one "boundary of convention," i.e., a geometric line or plane separating the organ from other structures that are anatomically separate but physically continuous; thus it is necessary to support interactive manual editing regardless of the sophistication of automated segmentation technologies available.
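The trivial grayscale-threshold case takes only a few lines. The intensity values below are illustrative, not calibrated CT numbers.

```python
import numpy as np

def threshold_segment(volume, lo, hi):
    """Binary mask of voxels whose grayscale value lies in [lo, hi]."""
    return (volume >= lo) & (volume <= hi)

# Toy 2x2x2 "CT" volume; values above 300 stand in for bone-like voxels.
ct = np.array([[[50, 400], [320, 10]],
               [[310, 20], [5, 500]]])
mask = threshold_segment(ct, 300, 4000)
print(mask.sum())  # 4 voxels fall in the "bone" range
```

As the text notes, a mask like this captures a uniform tissue type everywhere in the field; isolating one particular bone from the rest still requires further editing.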
Multispectral image data, either full-color optical images or spatially coregistered medical volume images in multiple modalities, can often be segmented by use of statistical classification methods. We use both supervised and unsupervised automated voxel classification algorithms of several types. These methods are most useful on polychromatically stained serial section micrographs, because the stains have been carefully designed to differentially color structures of interest with strongly contrasting hues. There are, however, startling applications of these methods using medical images, e.g., the use of combined T1 and T2-weighted images to image multiple sclerosis lesions. Color separation also has application in the NLM Visible Human images; but the natural coloration of tissues does not vary as widely as specially designed stains, and differences in coloration do not always correspond to the accepted boundaries of anatomic organs.
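An unsupervised voxel classifier of the kind described can be illustrated with a toy k-means clustering over per-voxel feature vectors; this sketch stands in for the chapter's unspecified algorithms.

```python
import numpy as np

def kmeans_voxels(features, k, iters=20, seed=0):
    """Toy unsupervised voxel classification: cluster N voxels by their
    multispectral feature vectors (e.g., [T1, T2] intensities or [R, G, B]).
    Returns the cluster index assigned to each voxel."""
    feats = np.asarray(features, dtype=float)
    rng = np.random.default_rng(seed)
    # Initialize cluster centers at k distinct voxels.
    centers = feats[rng.choice(len(feats), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each voxel to its nearest center, then recompute centers.
        dists = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = feats[assign == j].mean(axis=0)
    return assign

# Two well-separated "tissues" in a 2-channel feature space.
labels = kmeans_voxels([[0, 0], [0.1, 0], [5, 5], [5.1, 5]], k=2)
```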
Voxel-based segmentation is often incomplete, in that several distinct structures may be represented by identical voxel values. Segmentation of uniform voxel fields into subobjects is often accomplished by logical means (i.e., finding independent connected groups of voxels) or by shape-based decomposition.
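Finding independent connected groups of voxels can be sketched with a simple 6-connected flood fill; this is an illustrative implementation, not the chapter's.

```python
import numpy as np
from collections import deque

def label_components(mask):
    """Split a binary voxel mask into independent 6-connected components.
    Returns an integer label volume and the number of components found."""
    mask = np.asarray(mask, dtype=bool)
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:          # voxel already claimed by a component
            continue
        count += 1
        labels[seed] = count
        queue = deque([seed])
        while queue:              # breadth-first flood fill from the seed
            z, y, x = queue.popleft()
            for dz, dy, dx in neighbors:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= c < s for c, s in zip(n, mask.shape)) \
                        and mask[n] and not labels[n]:
                    labels[n] = count
                    queue.append(n)
    return labels, count
```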
All the models discussed in this chapter have been processed and segmented using the automated and manual tools in AnalyzeAVW, a comprehensive medical imaging workshop developed by the Biomedical Imaging Resource of the Mayo Foundation. AnalyzeAVW supports the segmentation of volumetric images into multiple object regions by means of a companion image, known as an object map, that stores the object membership information of every voxel in the image. In the case of spatially registered volume images, object maps allow structures segmented from different modalities to be combined with proper spatial relationships.
1.2.2 Tiling Strategies
For the imaging scientist, reality is 80 million polygons per frame (4) delivered at a rate of 30 frames per second, or 2400 million polygons per second. Unfortunately, current high-end hardware can display only about 10 million polygons per second. So although currently available rendering algorithms can generate photorealistic images from volumetric data, they cannot sustain the necessary frame rates. Thus the complexity of the data must be reduced to fit within the limitations of the available hardware.
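The arithmetic behind these figures, and the per-frame polygon budget it implies:

```python
# Figures quoted above; the per-frame budget is the derived consequence.
polys_per_frame = 80_000_000           # "reality": 80 million polygons per frame
frame_rate = 30                        # frames per second
needed = polys_per_frame * frame_rate  # 2,400,000,000 polygons per second

hw_throughput = 10_000_000             # high-end hardware: ~10 million polygons/s
frame_budget = hw_throughput // frame_rate  # polygons actually displayable per frame
print(needed, frame_budget)            # 2400000000 333333
```

A 240x gap between the ideal and the achievable: each model must fit in roughly a third of a million polygons per frame, which is why the surface-efficiency algorithms below matter.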
We have developed a number of algorithms, of which three will be discussed here, for the production of efficient geometric (polygonal) surfaces from volumetric data. An efficient geometric surface contains a prespecified number of polygons intelligently distributed to accurately reflect the size, shape, and position of the object being modeled while being sufficiently small in number to permit real-time display on a modern workstation. Two of these algorithms use statistical measures to determine an optimal polygonal configuration and the third is a refinement of a simple successive approximation technique.
Our modeling algorithms assume that the generation of polygonal surfaces occurs in four phases: segmentation, surface detection, feature extraction, and polygonization. Of these phases, the modeling algorithms manage the last three. Data segmentation is managed by other tools found in AnalyzeAVW and AVW.
Excerpted from Information Technologies in Medicine, Volume 1 Copyright © 2001 by John Wiley & Sons, Inc.. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
PART I: ARTIFICIAL ENVIRONMENT AND MEDICAL SIMULATOR/EDUCATION.
1. Virtual Reality in Medicine and Biology (Richard A. Robb).
2. VEs in Medicine; Medicine in VEs (Adrie C. M. Dumay).
3. Virtual Reality and Its Integration into a Twenty-First Century Telemedical Information Society (Andy Marsh).
4. Virtual Reality and Medicine—Challenges for the Twenty-First Century (Joseph M. Rosen).
5. Virtual Reality Laboratory for Medical Applications (Gabriele Faulkner).
6. Medical Applications of Virtual Reality in Japan (Makoto Yoshizawa, Ken-ichi Abe, Tomoyuki Yambe, and Shin-ichi Nitta).
7. Perceptualization of Biomedical Data (Emil Jovanov, Dusan Starcevic, and Vlada Radivojevic).
8. Anatomic VisualizeR: Teaching and Learning Anatomy with Virtual Reality (Helene Hoffman, Margaret Murray, Robert Curlee, and Alicia Fritchle).
9. Future Technologies for Medical Applications (Richard M. Satava).