Volume rendering of visible human data for an anatomical virtual environment. Stud Health Technol Inform. 1996;29:352-70.
In this work, we utilize the axial anatomical human male sections from the National Library of Medicine's Visible Human Project to generate three-dimensional (3-D) volume representations of the human male subject. The two-dimensional (2-D) projection images were produced by combining ray-tracing techniques with automated image segmentation routines. The resulting images provide accurate and realistic volumetric representations of the Visible Human data set, which are ultimately needed for medical virtual environment simulation. Ray tracing provides a method by which 2-D volume views of a 3-D voxel array can be produced. The cross-sectional images can be scanned at different angles to produce rotated views of the voxel array. By combining volume views at incremental angles over 360 degrees, a full volumetric representation of the voxel array (in this case, the human male data set) can be generated and displayed by computer without the speed and memory limitations of displaying the entire data array. Additional texture and feature information can be obtained by applying optical-property equations to the ray scans; the imaging effects these equations add to volume renderings include shading, shadowing, and transparency. The automated segmentation routines provide a means to distinguish among the various anatomical structures of the body; they can be used to differentiate skin, fat, muscle, cartilage, blood vessels, and bone. By combining the automated segmentation routines with the ray-tracing techniques, 2-D volume views of individual anatomical structures and features can be isolated from the full data set. Examples of these segmentation capabilities are demonstrated for the human male data set, including volume views of the skeletal system, the musculoskeletal system, and part of the vascular system.
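The core of the approach above, casting rays through a voxel array, keeping only voxels that a segmentation rule selects, and compositing their contributions front to back, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name, the single-threshold segmentation, and the fixed-opacity compositing model are all simplifying assumptions (rotated views would additionally resample along tilted rays, and the optical-property equations for shading and shadowing are omitted).

```python
# Illustrative sketch (hypothetical names and parameters): orthographic
# ray casting through a small 3-D voxel array, with a crude threshold
# segmentation and front-to-back alpha compositing.

def cast_rays(volume, threshold, opacity=0.5):
    """Project a 3-D voxel array (z, y, x nested lists) to a 2-D image.

    Each ray steps front to back along z; voxels whose value exceeds
    `threshold` (a stand-in for the segmentation routines) contribute
    with the given opacity until the ray is nearly saturated.
    """
    depth = len(volume)
    rows = len(volume[0])
    cols = len(volume[0][0])
    image = [[0.0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            accum = 0.0   # accumulated intensity along this ray
            trans = 1.0   # remaining transparency of the ray
            for z in range(depth):          # front-to-back traversal
                v = volume[z][y][x]
                if v > threshold:           # keep only this tissue class
                    accum += trans * opacity * v
                    trans *= (1.0 - opacity)
                if trans < 0.01:            # early ray termination
                    break
            image[y][x] = accum
    return image

# A 2x2x2 toy volume: only voxels above the threshold are rendered,
# and pixels whose rays hit two such voxels accumulate more intensity.
vol = [[[0.2, 0.9], [0.9, 0.9]],
       [[0.9, 0.2], [0.2, 0.9]]]
img = cast_rays(vol, threshold=0.5)
```

A full rotation, as described above, would repeat such a projection at incremental angles over 360 degrees, so the complete data array never has to be held in the display pipeline at once.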
The methods described above allow us to generate lifelike images, NURBS surface models, and realistic texture maps of specific anatomical structures. We have the capability to generate images that are both accurate and lifelike, much like photographic anatomical atlases. We can also generate images, models, and textures with the clarity of medical illustrations by rendering the ray-traced structures in conventional illustration colors instead of the natural colors of the specimen. We are currently generating a comprehensive reference atlas of volume-rendered images of the human body, to be published by Mosby-Year Book. The segmentation techniques needed to create this atlas also offer the accuracy and realism needed to create the surface models and texture maps for a virtual environment for surgery simulation.