
Review Article | Open Access

Volume 2022 |Article ID 9840519 | https://doi.org/10.34133/2022/9840519

Liang Zhou, Mengjie Fan, Charles Hansen, Chris R. Johnson, Daniel Weiskopf, "A Review of Three-Dimensional Medical Image Visualization", Health Data Science, vol. 2022, Article ID 9840519, 19 pages, 2022. https://doi.org/10.34133/2022/9840519

A Review of Three-Dimensional Medical Image Visualization

Received: 25 Dec 2021
Accepted: 17 Mar 2022
Published: 05 Apr 2022

Abstract

Importance. Medical images are essential for modern medicine and an important research subject in visualization. However, medical experts are often not aware of the many advanced three-dimensional (3D) medical image visualization techniques that could increase their capabilities in data analysis and assist the decision-making process for specific medical problems. Our paper provides a review of 3D visualization techniques for medical images, intending to bridge the gap between medical experts and visualization researchers. Highlights. Fundamental visualization techniques are revisited for various medical imaging modalities, from computed tomography to diffusion tensor imaging, featuring techniques that enhance spatial perception, which is critical for medical practices. The state-of-the-art of medical visualization is reviewed based on a procedure-oriented classification of medical problems for studies of individuals and populations. This paper summarizes free software tools for different modalities of medical images designed for various purposes, including visualization, analysis, and segmentation, and it provides respective Internet links. Conclusions. Visualization techniques are a useful tool for medical experts to tackle specific medical problems in their daily work. Our review provides a quick reference to such techniques given the medical problem and modalities of associated medical images. We summarize fundamental techniques and readily available visualization tools to help medical experts to better understand and utilize medical imaging data. This paper could contribute to the joint effort of the medical and visualization communities to advance precision medicine.

1. Introduction

In recent years, with advances in computing power and algorithms, the impact of technology in medicine is greater than ever and keeps increasing. Strategic plans focusing on promoting the application and development of medical technologies have been made worldwide. In 2018, the General Office of the State Council of China issued “Opinions on Promoting the Development of ‘Internet Plus Healthcare,’” calling for strengthening the integration, sharing, and application of clinical and research data and supporting the research and development of health-related artificial intelligence (AI) technology, medical robots, etc. (http://www.gov.cn/zhengce/content/2018-04/28/content_5286645.htm). A three-year action plan was released to develop AI, which prioritizes expanding clinical applications such as medical image-assisted diagnosis systems (http://www.cac.gov.cn/2017-12/15/c_1122114520.htm). In the same year, the National Institutes of Health (NIH), USA, released the NIH Strategic Plan for Data Science, which proposed that technological innovations such as machine learning, deep learning, AI, and virtual reality (VR) could revolutionize biomedical research over the next 10 years (https://datascience.nih.gov/nih-strategic-plan-data-science).

Medical images, such as computerized tomography (CT), magnetic resonance imaging (MRI), and diffusion tensor imaging (DTI), are the backbone of modern medical practices and research. Furthermore, medical images are a core data source and target for analysis in AI and VR in the aforementioned strategic plans. Therefore, the analysis and understanding of medical images are of utmost importance in medical technologies. In practice, most people working in the health sciences are familiar with medical images, and many use them in their daily work. Medical images are massive and complex, and it is hard to explore them and gain insights with traditional statistical methods that do not involve human expertise. Therefore, technologies that keep humans in the loop are needed for medical image exploration and analysis, so that medical experts can make the most of the potential value of medical images and enable high-quality healthcare solutions. Vision is known to be the major and most efficient perception mechanism for humans. Visualization transforms data into interactive visual representations to facilitate data understanding and exploration through visual perception and human-computer interaction, and it can present medical images in 3D with high accuracy. Visualization integrates the cognitive advantage of humans and the computational advantage of computers for data mining [1] and decision-making [2] and is an effective data analysis technology for medical images. Throughout this paper, we understand the term medical image visualization as 3D visualization of medical images.

We first report on the scope and sources of papers involved (Section 2). We then summarize fundamental techniques that enable the visualization of various types of medical images (Section 3). Next, in Section 4, specialized medical imaging visualization methods are reviewed based on our medical procedure-oriented and scale-based taxonomy. In Section 5, we summarize visualization techniques that may have potential medical applications and discuss limitations of medical image visualization. Finally, we list free software tools that are readily available online for medical image visualization (Section 6) and conclude the overview in Section 7.

2. Scope

Medical visualization, in general, is systematically covered in the textbook by Preim and Bartz [3]. In this paper, we focus on techniques of 3D medical image visualization and specific techniques for various medical problems based on imaging categorized by the medical procedure and the scale of studies.

There are several reviews of visualization techniques for medical images [4–8]. However, the target audience of these reviews is visualization researchers rather than medical experts. These technique-oriented reviews offer in-depth discussions on visualization techniques for specific medical data types with fine-grained taxonomies, including perceptually motivated 3D medical image data visualization [4], multimodal medical data [6], medical flow data [8], cardiac 4D data [5], and flattening-based medical visualization [7].

Our paper, instead, aims to provide medical experts with a general overview of this highly relevant field. We believe that our review achieves a balance between the coverage of techniques and the relevance for medical experts, and our goal is to convey existing visualization techniques that are potentially useful for healthcare experts in their research and clinical practices.

We focus on techniques that generate 3D visualizations of medical images. While other representations of 3D medical images exist, e.g., 2D representations with flattening visualization techniques, we do not include them in our review and refer readers to a survey elsewhere [7]. Our review includes classic papers for fundamental techniques (Section 3) and visualization papers from mainstream visualization venues, e.g., IEEE Transactions on Visualization and Computer Graphics (TVCG), Computer Graphics Forum (CGF), the IEEE VIS conference, the EuroVis conference, the IEEE PacificVis conference, and Computers and Graphics. Specifically, with a few exceptions, papers covered in Section 4 were selected by first searching for the term “medical visualization” in the two major venues—IEEE TVCG (including the VIS conference) and CGF (including the EuroVis conference)—with the publication time within the last 15 years (2006–2021); we then carefully examined each paper found in the search and included papers with direct relevance to medicine and 3D visualization of medical images.

Unlike a previous review on biomedical visualization techniques [9], our overview covers advanced visual analysis methods for medical images that are arranged based on a taxonomy of medical problems from individuals to populations (Section 4) and also includes advances in fundamental visualization methods (Section 3). This paper also features an overview of techniques that could improve spatial perception, which is vital for the understanding of medical images in 3D. Overall, rather than a comprehensive review of the literature on the topic of medical image visualization, this review provides a big picture of the subject of study and introduces state-of-the-art techniques driven by specific medical problems.

3. Fundamental Medical Image Visualization Techniques

We understand medical images as data over 3D spatial domains that have values defined for each point in space and possibly in time (e.g., time-varying data), namely, fields. Medical imaging data can be abstracted as fields where data values are defined everywhere within the spatial domain, for example, fluid characteristics of tissues in an MR image.

In practice, the field data is sampled and stored as discrete data points in images of various formats, e.g., DICOM (Digital Imaging and Communications in Medicine), NII (NIfTI-1 data format), TIFF (Tag Image File Format), and RAW (Raw image). We denote a data point of a medical image as

v = (x, y, z, t, a_1, ..., a_n),    (1)

where (x, y, z) are the spatial coordinates, t is the time, and a_1, ..., a_n are the physical attributes of this data point, for example, the Hounsfield scale (for CT) or T1- (longitudinal relaxation time) weighted, T2- (transverse relaxation time) weighted, and FLAIR (fluid-attenuated inversion recovery) values; n denotes the number of attributes.

Medical image data can be classified into three types: scalar, vector, and tensor, based on the mathematical properties of the physical attributes a_i. When n > 1, the medical image data are multifield data that can be a mixture of various data types, e.g., scalars and vectors or scalars and tensors, that are typical in real medical practices. Advanced techniques for multifield visualization are covered in Section 4. Figure 1 shows the classification of visualization techniques for medical images with examples (Figures 1(a)–1(c)) of a scalar CT scan of a torso, a flow simulation of a torso, and a tensor field visualization of a brain. In the remainder of this section, we focus on the case of single-typed data, i.e., each attribute a_i of equation (1) has the same type, and discuss fundamental visualization techniques for each type of data.

3.1. Scalar Image Visualization

If each attribute a_i of equation (1) is a scalar, i.e., a quantity without direction, the data is scalar volumetric data in the context of 3D medical imaging. In general, volume visualization techniques can be categorized into two classes: indirect volume visualization and direct volume visualization. Indirect volume visualization extracts surfaces of certain data values to indirectly visualize volumetric data with surface meshes. Since the points on an extracted surface share the same data value, for example, parts of the skin that have a specific Hounsfield value in a CT scan, such methods are also called isosurface rendering. In contrast, direct volume visualization or volume rendering directly visualizes the volume data without extracting any surfaces and allows the “see-through” of internal structures of a medical image, for example, the brain within a head MR scan.

Isosurface rendering is usually realized with the marching cubes method [12]. The marching cubes algorithm traverses the data cells, i.e., cubes, of the data volume, uses a lookup table to efficiently determine the topology of the isosurface within each cell, and computes the intersections of the isosurface with the cell to construct triangles accordingly. The triangle mesh is then visualized with a color palette specified by the user. Isosurfaces can be generalized to “thick” interval volumes and even be combined with direct volume rendering in a unified framework [13]. However, indirect volume rendering shows only a limited number of features as surfaces, and it does not provide the volumetric look within medical images.
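To make the idea concrete, the following minimal Python sketch extracts an isosurface mesh from a scalar volume with the marching cubes implementation of scikit-image; the synthetic volume and the isovalue of 300 (a rough, bone-like Hounsfield threshold) are illustrative assumptions, not taken from the cited work.

```python
# Minimal isosurface extraction sketch (assumes NumPy and scikit-image are installed).
import numpy as np
from skimage import measure

def extract_isosurface(volume: np.ndarray, isovalue: float):
    """Return a triangle mesh (vertices, faces) for the given isovalue."""
    verts, faces, normals, values = measure.marching_cubes(volume, level=isovalue)
    return verts, faces

# Hypothetical stand-in for a CT scan: a smooth, sphere-like intensity field.
x, y, z = np.mgrid[-32:32, -32:32, -32:32]
volume = 1000.0 - 0.5 * (x**2 + y**2 + z**2)
verts, faces = extract_isosurface(volume, isovalue=300.0)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```

The resulting mesh can then be rendered with any surface rendering tool, for example, the software listed in Section 6.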

3.1.1. Direct Volume Rendering

For a systematic introduction to direct volume rendering, we refer the reader to the course notes [14] or the book on real-time volume graphics [15]. Volume rendering is based on optics, and all techniques are approximations to the solution of the light propagation equation [16] in volumetric materials. Later, volume rendering was extended to include medical images—initially focusing on CT human head scans—along with surface shading [17, 18].

An optical model determines how particles in the medium interact with light [19]. We illustrate the direct volume rendering problem with typical optical models in Figure 2: the eye symbol represents the viewer, the cloud indicates the scalar volumetric data, a ray (the straight line) is shot from the viewer through the volume, the current volume sample is indicated by a gray dot, and the light source is drawn as the sun. The final color of the ray “seen” by the viewer is determined by solving the volume rendering integral with a given optical model. Particles emit and absorb light in the emission and absorption model (Figure 2(a)), and the volume sample only receives light from samples behind it along the ray, e.g., the white dot. On top of the emission and absorption model, a local illumination model adds local reflections of light from the light source. Here, light from the source along the straight line to the volume sample may be attenuated as illustrated in Figure 2(b). Alternatively, attenuation between the light source and the point in the volume might be neglected. Finally, global illumination (Figure 2(c)) considers a full model of scattering that leads to complex light paths and interactions, as indicated by the irregular polylines. Currently, most applications rely on local illumination (often without attenuation from the light source to the point of local illumination) as a compromise between rendering quality and computational cost. In contrast, the global illumination model can achieve photorealistic visual effects, such as shadows and translucency, that improve spatial perception, which is important for medicine as discussed in Section 3.1.3.
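As an illustration of the emission-and-absorption model, the following minimal Python sketch accumulates color and opacity for the samples along a single viewing ray using front-to-back alpha compositing; the sample values, step count, and the toy transfer function are assumptions for illustration only.

```python
# Minimal front-to-back compositing sketch for the emission-and-absorption model.
import numpy as np

def composite_ray(samples: np.ndarray, transfer_function) -> np.ndarray:
    """Accumulate color along one ray; samples are ordered front (viewer) to back."""
    color = np.zeros(3)
    alpha = 0.0
    for s in samples:
        c, a = transfer_function(s)        # emitted color and opacity of this sample
        color += (1.0 - alpha) * a * np.asarray(c)
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                   # early ray termination
            break
    return color

# Hypothetical transfer function: bright, semi-transparent for higher intensities.
tf = lambda s: ((1.0, 0.9, 0.8), min(max((s - 100.0) / 400.0, 0.0), 0.3))
print(composite_ray(np.linspace(0.0, 500.0, 64), tf))
```

Local and global illumination models extend this loop by adding reflected or scattered light at each sample, at correspondingly higher computational cost.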

Volume rendering, even with a simple optical model, has high computational cost. Due to the limitation of computer hardware, interactivity was not achieved even with hardware acceleration [20] in early work. Therefore, practical use of direct volume rendering of medical images was not feasible back then.

With the advent of graphics processing units (GPUs), interactive volume rendering techniques were devised [21–23]. Among them, the GPU-based ray casting [23] method is currently the standard volume rendering technique in most applications. The main advantages of GPU-based ray casting are that no geometry has to be generated and that the implementation is straightforward. Interactive volume rendering is also available on mobile devices [24], which can facilitate ubiquitous visualization and analysis of medical images.

3.1.2. Visual Data Exploration with Transfer Functions

The medical image data and its rendering are linked by a transfer function—a mapping from data values to optical properties, i.e., the color and opacity of the volume. The idea was already realized in early volume rendering papers [17, 18], but these papers did not use transfer functions as a data exploration tool. For medical visualization, however, transfer functions are the main means of interactive visual mining and exploration of scalar medical images. A survey of transfer functions for volume rendering can be found elsewhere [25].

The basic and most frequently used transfer functions are one-dimensional (1D) transfer functions that typically map data values of the scalar volume, i.e., the grayscale value of images, to color and opacity. An example of volume rendering of a CT head scan is shown in Figure 2 with a screenshot of a 1D transfer function widget (Figure 2(e)). The distribution of data values is shown in blue in the background, and the transfer function is shown in the foreground: the black line indicates the opacity, and the color is set by linear interpolation between the colored dots. Here, the skin and muscles are set to yellow and brown with low opacity, blood vessels are assigned red with high opacity, and bones are set to white with high opacity. Although relatively easy to use, 1D transfer functions have limited feature classification capability; for example, they cannot clearly separate the bone and the blood vessels as shown in Figure 2(d), due to the partial-volume effect of medical imaging. A preintegration technique can be employed to improve the quality of volume rendering with 1D transfer functions [26].
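The following minimal Python sketch shows how such a 1D transfer function can be represented as a lookup table built by piecewise-linear interpolation between control points; the control points below (skin-, vessel-, and bone-like intensity ranges) are hypothetical values chosen only to mirror the description above.

```python
# Minimal 1D transfer function sketch: control points -> RGBA lookup table.
import numpy as np

control_points = [          # (scalar value, (R, G, B), opacity)
    (0,    (0.0, 0.0, 0.0), 0.00),
    (500,  (0.9, 0.8, 0.5), 0.05),   # skin/muscle-like range: yellow-brown, low opacity
    (1200, (0.8, 0.1, 0.1), 0.60),   # vessel-like range: red, higher opacity
    (2000, (1.0, 1.0, 1.0), 0.90),   # bone-like range: white, high opacity
]

def build_lut(points, size=256, vmax=2000.0):
    """Sample the piecewise-linear transfer function into an RGBA lookup table."""
    xs = [p[0] for p in points]
    grid = np.linspace(0.0, vmax, size)
    lut = np.zeros((size, 4))
    for ch in range(3):
        lut[:, ch] = np.interp(grid, xs, [p[1][ch] for p in points])
    lut[:, 3] = np.interp(grid, xs, [p[2] for p in points])
    return lut

lut = build_lut(control_points)
print(lut[150])   # RGBA assigned to one intensity bin
```

During ray casting, each sample value is mapped through this table to obtain its color and opacity before compositing.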

Artifacts caused by the partial-volume effect cannot be resolved by using only the single scalar image data value. Therefore, more attributes are either measured, as in multimodal medical imaging (e.g., T1, T2, and FLAIR channels in MRI), or derived from the original image data. Accordingly, multidimensional transfer functions use these attributes jointly for improved classification. Two-dimensional transfer functions that enhance boundaries of volumes are the typical choice on most occasions, as the second attribute is easily derived from the original data [27, 28]. Interaction widgets for 2D transfer functions [28] have become standard in most of the current visualization tools described in Section 6.

Designing multidimensional transfer functions for more than two attributes is challenging. One common approach is to explore the value domain of multimodal medical images. Several techniques [29–31] based on multidimensional diagrams facilitate selections of specific value ranges of each data attribute. However, value domain approaches are unfamiliar to medical experts. In contrast, the spatial domain, e.g., the slices of images, is preferred. Accordingly, slice-based multidimensional transfer function methods are available. For example, a machine learning-based method allows the user to draw directly on slices for training [32]. Another example is slice-based sketching combined with parallel coordinates and scatterplots for multimodal medical images [33]. A semiautomatic method sets transfer functions through drawing features of interest on slices with probabilistic boundary lassos and approximate optimization [34].

3.1.3. Techniques for Improved Spatial Perception

Although local illumination is widely used in volume rendering, more realistic rendering is often necessary for clinical purposes as it provides important depth cues allowing accurate spatial perception. Therefore, global illumination has been a core research topic in volume rendering. A general and comprehensive global illumination model that creates various visual phenomena is available in computer graphics [37].

In the case of direct volume rendering, Monte Carlo sampling enables global illumination [38]; however, the gradual refinement from a coarse to a fine visualization and the noisy look make its integration into the clinical pipeline premature. Shadows and translucency are enabled by half-angle slicing [22], and directional occlusion is also available [35]. Figure 3(b) shows the volume rendering of an MRI brain scan with directional occlusion. Compared to Figure 3(a) with the traditional Phong illumination—a frequently used local illumination method—directional occlusion (Figure 3(b)) creates important depth cues that improve the perception of the complex creases of the brain.

Scattering and shadows are also made possible with GPU-based ray casting that achieves high image quality [39]. Advanced global illumination models [36, 40] are available to include scattering and soft shadowing in ray-casting volume rendering. Figure 3(c) shows the volume rendering of a CT scan with a low-pass shadowing model with scattering that provides translucency. These methods could aid medical experts in their clinical work to quickly and accurately locate features of interest in the visualization, which is not possible with traditional local illumination models.

3.2. Vector Image Visualization

When the attributes a_i (equation (1)) are vectors, the medical image to be visualized is a vector field. Vector field visualization is extensively used and studied in computational fluid dynamics for science and engineering, and a survey of general vector field visualization techniques is available [43]. In medicine, vector field data in terms of fluid flow from phase-contrast MR (PC-MR) scans are typically used for understanding pathological cerebral aneurysm haemodynamics, nasal aerodynamics, and aortic haemodynamics; bioelectric simulations based on electrocardiography (ECG) in cardiology and electroencephalography (EEG) or magnetoencephalography (MEG) in neurology are another source of vector image data [44].

For more details on medical vector field data visualization, we refer the reader to several surveys [5, 8, 45]. In this section, we briefly review medical vector field visualization techniques in the following order: direct methods, geometry methods, feature methods, and techniques for accurate spatial perception. Segmentation and mesh generation are also important processes in vector medical image visualization but are beyond the scope of our review.

3.2.1. Direct Methods

Direct methods do not explicitly extract any geometry or features in a vector field. One strategy visualizes vector information with glyphs on 2D slices [46]. Another strategy converts vector information into multiple scalars and visualizes them with color coding in slices or volumes using direct volume rendering. However, direct methods do not visualize trajectories of particles in the vector field, i.e., global path features, which is one of the focal points of vector field analysis. Therefore, direct methods see limited use in vector medical image visualization.

3.2.2. Geometry Methods

Extracting and visualizing representative geometry in the vector field, e.g., streamlines, pathlines, or streaklines, are effective and well-received methods. Geometry methods extract global path features from seeded points in the image [47]. Streamlines are visualized as 3D lines or tubes colored by the desired property, for example, the electrical potentials of the electric field of a torso in Figure 4(c) or a brain with volume rendering as context in Figure 4(d). The analysis of flow patterns is aided with image slices [48], similar to the case of volume rendering, where a cutting plane is often used. The extraction of global path features is sensitive to the choice of seed points; a probing tool facilitates seeding in images [49]. However, automatic seeding remains a challenging and open problem.
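As an illustration of how such path geometry is obtained, the following minimal Python sketch traces one streamline through a steady 3D vector field with fixed-step fourth-order Runge-Kutta integration and trilinear interpolation; the array layout, step size, and stopping criteria are illustrative assumptions.

```python
# Minimal streamline-tracing sketch: fixed-step RK4 through a steady 3D vector field.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def trace_streamline(field, grid_axes, seed, step=0.5, n_steps=500):
    """field: (nx, ny, nz, 3) vectors on a regular grid defined by grid_axes
    (three 1D coordinate arrays); returns the points of one streamline."""
    interp = RegularGridInterpolator(grid_axes, field,
                                     bounds_error=False, fill_value=0.0)
    velocity = lambda p: interp(p[None, :])[0]       # trilinear interpolation
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        p = pts[-1]
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * step * k1)
        k3 = velocity(p + 0.5 * step * k2)
        k4 = velocity(p + step * k3)
        p_next = p + step * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        if np.linalg.norm(p_next - p) < 1e-6:        # stagnation or left the grid
            break
        pts.append(p_next)
    return np.array(pts)
```

Seeding many such lines and rendering them as lines or tubes yields the visualizations described above; pathlines for time-varying flow additionally interpolate the field in time.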

As an alternative to visualizing 3D geometry as lines, tubes, or surfaces, global path features can be shown with textures. Line integral convolution (LIC) [50] visualizes the vector field by convolving a noise image with a low-pass filter along streamlines extracted from the data image. The LIC method can be extended to curved surfaces [41, 51], for example, the surface of a heart [41] as shown in Figure 4(a) or a torso as shown in Figure 4(b).
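The following minimal Python sketch illustrates the LIC idea in 2D: for every pixel, a short streamline is traced forward and backward through the vector field, and the values of a white-noise texture along it are averaged with a box filter; the kernel length and nearest-neighbor stepping are simplifying assumptions.

```python
# Minimal 2D LIC sketch: average a noise texture along short streamlines.
import numpy as np

def lic_2d(vx: np.ndarray, vy: np.ndarray, kernel_len: int = 15,
           noise: np.ndarray = None) -> np.ndarray:
    """vx, vy: (h, w) vector components; returns the LIC texture (box filter)."""
    h, w = vx.shape
    rng = np.random.default_rng(0)
    noise = rng.random((h, w)) if noise is None else noise
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            acc, cnt = 0.0, 0
            for direction in (1.0, -1.0):            # trace forward and backward
                y, x = float(i), float(j)
                for _ in range(kernel_len):
                    yi, xi = int(round(y)), int(round(x))
                    if not (0 <= yi < h and 0 <= xi < w):
                        break
                    acc += noise[yi, xi]
                    cnt += 1
                    u, v = vx[yi, xi], vy[yi, xi]
                    norm = np.hypot(u, v) + 1e-9     # unit-speed stepping
                    x += direction * u / norm
                    y += direction * v / norm
            out[i, j] = acc / max(cnt, 1)
    return out
```

The dense texture that results shows flow direction everywhere without explicit seeding, which is what the surface LIC variants cited above exploit.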

3.2.3. Feature Methods

Feature methods extract features of interest, e.g., sinks, sources, saddle points, vortices, and regions of abnormal flow velocity, in vector fields. We cover specialized methods for medical fluid flows; for other vector field data, we refer the reader to a general overview of texture-based vector field visualization [43]. Extracting vortex regions is a vital process in a number of vector medical image visualization techniques [52, 53]. Vortex cores in blood vessels are related to blood transportation mechanisms in the aorta and the left ventricle [53]. Vortex regions are extracted for the analysis of potential malfunctions [52].

Line predicates [54] are functions used for querying integral lines (streamlines or pathlines) that meet certain properties of interest. These functions are flexible and especially suitable for vector medical images as flow features such as velocity, shapes of lines, and distances to the vessel wall can be encoded. Blood with high residence times [55] and impingement zones of cerebral aneurysms [56] can be extracted with line predicates.
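To sketch the line-predicate idea in Python: a predicate is simply a Boolean function over an integral line and its attributes, and lines failing the predicate are filtered out; the residence-time-like measure and the threshold below are hypothetical stand-ins for the properties used in the cited works.

```python
# Minimal line-predicate sketch: keep only integral lines with a property of interest.
import numpy as np

def residence_time_predicate(points: np.ndarray, speeds: np.ndarray,
                             threshold: float = 2.0) -> bool:
    """points: (n, 3) positions along the line; speeds: (n,) speed samples."""
    length = np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))
    mean_speed = np.mean(speeds) + 1e-9
    return (length / mean_speed) > threshold          # crude residence-time proxy

def filter_lines(lines, speeds, predicate=residence_time_predicate):
    """Return only the lines for which the predicate holds."""
    return [l for l, s in zip(lines, speeds) if predicate(l, s)]
```

Because predicates can combine velocity, line shape, distance to the vessel wall, and other attributes, they offer a flexible query mechanism over large sets of streamlines or pathlines.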

3.2.4. Techniques for Improved Spatial Perception

Spatial perception of path geometry, lines or tubes, is difficult in 3D due to many factors, e.g., clutter, inaccurate depth perception, occlusion, and inaccurate perception of orientation. Methods that improve spatial perception are available to complement the aforementioned visualization techniques (Sections 3.2.1–3.2.3). Particles in the flow field can be drawn as differently shaped glyphs [52] or as cartoon-styled stretched ellipsoids or short pathlines (pathlets) [49]. Drawing whole extracted lines as illuminated streamlines [57] or with halos [58, 59] is another strategy for visualization with enhanced spatial perception.

Global illumination is superior to local illumination in user performance for locating and comparing features with tube/line renderings as shown in a comparative study [60]. Shadows are especially important for depth perception of vector field visualization as they provide depth cues. Directional ambient occlusion [61] or ray tracing for tubes [62] is able to create shadow effects for vector field data or field lines extracted from tensor fields (Figures 5(c) and 5(d)).

3.3. Tensor Image Visualization

Tensor image visualization techniques concern medical images with the attribute(s) of equation (1) as tensors—here, we consider only second-order tensors [65], i.e., matrices, and most applications in medicine assume symmetric matrices. DTI captures the diffusion information of water molecules in tissues as symmetric matrices at each point. In medicine, clinical experts often use the term diffusion-weighted imaging (DWI) to refer to the isotropic diffusion map—a scalar measurement of DTI. To avoid confusion, we use the term DTI to refer to the imaging that measures tensor fields and do not use DWI. We classify techniques into three categories for tensor image visualization: voxel-based, glyph-based, and tractography. As in previous sections, we also discuss available techniques for improved spatial perception for tensor medical images.

3.3.1. Voxel-Based Methods

By computing scalar descriptions of tensors and using 2D slices and/or volume rendering, voxel-based methods convert tensor data visualization to scalar data visualization. One such description is fractional anisotropy [66]; other anisotropy measurements, including linear anisotropy, planar anisotropy, and isotropy, are also helpful for describing the shape and size of tensors. Moreover, measurements based on characteristic curves, for example, the finite separation ratio, are calculated to describe coherent fiber structures [64]. Tract-based spatial statistics uses nonlinear registration and a fractional anisotropy skeleton to improve the analysis of multisubject DT images [67]. Strategies of voxel-based visualization using direct volume rendering are available, including designing transfer functions based on tensor information and using diffusion volume textures [68].
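As an example of such a scalar description, the following minimal Python sketch computes per-voxel fractional anisotropy from the eigenvalues of symmetric diffusion tensors; the array layout is an assumption for illustration.

```python
# Minimal per-voxel fractional anisotropy (FA) sketch.
import numpy as np

def fractional_anisotropy(tensor_field: np.ndarray) -> np.ndarray:
    """tensor_field: (..., 3, 3) symmetric tensors; returns FA values in [0, 1]."""
    evals = np.linalg.eigvalsh(tensor_field)                 # (..., 3) eigenvalues
    mean = evals.mean(axis=-1, keepdims=True)
    num = np.sqrt(1.5 * np.sum((evals - mean) ** 2, axis=-1))
    den = np.sqrt(np.sum(evals ** 2, axis=-1)) + 1e-12
    return num / den
```

The resulting scalar volume can then be explored with the slice and volume rendering techniques of Section 3.1.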

3.3.2. Glyph-Based Methods

Glyphs allow encoding the complex tensor information using multiple visual channels, for example, shape, size, and color. Classic glyph design uses ellipsoids [69, 70] (spherical glyphs) or cuboids that change shape and orientation based on diffusion information—the anisotropy and orientation of principal components. A composite glyph can directly encode linear, planar, and spherical information [71]. Superquadric glyphs [72] with general principles of usage [73] avoid the shortcomings of spherical or cubical glyphs. Uncertainty-aware visualization is available for augmenting spherical, superquadric glyphs for DTI and fourth-order homogeneous polynomial glyphs for high angular resolution diffusion imaging (HARDI) [63]. Aforementioned glyph shapes can be found in Figure 5(a). The spatial arrangement of glyphs is another crucial issue. A glyph packing technique leads to efficient use of space and correct perception of the visualization [74].
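To illustrate the simplest of these designs, the following minimal Python sketch builds an ellipsoid glyph for a single diffusion tensor by scaling a unit sphere with the tensor's eigenvalues and rotating it by its eigenvectors, so that glyph shape and orientation encode anisotropy and the principal diffusion directions; the resolution and lack of normalization are illustrative choices.

```python
# Minimal ellipsoid-glyph sketch for one symmetric, positive-definite tensor.
import numpy as np

def ellipsoid_glyph(tensor: np.ndarray, n: int = 16) -> np.ndarray:
    """Return (n*n, 3) surface points: a unit sphere scaled by the eigenvalues
    and rotated into the eigenvector frame of the tensor."""
    evals, evecs = np.linalg.eigh(tensor)            # eigenvalues in ascending order
    u, v = np.meshgrid(np.linspace(0.0, 2.0 * np.pi, n),
                       np.linspace(0.0, np.pi, n))
    sphere = np.stack([np.cos(u) * np.sin(v),
                       np.sin(u) * np.sin(v),
                       np.cos(v)], axis=-1).reshape(-1, 3)
    return (sphere * evals) @ evecs.T                # scale per axis, then rotate
```

Superquadric glyphs replace the sphere with a shape whose sharpness also depends on the anisotropy type, which avoids the visual ambiguity of ellipsoids discussed above.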

3.3.3. Tractography

The main goal of DTI is to reconstruct the white matter fiber tracts of the brain. Tractography extracts fibers or tracts from DT images and visualizes the extracted fibers with vector field visualization techniques. Therefore, tractography is probably the most well-known and popular tensor visualization technique among medical experts. A review of tractography can be found elsewhere [75]. Fiber tracts are typically visualized as lines [57, 76] or tubes [77]. GPU acceleration enables interactive visualization of fiber tracts as tubes with tuboids and level-of-detail techniques [78] or hybrid rendering of triangle strips and point sprites [79].
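The following minimal Python sketch illustrates deterministic fiber tracking: starting from a seed voxel, it repeatedly steps along the principal eigenvector of the local diffusion tensor and stops when anisotropy drops below a threshold or the volume boundary is reached; the nearest-neighbor lookup, step size, and thresholds are simplifying assumptions compared to real tractography pipelines.

```python
# Minimal deterministic fiber-tracking sketch (principal-eigenvector streamlining).
import numpy as np

def track_fiber(tensor_field, seed, step=0.5, fa_min=0.2, max_steps=2000):
    """tensor_field: (nx, ny, nz, 3, 3) symmetric tensors; seed: (3,) voxel coords."""
    pts = [np.asarray(seed, dtype=float)]
    prev_dir = None
    for _ in range(max_steps):
        idx = tuple(np.round(pts[-1]).astype(int))
        if any(i < 0 or i >= s for i, s in zip(idx, tensor_field.shape[:3])):
            break                                    # left the volume
        evals, evecs = np.linalg.eigh(tensor_field[idx])
        mean = evals.mean()
        fa = np.sqrt(1.5 * np.sum((evals - mean) ** 2) / (np.sum(evals ** 2) + 1e-12))
        if fa < fa_min:                              # stop in near-isotropic tissue
            break
        d = evecs[:, -1]                             # principal diffusion direction
        if prev_dir is not None and np.dot(d, prev_dir) < 0:
            d = -d                                   # keep a consistent orientation
        pts.append(pts[-1] + step * d)
        prev_dir = d
    return np.array(pts)
```

Each returned polyline can then be rendered as a line or tube with the vector field techniques of Section 3.2.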

Visualizing fiber tracts in the context of the scalar medical image [80] helps medical experts better analyze the data. Grouping and clustering fiber tracts reduce clutter and facilitate understanding the anatomy of these tracts. Grouped or clustered tracts are typically color labeled by anatomical bundles [81]. Similar to graph visualization, edge bundling of fiber tracts can also enhance the readability of the visualization [82, 83].

3.3.4. Techniques for Improved Spatial Perception

Perception of glyphs is difficult in 3D due to loss of information from projection to the 2D image plane and the ambiguity in the 3D shape representation. There are superquadric glyphs that avoid the ambiguity and improve over spherical and cubical glyphs [72, 73]. The glyphs are typically rendered with illuminated surfaces and shadow effects in 3D to further enhance depth perception.

Similarly, perception is an important issue in tractography. Illustrative visualization that provides depth cues helps reduce visual clutter and enhance spatial perception. A survey on illustrative visualization is available elsewhere [84]. Depth-dependent halos [59] enhance depth perception with line-based visualization and depth-based contrast enhancement. Line-based ambient occlusion [85] is devised to enhance depth cues of the tractography visualization in both grayscale and color. Ambient occlusion also enhances depth perception for combined direct volume rendering and fiber tract visualization [61] as shown in Figure 5(b). Shadows could also be generated with ray tracing. In Figure 5(a), a whole brain tractography is visualized by 3D tubes with ray-traced shadows [62]. These techniques complement visualization in context [86] and fiber clustering and bundling [83] for more effective analysis of fiber tracts.

4. Medical Visualization Methods for Health Applications

Medical studies on individuals are aimed at providing precise and tailored solutions for the specific anatomy and pathology of individual patients. With the power of medical image visualization, medical experts could deepen the understanding of the data and, therefore, potentially improve the quality of personalized medicine.

Of equal importance is to understand health problems in populations. More specifically, medical image visualization for studies of populations involves ensemble data, i.e., a collection of data members that are individual 3D medical images. Understanding such datasets requires specialized ensemble visualization and visual analysis techniques built on top of the basic methods covered in Section 3. Ensemble visualization is aimed at achieving one or more of the following goals: visualizing the main trend of the ensemble, visualizing outliers of the ensemble, and comparing specific members to the main trend or to other members. Therefore, we believe that it is necessary and helpful to distinguish visualization methods that support medical studies of individuals from those for populations.

In this section, we focus on medical image visualization methods specifically designed to address clinical and public health problems. We cover a broad range of medical visualization methods focusing on medical problems to be tackled (diagnosis, treatment, or prognosis) and with a classification of the scale of the medical study—from individuals to populations.

A summary of the papers reviewed in this section can be found in Tables 1–3 for diagnosis, treatment, and prognosis, respectively. Each paper is documented with the following properties for readers to quickly locate techniques of interest: supported data types (data type), scale of studies (scale), body locations of the problem (location), and modalities of medical images (modalities). We classify data type into scalar, vector, and tensor; scale is classified into individual and population; location is defined by the target body part(s) of the medical image(s); and modalities show the modality of the scanners used to acquire the medical images.


Table 1: Visualization methods for diagnosis.

Reference | Data type | Scale | Location | Modalities
Lawonn et al. 2016 [87] | Scalar | Individual | Whole body | PET+CT
Jung et al. 2013 [88] | Scalar | Individual | Whole body | PET+CT
Wiemker et al. 2013 [89] | Scalar | Individual | Lymph nodes, lungs, breast, whole body | PET+CT+MR
Termeer et al. 2007 [90] | Scalar | Individual | Heart | Perfusion-MR+MR
Oeltze et al. 2006 [91] | Scalar | Individual | Heart | Perfusion-MR+CTA
Hennemuth et al. 2008 [92] | Scalar | Individual | Heart | MR
Kirili et al. 2014 [93] | Scalar | Individual | Heart | CTA+SPECT
Meyer-Spradow et al. 2008 [94] | Scalar | Individual | Heart | SPECT
Williams et al. 2008 [95] | Scalar | Individual | Colon | CT
Mirhosseini et al. 2019 [96] | Scalar | Individual | Colon | CT
Song et al. 2017 [97] | Scalar | Individual | Chest, abdomen | CT
Viola et al. 2008 [98] | Scalar | Individual | Liver | US+CT
Zhou and Hansen 2014 [34] | Scalar | Individual | Brain | MR
Jösson et al. 2020 [99] | Scalar | Population | Brain | fMR+MR
Elbaz et al. 2014 [53] | Vector | Individual | Heart | PC-MR
Meuschke et al. 2016 [100] | Vector | Individual | Heart | PC-MR
Köhler et al. 2013 [52] | Vector | Individual | Heart | PC-MR
Born et al. 2013 [55] | Vector | Individual | Heart | PC-MR
van Pelt et al. 2010 [101] | Vector | Individual | Heart | PC-MR
van Pelt et al. 2011 [49] | Vector | Individual | Heart | PC-MR
Zhang et al. 2016 [102] | Tensor | Population | Brain | DT
Zhang et al. 2017 [103] | Tensor | Population | Brain | DT


Table 2: Visualization methods for treatment.

Reference | Data type | Scale | Location | Modalities
Rieder et al. 2008 [104] | Scalar | Individual | Brain | MR
Weiler et al. 2011 [105] | Scalar | Individual | Brain | MR
Khlebnikov et al. 2011 [106] | Scalar | Individual | Abdomen | CT
Beyer et al. 2007 [107] | Scalar | Individual | Brain | MR
Dick et al. 2011 [108] | Scalar | Individual | Bone | CT
Lundstrom et al. 2011 [109] | Scalar | Individual | Bone | CT
Smit et al. 2007 [110] | Scalar | Individual | Pelvic | MR
Butson et al. 2013 [111] | Scalar | Individual | Brain | MR
Vorwerk et al. 2020 [112] | Scalar | Individual | Brain | MR
Bock et al. 2013 [113] | Scalar | Individual | Brain | MR
Athwale et al. 2019 [114] | Scalar | Individual | Brain | MR
Blaas et al. 2007 [115] | Tensor | Individual | Brain | fMR+MR+DT
Born et al. 2009 [116] | Tensor | Individual | Brain | fMR+MR+DT
Diepenbrock et al. 2011 [117] | Tensor | Individual | Brain | fMR+MR+DT
Joshi et al. 2008 [118] | Tensor | Individual | Brain | fMR+MR+DT
Rieder et al. 2008 [119] | Tensor | Individual | Brain | fMR+MR+DT
Dick et al. 2009 [120] | Tensor | Individual | Bone | CT+sim

Note: sim: simulation.

Table 3: Visualization methods for prognosis.

Reference | Data type | Scale | Location | Modalities
Raidou et al. 2016 [127] | Scalar | Population | Prostate | MR
Karall et al. 2018 [128] | Scalar | Population | Breast | MR
Raidou et al. 2018 [129] | Scalar | Population | Bladder | CT
Furmanová et al. 2021 [130] | Scalar | Population | Prostate, bladder, rectum | CT

4.1. Diagnosis

Methods designed for diagnosis are summarized in Table 1. While the majority of these methods handle studies on the individual level, a few recent works support studies of populations.

4.1.1. Individual

Diagnosis for individuals often requires multimodal medical images to provide sufficient information. In oncology, positron emission tomography (PET) images that show physiological functions and CT images that represent anatomical structures are used jointly for the diagnosis of tumors. Multimodal visualization techniques are, therefore, required to analyze the combination of the two types of scans. Focusing only on potential PET anomaly regions with the CT anatomy as the context, i.e., focus-and-context visualization, is an effective visualization strategy for PET+CT images. An illustrative technique allows us to visualize the CT data as the context in cartoon style and the PET data as focus with a see-through lens that quickly draws the attention of medical experts [87]. There, the focus can be interactively manipulated and contents within focal regions are controlled by interactive transfer functions. Alternatively, a visibility-based transfer function for PET+CT data allows users to select regions of interest for further analysis [88]. Shape-encoded rendering combines shape analysis with volume rendering to highlight tubular and nodular structures [89]. The method aids the diagnosis of anomalies in CT scans of lungs or PET/CT scans for oncological practices.

In the field of cardiology diagnosis, perfusion data from MR or single photon emission computed tomography (SPECT) are used in conjunction with the regular MR or CT images that are of higher resolutions to indicate the underlying anatomy. The analysis of coronary artery disease is achieved by integrating the perfusion MR with the morphologic data from CT angiography (CTA) using visual analysis with multiple-linked views [91]. A well-accepted analysis tool in cardiac diagnosis, the bull’s eye plot is extended for rest and stress comparison and is used interactively to drive the 3D exploration with colored height fields, icons, and synchronized lenses. The bull’s eye plot is further extended to be continuous and volumetric to assess transmurality in a 3D anatomical context [90]. Diagnosis is achieved with visual analysis supported by a comprehensive visualization and interactive exploration with multiple-linked views, and several segmented volumes and enhanced MR images are used for the joint rendering. A multivariate glyph-based method enables the structured analysis of myocardial perfusion using 3D glyphs encoding parameters of the left ventricular myocardium [94]. By linking the 3D view with 2D slices, the method supports the analysis of normal cases, various types of ischemia, and heart failure. CTA and perfusion SPECT images are combined and jointly analyzed to diagnose coronary artery disease [93]. A study comparing the method to the traditional practice shows that the visualization method is advantageous in terms of diagnostic performance. Contrast-enhanced cardiac images, including perfusion images, whole-heart coronary angiography, and late enhanced images, are analyzed by aligning different datasets together and visualized as multiple isosurfaces [92].

Another important medical image visualization-based diagnosis approach is virtual endoscope visualization, e.g., virtual colonoscopy and gastroscopy, where the visualization of inner surfaces of tubular structures is the main focus. For example, an immersive virtual colonoscopy method supports the exploration of the colon within volume visualization in a virtual reality environment [96]. A hybrid technique that combines the inner surface rendering and volume rendering of colons is available [95]. For a full review of flattening visualization techniques, we refer readers to a survey elsewhere [7].

Extensive training is required for diagnosis with medical images. A visual analysis method enables comparative visualization of gaze data of several radiologists reading slices and volume renderings of medical images [97]. Because it is set in a real diagnostic environment, the method is useful for training radiologists. Ultrasound (US) images are frequently used in clinical practice, but effective diagnosis with such images requires extensive training. A joint 2D US and 3D CT image visualization method registers the 2D plane of the US image to segmented 3D structures of the CT to assist the learning of liver examinations [98].

The 4D phase-contrast MR (PC-MR) is a recent advancement in medical imaging that is designed for measuring time-varying flow fields in the body, which is specifically used for hemodynamics analysis. Among other features, the vortex is especially useful in the analysis and diagnosis of cardiac flow data. Vortex rings in the left ventricle are extracted and visualized in 3D to analyze inflow during early and late diastolic filling of normal subjects [53]. Quantitative parameters characterizing vortex flow for these phases are formulated for normal subjects. Aortic vortex flow is classified based on the orientation, shape, and temporal occurrence of the vortex for PC-MR data of healthy subjects and ones with cardiovascular diseases [100]. The classification results are visualized with 2D vortex plots and 3D glyph visualization.

The flow field in the heart and aorta is analyzed by semiautomatic segmentation with line predicates that extract vortices and visualized with arrows [52]. The most suitable cardiac blood flow vortex extraction criterion is found through comparison, investigating pathologies like coarctations, tetralogy of Fallot, and aneurysms. A visual analysis method provides flexible interactive exploration of cardiac blood flow using line predicates that generate bundles with similar flow characteristics [55]. The technique can be applied to healthy and pathological hearts and shows aspects of flow that cannot be seen with traditional methods.

4.1.2. Population

Diagnosis can benefit from studying health problems in a population, for example, with a cohort study, and by comparing different individuals. Traditionally, cohort studies with medical images rely on hypothesis formation and statistical analysis, but the visualization and exploration of the imaging data are ignored. A visual analysis method combines hypothesis formation and reasoning with interactive volume rendering of multivariate brain MRI and fMRI cohort study data [99]. With multiple-linked views, the method supports the exploration of the bidirectional correlations between the volume rendering and clinical parameters and the comparison of different patient groups.

Diagnosis can be potentially further improved by including tensor information of DTIs. However, visualization of DTIs in a population is challenging because, on top of the occlusion issue of spatially overlapping images, each voxel encodes complex information. As a first step, effective comparative visualization of two DTI images is required. A glyph-based technique visualizes three aspects of tensors, namely, the scale, the anisotropy type, and the orientation [102]. By showing the glyphs on 2D slices, the method is able to compare two DTI images. As an example, the brain DTI of a healthy subject is compared to that of an HIV-infected subject. An overview+detail visualization is devised for DTI image ensembles: aggregate tensor glyphs show an overview of the ensemble in the spatial layout, and visualizations of tensor properties (scale, shape, and orientation) are used for detailed analysis [103]. A case study demonstrates that the method is able to visualize and analyze a cohort DTI study of 46 subjects.

4.2. Treatment

Treatments aided by medical visualization are mainly surgical planning and therapeutic intervention planning. Therefore, treatment-related visualization methods are individual-based, as shown in Table 2. A thorough introduction to the various applications of visualization in surgical planning can be found elsewhere [121].

Neurosurgery preoperative planning is a major task for visualization techniques for brain imaging. Due to the complexity and importance of the brain, multimodal 3D medical images are used in neurosurgery to locate different anatomical structures. Heterogeneous pathological tissues are visualized with volume rendering by registering multimodal volumes, e.g., T1, T2, and FLAIR MRI, and automatically segmented mask volumes [104]. A slice-based interface that is familiar to medical experts is used to drive the visual exploration of multimodal brain images with direct manipulation on the 2D images with lassos [34]. Transfer functions are semiautomatically designed based on user-selected 2D regions, and then, brain tumors and edema can be visually segmented in 3D and visualized with volume rendering. Vascular structures in the brain are extracted and visualized with volume rendering to aid neurosurgery planning for arteriovenous malformations [105]. Here, feeding arteries, draining veins, and arteries “en passage” are segmented and visualized together with the brain rendered as the context. A high-quality multimodal scalar volume visualization method facilitates the actual planning of neurosurgeries [107]. The method supports specific operation planning, for example, the optimal skin incision and skull opening for the pathology and customized surgery of deep-seated lesions for a given patient; specialized visualization of superficial brain anatomy, function, and metabolism facilitates the planning.

Using DT images jointly with fMRI and regular MR scans could further improve the quality of surgical planning as fiber tracts and functional regions of the brain around the tumor could be analyzed. DTI, fMRI, and regular MRIs are combined and visualized with volume rendering and tube-based rendering for brain tumor resection planning [115]. Fiber bundles can be interactively selected so that those around the tumor could be avoided in the planning. The fMRI activation areas, functional areas of the brain, and fiber tracts connecting these areas are jointly visualized with illustrative rendering [116]. Interactive probing of fMRI, DTI, and MRIs within the brain visualization is proposed for neurosurgical planning [117]. Uncertainty of these images is also visualized in the method to provide additional information to the user. Interactions, especially cropping or cutting operations from the surface of the brain to the inside, are critical for neurosurgery planning. Volume clipping with complex geometries [122] is the foundation for such tailored cropping operations. Cropping views with different shapes, e.g., sphere, cube, and cylinder, are combined with an image-guided navigation system that visualizes MRI, fMRI, DTI, and SPECT for epilepsy neurosurgery [118]. Distance information is critical in preoperative planning, and DTI and fMRI provide such data for fiber tracts and functional regions. In a comprehensive method designed for neurosurgical planning, the tumor and neighboring fiber tracts are rendered as the focus with distance-based enhancements while the volume-rendered brain provides the context [119]. The planned path can be interactively manipulated and is visualized as a line and as a cylindrical cropping window on the brain.

In oncology surgery, multiple possible paths to a tumor may exist. However, the safety of these paths is not equal and has to be considered during planning. A ray-based method estimates the safety of all straight access paths to the tumor in volume rendering and provides the area and path safety information [106]. Evaluations show that the method is appreciated by medical experts and can be used in clinical practice with little overhead. Pelvic oncology surgery planning is aided by a visual analysis method based on preoperative MR scans [110]. The method is built on an atlas by registering the MRIs of a patient to visualize the context (organs around the tumor), the target (the tumor), and the risks (autonomic nerves) of the surgery. Distances between the nerves and the mesorectum and between the tumor and the mesorectum, which are critical to the surgery, are calculated as a distance field. A linked-view tool comprising a 3D model view, an MRI view, and a distance field-based unfolded view is implemented. Five medical experts evaluated the method and considered that it has potential in surgical planning and surgical training for oncologic surgeons.

Precise preoperative planning is also critical in orthopedics. A medical visualization table is available to visualize CT scans using volume rendering, with a user interface and interactions designed for low learning effort and similarity to the real working scenarios of surgeons [109]. A user study shows that the table system is well received by surgeons and potentially beneficial for planning. Hip joint replacement is an important surgery in orthopedics. The optimal implant positioning can be aided by an interactive distance field visualization technique that uses glyphs and slices in a 3D isosurface context to show distances between the implant and the bone boundaries [108]. Stress simulation is an effective method for the design and planning of the implant; however, the resulting stress tensor fields need special visualization methods, as most methods are designed for diffusion tensors. With volume rendering and line rendering, a focus-and-context method is proposed to visualize time-varying stress tensor fields generated by such simulations [120]. The method supports interactive exploration of the simulation, reacts to changes in the simulation, and can therefore be used to compare the physiological stress distribution before and after the simulated replacement surgery.

Deep brain stimulation (DBS) is an accepted neuromodulation therapy for treating the motor symptoms of Parkinson’s disease. The DBS device, which generates electrical stimulation to alter neural activity, comprises a multielectrode lead implanted in the brain and a connected subcutaneous implantable pulse generator. The accuracy of the multielectrode lead placement is the key to the effectiveness of the therapy. A mobile device-based visualization tool supports volume rendering and isosurface rendering to compare different settings of DBS and help healthcare providers choose the optimal configuration for a patient [111]. A further improvement uses a client-server approach to achieve efficient interactive visualization and simulation of DBS [112]. The usefulness of the method is demonstrated by a postoperative example and another example of pre- and intraoperative planning of DBS surgery. The precision of DBS electrode positioning is related to the uncertainty introduced by the resolution of brain imaging. The positional uncertainty of each electrode is quantified and visualized with uncertainty-aware volume and isosurface visualization techniques [114]. Multimodal volumes and their associated uncertainty are quantified and fused in a multiview visual analysis method to assist the planning of DBS electrodes [113]. This comprehensive method covers the planning, recording, and placement phases of the treatment and uses volume and geometry rendering, spatiotemporal visualization, and uncertainty visualization for corresponding elements in the procedure.

A relevant topic that requires high-quality visualization is the segmentation of 3D anatomy and lesions from medical images. Segmentation is often necessary for the diagnosis and treatment of individuals and, therefore, is a prerequisite for many of the aforementioned visualization techniques. Accurate and efficient segmentation of specific anatomical structures, for example, the heart [123, 124] and the prostate [125], has been an active and long-standing research area in medical imaging. A discussion of state-of-the-art segmentation techniques is beyond the scope of our paper and can be found elsewhere [126]. In Section 6, we list free visualization software dedicated to 3D segmentation.

4.3. Prognosis

In contrast to the case of treatment, the value of visualization for prognosis lies in its capability of comparison within a population to aid the evaluation of outcomes. Therefore, all techniques for prognosis summarized in Table 3 are population-based. A number of techniques focus on the evaluation of radiotherapy treatment. A visualization method is available for exploring and understanding tumor control probability models for cohorts [127]. By combining visualizations of medical images and statistical models, the method supports the exploration of uncertainty, parameter sensitivity analysis, the identification of interpatient response variability, and finding treatment strategies that result in the desired outcome.

A cohort study of patients who underwent breast cancer chemotherapy treatment is analyzed with a visualization method that shows different aspects of the study using multiple-linked views [128]. The method combines medical images and nonimage information in an interactive visualization tool that allows for analyzing individual patients, comparing different chemotherapy treatment strategies and comparing different patients.

Radiotherapy-induced bladder toxicity is analyzed with a visualization technique tailored to investigating individual patients and cohorts in the whole treatment process of a cohort study [129]. This method focuses on the analysis of the impact of shape variations on the accuracy of dose delivery by integrating the spatial visualization of bladders, dimensionality reduction and clustering, and dose distribution visualizations. The idea is further extended to visualize and analyze more organs that may impact the accuracy of dose delivery in radiotherapy treatment for prostate cancer [130], and its usefulness has been demonstrated through the exploration of cohort studies by health experts.

5. Future Directions and Limitations of Medical Image Visualization

Some visualization methods support exploring and analyzing medical image data from basic medical research and can potentially address unsolved health challenges in the future. In this section, we discuss some of these important future directions as well as the limitations of medical image visualization. In Table 4, we summarize medical image visualization techniques with potential health science applications. Here, we list the features of these methods as a reminder to readers of what is available in the visualization toolbox.


Table 4: Medical image visualization techniques with potential health science applications.

Reference | Scale | Location | Modalities | Features
Zachow et al. 2009 [131] | Individual | Nose | CT+sim | Fluid dynamics: speed, pressure, humidity, temperature
Gasteiger et al. 2012 [56] | Individual | Brain | CTA+sim | Fluid dynamics: inflow jet and impingement zone
Meuschke et al. 2019 [132] | Individual | Brain | CTA+sim | Fluid dynamics: vortex, blood flows
Rosen et al. 2016 [133] | Individual | Heart | MR+DT+sim | Bioelectric fields
Meuschke et al. 2017 [134] | Individual | Brain | Sim | Rupture risk of aneurysms, stress tensor
Zhou et al. 2021 [135] | Population | Brain, heart | MR+sim | Quantitative comparison of scalar medical images

Note: sim: simulation.

Simulations are an important approach in medical research to understand complex, invisible, and/or perpetual activities in human bodies. Nasal flow simulation is an important means for understanding physiological nasal breathing and improves on the traditional statistics-based summary of flow behaviors. A visual analysis method aids in the exploration of a computational fluid dynamics simulation on an anatomically correct model of the upper respiratory tract [131]. With multiple-linked views, the method allows users to analyze multiple attributes, e.g., speed, pressure, humidity, and temperature, of the complex flow simulation to derive intervention plans. Hemodynamic characteristics in cerebral aneurysms are studied with computational fluid dynamics simulations and segmented data from CTA scans [56]. The visual analysis aims to understand the inflow jet and impingement zone, which are correlated with the risk of rupture. A vortex classification method automatically classifies blood flows in cerebral aneurysms and visualizes the clusters as streamlines in an aneurysm-based and a hemisphere-based visualization [132].

Cardiac diseases are typically related to malfunctions in bioelectric fields in the body that are difficult to measure in vivo. Simulation of such bioelectric fields then becomes a feasible alternative, and, therefore, in-depth analysis of simulation results is important. Early work focuses on the visualization of torso electric field simulations [136]. Specialized for the 3D myocardial ischemia simulation, a multiple-linked view approach is devised to perform the visual analysis of the multiple simulation runs of bioelectric fields on the heart [133], which could also potentially be used for diagnosis. A systematic discussion of computational and numerical methods for bioelectric fields problems can be found in a review, where related visualization techniques are also discussed [44].

Comparative visualization is important for understanding an ensemble of simulation runs and for comparing between patients or different measures of medical images. Scalar image ensembles, for example, brain MR atlas data (https://www.oasis-brains.org/), are a common form of ensemble medical image data. Direct visualization of image members in 2D and 3D is not effective due to occlusion, and quantitative comparison is not feasible in this way either. One possible solution is to reduce the dimensionality of the image data to one (1D) with space-filling curves [135, 137, 138]. Data-driven space-filling curves [135] better preserve spatial coherency in the resulting 1D representation than static curves [137, 138], which potentially break spatially coherent features into distant 1D fragments. The method is applied for visualizing a brain MRI atlas and an ensemble of 3D myocardial ischemia simulations and could potentially be used for diagnosis or research by finding anomalies of subjects through comparison to the main trends. Comparative visualization of stress tensors is available for analyzing rupture risks of cerebral aneurysms based on computational fluid dynamics simulations [134]. With several glyph designs, the method supports the comparison of local stress tensors on the inner and outer vessel walls. Medical experts consider that this method introduces the often overlooked wall structure information for rupture assessment and could contribute to the development of a comprehensive risk factor of aneurysms in the future.
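To give a flavor of such linearization, the following minimal Python sketch reorders the voxels of a volume along a static Z-order (Morton) space-filling curve, a simple stand-in for the data-driven curves of the cited work; with a common ordering, the members of an image ensemble can be compared as 1D profiles.

```python
# Minimal sketch: linearize a 3D scalar volume along a Z-order (Morton) curve.
# Assumes volume dimensions fit within 2**bits along each axis.
import numpy as np

def morton_index(x: int, y: int, z: int, bits: int = 8) -> int:
    """Interleave the bits of the three integer coordinates into one code."""
    code = 0
    for b in range(bits):
        code |= ((x >> b) & 1) << (3 * b)
        code |= ((y >> b) & 1) << (3 * b + 1)
        code |= ((z >> b) & 1) << (3 * b + 2)
    return code

def volume_to_1d(volume: np.ndarray) -> np.ndarray:
    """Return the voxel values reordered along the Morton curve."""
    nx, ny, nz = volume.shape
    coords = [(x, y, z) for x in range(nx) for y in range(ny) for z in range(nz)]
    coords.sort(key=lambda c: morton_index(*c))
    return np.array([volume[c] for c in coords])
```

Two ensemble members linearized with the same curve can be compared voxel by voxel, for example, plotted side by side or subtracted as 1D signals.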

Medical image visualization has its limitations. First, customized techniques and tools are required for specific medical problems, which demands close collaboration between visualization and medical experts. Typically, an iterative process involving several prototypes is required before a method and its associated software tool become usable, which is often time-consuming. Second, a learning process is required for medical experts to familiarize themselves with new concepts or interactions, for example, transfer function design in volume rendering. Nevertheless, medical image visualization combines the expertise of humans with the computational power of machines, which is vital for health science, making it a promising research direction.
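To illustrate the transfer function concept mentioned above, the following minimal Python sketch uses the open-source Visualization Toolkit (VTK), on which some of the tools listed in the next section (e.g., ParaView and 3D Slicer) are built, to volume-render a scalar image with a simple one-dimensional transfer function. The file name and the intensity breakpoints are illustrative placeholders that would need to be adapted to the actual data.

import vtk

# Load a scalar volume stored as a VTK XML image file (placeholder file name).
reader = vtk.vtkXMLImageDataReader()
reader.SetFileName("ct_volume.vti")

# One-dimensional transfer function: map intensity to opacity ...
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(0, 0.0)       # low intensities (e.g., air): transparent
opacity.AddPoint(500, 0.15)    # medium intensities (e.g., soft tissue): semi-transparent
opacity.AddPoint(1200, 0.85)   # high intensities (e.g., bone): nearly opaque

# ... and to color.
color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(0, 0.0, 0.0, 0.0)
color.AddRGBPoint(500, 0.9, 0.5, 0.3)
color.AddRGBPoint(1200, 1.0, 1.0, 0.9)

prop = vtk.vtkVolumeProperty()
prop.SetScalarOpacity(opacity)
prop.SetColor(color)
prop.ShadeOn()                      # simple gradient-based shading
prop.SetInterpolationTypeToLinear()

mapper = vtk.vtkGPUVolumeRayCastMapper()  # GPU ray casting (direct volume rendering)
mapper.SetInputConnection(reader.GetOutputPort())

volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(prop)

renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()

Interactively adjusting the breakpoints of the opacity and color functions is, in essence, what transfer function design in volume rendering amounts to.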

6. Software Tools for Medical Image Visualization

Both commercial tools and free software are available for medical image visualization. In this paper, we list representative free software that can be found on the Internet and readily used for 3D medical images of various types and file formats. As shown in Table 5, we briefly summarize these tools by the data types they can handle (data types) and their distinguishing characteristics (features).


Name | Data types | Features

ParaView | Scalar, vector, tensor | Analysis, large datasets, parallel/super computing
Voreen | Scalar, vector | Rapid prototyping
Inviwo | Scalar, vector | Rapid prototyping
MegaMol | Scalar, vector | Particles, rapid prototyping
SCIRun | Scalar, vector, tensor | Modeling, simulation, analysis
3D Slicer | Scalar | AI, segmentation
Seg3D | Scalar | Segmentation
ImageVis3D | Scalar | Large datasets
FluoRenderer | Scalar | Confocal microscopy data

Visualization and analysis tools for general scientific problems and data provide flexible rapid prototyping frameworks. ParaView (https://www.paraview.org/) is a cross-platform open-source tool designed for interactive visualization and data analysis in a wide range of research and engineering areas, and various types of medical images (scalar, vector, and tensor) are supported [139]. Voreen (voreen.uni-muenster.de) is an open-source rapid application development framework for the interactive visualization and analysis of multimodal volumetric datasets [140]. Inviwo (https://inviwo.org/) is a framework for the rapid prototyping of visualizations and provides a rich visual interface for creating customized visualizations [141]. MegaMol (https://megamol.org/) is a comprehensive cross-platform visualization prototyping framework that evolved from particle rendering for molecular datasets [142, 143]. SCIRun (https://www.sci.utah.edu/software/scirun.html) is a software environment for scientific problem simulation, modeling, and visualization, and it supports various types of medical images.
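For readers who prefer scripting over graphical interaction, ParaView also exposes a Python interface (paraview.simple) that can be run with its pvpython interpreter. The following sketch is a minimal illustration under the assumption that "ct_head.vti" (a placeholder file name) is a scalar volume in a format ParaView can read; it loads the volume and switches its display to direct volume rendering.

# Run with ParaView's pvpython interpreter.
from paraview.simple import OpenDataFile, Show, Render, GetActiveViewOrCreate

view = GetActiveViewOrCreate('RenderView')   # 3D render view
reader = OpenDataFile('ct_head.vti')         # placeholder scalar volume
display = Show(reader, view)                 # create a displayable representation
display.SetRepresentationType('Volume')      # switch from surface to volume rendering
Render(view)                                 # draw the scene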

A number of tools are available for scalar medical image visualization and analysis. 3D Slicer (https://download.slicer.org/) is a tool for the visualization and analysis of medical images and features interfaces to medical devices, for example, surgical navigation systems and robotic devices [144]. Seg3D (https://www.sci.utah.edu/software/seg3d.html) is a medical volume segmentation and processing tool that allows for flexible manual segmentation and offers a number of automatic segmentation algorithms. ImageVis3D (https://www.sci.utah.edu/software/imagevis3d.html) is a scalable, multiplatform volume visualization tool that supports large datasets and works on mobile devices [145]. FluoRenderer (https://www.sci.utah.edu/software/fluorender.html) features the visualization of confocal microscopy data and multichannel scalar volume datasets.

These software tools come with example datasets, tutorials, detailed user guides, and supportive communities. Readers are encouraged to try out these tools with the included datasets and their own medical images to gain first-hand experience of some of the key techniques discussed in this paper.

7. Conclusion

In this paper, we have provided an overview of 3D medical image visualization techniques. Starting from a classification of medical images in terms of data characteristics, i.e., the mathematical properties of field data, we review fundamental visualization techniques for scalar, vector, and tensor medical images. The discussion covers seminal work that lays the foundations for each type of visualization and features techniques that allow for accurate spatial perception, which is important in medical practice. Next, specialized medical visualization techniques are categorized based on the supported data type, medical procedures, the scale of the concerned medical problems, the locations of the studied problems, and the modalities of the images. We describe the medical as well as the technical aspects of each technique to help medical experts choose proper techniques for their specific problems. Then, we discuss works with potential health science applications and the limitations of medical image visualization. Finally, state-of-the-art free visualization software is listed so that medical experts can gain first-hand experience of some of the aforementioned techniques and experiment with their own data to gain insights into visualization.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors’ Contributions

L. Zhou provided the overall supervision of the work, collected papers for review, and drafted and edited the manuscript. MJ. Fan provided clinical support as the medical expert and contributed to the structuring of the manuscript and the collection and summarizing of papers. C. Hansen and C. R. Johnson contributed to the editing of the manuscript. D. Weiskopf contributed to the structuring of the manuscript and critically edited the manuscript. Liang Zhou and Mengjie Fan contributed equally to this work.

Acknowledgments

This research was supported by the Data for Better Health Project of Peking University-Master Kong and by NIH (R01 EB031872).

References

  1. D. Keim, “Information visualization and visual data mining,” IEEE Transactions on Visualization and Computer Graphics, vol. 8, no. 1, pp. 1–8, 2002. View at: Publisher Site | Google Scholar
  2. D. Streeb, M. El-Assady, D. A. Keim, and M. Chen, “Why visualize? Arguments for visual support in decision making,” IEEE Computer Graphics and Applications, vol. 41, no. 2, pp. 17–22, 2021. View at: Publisher Site | Google Scholar
  3. B. Preim and D. Bartz, Visualization in Medicine, Morgan Kaufmann, Burlington, 2007. View at: Publisher Site
  4. B. Preim, A. Baer, D. Cunningham, T. Isenberg, and T. Ropinski, “A survey of perceptually motivated 3D visualization of medical image data,” Computer Graphics Forum, vol. 35, no. 3, pp. 501–525, 2016. View at: Publisher Site | Google Scholar
  5. B. Kohler, S. Born, R. F. P. van Pelt, A. Hennemuth, U. Preim, and B. Preim, “A survey of cardiac 4D PC-MRI data processing,” Computer Graphics Forum, vol. 36, no. 6, pp. 5–35, 2017. View at: Publisher Site | Google Scholar
  6. K. Lawonn, N. Smit, K. Buhler, and B. Preim, “A survey on multimodal medical data visualization,” Computer Graphics Forum, vol. 37, no. 1, pp. 413–438, 2018. View at: Publisher Site | Google Scholar
  7. J. Kreiser, M. Meuschke, G. Mistelbauer, B. Preim, and T. Ropinski, “A survey of flattening-based medical visualization techniques,” Computer Graphics Forum, vol. 37, no. 3, pp. 597–624, 2018. View at: Publisher Site | Google Scholar
  8. S. Oeltze-Jafra, M. Meuschke, M. Neugebauer et al., “Generation and visual exploration of medical flow data: survey, research trends and future challenges,” Computer Graphics Forum, vol. 38, no. 1, pp. 87–125, 2019. View at: Publisher Site | Google Scholar
  9. C. R. Johnson and X. Tricoche, “Biomedical visualization,” in Advances in Biomedical Engineering, P. Verdonck, Ed., pp. 211–273, Elsevier, Amsterdam, Netherlands, 2009. View at: Publisher Site | Google Scholar
  10. L. Zhou, M. Schott, and C. Hansen, “Transfer function combinations,” Computers & Graphics, vol. 36, no. 6, pp. 596–606, 2012. View at: Publisher Site | Google Scholar
  11. Y. T. Weldeselassie, G. Hamarneh, and D. Weiskopf, “Tensor dissimilarity based adaptive seeding algorithm for DT-MRI visualization with streamtubes,” in Medical Imaging 2007: Visualization and Image-Guided Procedures, K. R. Cleary and M. I. Miga, Eds., vol. 6509 of International Society for Optics and Photonics, pp. 900–908, SPIE, 2007. View at: Publisher Site | Google Scholar
  12. W. E. Lorensen and H. E. Cline, “Marching cubes: a high resolution 3D surface construction algorithm,” SIGGRAPH Computer Graphics, vol. 21, no. 4, pp. 163–169, 1987. View at: Publisher Site | Google Scholar
  13. M. Ament, D. Weiskopf, and H. Carr, “Direct interval volume visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 16, no. 6, pp. 1505–1514, 2010. View at: Publisher Site | Google Scholar
  14. K. Engel, M. Hadwiger, J. M. Kniss, A. E. Lefohn, C. R. Salama, and D. Weiskopf, “Real-time volume graphics,” in ACM SIGGRAPH 2004 Course Notes, ser. SIGGRAPH '04, p. 29, Association for Computing Machinery, Los Angeles, CA, USA, 2004. View at: Publisher Site | Google Scholar
  15. M. Hadwiger, J. M. Kniss, C. Rezk-salama, D. Weiskopf, and K. Engel, Real-Time Volume Graphics, A. K. Peters, Ltd., 2006.
  16. J. T. Kajiya and B. P. Von Herzen, “Ray tracing volume densities,” SIGGRAPH Computer Graphics, vol. 18, no. 3, pp. 165–174, 1984. View at: Publisher Site | Google Scholar
  17. M. Levoy, “Display of surfaces from volume data,” IEEE Computer Graphics and Applications, vol. 8, no. 3, pp. 29–37, 1988. View at: Publisher Site | Google Scholar
  18. R. A. Drebin, L. Carpenter, and P. Hanrahan, “Volume rendering,” SIGGRAPH Computer Graphics, vol. 22, no. 4, pp. 65–74, 1988. View at: Publisher Site | Google Scholar
  19. N. Max, “Optical models for direct volume rendering,” IEEE Transactions on Visualization and Computer Graphics, vol. 1, no. 2, pp. 99–108, 1995. View at: Publisher Site | Google Scholar
  20. B. Cabral, N. Cam, and J. Foran, “Accelerated volume rendering and tomographic reconstruction using texture mapping hardware,” in Proceedings of the 1994 Symposium on Volume Visualization, pp. 91–98, Tysons Corner, VA, USA, 1994. View at: Publisher Site | Google Scholar
  21. S. Roettger, S. Guthe, D. Weiskopf, T. Ertl, and W. Strasser, “Smart hardware-accelerated volume rendering,” in Eurographics/IEEE VGTC Symposium on Visualization, G.-P. Bonneau, S. Hahmann, and C. D. Hansen, Eds., pp. 231–238, The Eurographics Association, 2003. View at: Publisher Site | Google Scholar
  22. J. Kniss, S. Premoze, C. Hansen, P. Shirley, and A. McPherson, “A model for volume lighting and modeling,” IEEE Transactions on Visualization and Computer Graphics, vol. 9, no. 2, pp. 150–162, 2003. View at: Publisher Site | Google Scholar
  23. J. Kruger and R. Westermann, “Acceleration techniques for GPU-based volume rendering,” in IEEE Visualization Conference 2003, pp. 287–292, Seattle, USA, 2003. View at: Publisher Site | Google Scholar
  24. M. Moser and D. Weiskopf, “Interactive volume rendering on mobile devices,” in Vision, Modeling, and Visualization VMV '08 Conference Proceedings, pp. 217–226, Konstanz, Germany, 2008. View at: Google Scholar
  25. P. Ljung, J. Krüger, E. Groller, M. Hadwiger, C. D. Hansen, and A. Ynnerman, “State of the art in transfer functions for direct volume rendering,” Computer Graphics Forum, vol. 35, no. 3, pp. 669–691, 2016. View at: Publisher Site | Google Scholar
  26. K. Engel, M. Kraus, and T. Ertl, “High-quality pre-integrated volume rendering using hardware-accelerated pixel shading,” in Proceedings of the ACM SIGGRAPH/EUROGRAPHICS Workshop on Graphics Hardware, pp. 9–16, Los Angeles, CA, USA, 2001. View at: Publisher Site | Google Scholar
  27. G. Kindlmann and J. Durkin, “Semi-automatic generation of transfer functions for direct volume rendering,” in IEEE Symposium on Volume Visualization, pp. 79–86, Research Triangle Park, NC, USA, 1998. View at: Publisher Site | Google Scholar
  28. J. Kniss, G. Kindlmann, and C. Hansen, “Multidimensional transfer functions for interactive volume rendering,” IEEE Transactions on Visualization and Computer Graphics, vol. 8, no. 3, pp. 270–285, 2002. View at: Publisher Site | Google Scholar
  29. H. Akiba and K.-L. Ma, “A tri-space visualization interface for analyzing time-varying multivariate volume data,” in Eurographics/IEEE-VGTC Symposium on Visualization, K. Museth, T. Moeller, and A. Ynnerman, Eds., pp. 115–122, The Eurographics Association, 2007. View at: Publisher Site | Google Scholar
  30. X. Zhao and A. Kaufman, “Multi-dimensional reduction and transfer function design using parallel coordinates,” in IEEE/EG Symposium on Volume Graphics, R. Westermann and G. Kindlmann, Eds., pp. 69–76, The Eurographics Association, 2010. View at: Publisher Site | Google Scholar
  31. H. Guo, H. Xiao, and X. Yuan, “Scalable multivariate volume visualization and analysis based on dimension projection and parallel coordinates,” IEEE Transactions on Visualization and Computer Graphics, vol. 18, no. 9, pp. 1397–1410, 2012. View at: Publisher Site | Google Scholar
  32. F.-Y. Tzeng, E. Lum, and K.-L. Ma, “An intelligent system approach to higher-dimensional classification of volume data,” IEEE Transactions on Visualization and Computer Graphics, vol. 11, no. 3, pp. 273–284, 2005. View at: Publisher Site | Google Scholar
  33. L. Zhou and C. Hansen, “Transfer function design based on user selected samples for intuitive multivariate volume exploration,” in 2013 IEEE Pacific Visualization Symposium, pp. 73–80, Sydney, NSW, Australia, 2013. View at: Publisher Site | Google Scholar
  34. L. Zhou and C. Hansen, “GuideME: slice-guided semiautomatic multivariate exploration of volumes,” Computer Graphics Forum, vol. 33, no. 3, pp. 151–160, 2014. View at: Publisher Site | Google Scholar
  35. M. Schott, V. Pegoraro, C. Hansen, K. Boulanger, and K. Bouatouch, “A directional occlusion shading model for interactive direct volume rendering,” Computer Graphics Forum, vol. 28, no. 3, pp. 855–862, 2009. View at: Publisher Site | Google Scholar
  36. M. Ament, F. Sadlo, C. Dachsbacher, and D. Weiskopf, “Low-pass filtered volumetric shadows,” IEEE Transactions on Visualization and Computer Graphics, vol. 20, no. 12, pp. 2437–2446, 2014. View at: Publisher Site | Google Scholar
  37. M. Ament, C. Bergmann, and D. Weiskopf, “Refractive radiative transfer equation,” ACM Transactions on Graphics, vol. 33, no. 2, 2014. View at: Publisher Site | Google Scholar
  38. T. Kroes, F. H. Post, and C. P. Botha, “Exposure render: an interactive photo-realistic volume rendering framework,” PLoS One, vol. 7, no. 7, 2012. View at: Publisher Site | Google Scholar
  39. T. Ropinski, C. Döring, and C. Rezk-Salama, “Interactive volumetric lighting simulating scattering and shadowing,” in 2010 IEEE Pacific Visualization Symposium (PacificVis), pp. 169–176, Taipei, Taiwan, 2010. View at: Publisher Site | Google Scholar
  40. M. Ament, F. Sadlo, and D. Weiskopf, “Ambient volume scattering,” IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 12, pp. 2936–2945, 2013. View at: Publisher Site | Google Scholar
  41. G.-S. Li, X. Tricoche, D. Weiskopf, and C. D. Hansen, “Flow charts: visualization of vector fields on arbitrary surfaces,” IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 5, pp. 1067–1080, 2008. View at: Publisher Site | Google Scholar
  42. T. Schafhitzel, F. Rößler, D. Weiskopf, and T. Ertl, “Simultaneous visualization of anatomical and functional 3D data by combining volume rendering and flow visualization,” in Medical Imaging 2007: Visualization and Image-Guided Procedures, K. R. Cleary and M. I. Miga, Eds., vol. 6509 of International Society for Optics and Photonics, pp. 21–29, SPIE, 2007. View at: Publisher Site | Google Scholar
  43. R. S. Laramee, H. Hauser, H. Doleisch, B. Vrolijk, F. H. Post, and D. Weiskopf, “The state of the art in flow visualization: dense and texture-based techniques,” Computer Graphics Forum, vol. 23, no. 2, pp. 203–221, 2004. View at: Publisher Site | Google Scholar
  44. C. R. Johnson, “Computational and numerical methods for bioelectric field problems,” Biomedical Engineering, vol. 25, no. 1, pp. 1–81, 1997. View at: Publisher Site | Google Scholar
  45. A. Vilanova, B. Preim, R. V. Pelt, R. Gasteiger, M. Neugebauer, and T. Wischgoll, “Visual exploration of simulated and measured blood flow,” in Scientific Visualization: Uncertainty, Multifield, Biomedical, and Scalable Visualization, C. D. Hansen, M. Chen, C. R. Johnson, A. E. Kaufman, and H. Hagen, Eds., pp. 305–324, Springer, London, UK, 2014. View at: Publisher Site | Google Scholar
  46. D. Laidlaw, R. Kirby, C. Jackson et al., “Comparing 2D vector field visualization methods: a user study,” IEEE Transactions on Visualization and Computer Graphics, vol. 11, no. 1, pp. 59–70, 2005. View at: Publisher Site | Google Scholar
  47. M. H. Buonocore, “Visualizing blood flow patterns using streamlines, arrows, and particle paths,” Magnetic Resonance in Medicine, vol. 40, no. 2, pp. 210–226, 1998. View at: Publisher Site | Google Scholar
  48. L. Wigström, T. Ebbers, A. Fyrenius et al., “Particle trace visualization of intracardiac flow using time-resolved 3D phase contrast MRI,” Magnetic Resonance in Medicine, vol. 41, no. 4, pp. 793–799, 1999. View at: Publisher Site | Google Scholar
  49. R. van Pelt, J. Olivan Bescos, M. Breeuwer et al., “Interactive virtual probing of 4D MRI blood-flow,” IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 12, pp. 2153–2162, 2011. View at: Publisher Site | Google Scholar
  50. B. Cabral and L. C. Leedom, “Imaging vector fields using line integral convolution,” in Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH'93, pp. 263–270, Anaheim, CA, USA, 1993. View at: Publisher Site | Google Scholar
  51. D. Weiskopf and T. Ertl, “A hybrid physical/device-space approach for spatio-temporally coherent interactive texture advection on curved surfaces,” in Proceedings of Graphics Interface 2004, pp. 263–270, London, Ontario, Canada, 2004. View at: Google Scholar
  52. B. Köhler, R. Gasteiger, U. Preim, H. Theisel, M. Gutberlet, and B. Preim, “Semi-automatic vortex extraction in 4D PC-MRI cardiac blood flow data using line predicates,” IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 12, pp. 2773–2782, 2013. View at: Publisher Site | Google Scholar
  53. M. S. M. Elbaz, E. E. Calkoen, J. J. M. Westenberg, B. P. F. Lelieveldt, A. A. W. Roest, and R. J. van der Geest, “Vortex flow during early and late left ventricular filling in normal subjects: quantitative characterization using retrospectively-gated 4D flow cardiovascular magnetic resonance and three-dimensional vortex core analysis,” Journal of Cardiovascular Magnetic Resonance, vol. 16, no. 1, p. 78, 2014. View at: Publisher Site | Google Scholar
  54. T. Salzbrunn, C. Garth, G. Scheuermann, and J. Meyer, “Pathline predicates and unsteady flow structures,” The Visual Computer, vol. 24, no. 12, pp. 1039–1051, 2008. View at: Publisher Site | Google Scholar
  55. S. Born, M. Pfeifle, M. Markl, M. Gutberlet, and G. Scheuermann, “Visual analysis of cardiac 4D MRI blood flow using line predicates,” IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 6, pp. 900–912, 2013. View at: Publisher Site | Google Scholar
  56. R. Gasteiger, D. J. Lehmann, R. van Pelt et al., “Automatic detection and visualization of qualitative hemodynamic characteristics in cerebral aneurysms,” IEEE Transactions on Visualization and Computer Graphics, vol. 18, no. 12, pp. 2178–2187, 2012. View at: Publisher Site | Google Scholar
  57. M. Zockler, D. Stalling, and H.-C. Hege, “Interactive visualization of 3D-vector fields using illuminated stream lines,” in Proceedings of IEEE Visualization Conference, pp. 107–113, San Francisco, CA, USA, 1996. View at: Publisher Site | Google Scholar
  58. V. Interrante and C. Grosch, “Strategies for effectively visualizing 3D flow with volume LIC,” in Proceedings of IEEE Visualization Conference, pp. 421–424, Phoenix, AZ, USA, 1997. View at: Google Scholar
  59. M. H. Everts, H. Bekker, J. B. Roerdink, and T. Isenberg, “Depth-dependent halos: illustrative rendering of dense line data,” IEEE Transactions on Visualization and Computer Graphics, vol. 15, no. 6, pp. 1299–1306, 2009. View at: Publisher Site | Google Scholar
  60. C. Weigle and D. Banks, “A comparison of the perceptual benefits of linear perspective and physically-based illumination for display of dense 3D streamtubes,” IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 6, pp. 1723–1730, 2008. View at: Publisher Site | Google Scholar
  61. M. Schott, T. Martin, A. P. Grosset, S. T. Smith, and C. D. Hansen, “Ambient occlusion effects for combined volumes and tubular geometry,” IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 6, pp. 913–926, 2013. View at: Publisher Site | Google Scholar
  62. M. Han, I. Wald, W. Usher et al., “Ray tracing generalized tube primitives: method and applications,” Computer Graphics Forum, vol. 38, no. 3, pp. 467–478, 2019. View at: Publisher Site | Google Scholar
  63. F. Jiao, J. M. Phillips, Y. Gur, and C. R. Johnson, “Uncertainty visualization in HARDI based on ensembles of ODFs,” in 2012 IEEE Pacific Visualization Symposium, pp. 193–200, Songdo, Republic of Korea, 2012. View at: Publisher Site | Google Scholar
  64. M. Hlawatsch, J. E. Vollrath, F. Sadlo, and D. Weiskopf, “Coherent structures of characteristic curves in symmetric second order tensor fields,” IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 6, pp. 781–794, 2011. View at: Publisher Site | Google Scholar
  65. P. Basser, J. Mattiello, and D. Lebihan, “Estimation of the effective self-diffusion tensor from the NMR spin echo,” Journal of Magnetic Resonance, Series B, vol. 103, no. 3, pp. 247–254, 1994. View at: Publisher Site | Google Scholar
  66. P. J. Basser and C. Pierpaoli, “Microstructural and physiological features of tissues elucidated by quantitative-diffusion-tensor MRI,” Journal of Magnetic Resonance, Series B, vol. 111, no. 3, pp. 209–219, 1996. View at: Publisher Site | Google Scholar
  67. S. M. Smith, M. Jenkinson, H. Johansen-Berg et al., “Tract-based spatial statistics: voxelwise analysis of multi-subject diffusion data,” NeuroImage, vol. 31, no. 4, pp. 1487–1505, 2006. View at: Publisher Site | Google Scholar
  68. G. Kindlmann, D. Weinstein, and D. Hart, “Strategies for direct volume rendering of diffusion tensor fields,” IEEE Transactions on Visualization and Computer Graphics, vol. 6, no. 2, pp. 124–138, 2000. View at: Publisher Site | Google Scholar
  69. C. Pierpaoli and P. J. Basser, “Toward a quantitative assessment of diffusion anisotropy,” Magnetic Resonance in Medicine, vol. 36, no. 6, pp. 893–906, 1996. View at: Publisher Site | Google Scholar
  70. D. Laidlaw, E. Ahrens, D. Kremers, M. Avalos, R. Jacobs, and C. Readhead, “Visualizing diffusion tensor images of the mouse spinal cord,” in Proceedings of IEEE Visualization Conference, pp. 127–134, Research Triangle Park, NC, USA, 1998. View at: Publisher Site | Google Scholar
  71. C. F. Westin, S. E. Maier, B. Khidhir, P. Everett, F. A. Jolesz, and R. Kikinis, “Image processing for diffusion tensor magnetic resonance imaging,” in Medical Image Computing and Computer-Assisted Intervention– MICCAI'99, C. Taylor and A. Colchester, Eds., pp. 441–452, Springer, Berlin, Heidelberg, 1999. View at: Google Scholar
  72. G. Kindlmann, “Superquadric tensor glyphs,” in Eurographics/IEEE VGTC Symposium on Visualization, O. Deussen, C. Hansen, D. Keim, and D. Saupe, Eds., pp. 147–154, The Eurographics Association, 2004. View at: Publisher Site | Google Scholar
  73. T. Schultz and G. L. Kindlmann, “Superquadric glyphs for symmetric second-order tensors,” IEEE Transactions on Visualization and Computer Graphics, vol. 16, no. 6, pp. 1595–1604, 2010. View at: Publisher Site | Google Scholar
  74. G. Kindlmann and C.-f. Westin, “Diffusion tensor visualization with glyph packing,” IEEE Transactions on Visualization and Computer Graphics, vol. 12, no. 5, pp. 1329–1336, 2006. View at: Publisher Site | Google Scholar
  75. S. Mori and P. C. M. van Zijl, “Fiber tracking: principles and strategies – a technical review,” NMR in Biomedicine, vol. 15, no. 7-8, pp. 468–480, 2002. View at: Publisher Site | Google Scholar
  76. L. Zhukov and A. Barr, “Oriented tensor reconstruction: tracing neural pathways from diffusion tensor MRI,” in Proceedings of IEEE Visualization Conference, pp. 387–394, Boston, MA, USA, 2002. View at: Publisher Site | Google Scholar
  77. S. Zhang, C. Demiralp, and D. Laidlaw, “Visualizing diffusion tensor MR images using streamtubes and streamsurfaces,” IEEE Transactions on Visualization and Computer Graphics, vol. 9, no. 4, pp. 454–462, 2003. View at: Publisher Site | Google Scholar
  78. V. Petrovic, J. Fallon, and F. Kuester, “Visualizing whole-brain DTI tractography with GPU-based tuboids and LoD management,” IEEE Transactions on Visualization and Computer Graphics, vol. 13, no. 6, pp. 1488–1495, 2007. View at: Publisher Site | Google Scholar
  79. D. Merhof, M. Sonntag, F. Enders, C. Nimsky, P. Hastreiter, and G. Greiner, “Hybrid visualization for white matter tracts using triangle strips and point sprites,” IEEE Transactions on Visualization and Computer Graphics, vol. 12, no. 5, pp. 1181–1188, 2006. View at: Publisher Site | Google Scholar
  80. T. Schultz, N. Sauber, A. Anwander, H. Theisel, and H.-P. Seidel, “Virtual Klingler dissection: putting fibers into context,” Computer Graphics Forum, vol. 27, no. 3, pp. 1063–1070, 2008. View at: Publisher Site | Google Scholar
  81. S. Zhang, S. Correia, and D. H. Laidlaw, “Identifying white-matter fiber bundles in DTI data using an automated proximity-based fiber-clustering method,” IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 5, pp. 1044–1053, 2008. View at: Publisher Site | Google Scholar
  82. G. Kindlmann, X. Tricoche, and C.-F. Westin, “Delineating white matter structure in diffusion tensor MRI with anisotropy creases,” Medical Image Analysis, vol. 11, no. 5, pp. 492–502, 2007. View at: Publisher Site | Google Scholar
  83. M. H. Everts, E. Begue, H. Bekker, J. B. T. M. Roerdink, and T. Isenberg, “Exploration of the brain's white matter structure through visual abstraction and multi-scale local fiber tract contraction,” IEEE Transactions on Visualization and Computer Graphics, vol. 21, no. 7, pp. 808–821, 2015. View at: Publisher Site | Google Scholar
  84. T. Isenberg, “A survey of illustrative visualization techniques for diffusion-weighted MRI tractography,” in Visualization and Processing of Higher Order Descriptors for Multi-Valued Data, I. Hotz and T. Schultz, Eds., pp. 235–256, Springer International Publishing, Cham, 2015. View at: Google Scholar
  85. S. Eichelbaum, M. Hlawitschka, and G. Scheuermann, “LineAO—improved three-dimensional line rendering,” IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 3, pp. 433–445, 2013. View at: Publisher Site | Google Scholar
  86. P. Svetachov, M. H. Everts, and T. Isenberg, “DTI in context: illustrating brain fiber tracts in situ,” Computer Graphics Forum, vol. 29, no. 3, pp. 1023–1032, 2010. View at: Publisher Site | Google Scholar
  87. K. Lawonn, S. Glaßer, A. Vilanova, B. Preim, and T. Isenberg, “Occlusion-free blood flow animation with wall thickness visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 1, pp. 728–737, 2016. View at: Publisher Site | Google Scholar
  88. Y. Jung, J. Kim, S. Eberl, M. Fulham, and D. D. Feng, “Visibility-driven PET-CT visualisation with region of interest (ROI) segmentation,” The Visual Computer, vol. 29, no. 6, pp. 805–815, 2013. View at: Publisher Site | Google Scholar
  89. R. Wiemker, T. Klinder, M. Bergtholdt, K. Meetz, I. C. Carlsen, and T. Bulow, “A radial structure tensor and its use for shape-encoding medical visualization of tubular and nodular structures,” IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 3, pp. 353–366, 2013. View at: Publisher Site | Google Scholar
  90. M. Termeer, J. Olivan Bescos, M. Breeuwer, A. Vilanova, and F. Gerritsen, “CoViCAD: comprehensive visualization of coronary artery disease,” IEEE Transactions on Visualization and Computer Graphics, vol. 13, no. 6, pp. 1632–1639, 2007. View at: Publisher Site | Google Scholar
  91. S. Oeltze, A. Ku, F. Grothues, A. Hennemuth, and B. Preim, “Integrated visualization of morphologic and perfusion data for the analysis of coronary artery disease,” in Proceedings of the Eighth Joint Eurographics/IEEE VGTC Conference on Visualization, pp. 131–138, Lisbon, Portugal, 2006. View at: Google Scholar
  92. A. Hennemuth, A. Seeger, O. Friman et al., “A comprehensive approach to the analysis of contrast enhanced cardiac MR images,” IEEE Transactions on Medical Imaging, vol. 27, no. 11, pp. 1592–1610, 2008. View at: Publisher Site | Google Scholar
  93. H. A. Kirişli, V. Gupta, R. Shahzad et al., “Additional diagnostic value of integrated analysis of cardiac CTA and SPECT MPI using the SMARTVis system in patients with suspected coronary artery disease,” Journal of Nuclear Medicine, vol. 55, no. 1, pp. 50–57, 2014. View at: Publisher Site | Google Scholar
  94. J. Meyer-Spradow, L. Stegger, C. Doring, T. Ropinski, and K. Hinrichs, “Glyph-based SPECT visualization for the diagnosis of coronary artery disease,” IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 6, pp. 1499–1506, 2008. View at: Publisher Site | Google Scholar
  95. D. Williams, S. Grimm, E. Coto, A. Roudsari, and H. Hatzakis, “Volumetric curved planar reformation for virtual endoscopy,” IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 1, pp. 109–119, 2008. View at: Publisher Site | Google Scholar
  96. S. Mirhosseini, I. Gutenko, S. Ojal, J. Marino, and A. Kaufman, “Immersive virtual colonoscopy,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 5, pp. 2011–2021, 2019. View at: Publisher Site | Google Scholar
  97. H. Song, J. Lee, T. J. Kim, K. H. Lee, B. Kim, and J. Seo, “GazeDx: interactive visual analytics framework for comparative gaze analysis with volumetric medical images,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, pp. 311–320, 2017. View at: Publisher Site | Google Scholar
  98. I. Viola, K. Nylund, O. K. Øye, D. M. Ulvang, O. H. Gilja, and H. Hauser, “Illustrated ultrasound for multimodal data interpretation of liver examinations,” in Eurographics Workshop on Visual Computing for Biomedicine, C. Botha, G. Kindlmann, W. Niessen, and B. Preim, Eds., The Eurographics Association, 2008. View at: Publisher Site | Google Scholar
  99. D. Jönsson, A. Bergström, C. Forsell et al., “VisualNeuro: a hypothesis formation and reasoning application for multi-variate brain cohort study data,” Computer Graphics Forum, vol. 39, no. 6, pp. 392–407, 2020. View at: Publisher Site | Google Scholar
  100. M. Meuschke, B. Köhler, U. Preim, B. Preim, and K. Lawonn, “Semi-automatic vortex flow classification in 4D PC-MRI data of the aorta,” Computer Graphics Forum, vol. 35, no. 3, pp. 351–360, 2016. View at: Publisher Site | Google Scholar
  101. R. Van Pelt, J. O. Bescós, M. Breeuwer et al., “Exploration of 4D MRI blood flow using stylistic visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 16, no. 6, pp. 1339–1347, 2010. View at: Publisher Site | Google Scholar
  102. C. Zhang, T. Schultz, K. Lawonn, E. Eisemann, and A. Vilanova, “Glyph-based comparative visualization for diffusion tensor fields,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 1, pp. 797–806, 2016. View at: Publisher Site | Google Scholar
  103. C. Zhang, M. Caan, T. Höllt, E. Eisemann, and A. Vilanova, “Overview + detail visualization for ensembles of diffusion tensors,” Computer Graphics Forum, vol. 36, no. 3, pp. 121–132, 2017. View at: Publisher Site | Google Scholar
  104. C. Rieder, M. Schwier, H. K. Hahn, and H.-O. Peitgen, “High-quality multimodal volume visualization of intracerebral pathological tissue,” in Proceedings of the First Eurographics Conference on Visual Computing for Biomedicine, pp. 167–176, Delft, The Netherlands, 2008. View at: Google Scholar
  105. F. Weiler, C. Rieder, C. A. David, C. Wald, and H. K. Hahn, “AVM-explorer: multi-volume visualization of vascular structures for planning of cerebral AVM surgery,” in Eurographics 2011 - Dirk Bartz Prize, K. Buehler and A. Vilanova, Eds., pp. 9–12, The Eurographics Association, 2011. View at: Publisher Site | Google Scholar
  106. R. Khlebnikov, B. Kainz, J. Muehl, and D. Schmalstieg, “Crepuscular rays for tumor accessibility planning,” IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 12, pp. 2163–2172, 2011. View at: Publisher Site | Google Scholar
  107. J. Beyer, M. Hadwiger, S. Wolfsberger, and K. Bühler, “High-quality multimodal volume rendering for preoperative planning of neurosurgical interventions,” IEEE Transactions on Visualization and Computer Graphics, vol. 13, no. 6, pp. 1696–1703, 2007. View at: Publisher Site | Google Scholar
  108. C. Dick, R. Burgkart, and R. Westermann, “Distance visualization for interactive 3D implant planning,” IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 12, pp. 2173–2182, 2011. View at: Publisher Site | Google Scholar
  109. C. Lundstrom, T. Rydell, C. Forsell, A. Persson, and A. Ynnerman, “Multi-touch table system for medical visualization: application to orthopedic surgery planning,” IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 12, pp. 1775–1784, 2011. View at: Publisher Site | Google Scholar
  110. N. Smit, K. Lawonn, A. Kraima et al., “PelVis: atlas-based surgical planning for oncological pelvic surgery,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, pp. 741–750, 2017. View at: Publisher Site | Google Scholar
  111. C. R. Butson, G. Tamm, S. Jain, T. Fogal, and J. Krüger, “Evaluation of interactive visualization on mobile computing platforms for selection of deep brain stimulation parameters,” IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 1, pp. 108–117, 2013. View at: Publisher Site | Google Scholar
  112. J. Vorwerk, D. McCann, J. Krüger, and C. R. Butson, “Interactive computation and visualization of deep brain stimulation effects using duality,” Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, vol. 8, no. 1, pp. 3–14, 2020. View at: Publisher Site | Google Scholar
  113. A. Bock, N. Lang, G. Evangelista, R. Lehrke, and T. Ropinski, “Guiding deep brain stimulation interventions by fusing multimodal uncertainty regions,” in 2013 IEEE Pacific Visualization Symposium (PacificVis), pp. 97–104, Sydney, NSW, Australia, 2013. View at: Publisher Site | Google Scholar
  114. T. M. Athawale, K. A. Johnson, C. R. Butson, and C. R. Johnson, “A statistical framework for quantification and visualisation of positional uncertainty in deep brain stimulation electrodes,” Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, vol. 7, no. 4, pp. 438–449, 2019. View at: Publisher Site | Google Scholar
  115. J. Blaas, C. P. Botha, C. Majoie, A. Nederveen, F. M. Vos, and F. H. Post, “Interactive visualization of fused fMRI and DTI for planning brain tumor resections,” in Medical Imaging 2007: Visualization and Image-Guided Procedures, K. R. Cleary and M. I. Miga, Eds., vol. 6509 of International Society for Optics and Photonics, pp. 599–610, SPIE, 2007. View at: Publisher Site | Google Scholar
  116. S. Born, W. Jainek, M. Hlawitschka et al., “Multimodal visualization of DTI and fMRI data using illustrative methods,” in Bildverarbeitung für die Medizin, pp. 6–10, Springer, Berlin Heidelberg, 2009. View at: Publisher Site | Google Scholar
  117. S. Diepenbrock, J.-S. Prassni, F. Lindemann, H.-W. Bothe, and T. Ropinski, “2010 IEEE visualization contest winner: interactive planning for brain tumor resections,” IEEE Computer Graphics and Applications, vol. 31, no. 5, pp. 6–13, 2011. View at: Publisher Site | Google Scholar
  118. A. Joshi, D. Scheinost, K. Vives, D. Spencer, L. Staib, and X. Papademetris, “Novel interaction techniques for neurosurgical planning and stereotactic navigation,” IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 6, pp. 1587–1594, 2008. View at: Publisher Site | Google Scholar
  119. C. Rieder, F. Ritter, M. Raspe, and H.-O. Peitgen, “Interactive visualization of multimodal volume data for neurosurgical tumor treatment,” Computer Graphics Forum, vol. 27, no. 3, pp. 1055–1062, 2008. View at: Publisher Site | Google Scholar
  120. C. Dick, J. Georgii, R. Burgkart, and R. Westermann, “Stress tensor field visualization for implant planning in orthopedics,” IEEE Transactions on Visualization and Computer Graphics, vol. 15, no. 6, pp. 1399–1406, 2009. View at: Publisher Site | Google Scholar
  121. B. Preim and C. Botha, Visual Computing for Medicine: Theory, Algorithms, and Applications, Morgan Kaufmann, 2014. View at: Publisher Site
  122. D. Weiskopf, K. Engel, and T. Ertl, “Interactive clipping techniques for texture-based volume visualization and volume shading,” IEEE Transactions on Visualization and Computer Graphics, vol. 9, no. 3, pp. 298–312, 2003. View at: Publisher Site | Google Scholar
  123. Z. Shi, G. Zeng, L. Zhang et al., “Bayesian VoxDRN: a probabilistic deep voxelwise dilated residual network for whole heart segmentation from 3D MR images,” in Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, A. F. Frangi, J. A. Schnabel, C. Davatzikos, C. Alberola-López, and G. Fichtinger, Eds., pp. 569–577, Springer International Publishing, Cham, 2018. View at: Publisher Site | Google Scholar
  124. X. Zhuang, L. Li, C. Payer et al., “Evaluation of algorithms for multi-modality whole heart segmentation: an open-access grand challenge,” Medical Image Analysis, vol. 58, article 101537, 2019. View at: Publisher Site | Google Scholar
  125. Y. Jin, G. Yang, Y. Fang et al., “3D PBV-Net: an automated prostate MRI data segmentation method,” Computers in Biology and Medicine, vol. 128, article 104160, 2021. View at: Publisher Site | Google Scholar
  126. G. Litjens, T. Kooi, B. E. Bejnordi et al., “A survey on deep learning in medical image analysis,” Medical Image Analysis, vol. 42, pp. 60–88, 2017. View at: Publisher Site | Google Scholar
  127. R. Raidou, O. Casares-Magaz, L. Muren et al., “Visual analysis of tumor control models for prediction of radiotherapy response,” Computer Graphics Forum, vol. 35, no. 3, pp. 231–240, 2016. View at: Publisher Site | Google Scholar
  128. N. Karall, M. E. Gröller, and R. G. Raidou, “ChemoExplorer: a dashboard for the visual analysis of chemotherapy response in breast cancer patients,” in EuroVis 2018 – Short Papers, J. Johansson, F. Sadlo, and T. Schreck, Eds., pp. 49–53, The Eurographics Association, 2018. View at: Publisher Site | Google Scholar
  129. R. Raidou, O. Casares-Magaz, A. Amirkhanov et al., “Bladder runner: visual analytics for the exploration of RT-induced bladder toxicity in a cohort study,” Computer Graphics Forum, vol. 37, no. 3, pp. 205–216, 2018. View at: Publisher Site | Google Scholar
  130. K. Furmanová, L. P. Muren, O. Casares-Magaz et al., “PREVIS: predictive visual analytics of anatomical variability for radiotherapy decision support,” Computers & Graphics, vol. 97, pp. 126–138, 2021. View at: Publisher Site | Google Scholar
  131. S. Zachow, P. Muigg, T. Hildebrandt, H. Doleisch, and H.-C. Hege, “Visual exploration of nasal airflow,” IEEE Transactions on Visualization and Computer Graphics, vol. 15, no. 6, pp. 1407–1414, 2009. View at: Publisher Site | Google Scholar
  132. M. Meuschke, S. Oeltze-Jafra, O. Beuing, B. Preim, and K. Lawonn, “Classification of blood flow patterns in cerebral aneurysms,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 7, pp. 2404–2418, 2019. View at: Publisher Site | Google Scholar
  133. P. Rosen, B. Burton, K. Potter, and C. R. Johnson, “muView: a visual analysis system for exploring uncertainty in myocardial ischemia simulations,” in Visualization in Medicine and Life Sciences III, L. Linsen, B. Hamann, and H.-C. Hege, Eds., pp. 49–69, Springer International Publishing, Cham, 2016. View at: Google Scholar
  134. M. Meuschke, S. Voß, O. Beuing, B. Preim, and K. Lawonn, “Glyph‐based comparative stress tensor visualization in cerebral aneurysms,” Computer Graphics Forum, vol. 36, no. 3, pp. 99–108, 2017. View at: Publisher Site | Google Scholar
  135. L. Zhou, C. R. Johnson, and D. Weiskopf, “Data-driven space-filling curves,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, pp. 1591–1600, 2021. View at: Publisher Site | Google Scholar
  136. R. MacLeod, C. Johnson, and M. Matheson, “Visualization of cardiac bioelectricity-a case study,” in Proceedings of IEEE Visualization Conference, pp. 411–418, Los Alamitos, CA, USA, 1992. View at: Publisher Site | Google Scholar
  137. I. Demir, C. Dick, and R. Westermann, “Multi-charts for comparative 3D ensemble visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 20, no. 12, pp. 2694–2703, 2014. View at: Publisher Site | Google Scholar
  138. J. Weissenböck, B. Fröhler, E. Gröller, J. Kastner, and C. Heinzl, “Dynamic volume lines: visual comparison of 3D volumes through space-filling curves,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, pp. 1040–1049, 2019. View at: Publisher Site | Google Scholar
  139. J. Ahrens, B. Geveci, and C. Law, “ParaView: an end-user tool for large-data visualization,” in Visualization Handbook, C. D. Hansen and C. R. Johnson, Eds., pp. 717–731, Butterworth-Heinemann, Burlington, 2005. View at: Publisher Site | Google Scholar
  140. J. Meyer-Spradow, T. Ropinski, J. Mensmann, and K. Hinrichs, “Voreen: a rapid-prototyping environment for ray-casting-based volume visualizations,” IEEE Computer Graphics and Applications, vol. 29, no. 6, pp. 6–13, 2009. View at: Publisher Site | Google Scholar
  141. D. Jönsson, P. Steneteg, E. Sundén et al., “Inviwo - a visualization system with usage abstraction levels,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 11, pp. 3241–3254, 2020. View at: Publisher Site | Google Scholar
  142. S. Grottel, M. Krone, C. Müller, G. Reina, and T. Ertl, “MegaMol—a prototyping framework for particle-based visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 21, no. 2, pp. 201–214, 2015. View at: Publisher Site | Google Scholar
  143. P. Gralka, M. Becher, M. Braun et al., “MegaMol – a comprehensive prototyping framework for visualizations,” The European Physical Journal Special Topics, vol. 227, no. 14, pp. 1817–1829, 2019. View at: Publisher Site | Google Scholar
  144. A. Fedorov, R. Beichel, J. Kalpathy-Cramer et al., “3D Slicer as an image computing platform for the Quantitative Imaging Network,” Magnetic Resonance Imaging, vol. 30, no. 9, pp. 1323–1341, 2012. View at: Publisher Site | Google Scholar
  145. T. Fogal and J. Krüger, “Tuvok, an architecture for large scale volume rendering,” in Proceedings of the 15th International Workshop on Vision, Modeling, and Visualization, pp. 139–146, Siegen, Germany, 2010. View at: Google Scholar

Copyright © 2022 Liang Zhou et al. Exclusive Licensee Peking University Health Science Center. Distributed under a Creative Commons Attribution License (CC BY 4.0).
