Open access peer-reviewed chapter

3D Polarized Light Imaging Portrayed: Visualization of Fiber Architecture Derived from 3D-PLI

By Nicole Schubert, Markus Axer, Uwe Pietrzyk and Katrin Amunts

Submitted: May 24th 2017 · Reviewed: November 16th 2017 · Published: December 20th 2017

DOI: 10.5772/intechopen.72532


Abstract

3D polarized light imaging (3D-PLI) is a neuroimaging technique that has recently opened up new avenues to study the complex architecture of nerve fibers in postmortem brains at microscopic scales. In a specific voxel-based analysis, each voxel is assigned a single 3D fiber orientation vector, which leads to comprehensive 3D vector fields. In order to inspect and analyze such high-resolution fiber orientation vector fields, also in combination with complementary microscopy measurements, appropriate visualization techniques are essential to overcome several challenges, such as the massive data sizes, the large amount of both unique and redundant information at different scales, and the occlusion of inner structures by outer layers. Here, we introduce a comprehensive software tool that is able to visualize all information of a typical 3D-PLI dataset in an adequate and sophisticated manner. This includes the visualization of (i) anatomical structural and fiber architectonic data in one representation, (ii) a large-scale fiber orientation vector field, and (iii) a clustered version of the field. Alignment of a 3D-PLI dataset to an appropriate brain atlas provides expert-based delineation, segmentation, and, ultimately, visualization of selected anatomical structures. By means of these techniques, a detailed analysis of the complex fiber architecture in 3D becomes feasible.

Keywords

  • polarized light imaging
  • scientific visualization
  • neuroinformatics
  • neuroimaging and fiber architecture

1. Introduction

3D-PLI is a microscopy method that makes the derivation of 3D nerve fiber orientations possible [1, 2, 3]. It provides 3D fiber orientation models that are interpreted by a voxel-based analysis, i.e., each tissue voxel is assigned a single 3D fiber orientation vector. The 3D reconstruction of images of serial brain sections by means of image registration yields a virtual brain model reflecting local fiber orientations. The unique value of 3D-PLI data was demonstrated in detailed studies of the course of fibers and fiber tracts in section-wise 3D analysis [4, 5], i.e., the fibers were traced across the sections by means of 2D visualizations. A structural analysis of fiber orientation models in 3D requires specific visualization techniques due to the challenges that the 3D visualization of fiber architecture is confronted with, such as the huge amount of data, the occlusion of inner structures by outer layers, and the visual clutter caused by the enormous number of vectors contained in the datasets.

In this chapter, we introduce 3D visualization techniques that extract important information from 3D-PLI data and present it appropriately. First, the method of 3D-PLI is briefly summarized, including tissue processing, image acquisition, and image processing. The methods used to visualize our fiber orientation models are illustrated, as well as the structural modalities that are needed as anatomical context. In addition, examples are presented of how these techniques can be used to trace the courses of fibers in 3D in human and rat brains. As such, new methods are provided for the quantitative 3D analysis of the fiber architecture of mammalian brains.

2. 3D polarized light imaging in a nutshell

The polarization microscopy technology referred to as 3D-PLI is able to reveal the brain’s fiber architecture at the micro- to mesoscale (i.e., in the range of 1–100 μm) in serial large-sized unstained histological brain sections [1, 2, 3, 4, 5]. 3D-PLI demonstrated exceptional performance in providing fiber/non-fiber contrasts in both deep white matter and cortical regions, even for entire human brain sections with an area size of up to 200 cm2 scanned at very high spatial resolution (down to 1.3 μm, in-plane). Optical methods that utilize intrinsic tissue properties able to modify the polarization state of light require no histological staining or labeling. Birefringence, the main polarization property of interest for biological tissues, is caused by a difference in the index of refraction that results in a phase shift between orthogonal polarization states; it is often exhibited by fibrous structures, such as nerve fibers (i.e., myelinated and unmyelinated axons). 3D-PLI utilizes the birefringence of myelinated and, to a minor extent, also unmyelinated axons in histological sections to contrast fibers with nerve cell bodies or glial cells and to determine their spatial courses in the form of 3D fiber orientation vectors. 3D-PLI images thus disclose an intriguing fiber architecture, which integrates classic myeloarchitecture [6] with tissue anisotropy as revealed by diffusion magnetic resonance imaging, at a microscopic level [3, 5, 7, 8]. 3D-PLI therefore has an important bridging function between the macroscopic and the microscopic world of fiber architecture, i.e., between fiber pathways and single fibers.

3D-PLI involves the preparation of histological brain sections, their imaging with polarimetric setups, the calculation of fiber orientations based on a physical model, and the section realignment with subsequent data interpretation as briefly described in the following.

2.1. Tissue sectioning and block face imaging

The entire brains of an adult human or a rat were immersion fixed in 4% buffered formaldehyde. After two cryoprotection steps (10% glycerine for 3 days, followed by 20% glycerine for 14 days at +4°C), the brains were deep frozen in isopentane at −50°C and serially sectioned in the coronal, sagittal, or horizontal plane at 60 μm thickness using cryostat microtomes (Leica Microsystems, Germany). The ensuing sections were placed on glass slides and stored at −80°C in airtight plastic bags until further processing. They were thaw mounted and coverslipped with 20% glycerine the day before image acquisition took place. Note that there was no staining applied to the tissue, since the imaging technique 3D-PLI solely relies on intrinsic optical properties. The results of the procedure are sequential series of sections of complete brains. During sectioning of the brain, block face images of every section were taken with a CCD camera (AVT Oscar F-810 C, 3272 × 2469 pixels, 15 μm × 15 μm, RGB) which was installed vertically above the cryostat. 3D alignment of the block face images (cf. Section 2.4) yields an undistorted reference brain volume essential for both 3D histological reconstruction and visualization.

2.2. Image acquisition

Two polarimetric devices are used to address complementary scales of fiber architecture: the large area polarimeter (LAP) and the polarizing microscope (PM) [1, 2, 3]. The LAP enables single-shot imaging of whole human brain sections at 64 μm pixel size in-plane, covering a field of view with a diameter of up to 20 cm. The PM covers a much smaller field of view (2.7 mm × 2.7 mm) but provides 1.3 μm pixel size in-plane. In order to scan large areas with the PM, a motorized scanning stage has been built into the microscope, which acquires entire section images in tiles. The tiles have to be stitched together during postprocessing. The general optical setup used in the LAP is shown in Figure 1(a) , in which the specimen is sandwiched between two linear polarization filters with orthogonal transmission axes and a quarter-wave retarder. A customized LED light source provides homogeneous green wavelength illumination. During simultaneous rotation of the optical filters, the intensity of the transmitted light varies strongly in a sinusoidal manner ( Figure 1(b) ), depending on the orientations of the underlying fibers or fiber tracts, respectively. This effect is caused by the nerve fibers’ birefringence. The light intensities are measured at discrete angles in the range from 0 to 180° by a CCD camera (AxioCam HRc Rev.2, Zeiss, Germany) dedicated to microscopic imaging (see Figure 1 ).

Figure 1.

The polarimetric setup of 3D-PLI: A LED light source illuminates the brain section located between two linear polarizers and a quarter-wave retarder (a). The simultaneous rotation of the filters changes the signal captured by a CCD camera depending on the fiber orientation in each voxel of the section. The signal follows a sinusoidal course (b, left) and indicates the fiber angles (b, right) as well as the transmittance of the light (horizontal green line in (b, left), left in (c)), the direction of the fiber (orange arrow in (b, left), center in (c)), and the retardation (blue range in (b, left), right in (c)).

2.3. 3D-PLI modalities

The measured sinusoidal signal (per image pixel) is interpreted by fitting a physical model derived from the Jones calculus [9] to it, as described in [1] ( Figure 1(b) and (c) ). This allows retrieving information about the light retardation, the light transmittance, and the fiber direction angle φ and inclination angle α reflecting the orientation of a fiber in 3D. The retardation is derived from the relative amplitude of the sinusoidal signal and encodes the local birefringence strength together with the fiber (out-of-section) inclination angle. The transmittance represents the mean value of the sinusoidal signal and describes the amount of light transmitted through the tissue, reduced by absorption and scattering processes. The fiber (in-plane) direction angle is defined by the phase of the sinusoidal signal. Both direction and inclination angles are combined to fiber orientation vectors building the fiber orientation maps (FOMs) for each brain section. A 3D fiber orientation model is generated by a 3D reconstruction of the maps. Note that FOMs represent vector-like data, while all other 3D-PLI modalities are scalar-valued data types.
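For equidistant rotation angles, fitting the sinusoidal signal reduces to computing discrete Fourier coefficients of the intensity profile. The following sketch illustrates this for the simplified signal model I(ρ) = a0 · (1 + sin δ · sin(2(ρ − φ))); the function name and the exact normalization are our assumptions, not the published analysis pipeline:

```python
import numpy as np

def fit_pli_signal(intensities, angles_deg):
    """Fit the simplified 3D-PLI model
        I(rho) = a0 * (1 + sin(delta) * sin(2 * (rho - phi)))
    to a measured light-intensity profile. For equidistant rotation
    angles, the parameters follow from discrete Fourier coefficients."""
    rho = np.deg2rad(np.asarray(angles_deg, float))
    I = np.asarray(intensities, float)
    n = len(rho)
    a0 = I.mean()                                   # transmittance (mean value)
    a = 2.0 / n * np.sum(I * np.sin(2.0 * rho))     # sin(2*rho) coefficient
    b = 2.0 / n * np.sum(I * np.cos(2.0 * rho))     # cos(2*rho) coefficient
    retardation = np.hypot(a, b) / a0               # relative amplitude = |sin(delta)|
    # in-plane direction: phase of the sinusoid, mapped to [0, 180) degrees
    direction = np.rad2deg(0.5 * np.arctan2(-b, a)) % 180.0
    return a0, retardation, direction
```

For a synthetic profile with known parameters, the fit recovers transmittance, retardation, and direction exactly (up to floating-point error).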

2.4. 3D reconstruction

Nonlinear deformations introduced by brain sectioning and mounting are corrected using block face images as undistorted references for the spatial alignment of the 3D-PLI modalities [10]. In the first step, the block face images have to be 3D reconstructed. Briefly, the block face reconstruction method consists of a two-phase registration: a marker-based alignment of the images and a median-based refinement of the pre-reconstructed volume using 3D information. First, the coordinates of markers (ARTag, a marker adopted from augmented reality) labeled on the microtome chuck are extracted and aligned to the corresponding markers in the neighboring images by means of a translation transformation. Processing all images leads to an almost smoothly reconstructed 3D stack of block face images of the brain. However, this approach causes perspective errors due to the different heights of the sectioning plane and microtome chuck with the markers and thus their different distances to the camera lens. Therefore, in the second part of the method, the median along the z-direction of the marker-based reconstructed block face volume is calculated to eliminate the outliers caused by perspective errors. The marker-based reconstructed volume is aligned slice-by-slice onto the median volume using a translation transform estimated by an intensity-based image registration algorithm. This technique takes advantage of 3D information in an actually 2D slice-by-slice registration method. This leads to an accurately aligned volume of block face images that serves as an important reference to recover the spatial coherence of the nonlinearly deformed sections corresponding to the block face images.

The 3D reconstruction of the 3D-PLI data consists of two steps: a rigid slice-by-slice registration of the 3D-PLI images to the corresponding block face images and a nonrigid refinement method. The first step is based on estimating a transformation of the 3D-PLI images to the corresponding image of the reconstructed block face volume by image registration. To align the 3D-PLI images to the block face images, the masks of the brain tissue of both datasets are required. A 3D watershed algorithm is used to segment the reconstructed block face volume, while the 3D-PLI images are segmented manually. Using the segmented images, the centers of gravity of the corresponding brain masks are calculated and aligned. Based on this initial transformation, an intensity-based rigid registration is performed. The second step, the refinement, is performed by a slice-by-slice B-spline registration, yielding 3D reconstructions of all 3D-PLI modalities.
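The center-of-gravity initialization mentioned above can be sketched in a few lines. This is a simplified illustration; the actual pipeline operates on full brain masks and feeds the resulting translation into an intensity-based rigid registration:

```python
import numpy as np

def center_of_gravity_shift(mask_moving, mask_fixed):
    """Initial translation that aligns the centers of gravity of two
    binary brain masks; this seeds the subsequent rigid registration."""
    cog_moving = np.argwhere(mask_moving).mean(axis=0)  # mean voxel coordinate
    cog_fixed = np.argwhere(mask_fixed).mean(axis=0)
    return cog_fixed - cog_moving
```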

2.5. Brain models

The techniques applied for visualization were investigated on two datasets, one rat brain and one human brain. All animal procedures were approved by the institutional animal welfare committee at the Research Centre of Jülich and were in accordance with the European Union (National Institutes of Health) guidelines for the use and care of laboratory animals. The human brain was acquired in accordance with local legal and ethical requirements.

The entire rat brain and one hemisphere of the human brain were serially cut and fully processed with both LAP and PM. The rat brain was sectioned into 455 sections of 60 μm thickness. The hemisphere of the human brain was cut from anterior to posterior along the coronal sectioning plane in 843 sections with a thickness of 70 μm. The generated 3D reconstructed fiber orientation model of the rat consists of a vector field with a size of 588 × 723 × 413 voxels and a resolution of 64 μm × 64 μm × 60 μm (LAP). The human hemisphere has a reconstructed vector field of 1350 × 1950 × 228 voxels and a voxel size of 64 μm × 64 μm × 70 μm (LAP).
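A back-of-the-envelope calculation illustrates the data sizes these vector fields imply, assuming three float32 components per voxel (the actual storage format is not specified here):

```python
def field_size_gib(nx, ny, nz, bytes_per_voxel=3 * 4):
    """Raw size of a fiber orientation vector field, assuming three
    float32 components per voxel (actual storage formats may differ)."""
    return nx * ny * nz * bytes_per_voxel / 2**30

rat_gib = field_size_gib(588, 723, 413)      # rat brain field, roughly 2 GiB
human_gib = field_size_gib(1350, 1950, 228)  # human hemisphere, roughly 6.7 GiB
```

Even at LAP resolution, the fields approach the memory limits of consumer GPUs, which motivates the clustering and clipping techniques introduced below.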

In addition, the rat brain was spatially aligned to a common rat brain atlas, the Waxholm Space (WHS) atlas of the Sprague Dawley® rat brain [11, 12]. The three-dimensional atlas is publicly accessible and provided by the International Neuroinformatics Coordinating Facility (INCF) Software Center. The atlas is based on high-resolution MRI and DTI datasets of the brain of the Sprague Dawley rat anchored in the Waxholm Space and the stereotaxic space. The T2*-weighted anatomical MRI (512 × 1024 × 512 pixels) with isotropic local resolution of 39 μm was acquired ex vivo with a 7 T small animal MRI system. The DTI dataset has an isotropic spatial resolution of 79 μm. The anatomical boundaries in the atlas were drawn manually based on the image contrast of the T2*-weighted and DTI images. The latest version of the atlas contains 79 structures, including new and updated boundaries of the hippocampus and parahippocampus [13].

3. Visualization

The visualization techniques described here include both well-known methods of volume rendering and methods specifically developed for 3D-PLI, which in combination open up a new way of exploring the high-resolution fiber architecture. The techniques can be classified on the basis of the underlying data, since different visualization methods are required for the different types of data. 3D-PLI provides scalar and vector fields. A scalar field is a dataset with a scalar value per voxel, such as all 3D-PLI gray value modalities, while a vector field contains one vector per voxel, as in the fiber orientation model.

The scalar data are visualized using classic volume rendering techniques introduced in Section 3.1. The procedures have been implemented for interactive work on the graphics card. In addition, various features have been investigated which allow visualizing several volumes simultaneously. This enables an analysis of the complete dataset at the same time. The intuitive visualization of the vector fields, i.e., each vector as a line, has to overcome various difficulties, such as the occlusion of inner structures, visual clutter, and a slow performance of the visualization for larger datasets. The different ways of handling these challenges are presented in Section 3.2.

3.1. Visualization of 3D-PLI gray value maps

For the 3D representation of the gray value maps, existing 3D visualization techniques (volume rendering) can be used. Volume rendering is classified into indirect and direct rendering. Indirect volume rendering is based on the visualization of a previously calculated surface model. This model is a mesh of polygons, ideally a triangle mesh, as graphics cards are optimized to visualize those. The surface is determined by suitable methods; the best-known method is Marching cubes [14]. Direct volume rendering techniques construct a voxel model based on the underlying data that represents the object. Each voxel is assigned a color and a transparency. Texture-based methods are very fast, while the more computationally intensive ray casting method is more flexible in terms of coloring [15]. Since the grayscale modalities in combination with the fiber model are primarily intended to be used as anatomical context, the coloring of the data is not essential. Therefore, we focus on the fast texture-based method of texture slicing.

3.1.1. Surface rendering

Marching cubes calculates a triangle mesh that represents the surface of an object, based on a threshold t, the so-called isovalue. The algorithm marches through the entire volume along the voxels and forms a cube with eight voxels each ( Figure 2(a) ). The gray values at the edges of the cube are then compared with t. Here, three different cases can occur: (i) all values are below t, i.e., the complete cube does not belong to the object; (ii) all values are above t, i.e., the complete cube belongs to the object; or (iii) some values are above, some below t. In the latter case, the surface is defined by a set of triangles which separates the vertices that are larger than t, i.e., belong to the object, from the other vertices that are not belonging to the object. After the entire dataset is traversed, a complete triangle mesh is created. The triangular mesh filled with a certain color can then be efficiently visualized by optimized algorithms provided by the graphics card. In addition the illumination of the scene also plays an important role. For the combined visualization with the fiber orientation model, it is important that the surface is visualized transparently ( Figure 2(b) ). In order to accelerate the calculation of the surface mesh, it was transferred to the graphics card.
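As an illustration of the indirect approach, the Marching cubes implementation in scikit-image can extract such a triangle mesh from a scalar volume. The synthetic sphere volume below is merely a stand-in for a real 3D-PLI modality:

```python
import numpy as np
from skimage import measure  # scikit-image

# Synthetic stand-in volume: a sphere of high gray values ("tissue")
z, y, x = np.mgrid[-32.0:32.0, -32.0:32.0, -32.0:32.0]
volume = 100.0 * (np.sqrt(x**2 + y**2 + z**2) < 20.0)

t = 50.0  # isovalue separating object from background
verts, faces, normals, values = measure.marching_cubes(volume, level=t)
# verts: (N, 3) surface points; faces: (M, 3) triangle indices into verts
```

The resulting vertex and face arrays can be uploaded to the GPU and rendered with transparency, as described for the combined visualization.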

Figure 2.

Surface rendering with Marching cubes. The volume is traversed by a cube of eight voxels (a). Each voxel of the cube is inspected whether its value is inside or outside the surface. The cube is divided into triangles that form a triangle mesh. The triangle mesh can then be visualized filled with a color and suitable illumination. A transparent interface is very useful to provide a visual context for a combined visualization including further data (b).

3.1.2. Volume rendering

In texture slicing, a volumetric dataset is visualized as a stack of parallel sections arranged next to each other. A 3D texture is created in the graphics card, which contains the volume. Parallel to the image plane, a stack of 2D textures is generated through the volume. The 2D textures are filled with scalar values by means of trilinear interpolation of the 3D texture containing the volume. Color and opacity can be set by color tables. Texture slicing also takes advantage of the hardware-near implementation, as the graphics cards are designed for the fast use of texture memory and the computation-intensive interpolations are accelerated on the graphics card. In order to reveal the inner structures of the volumes and also to be able to combine the visualization with further data, clipping boxes must be used in texture slicing, which interactively remove areas of the object ( Figure 3 ).
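The per-slice sampling that texture slicing performs on the GPU can be mimicked on the CPU with trilinear interpolation, e.g., using SciPy. The function below is a simplified sketch; the slice parameterization and names are our choices:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_slice(volume, origin, u, v, shape=(64, 64)):
    """Sample one 2D slice through a 3D volume by trilinear interpolation
    (order=1), mimicking what the GPU does per texture slice. `origin` is
    one corner of the slice; `u` and `v` span its two edges (voxel units)."""
    s = np.linspace(0.0, 1.0, shape[0])
    t = np.linspace(0.0, 1.0, shape[1])
    S, T = np.meshgrid(s, t, indexing="ij")
    coords = (origin[:, None, None]
              + S[None] * u[:, None, None]
              + T[None] * v[:, None, None])        # (3, H, W) voxel coordinates
    return map_coordinates(volume, coords, order=1, mode="constant", cval=0.0)
```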

Figure 3.

Volume rendering with texture slicing. With the help of maneuverable and scalable boxes, the internal structures of the gray value visualization (a) as well as the colored visualization (b) become visible.

3.2. Visualization of the 3D fiber orientation model

A 3D fiber orientation model is a 3D vector field that represents the fiber orientation per voxel. A direct way to visualize a vector field is to use glyphs [16]. Glyphs are small geometric objects that can represent different properties of the vectors by their color and shape, such as position, direction, orientation, and size. A variable and fast method to calculate glyphs is presented in Section 3.2.1. The color is an important factor as it can indicate certain properties such as the direction of the vectors (Section 3.2.2).

The visualization of the fiber orientation of an entire brain by glyphs is opaque and thus inaccessible for analysis. The outer layers occlude the inner structures. Therefore, suitable methods have been developed that provide an insight into the vector field and thus into the fiber architecture of the brain. This includes the combined visualization with an anatomical dataset (Section 3.2.3), the clustering of vectors to a more bundled visualization (Section 3.2.4), and the visualization of the vectors as nerve fiber pathways (Section 3.2.5). In addition, a 3D atlas can be used for visualization beyond the scope of an anatomical context, as shown in Section 3.2.6.

3.2.1. Glyphs

For each voxel of the fiber orientation model, the position and orientation of the vector are extracted. Each vector can then be visualized as a geometric form with variable length and width. By means of the voxel coordinate, which serves as starting point p_start of the glyph, and the orientation d of the vector, an end point p_end of the glyph can be calculated. The length of the vector l is variable in the range [0, 1]:

p_end = p_start + l · d    (E1)

The two points can be used to define an undirected line per voxel and to represent the fiber orientation vector. A line represents the position and orientation of a vector. However, in a 3D vector field, the distances of the vectors to each other and the occlusions of the vectors are difficult to distinguish. A 3D shape of the glyphs significantly improves the spatial impression of the 3D vector field. Thus, glyph positions and distances between the glyphs can be clearly recognized. A cylinder is the most suitable glyph shape as it models the round shape of the nerve fibers. In computer graphics, circles (the base of a cylinder) are approximated, in which points on the real circle are calculated and connected with the smallest possible distance. The parametric equation of a circle can be used for this purpose:

r(ϑ) = (x(ϑ), y(ϑ)) = (r cos ϑ, r sin ϑ)    (E2)

with ϑ ∈ [0, 2π] and the radius r.

Since this is computationally very demanding, considering the millions of vectors and thus cylinders that have to be calculated, the base areas of the glyphs are defined by only a few points (vertices). For example, the base of a cuboid glyph has four vertices, while the cylinder represented here has six. The number of vertices can be set individually, which leaves room for higher-resolution circles.

The vertices vi around the start and end point of the glyphs are now calculated as follows ( Figure 4 ). The number of vertices n determines the angular distance between the vertices on the circle:

ϑ = 2π / n    (E3)

Figure 4.

To calculate the glyph, the voxel coordinate (blue circle) is used as starting point p_start, and the orientation of the vector d is used to determine an end point p_end (a). The variable l specifies the length of the glyph. The shape of the glyph is defined by a given radius r and a number of vertices v_i. The angle ϑ determines the distance between the vertices (b); using the example of six vertices as shown in the picture, the angle is 60°. To calculate the surface normals, vertices v_i and v_{i+1} are assigned the normal vector n_j, where j is the number of the current rectangle (c).

With the help of this angle and the parametric equation of a circle, the vertices vi can be described:

v_i^s = p_start + d · (r cos(iϑ), r sin(iϑ), 0)    resp.    v_i^e = p_end + d · (r cos(iϑ), r sin(iϑ), 0)    (E4)

Before the coordinates are added to the start or end point, they must be multiplied by the vector orientation d to avoid displaying the glyph as an oblique prism.

The length and width of the glyphs can be set interactively by changing the variables r and l. Another important aspect is the color coding of the glyphs, which is another visual indicator for the orientation of the fibers. Thus, fibers with the same orientations are directly recognizable and visually discriminable from other orientations.
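Putting E1–E4 together, a cylinder glyph can be generated as follows. This NumPy sketch builds an explicit orthonormal basis perpendicular to the orientation d instead of the multiplication shorthand used in E4; the function name and parameter defaults are our assumptions:

```python
import numpy as np

def glyph_vertices(p_start, d, l=1.0, r=0.1, n=6):
    """Vertices of a cylinder glyph (E1-E4): circles of n vertices around
    the start and end point, rotated from the xy-plane into the plane
    perpendicular to the unit orientation vector d."""
    d = np.asarray(d, float)
    p_start = np.asarray(p_start, float)
    p_end = p_start + l * d                        # E1
    theta = 2.0 * np.pi / n                        # E3
    # orthonormal basis (e1, e2, d): e1 and e2 span the circle plane
    helper = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(d, helper); e1 /= np.linalg.norm(e1)
    e2 = np.cross(d, e1)
    i = np.arange(n)
    # E2, rotated into the orientation frame
    circle = (r * np.cos(i * theta))[:, None] * e1 + (r * np.sin(i * theta))[:, None] * e2
    return p_start + circle, p_end + circle       # E4: v_i^s and v_i^e
```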

3.2.2. Color coding and lighting

Two color spaces are used for the color coding of the glyphs, RGB and HSV. The RGB color space is an additive color space based on the three primary colors red, green, and blue. The x, y, and z components of the vector orientation are assigned directly to the three basic colors, i.e., the x-direction is encoded in red, the y-direction in green, and the z-direction in blue. The HSV color space defines color by the color value hue ([0, 360]), the color saturation ([0, 1]), and the brightness value ([0, 1]). The color value is determined by the x- and y-component of the vector, the z-component influences the brightness of the color, and the saturation is set to the maximum 1. For this purpose, the two angles are calculated from the vector components by means of spherical coordinates:

φ = arctan(y / x),    α = arcsin(z)    (E5)

and then the color can be calculated:

H = 2φ,    S = 1 − α / 90°,    V = 1    (E6)

Color-coded representations of the fiber orientation can be found in Figure 5 . The color spheres serve as a legend. In the HSV space ( Figure 5(b) ), symmetric orientations can be better distinguished in the plane than in the RGB space ( Figure 5(a) ). For example, yellow in the RGB space codes orientations that run diagonally from the bottom left or bottom right, while these orientations in the HSV space are represented by the different colors green and blue. In order to emphasize the colors of the vectors in the plane, the saturation and value channel are swapped so that the vectors running perpendicular to the plane are visualized in black instead of white ( Figure 5(c) ). This generates a special HSV color scheme (HSV black).
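The two color codings can be sketched directly from E5 and E6. The helper functions below are illustrative; Python's colorsys stands in for the shader code that would perform this mapping on the GPU:

```python
import numpy as np
import colorsys

def orientation_to_rgb(v):
    """RGB coding: |x| -> red, |y| -> green, |z| -> blue (unit vector v)."""
    return tuple(abs(c) for c in v)

def orientation_to_hsv(v, black=False):
    """HSV coding (E5/E6): hue from the in-plane direction, inclination
    reduces the saturation (or, for 'HSV black', the value channel)."""
    x, y, z = v
    phi = np.degrees(np.arctan2(y, x)) % 180.0   # in-plane direction, [0, 180)
    alpha = np.degrees(np.arcsin(abs(z)))        # inclination, [0, 90]
    h = 2.0 * phi / 360.0                        # H = 2 * phi        (E6)
    dim = 1.0 - alpha / 90.0                     # S = 1 - alpha/90   (E6)
    if black:  # 'HSV black': swap S and V so steep fibers appear black
        return colorsys.hsv_to_rgb(h, 1.0, dim)
    return colorsys.hsv_to_rgb(h, dim, 1.0)
```

An in-plane fiber along x maps to pure red; a fiber perpendicular to the section maps to white in standard HSV and to black in the HSV black scheme.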

Figure 5.

The color-coded glyphs are shown in the RGB color space (a), in the HSV color space (b), and in a special HSV color space: HSV black (c). The colored spheres in the lower right corner of every image are used as legends.

To better recognize the 3D structures, light sources are used which darken the colors on the sides of the glyphs facing away from the light. To distinguish between the inside and outside of the glyphs, the surface normals have to be calculated. For each rectangle that approximates the cylinder, one surface normal is calculated using the cross product. This surface normal is assigned to the first two points of the rectangle ( Figure 4(c) ). The other two points are used to calculate the next normal. This produces continuous shading in the form of the local lighting model Gouraud shading [17].
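A minimal sketch of the per-rectangle normal computation, assuming the two vertex rings vs and ve produced for a cylinder glyph (the function name is hypothetical):

```python
import numpy as np

def rectangle_normals(vs, ve):
    """One surface normal per side rectangle of a cylinder glyph, via the
    cross product of two rectangle edges; the normal is then assigned to
    the rectangle's first two vertices for Gouraud shading."""
    n_sides = len(vs)
    normals = np.empty((n_sides, 3))
    for j in range(n_sides):
        a, b, c = vs[j], vs[(j + 1) % n_sides], ve[j]  # three rectangle corners
        n = np.cross(b - a, c - a)
        normals[j] = n / np.linalg.norm(n)
    return normals
```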

3.2.3. Combined visualization

To get an insight into a 3D vector field, clipping boxes are needed [18]. A clipping box defines a region that is excluded from the visualization in order to reveal the underlying information. The offset and the size of the box can be interactively changed. In order to obtain an anatomical context despite the removal of vector information, it is an advantage to additionally visualize a PLI modality by means of volume rendering. This means that either the surface of the brain ( Figure 6(a) ) or the entire volume as 3D texture ( Figure 6(b) ) can be visualized together with the clipped fiber orientation model. In the case of 3D textures, two clipping boxes are used to mask out the regions of interest.
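Conceptually, a clipping box amounts to masking out an axis-aligned voxel region. A minimal NumPy sketch follows; in the actual renderer this happens on the GPU and the box is adjusted interactively:

```python
import numpy as np

def apply_clipping_box(field, lo, hi):
    """Remove (zero out) all vectors inside the axis-aligned clipping box
    [lo, hi) so that the structures behind it become visible."""
    out = field.copy()
    out[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = 0.0
    return out
```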

Figure 6.

In order to get an insight into the brain, clipping boxes are used to remove parts of the brain. For an anatomical context, structural data are visualized as surface (a) and as 3D texture (b) in combination with the fiber orientation model.

3.2.4. Clustering of fiber orientations

The 3D fiber orientation model contains very dense information, one vector per voxel. Consequently, the visualization is also very dense and may contain so much information that the viewer cannot figure out the important parts. A reduction of the information can help to get a better overview of all orientations in the model. A quick way of reducing information is to remove every x-th vector from the visualization, but this may also lead to the loss of important information. A better option is to group orientations [7, 19]. The fiber orientation model is divided into cuboids of equal size, so-called super-voxels. For each super-voxel region, a 3D histogram is created, which captures the frequency of the orientations of the vectors in the super-voxel. For this purpose, a unit sphere is divided into bins, i.e., in the case of a sphere, degrees of longitude and latitude. The best match of an orientation vector is determined by the maximum scalar product with the central vector of every bin of the sphere. With the help of the histogram, a direction can now be displayed per super-voxel, using the same algorithms as described in Section 3.2.1. Figure 7 shows a section of the human hemisphere visualized with different super-voxel sizes, showing that there is no notable loss of information. To further ensure that no information is lost, it is possible to display up to a defined number of the strongest directions per super-voxel at the same time. This means that no information is lost even at sharp transitions between fiber orientations. In addition, the vector field can be displayed as an information source ( Figure 8 ). Thus, an overall impression of the orientations in the model is easily obtained without significant loss of information.
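The binning step can be sketched as follows: each vector votes for the sphere bin whose central vector yields the largest absolute scalar product (absolute, because 3D-PLI provides orientations rather than signed directions). The function name and the bin construction are our simplifications:

```python
import numpy as np

def supervoxel_main_direction(vectors, bin_centers):
    """Dominant orientation within one super-voxel: each unit vector votes
    for the sphere bin whose central vector has the largest |dot product|;
    return the winning bin center and the full histogram of votes."""
    votes = np.abs(vectors @ bin_centers.T)      # (n_vectors, n_bins)
    counts = np.bincount(votes.argmax(axis=1), minlength=len(bin_centers))
    return bin_centers[counts.argmax()], counts
```

In practice the bins would tile the unit sphere along longitude and latitude; here three orthogonal axes suffice to illustrate the voting.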

Figure 7.

One section of the human hemisphere visualized with one line per vector (a) and two clustered vector fields with one line per super-voxel with super-voxels containing 10×10×1 vectors (b) and 20×20×1 vectors (c). A detailed view is located at the bottom of each section. The decrease of the resolution by increasing the super-voxel size shows no significant loss of information.

Figure 8.

Detailed visualization of the human hemisphere: (a) the 3D vector field, (b) the vector field after clustering the data with a super-voxel containing 10×10×1 vectors and visualized as pyramidal glyphs with the three strongest directions in the super-voxel, and (c) the combined visualization of the super-voxel glyphs with the underlying vector field.

3.2.5. Fiber pathways

The analysis of fiber architecture implies the visualization of nerve fiber pathways. For this purpose, the pathways have to be reconstructed from the vector field before they can be visualized. The reconstruction of the fiber pathways is a comprehensive and complex task that has been intensively studied for DTI data [20, 21, 22], but not yet in depth for 3D-PLI data. Nevertheless, some algorithms can be adapted. We use a deterministic algorithm for the 3D-PLI data, which propagates through the vector field from different starting points (seed points) and thus identifies possible fiber pathways [20].

Mathematically, propagating through the vector field from one seed point can be considered as solving an initial value problem using numerical methods. Common methods to solve initial value problems are the Euler and Runge-Kutta methods. Both methods start at a seed point. With a defined step size, the propagator moves in the direction of the vector at the seed point. At the new point, the new direction is determined by means of interpolation, and propagation continues until the end of the vector field is reached. The Runge-Kutta method uses additional intermediate steps to calculate the new direction; this is why it is more computation-intensive but more accurate than the Euler method.
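A minimal sketch of such a deterministic propagator, with nearest-neighbor lookup instead of the trilinear interpolation used in practice and a midpoint (second-order Runge-Kutta) refinement; all names and defaults are our assumptions:

```python
import numpy as np

def trace(field, seed, step=0.5, n_steps=100, method="rk2"):
    """Propagate one streamline through a (nx, ny, nz, 3) orientation
    field, starting at `seed` (voxel coordinates)."""
    limit = np.array(field.shape[:3]) - 1

    def direction(p, prev):
        idx = tuple(np.clip(np.round(p).astype(int), 0, limit))
        d = field[idx]
        norm = np.linalg.norm(d)
        if norm == 0.0:
            return None
        d = d / norm
        # 3D-PLI yields orientations, not directions: flip the vector
        # if needed to keep continuity with the previous step
        return d if prev is None or d @ prev >= 0.0 else -d

    p = np.asarray(seed, float)
    points, prev = [p.copy()], None
    for _ in range(n_steps):
        d = direction(p, prev)
        if d is None:
            break
        if method == "rk2":                     # midpoint refinement
            mid = direction(p + 0.5 * step * d, d)
            d = mid if mid is not None else d
        p = p + step * d
        if not np.all((p >= 0.0) & (p <= limit)):
            break                               # left the vector field
        points.append(p.copy())
        prev = d
    return np.array(points)
```

In a uniform field the propagator runs straight until it reaches the volume boundary, one of the stop criteria described below.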

The tractography of the 3D-PLI vector field results in a list of points describing the fiber paths. For each given seed point, the possible paths through the vector field are approximated. Since 3D-PLI provides orientations rather than signed directions, the vector field is traversed in both directions from each seed. The tractography terminates as soon as the path leaves the vector field or the region of interest, or when a user-defined number of path points is reached. In addition, a maximum angle difference between consecutive steps serves as a stop criterion. The calculations are performed in parallel for all seed points. A challenge in tractography is the choice of seed points. If every voxel of the volume is used as a seed point, the results quickly become cluttered and difficult to evaluate; if only subregions are considered, possible connections are missed. Neuroanatomical knowledge is therefore essential for the manual placement of seed points. Interactive seeding, e.g., by cuboids, with subsequent visualization of the fiber pathways facilitates revealing interesting pathways. Another anatomically based method for seed placement is the integration of the 3D-PLI data into an anatomical atlas (Section 2.2.6); the delineated structures of the atlas can then be used to place anatomically meaningful seed points ( Figure 9 ).

Figure 9.

Fiber pathways (magenta) provided by tractography inside the corpus callosum of the rat brain with seed points on the midsagittal plane visualized together with the 3D vector field of the corpus callosum (a). Mainly, fiber pathways connecting the hemispheres can be seen (b).
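The bidirectional traversal and the stop criteria described above can be sketched as follows. This is a simplified illustration under stated assumptions, not the actual implementation: `field` is again a hypothetical interpolation callable, an Euler step is used for brevity, and the sign of each sampled orientation is flipped where necessary to stay consistent with the current path direction.

```python
import numpy as np

def trace_one_direction(field, bounds_min, bounds_max, seed, v0, h,
                        max_points=1000, max_angle_deg=60.0):
    """Trace from `seed` along initial direction v0 until a stop criterion fires."""
    cos_limit = np.cos(np.radians(max_angle_deg))
    points, p, v_prev = [seed], seed, v0
    for _ in range(max_points - 1):
        v = field(p)
        if np.dot(v, v_prev) < 0:           # orientations carry no sign:
            v = -v                          # flip to match the path direction
        if np.dot(v, v_prev) < cos_limit:   # maximum-angle stop criterion
            break
        p = p + h * v                       # Euler step, for brevity
        if np.any(p < bounds_min) or np.any(p > bounds_max):
            break                           # the path left the vector field
        points.append(p)
        v_prev = v
    return points

def track(field, bounds_min, bounds_max, seed, h=0.5):
    """Bidirectional tracking: the orientation field is traversed both ways."""
    v0 = field(seed)
    fwd = trace_one_direction(field, bounds_min, bounds_max, seed, v0, h)
    bwd = trace_one_direction(field, bounds_min, bounds_max, seed, -v0, h)
    return bwd[::-1] + fwd[1:]              # concatenate; seed appears once

# Hypothetical constant field along x inside a 10-unit cube:
field = lambda p: np.array([1.0, 0.0, 0.0])
bmin, bmax = np.zeros(3), np.full(3, 10.0)
path = track(field, bmin, bmax, np.array([5.0, 5.0, 5.0]), h=0.5)
```

For the constant toy field, the path runs straight from one face of the bounding cube to the other, with the seed in its interior.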

The reconstructed nerve fiber pathways are given as lists of linked points. The easiest way to visualize these paths is to display them as lines. A better impression of depth is achieved with 3D shapes such as ribbons or tubes. For this representation, the same algorithm as for the vector glyphs (Section 2.2.1) can be used: around each fiber point, a circle is approximated by a set of vertices, and the circle vertices of consecutive points are connected to form ribbons or tubes. To obtain a smooth surface also at intermediate points in sharp curves, the positions of the circle vertices are interpolated between the fiber segments. The length of each segment is defined by the distance between the fiber points; only the radius r is variable. The color coding and the lighting are equivalent to those of the glyphs (Section 2.2.2).
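The construction of the circle vertices can be illustrated with a minimal sketch. It deviates from the text in one simplification: a perpendicular frame is derived per point from a central-difference tangent rather than by interpolating between fiber segments, and the final step of connecting adjacent rings into triangle strips is omitted.

```python
import numpy as np

def tube_vertices(points, radius=1.0, n_sides=8):
    """Approximate a ring of `n_sides` vertices around each path point,
    oriented perpendicular to the local tangent. Adjacent rings would then
    be connected into triangle strips to form the tube surface."""
    points = np.asarray(points, dtype=float)
    rings = []
    for i, p in enumerate(points):
        # Tangent: central difference at interior points, one-sided at the ends.
        a = points[max(i - 1, 0)]
        b = points[min(i + 1, len(points) - 1)]
        t = b - a
        t /= np.linalg.norm(t)
        # Any reference vector not parallel to t yields a perpendicular frame (u, w).
        ref = np.array([0.0, 0.0, 1.0]) if abs(t[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
        u = np.cross(t, ref)
        u /= np.linalg.norm(u)
        w = np.cross(t, u)
        ang = np.linspace(0.0, 2 * np.pi, n_sides, endpoint=False)
        ring = p + radius * (np.outer(np.cos(ang), u) + np.outer(np.sin(ang), w))
        rings.append(ring)
    return np.stack(rings)   # shape: (n_points, n_sides, 3)
```

Every ring vertex lies at distance `radius` from its path point, in the plane perpendicular to the local tangent, which is exactly the property the ribbon/tube construction in the text relies on.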

3.2.6. Anatomical region-based visualization

In order to enable an anatomical region-based visualization, it is necessary to separate the brain regions from each other. This is usually done by a neuroanatomical expert, who correlates a 2D atlas with the 2D sections of the data. For 3D data, an extensive post-processing of the selections in the other cutting planes is necessary. This task is time-consuming, labor-intensive, and prone to intra- and interobserver variability. A more recent approach transforms the datasets into a reference space that is ideally stereotactically standardized, e.g., the Paxinos coordinate system [23].

We aligned a complete 3D-PLI rat brain dataset with the Waxholm Space atlas of the Sprague Dawley rat brain [11]. In order to ensure an accurate analysis of the 3D-PLI data, the atlas data were transformed into the coordinate space of the reconstructed data using advanced image registration algorithms [12]. The delineated regions of the atlas can be used to create an atlas-based visualization. The regions in the atlas are used as masks, so that only information of the selected regions is visible. This can be applied to all available modalities.
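Conceptually, the atlas-based masking amounts to a per-voxel label comparison. A minimal sketch with hypothetical synthetic data follows; in practice, the label volume comes from the registered Waxholm atlas and the vector field from the reconstructed 3D-PLI data.

```python
import numpy as np

def mask_by_region(vectors, labels, region_id):
    """Keep only the fiber orientation vectors whose voxel carries the
    requested atlas label; all other vectors are zeroed out (hidden)."""
    mask = (labels == region_id)
    masked = np.zeros_like(vectors)
    masked[mask] = vectors[mask]
    return masked

# Tiny synthetic example: a 4x4x4 label volume with the hypothetical
# region id 3 in one corner, and unit x-vectors everywhere in the field.
labels = np.zeros((4, 4, 4), dtype=int)
labels[:2, :2, :2] = 3
vectors = np.zeros((4, 4, 4, 3))
vectors[..., 0] = 1.0

region_only = mask_by_region(vectors, labels, region_id=3)
```

Because the mask is just a label comparison, the same operation applies unchanged to every registered modality, as noted in the text.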

Once an anatomical region has been selected, the complex fiber architecture, represented, for example, by the fiber orientation glyphs, can be investigated in real time under different viewing angles and magnifications by rotation, translation, and zooming, as demonstrated for the corpus callosum ( Figure 10 ). The displayed fiber orientations unveil the complex network of fibers and fiber bundles in the corpus callosum. The visualization tool shows that the orientation of fibers in the corpus callosum is not restricted to bundles running in parallel in the midline region and then fanning out; rather, it reveals an architecture with partly abrupt changes in orientation ( Figure 10 , arrows) and fibers crossing the corpus callosum orthogonally, including in regions close to the midline ( Figure 10 , circles).

Figure 10.

Using an atlas facilitates an anatomical region-based visualization, for instance, of the corpus callosum of the rat brain (a). Interactions with the model enable a visual analysis in all directions (b). Zooming into the fiber orientation model unveils different orientations and interrelations (c), left to right (arrow 1), lower right to upper left (arrows 2 and 5), lower left to upper right (arrows 3 and 4), and from top to bottom (arrow 6). The circles point to diverse sites, where fiber orientations are perpendicular to each other. This indicates regions with fibers running orthogonal to the image plane.

In addition, the present method overcomes the problem of visual clutter and tangle. By masking out the structures of interest, the amount of data to visualize is reduced, which allows the fiber orientations to be studied interactively. Since clipping boxes can also be applied within the regions of interest, a precise and high-resolution investigation of the fiber architecture becomes feasible.

4. Conclusions and future perspectives

The developed methods resulted in a comprehensive tool that allows a detailed, high-resolution 3D exploration of the fiber architecture based on the fiber orientation models derived from 3D-PLI. Clipping planes reveal the inner fiber architecture of the model. Adding further modalities, visualized as surfaces or volumes, provides an anatomical context. Clustering the 3D-PLI vectors or tracing fiber paths through the vector field reduces visual clutter and enables interactive work with the data. Clustering the high-resolution data also allows 3D-PLI to be compared to DTI, even though DTI provides a lower resolution than 3D-PLI [7]. 3D-PLI-based vector-type datasets are essential prerequisites for comprehensive fiber tractography at high spatial resolution, which will be investigated in future projects. The use of atlas-based parcellations represents a powerful approach not only to interpret the topography of fibers but also to improve visualization in anatomical regions of interest. The visualization techniques enable new insights into the complex fiber architecture of the brain and unveil the different orientations and interrelations of fibers and fiber bundles ( Figure 11 ).

Figure 11.

The atlas-based visualization of the fiber orientation model is also possible in combination with a 3D-PLI texture (a) or the atlas delineation itself (b).

Acknowledgments

We would like to thank M. Cremer, Research Centre Jülich, Germany, for her excellent technical assistance and the preparation of the histological sections.

This study was partially supported by the National Institutes of Health under Grant Agreement No. R01MH092311, by the Helmholtz Association of German Research Centres through the Helmholtz Portfolio Theme “Supercomputing and Modelling for the Human Brain,” and by the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement No. 720270 (HBP SGA1).

The authors gratefully acknowledge the computing time granted by the John von Neumann Institute for Computing (NIC) and provided on the supercomputer JURECA at Jülich Supercomputing Centre (JSC).

How to cite and reference


Nicole Schubert, Markus Axer, Uwe Pietrzyk and Katrin Amunts (December 20th 2017). 3D Polarized Light Imaging Portrayed: Visualization of Fiber Architecture Derived from 3D-PLI. In: Ahmet Mesrur Halefoğlu (Ed.), High-Resolution Neuroimaging – Basic Physical Principles and Clinical Applications. IntechOpen. DOI: 10.5772/intechopen.72532.
