DESIGN PROJECT ON

VISUALIZATION OF SIMULATION OF TRAUMATIC BRAIN SURGERY

BY

ANIRUDH RAVI (2011C6PS575H)
AMARSH VUTUKURI (2011B3AA561H)

Under the Supervision of

Dr. Tathagata Ray

Computer Science and Information Systems Department
BIRLA INSTITUTE OF TECHNOLOGY AND SCIENCE, PILANI, HYDERABAD
May, 2015

ACKNOWLEDGEMENT

We are thankful to Dr. Tathagata Ray for helping us finish this project and for conducting weekly sessions in order to understand the concepts of the project better. The weekly sessions and presentations of each team helped us understand the implementation details, which made it easier to create this report. We are also thankful to all the participating team members for providing us with this information so that we could create a detailed report on the requirements in a team-wise manner. We are also thankful to Dr. Chandu Parimi and Dr. N L Bhanu Murthy for constantly asking us relevant questions regarding our project and guiding us regarding the direction our project should take.

Thank you, Anirudh Ravi [2011C6PS575H] Amarsh Vutukuri [2011B3AA561H]


ABSTRACT

This report is divided into seven sections, one for each of the teams working on the project Simulation of Traumatic Brain Surgery, from data registration to noise reduction to interface reconstruction. Each section gives a summary of the team's project, explains the current visualization technique being used, and states the requirements of each team for visualization. This report can be used as a requirements document, so that it becomes clear what visualization techniques have currently been developed and how to take this visualization forward through a common framework and User Interface.


TABLE OF CONTENTS

SR.NO.  TOPIC                                      PAGE NO.
1       OBJECTIVE                                  4
2       MESHING IN 2D AND 3D FROM MRI DATA         5
3       DEFORMATION OF 3D MODELS                   8
4       SHAPE OPTIMIZED MESHING                    11
5       INTERFACE RECONSTRUCTION                   14
6       SURFACE RECONSTRUCTION IN 2D/3D            17
7       DATA ACQUISITION AND REGISTRATION          19
8       LEAP MOTION BASED GESTURE RECOGNITION      21
9       REFERENCES                                 23


OBJECTIVE

The main objectives of this design project are to:
1. Understand the implementation details of all participating teams in the design project, and understand their current visualization techniques
2. Gather the requirements of each team, both for their current work and for future visualization needs
3. Generate a requirements document based on the needs of each team
4. Provide links to sources which can be used to implement the required features and integrate them into the existing framework and into an interactive user interface


MESHING IN 2D AND 3D FROM MRI DATA

SUMMARY

Surface reconstruction is a method by which a mesh is generated from a point cloud dataset in any dimension. Imagine we have only some points in 3D space, and there is a way to generate a 3-D model which represents the point set the way a human brain would visualize it. Today, in medical analysis, deformation studies, and many other fields, we obtain point sets through the Kinect and many other sources. To turn this point cloud dataset into a meaningful representation, we need to reconstruct the original surface from which the point set was obtained. A point cloud is a set of data points in some coordinate system. In a three-dimensional coordinate system, these points are usually defined by X, Y, and Z coordinates, and are often intended to represent the external surface of an object. Point clouds may be created by 3D scanners. These devices measure a large number of points on an object's surface and often output a point cloud as a data file. The point cloud represents the set of points that the device has measured. There are many techniques for converting a point cloud to a 3D surface. Some approaches work directly on the point set, such as Delaunay triangulation, NN Crust, ball pivoting, and Cocone, while other approaches convert the point cloud into a volumetric distance field and reconstruct the implicit surface so defined through a marching cubes algorithm.

Figure 1: Poisson surface reconstruction


In the current project, Simulation of Traumatic Brain Injury, this step is an important one for visualizing the mesh constructed from the point cloud dataset extracted during sudden brain injuries. The project aims at simulating the observed surface so that medical analysis of the brain can be taken to another level.

CURRENT VISUALIZATION TECHNIQUE

The project involves taking in noise-filtered point cloud data in 3-D (produced by the noise reduction team) and meshing it in the form of tetrahedrons. This list of tetrahedrons is saved as output in the OFF format or other contemporary formats used for storing polyhedrons. The current tool being used for visualization is ParaView, an open source platform built over the Visualization Tool Kit (VTK).
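The OFF serialization step described above can be sketched in a few lines. The following is a minimal, illustrative helper (not the team's actual code), assuming each tetrahedron is stored as its four triangular faces, which is one common convention for writing tetrahedra into the triangle-oriented OFF format:

```python
def off_text(vertices, faces):
    """Serialize a mesh to OFF: vertices are (x, y, z) tuples, faces are index tuples."""
    lines = ["OFF", f"{len(vertices)} {len(faces)} 0"]  # edge count is commonly left 0
    lines += [f"{x} {y} {z}" for x, y, z in vertices]
    lines += [f"{len(f)} " + " ".join(map(str, f)) for f in faces]
    return "\n".join(lines) + "\n"

# One tetrahedron written as its four triangular faces.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tet_faces = [(0, 2, 1), (0, 1, 3), (1, 2, 3), (2, 0, 3)]
print(off_text(verts, tet_faces).splitlines()[1])  # prints "4 4 0"
```

Writing the text to a .off file then gives something ParaView and MeshLab can open directly.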

Figure 2: Using ParaView to see the calculation result of OpenFOAM. The background color represents the temperature, the arrows represent the gas velocity, and the color of the arrows represents the concentration of oxygen.


REQUIREMENTS FOR VISUALIZATION

This project has the following requirements:
- Looking at the slices of a volume
- On selection of a point, being able to see its iso-surface
- Zoom in and zoom out through the cursor
- Selection of segments and lines in the mesh, along with functionality such as the angle between two line segments, the end points of a line segment, and the node numbers of a segment
- Being able to view the boundary lines of a mesh separately or with varied coloration
- Mouse strafing in order to view the mesh from side to side instead of purely front to back
- Options such as displaying all vertices, and displaying the Voronoi diagram and Delaunay triangulation of the mesh
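One of the requested features, the angle between two line segments, reduces to a dot-product computation. A minimal sketch (the function name and point representation are illustrative):

```python
import math

def segment_angle(p, q, r, s):
    """Angle in degrees between segment pq and segment rs (points as (x, y) tuples)."""
    ux, uy = q[0] - p[0], q[1] - p[1]   # direction of first segment
    vx, vy = s[0] - r[0], s[1] - r[1]   # direction of second segment
    dot = ux * vx + uy * vy
    norm = math.hypot(ux, uy) * math.hypot(vx, vy)
    cos_t = max(-1.0, min(1.0, dot / norm))  # clamp against rounding error
    return math.degrees(math.acos(cos_t))

print(segment_angle((0, 0), (1, 0), (0, 0), (0, 1)))  # prints 90.0
```

The same formula extends to 3D by adding a z component to each direction vector.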

Figure 3: (1) 2-D contour slice of MRI data, generated using the MatLab colormap function to map color to contour value; (2) displaying 3-D contour slices


DEFORMATION OF 3D MODELS

SUMMARY

Different techniques for large deformations on 3D meshes are explored in this project. First, a graph representing the volume inside the input mesh is constructed. The graph need not form a solid meshing of the input mesh's interior; its edges simply connect nearby points in the volume. This graph's Laplacian encodes volumetric details as the difference between each point in the graph and the average of its neighbors. Preserving these volumetric details during deformation imposes a volumetric constraint that prevents unnatural changes in volume. Graph points are also included a short distance outside the mesh to avoid local self-intersections. Volumetric detail preservation is represented by a quadric energy function. Minimizing it preserves details in a least-squares sense, distributing error uniformly over the whole deformed mesh. It can also be combined with conventional constraints involving surface positions, details or smoothness, and efficiently minimized by solving a sparse linear system. These techniques are also applied in a 2D curve-based deformation system allowing users to create pleasing deformations with little effort. A novel application of this system is to apply nonrigid and exaggerated deformations of 2D cartoon characters to 3D meshes.
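The detail encoding described above can be sketched directly: each point's Laplacian (differential) coordinate is its offset from the mean of its graph neighbors. This is a simplified illustration of the idea, not the paper's actual implementation:

```python
def laplacian_coords(points, neighbours):
    """points: list of (x, y, z); neighbours: list of index lists, one per point.
    Returns each point's offset from the centroid of its neighbours."""
    deltas = []
    for i, p in enumerate(points):
        ns = neighbours[i]
        cx = sum(points[j][0] for j in ns) / len(ns)
        cy = sum(points[j][1] for j in ns) / len(ns)
        cz = sum(points[j][2] for j in ns) / len(ns)
        deltas.append((p[0] - cx, p[1] - cy, p[2] - cz))
    return deltas

# Three collinear points: the middle point sits exactly at its neighbours' mean,
# so its differential coordinate is zero.
pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
nbrs = [[1], [0, 2], [1]]
print(laplacian_coords(pts, nbrs)[1])  # prints (0.0, 0.0, 0.0)
```

Keeping these deltas fixed (in a least-squares sense) while control points move is what the quadric energy in the text penalizes.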

Figure 4: Large deformation of the Stanford Armadillo. Left: original mesh; middle: deformed result using Poisson mesh editing; right: deformed result using Volumetric Graph Laplacian technique. Poisson mesh editing causes unnatural shrinkage especially in the model’s right thigh


In the current project, Simulation of Traumatic Brain Injury, this step is an important one for visualizing the deformed mesh obtained after modifying the mesh built from the point cloud dataset extracted during sudden brain injuries. The project aims at simulating all deformations that take place on the observed surface so that medical analysis of the brain can be taken to another level. Concretely, given a point cloud P and a set of control points C, if the control points are moved to new positions C', the project estimates the new corresponding positions of the points in P.

CURRENT VISUALIZATION TECHNIQUE

Currently, visualization is done using the PCL wrappers of VTK as well as plotPointCloud of MeshLab. PCLVisualizer, PCL's full-featured visualization class, which is a set of wrapper functions over VTK, is mainly used for visualization.

Figure 5: Visualization obtained using PCL Wrappers


REQUIREMENTS FOR VISUALIZATION

This project has the following requirements:
- Easy movement inside the visualizer with keyboard/mouse, supporting MeshLab-style manipulations
- Multiple colors of point cloud, so that the RGB point cloud data information produced by the Kinect is not lost
- Support for the PCL pointWithNormal class, so that normal information for a surface can be retrieved from the User Interface
- Support for region selection of a surface, similar to MeshLab's intuitive surface selection

Figure 6: PCLVisualizer displaying the RGB point cloud generated by the Kinect for a set of points


SHAPE OPTIMIZED MESHING

SUMMARY

The project presents a way to generate a shape-optimal unstructured 2D mesh from point cloud data. Delaunay meshing is used, as it guarantees optimal properties for the mesh, such as the maximum minimum angle and the min-containment property, among all possible triangulations. It focuses on the 2D aspects of the method, but they can also be extended to 3D. One of the most ubiquitous ways to digitize an existing model or object is to do a three-dimensional scan. In recent times, scanning technology such as Microsoft's Kinect has made scanning of 3D objects more accessible. In buildings such as nuclear power plants, where access to the components that need to be analysed is limited, 3D scanning technologies are requisite. The point clouds thus generated need to be converted into a mesh in order to analyse the components by the Finite Element Method (FEM). This methodology is shown in Figure 7. Furthermore, meshing is a very cumbersome process for use in FEM. As the geometry gets more complicated, the meshing process becomes more rigorous, and as problem sizes and the number of degrees of freedom increase, the process gets very tedious.
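The "maximum minimum angle" quality criterion mentioned above can be measured per triangle. A small sketch of such a quality metric (names are illustrative; this is not the project's code):

```python
import math

def min_angle_deg(a, b, c):
    """Smallest interior angle of triangle abc in degrees, for 2D points."""
    def angle_at(p, q, r):  # interior angle at vertex p
        v1 = (q[0] - p[0], q[1] - p[1])
        v2 = (r[0] - p[0], r[1] - p[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return min(angle_at(a, b, c), angle_at(b, c, a), angle_at(c, a, b))

print(min_angle_deg((0, 0), (1, 0), (0, 1)))  # prints 45.0 (right isosceles triangle)
```

A Delaunay triangulation maximizes the minimum of this value over all triangles, which is why it is preferred for FEM meshes: sliver triangles with tiny angles degrade the conditioning of the resulting linear systems.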

Figure 7: Meshing for use in FEM for a given Geometry


In the current project, Simulation of Traumatic Brain Injury, this step is an important one for visualizing the mesh constructed from the point cloud dataset extracted during sudden brain injuries. The project aims at simulating the observed surface so that medical analysis of the brain can be taken to another level.

CURRENT VISUALIZATION TECHNIQUE

Currently, a purely OpenGL-based visualizer is used to do the visualization.

Figure 8: Visualization of a shape and its corresponding Voronoi diagram

REQUIREMENTS FOR VISUALIZATION

This project has the following requirements:
- Looking at the slices of a volume
- On selection of a point, being able to see its iso-surface
- Zoom in and zoom out through the cursor
- Selection of segments and lines in the mesh, along with functionality such as the angle between two line segments, the end points of a line segment, and the node numbers of a segment
- Being able to view the boundary lines of a mesh separately or with varied coloration
- Mouse strafing in order to view the mesh from side to side instead of purely front to back
- Options such as displaying all vertices, and displaying the Voronoi diagram and Delaunay triangulation of the mesh


- Given a faulty input, being able to detect the error in the curve and highlight that particular area of the curve

Figure 8: Being able to select points in the point cloud along with changing view and camera angles


INTERFACE RECONSTRUCTION

SUMMARY

The topic focuses on Marching Squares and Marching Cubes, which are computer graphics algorithms that generate contours for a 2-D and 3-D field respectively. These have to be applied on brain slices to distinguish and find the interface between the white and grey matter of the brain. The algorithm is a divide-and-conquer algorithm that uses a predefined case table. It takes eight neighbouring locations at a time (thus forming an imaginary cube), then determines the predefined polygon that best represents the part of the isosurface that passes through this cube. Combining all such polygons gives us the desired surface. The case table contains all the polygons. There can be 256 possible configurations for each cube, taking each vertex as a bit, but exploiting rotational and reflective symmetry reduces these to 15 configurations, although ambiguous cases do creep in. The next step involves interpolating normals along the edges of each cube to get normal values for the generated vertices, similar to Gouraud shading in computer graphics.

Figure 9: Isolines/contour lines - lines having a single data value, or isovalue


Marching Squares is an algorithm that generates contour lines for 2-D arrays by deciding how an isoline will intersect each square (four neighbouring values of the 2-D array). Similarly, Marching Cubes generates an iso-surface for a 3-D array by determining how it will intersect a cube (eight neighbouring vertices of the array).
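The case lookup described above can be sketched for Marching Squares: the four cell corners are thresholded against the isovalue and packed into a 4-bit index (0-15) that selects the contour configuration for that cell. Corner ordering and names here are illustrative:

```python
def cell_case(values, isovalue):
    """Marching Squares case index for one cell.
    values: corner scalars in (bottom-left, bottom-right, top-right, top-left) order.
    Each corner at or above the isovalue contributes one bit to the 4-bit case."""
    case = 0
    for bit, v in enumerate(values):
        if v >= isovalue:
            case |= 1 << bit
    return case

# Right two corners inside the isovalue: bits 1 and 2 set.
print(cell_case((0.2, 0.9, 0.8, 0.1), 0.5))  # prints 6
```

Marching Cubes works the same way with eight corners, giving the 256 raw cases (reduced to 15 by symmetry) that the text mentions.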

CURRENT VISUALIZATION TECHNIQUE Currently a purely OpenGL based visualizer is used to do the visualization.

Figure 10: Isosurface generated for a 2D set of points using the current visualizer

REQUIREMENTS FOR VISUALIZATION

This project has the following requirements:
- Taking a file as input, running the algorithm (Marching Squares or Marching Cubes), and generating the output
- Zooming, panning, and changing camera angles so as to view the generated isosurface
- Being able to color the inside of a region in order to separate it from the outside, or being able to color distinctly the various isolines (for different isovalues)
- Being able to compare the output of two different input files, for example white matter against grey matter, or to compare the output for the same input file with different iso-values (compare contours for various iso-values)
- Writing visualized output to a standard format like OFF/PLY/OBJ

Figure 11: Three-dimensional reconstruction of a brain from MRI data using Marching Cubes (it contains white/gray matter, cortex, stem, glia, pituitary gland)


SURFACE RECONSTRUCTION IN 2D/3D

SUMMARY

The problem of surface reconstruction from a set of 3D points, given by their coordinates and oriented normals, is a difficult one which has been tackled with many different approaches. An efficient reconstruction method, ball pivoting, is used in this case: a ball pivots around triangle edges, and a new triangle is added whenever the ball touches three points and contains no other points.

CURRENT VISUALIZATION TECHNIQUE

Currently, MeshLab is used to see if there are any irregularities in the quality of the output being generated (such as holes in the reconstructed surface) and to compute normals for the input point cloud data. The normal data is stored in ASCII format.

Figure 12: Output of the Ball Pivoting Algorithm rendered in MeshLab. Information such as normals, the number of faces, and the quality of the output is inspected through this.


REQUIREMENTS FOR VISUALIZATION

This project has the following requirements:
- Taking a file as input, running the algorithm (Ball Pivoting Algorithm), and generating the output
- Zooming, panning, and changing camera angles so as to view the generated output
- Being able to retrieve information regarding the normal to the surface through the User Interface
- Getting information such as the number of faces and vertices
- Storing the data generated in the UI in standard formats such as PLY/OFF/OBJ
- Being able to highlight discrepancies in the generated output in the UI by running error detection algorithms

Figure 12: Output of Ball pivoting algorithm along with point of view change, number of vertices and number of faces being displayed


DATA ACQUISITION AND REGISTRATION

SUMMARY

The objective of this project is to transform sets of surface measurements into a common coordinate system. Two approaches were proposed: using Iterative Closest Point (ICP) or using Contour Coherence (CC); CC is the one being implemented. Inspired by ICP, it maximizes contour coherence by building robust corresponding pairs on apparent contours and minimizing their distances in an iterative fashion. Contour coherence is defined as the agreement between the observed apparent contour and the predicted apparent contour. A contour-coherence-based registration algorithm is developed to align wide-baseline range scans. Traditional registration algorithms such as ICP fail in this case, as many closest point-to-point corresponding pairs are incorrect in the presence of limited overlap. Contour coherence, on the other hand, still serves as a strong cue: no matter the amount of overlap, only the 2D contour points are used for registration. An example is the registration of two range scans of the Stanford bunny with an overlap of approximately 40% (Figure 13.1). ICP-style algorithms fail as most correspondences are incorrect; the two range scans are, however, successfully registered using contour coherence (Figure 13.2).

Figure 13: ICP giving incorrect results on the left while CC gives correct output on the right

Contour coherence is applied to address the problem of multi-view rigid registration, and is further extended to solve the problem of multi-view piecewise rigid registration. This allows reconstruction of rigid as well as articulated objects from as few as 4 range scans, i.e., front, back, and two profiles.
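Both ICP and the contour-coherence variant repeatedly solve the same inner subproblem: a least-squares rigid transform fitted to a set of corresponding point pairs. A sketch of that step for 2D points (in 2D there is a closed form, so no SVD is needed; function and variable names are illustrative, and this is not the project's implementation):

```python
import math

def fit_rigid_2d(src, dst):
    """Least-squares rigid fit for corresponding 2D point pairs.
    Returns (theta, tx, ty) so that rotating src by theta and translating
    by (tx, ty) best maps it onto dst."""
    n = len(src)
    sx = sum(p[0] for p in src) / n; sy = sum(p[1] for p in src) / n
    dx = sum(p[0] for p in dst) / n; dy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax, ay, bx, by = ax - sx, ay - sy, bx - dx, by - dy  # center both clouds
        num += ax * by - ay * bx   # cross terms
        den += ax * bx + ay * by   # dot terms
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    tx = dx - (c * sx - s * sy)
    ty = dy - (s * sx + c * sy)
    return theta, tx, ty
```

An ICP iteration alternates this fit with re-matching closest points; the contour-coherence method replaces the closest-point matches with correspondences built on apparent contours, which is why it stays robust under small overlap.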


CURRENT VISUALIZATION TECHNIQUE

MeshLab is being used to visualize the point clouds generated both before and after the Contour Coherence algorithm is run.

REQUIREMENTS FOR VISUALIZATION

This project has the following requirements:
- Being able to specify which Kinect to take the capture from
- Being able to change the angles of the Kinect, and to view the changes in visualization after running the Contour Coherence algorithm
- Zooming, panning, and changing camera angles so as to view the generated output
- Being able to retrieve information regarding the normal to the surface through the User Interface
- Getting information such as the number of faces and vertices
- Storing the data generated in the UI in standard formats such as PLY/OFF/OBJ
- Being able to highlight discrepancies in the generated output in the UI by running error detection algorithms


LEAP MOTION BASED GESTURE RECOGNITION

SUMMARY

The objective of this project is to use the Leap Motion device to train a system to detect various hand gestures. The Leap Motion provides a mechanism to track hand and fingertip movement within a space. This project builds on top of this by providing a simple API to record and later recognize hand movements and positions within this space. This recognition capability can be easily integrated into new applications to help build motion interfaces. A gesture is a hand movement with a recognizable start and end - for example, a wave, a tapping motion, or a swipe right or left. A pose is a hand position that is held motionless for a few moments - for example, holding up some fingers to indicate a number, pointing, or making a stop sign. The difference in how LeapTrainer recognizes gestures as opposed to poses is that gesture recognition starts when hand movement suddenly speeds up, and ends when it slows down (or stops); so a quick karate chop will trigger gesture recognition. Pose recognition, on the other hand, starts when movement suddenly stops and remains more or less unchanged for a short period - so just holding a thumbs-up for a moment or two will trigger pose recognition. First, a Leap Motion device needs to be connected to the machine on which the browser running the program is open. The Leap monitors movement and transmits data to the browser via the Leap Motion JavaScript API. This data is then analysed by the LeapTrainer framework and used to learn gestures and fire events when known gestures and poses are detected.
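The speed-based start/stop rule described above can be sketched as follows. The thresholds and function name are purely illustrative and are not LeapTrainer's actual API; the input is a per-frame hand-speed series such as a tracker might report:

```python
def segment_gesture(speeds, start_thresh=300.0, stop_thresh=100.0):
    """Find the first gesture in a per-frame speed series (e.g. mm/s).
    A gesture starts when speed rises above start_thresh and ends when it
    drops below stop_thresh. Returns (start_frame, end_frame) or None."""
    start = None
    for i, v in enumerate(speeds):
        if start is None and v > start_thresh:
            start = i            # sudden speed-up: gesture begins
        elif start is not None and v < stop_thresh:
            return (start, i)    # slow-down: gesture ends
    return None

print(segment_gesture([20, 50, 400, 600, 450, 80, 10]))  # prints (2, 5)
```

Pose detection would invert the rule: trigger when speed stays below a threshold for a sustained run of frames rather than when it spikes.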


CURRENT VISUALIZATION TECHNIQUE

WebGL is currently being used for rendering and visualization of the hand gestures through LeapTrainer.js.

Figure 14: Trail of hand gesture rendered using WebGL

REQUIREMENTS FOR VISUALIZATION

This project has the following requirements:
- Visualization is currently done using WebGL; this program has to be changed to a desktop-based interface using the Leap Motion APIs
- Hand and finger recognition and visualization
- Arm-based recognition using the algorithm, and rendering and visualization of arm-based gestures
- Bone recognition and visualization, to be able to see the current position of a finger

Figure 14: Hand and fingers rendered in OpenGL visualizer


REFERENCES

The following links were useful:

• Surface Reconstruction from Point Sets: http://doc.cgal.org/latest/Surface_reconstruction_points_3/
• Using MatLab Graphics – Visualizing MRI Data: http://www.thphys.nuim.ie/CompPhysics/matlab/help/techdoc/umg/chvolvi3.html
• Large Mesh Deformation Using the Volumetric Graph Laplacian: http://www.kunzhou.net/publications/VGL.pdf
• PCLVisualizer: http://pointclouds.org/documentation/tutorials/pcl_visualizer.php
• Intracranial analysis from high biofidelic brain models: http://cargocollective.com/aurelie_jean/Brain-Tissue-Mechanics
• LeapTrainer.js: https://github.com/roboleary/LeapTrainer.js/tree/master
• Leap Motion Developer Portal: https://developer.leapmotion.com/documentation/cpp/api/Leap.Controller.html

