Globally Optimal Tumor Segmentation in PET-CT Images: A Graph-Based Co-Segmentation Method

Dongfeng Han†, John Bayouth†, Qi Song‡, Aakant Taurani‡, Milan Sonka†, John Buatti†, Xiaodong Wu†‡

† Department of Radiation Oncology, ‡ Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, USA
[email protected], {john-bayouth, qi-song, aakant-taurani, milan-sonka, john-buatti, xiaodong-wu}@uiowa.edu

Abstract. Tumor segmentation in PET and CT images is notoriously challenging due to the low spatial resolution of PET and the low contrast of CT images. In this paper, we propose a general framework that uses PET and CT images simultaneously for tumor segmentation. Our method exploits the strength of each imaging modality: the superior contrast of PET and the superior spatial resolution of CT. We formulate this problem as a Markov Random Field (MRF) based segmentation of the image pair with a regularization term that penalizes the segmentation difference between PET and CT. Our method simulates the clinical practice of delineating the tumor using both PET and CT simultaneously, and concurrently segments the tumor from both modalities, achieving globally optimal solutions in low-order polynomial time by a single maximum-flow computation. The method was evaluated on clinically relevant tumor segmentation problems. The results showed that our method effectively makes use of both PET and CT image information, yielding a segmentation accuracy of 0.85 in Dice similarity coefficient and an average median Hausdorff distance (HD) of 6.4 mm, a 10% (resp., 16%) improvement over the graph cuts method using the PET (resp., CT) images alone.

1 Introduction

Image-guided radiation therapy utilizes daily on-board image-based verification of target and critical-structure positions, which enables precise delivery of a high radiation dose to an image-defined target volume while maintaining a low dose to the surrounding critical structures. Accurate target definition is essential to obtain the full benefit of image-guided radiotherapy for head-and-neck cancer (HNC) or lung cancer. Standard treatment paradigms utilize manual target


Fig. 1. The PET-CT co-segmentation. (a) The idea of the proposed method: for two co-registered PET and CT images, the superior contrast of PET and the superior spatial resolution of CT can be combined to obtain a more accurate segmentation; using this information simultaneously is superior to using a single modality. (b) and (c) The registered PET and CT images. The white contour is the ground truth and the blue contour is our co-segmentation result. The tumor is difficult to identify in the CT image alone; using both PET and CT images simultaneously, it can be segmented with high accuracy.

volume definitions on computed tomography (CT) scans. This is potentially inaccurate and subject to both high intra- and inter-observer variability because the tumors are similar in density to the surrounding soft tissues. It is difficult to obtain a consistent tumor definition in an objective way by manual contouring. To improve visualization of the tumor volume, functional imaging using F-fluorodeoxyglucose (FDG) positron emission tomography (PET) has been combined with treatment-planning CT to assist radiation oncologists in target delineation. PET-CT has become a standard method for tumor staging and is increasingly used by radiation oncologists in tumor target delineation. During an FDG-PET scan, a tracer dose of the radioactive material, FDG, is injected intravenously. FDG uptake is increased in cells with a high metabolic rate, hence diseased areas (such as tumor and inflammation) appear in FDG-PET as high-uptake hot spots. Although the high contrast between tumor and normal tissue on PET images can reduce inter- and intra-observer variability in tumor localization, observer variability in tumor delineation with the qualitative use of FDG-PET is still high and often inconsistent with CT-based anatomically defined tumor contours. Accurate segmentation of an FDG-PET image for tumor delineation is a complex and unresolved problem. Attempts to make definitions more consistent through objective/automated segmentation methods (i.e., standardized uptake value thresholds) have been fraught with difficulty. Factors such as the image reconstruction algorithm, the partial volume effect for objects smaller


than twice the full width at half maximum, the display windowing, the image registration between anatomically based multi-modality (CT/MR) datasets, and the threshold approach utilized all influence image segmentation. Objective and robust tools are needed to utilize the novel information that molecular imaging techniques provide. This would be tremendously valuable in radiation therapy to capitalize on advances in the precision of radiation planning and delivery, as we have reached an era where our precision in delivering a radiation dose distribution that conforms three-dimensionally to the shape of our target is far greater than the precision with which the target can be defined. Currently, dual-modality PET-CT imaging is widely used in clinical therapy and increasingly directly in the treatment planning process, albeit in a largely subjective and inconsistent fashion. Currently available automated tumor delineation methods have relied solely on a threshold of an absolute PET intensity value [1]. The CT image data is ignored and often contradicts the threshold-based target definition. The complexity of human anatomy and neoplastic growth produces sophisticated features in images, which cannot fully be captured in single-modality images. Because the information presented in PET and CT is distinct but complementary, dual-modality PET-CT images provide great potential for the segmentation field. This complementary character is well known in clinical practice, where the diagnostic accuracy of combined PET-CT has proven superior to either PET or CT alone. We propose an efficient graph-based method to utilize the strength of each system for target/tumor delineation: the superior contrast of PET and the superior spatial resolution of CT. Recently, graph-based optimization has received considerable attention in the computer vision area [2–6].
The basic idea is to transform the target problem into an energy minimization problem, which can be efficiently solved by certain discrete optimization tools. In this paper, a general framework is proposed that uses both PET and CT images simultaneously for dual-modality co-segmentation, formulated as a Markov Random Field (MRF) optimization problem. We introduce a new energy term into the objective function that penalizes the segmentation difference between the PET and CT images, so that the tumor is segmented in PET and CT simultaneously. Our new algorithm is guaranteed to achieve globally optimal solutions with respect to the co-segmentation energy function in low-order polynomial time by computing a single maximum flow in the constructed graph. This has clinical implications for improving both radiation therapy target definition and response assessment in clinical oncology. Fig. 1 shows an example of the PET-CT co-segmentation.

1.1 Related Work

Image segmentation from co-registered PET-CT has attracted increasing attention in recent years. Yu et al. [7] demonstrated the effectiveness of automated region-based segmentation of head-and-neck tumors from PET-CT by validating the results against manual tracings by radiation oncologists. The usefulness of combining the information of both PET and CT images for segmentation


was shown by Baardwijk et al. [8]. Yu et al. [9] further demonstrated the effectiveness of using co-registered segmentation to identify useful textural features for distinguishing tumor from normal tissue in the head and neck regions. FDG-PET images co-registered with CT images were also shown by Riegel et al. [10] to be superior to CT images alone in reducing inter-observer variability. Potesil et al. [11] proposed a method utilizing initial robust hot-spot detection and segmentation performed in PET to provide a basic tumor appearance and shape model for classifying voxels in CT. Unfortunately, all those previous methods lack the capability of fully exploiting the strengths of both PET and CT in the sense of concurrently segmenting the tumor from both modalities. Xia et al. [12] presented a systematic solution for the co-segmentation of brain PET-CT images with the MAP-MRF model, which was achieved by solving a maximum a posteriori (MAP) problem using the expectation-maximization (EM) algorithm with simulated annealing. Their method suffered from long execution times due to the use of simulated annealing. The limitations of existing approaches are the main impetus for designing a new, efficient method for simultaneously segmenting the tumor from both PET and CT with high accuracy.

2 Problem Formulation

We formulate the co-segmentation task as a binary labeling of a Markov Random Field (MRF) on a graph corresponding to the input PET and CT images. The method simultaneously minimizes the total MRF energy for both PET and CT images while penalizing the segmentation difference between the two images. In our co-segmentation problem, a discrete random variable fu is introduced for each voxel u in either the input PET or CT image. Denote by fP (resp., fC) the set of variables corresponding to the voxels in the input PET (resp., CT) image. In the preprocessing phase, we apply an image registration algorithm to register the PET image to the CT image and upsample the PET image to make sure that both have the same size. We thus assume that there is a one-to-one correspondence between fP and fC, and denote by u′ the voxel in the CT image corresponding to the voxel u in the PET image. Each variable f in fP or fC takes a value from the label set L = {0, 1}, indicating that the voxel is in the foreground (f = 1) or in the background (f = 0). Then, every possible assignment of the random variables in fP (resp., fC) defines a tumor region in the PET (resp., CT) image. We could compute an optimal tumor segmentation in the PET or CT image separately, minimizing the corresponding MRF energy by applying Boykov and Funka-Lea's graph cuts method [2]. However, that method does not make use of the information from the other modality. To utilize the strength of both systems for segmentation, we introduce a third set of co-segmentation binary variables fP−C, each associated with a pair of corresponding voxels (u, u′) in the PET and CT images. fP−C is used to incorporate the energy penalty of the segmentation difference between the PET


and CT. That is, if the labelings of the pair of corresponding voxels (u, u′), fu and fu′, are the same, then no penalty is enforced; otherwise, we penalize the disagreement. Furthermore, the PET and CT may weigh in differently in the final co-segmentation result. Thus, a different penalty is enforced depending on whether the co-segmentation label f(u,u′) agrees with fu or with fu′. Hence, the problem of PET-CT co-segmentation is to minimize the following energy function:

E_{PET-CT} = E_P(f_P) + E_C(f_C) + E_{P-C}(f_{P-C}),  (1)

where EP(fP) and EC(fC) are the MRF energy functions for the PET and CT, respectively, and the energy term EP−C(fP−C) penalizes the segmentation difference between the PET and CT. The co-segmentation energy term EP−C(fP−C) makes use of the high contrast of PET and the high spatial resolution of CT, linking the PET segmentation and the CT segmentation into a co-segmentation process.

The MRF Segmentation Energy for the PET. Let NP denote the neighborhood system used in the input PET image IP (e.g., the 4-neighbor setting). The MRF energy term EP(fP) consists of a data term and a smoothness term, defined as follows:

E(f_P) = \sum_{u \in I_P} d_u(f_u) + \sum_{(u,v) \in N_P} w_{u,v}(f_u, f_v).  (2)

The data term du(fu) is the likelihood that imposes an individual penalty for assigning a label fu (i.e., the foreground (tumor) or the background) to the voxel u. The smoothness term wu,v(fu, fv), representing the interaction potential [2], measures the cost of assigning different labels to two neighboring voxels u and v in IP:

w_{u,v}(f_u, f_v) = \begin{cases} c(u, v), & \text{if } f_u \neq f_v \\ 0, & \text{if } f_u = f_v \end{cases}  (3)

where c(u, v) is the smoothness value computed from the neighboring voxels u and v. Note that we can obtain an optimal segmentation using only the PET image IP with respect to the energy function E(fP) by applying the graph cuts method [2].

The MRF Segmentation Energy for the CT. Let NC denote the neighborhood system used in the input CT image IC. The MRF energy term EC(fC) has the same form as the energy term EP(fP):

E(f_C) = \sum_{u' \in I_C} d_{u'}(f_{u'}) + \sum_{(u',v') \in N_C} w_{u',v'}(f_{u'}, f_{v'}).  (4)

Again, an optimal segmentation using only the CT image IC with respect to the energy function E(fC ) can be obtained by applying the graph cuts method [2].
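To make the energy concrete, the following sketch evaluates an MRF energy of the form of Eqs. (2)–(4) for a binary labeling of a small 2-D image under a 4-neighborhood, using an exponential smoothness weight of the kind used later in the paper. The function name, the array layout, and the NumPy setup are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mrf_energy(image, labels, data_term, lam=50.0, theta=1.0):
    """E(f) = sum_u d_u(f_u) + sum_{(u,v) in N} w_uv(f_u, f_v) on a 2-D grid.

    image:     (H, W) intensities
    labels:    (H, W) binary labeling f (integer 0/1)
    data_term: (2, H, W) array, data_term[l, y, x] = d_u(f_u = l)
    """
    H, W = image.shape
    ys, xs = np.indices((H, W))
    energy = data_term[labels, ys, xs].sum()           # data term

    def smooth(ia, ib, la, lb):
        # Potts interaction: pay lam * exp(-theta * |Ia - Ib|) iff labels differ.
        w = lam * np.exp(-theta * np.abs(ia - ib))
        return np.where(la != lb, w, 0.0).sum()

    # Each unordered 4-neighbor pair counted once (down and right neighbors).
    energy += smooth(image[:-1, :], image[1:, :], labels[:-1, :], labels[1:, :])
    energy += smooth(image[:, :-1], image[:, 1:], labels[:, :-1], labels[:, 1:])
    return energy
```

Minimizing this energy over all labelings is what the graph cut computes; the sketch is only the evaluator used to check a candidate labeling.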


The Co-Segmentation Energy Term EP−C(fP−C) penalizes the segmentation difference between the PET IP and the CT IC. Each variable f(u,u′) ∈ fP−C, associated with a pair of corresponding voxels (u, u′) in IP and IC, takes a label from the label set L = {0, 1}. If f(u,u′) = 1, then regardless of whether fu agrees with fu′, the voxels u and u′ are classified as foreground (tumor); otherwise, the voxels u and u′ are classified as background. We use the function γu,u′(fu, f(u,u′), fu′) to penalize the disagreement of fu and fu′ and to resolve the disagreement based on the salient features of the PET and CT. A generalized Potts model is used in our method:

\gamma_{u,u'}(f_u, f_{(u,u')}, f_{u'}) = \begin{cases} 0, & \text{if } f_u = f_{(u,u')} = f_{u'}, \\ \varphi_1(u, u'), & \text{if } f_u \neq f_{u'},\ f_{(u,u')} = f_{u'}, \\ \varphi_2(u, u'), & \text{if } f_u \neq f_{u'},\ f_{(u,u')} = f_u, \end{cases}  (5)

where φ1(u, u′) and φ2(u, u′) are the values penalizing the segmentation difference between u and u′. Thus the co-segmentation energy term is defined as

E_{P-C}(f_{P-C}) = \sum_{u \in I_P,\, u' \in I_C} \gamma_{u,u'}(f_u, f_{(u,u')}, f_{u'}).  (6)
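A direct transcription of the generalized Potts penalty of Eq. (5) can be written as a small helper; the function and argument names are hypothetical, and φ1, φ2 are assumed precomputed for the voxel pair.

```python
def gamma(f_u, f_uup, f_up, phi1, phi2):
    """Generalized Potts penalty of Eq. (5) for one corresponding voxel pair.

    f_u:   PET label; f_up: CT label; f_uup: co-segmentation label f_(u,u').
    Assumes f_uup agrees with both when f_u == f_up (the admissible case).
    """
    if f_u == f_up:
        return 0.0            # PET and CT agree: no penalty
    if f_uup == f_up:
        return phi1           # disagreement resolved in favor of the CT label
    return phi2               # f_uup == f_u: resolved in favor of the PET label
```

Because φ1 and φ2 may differ, the model can let one modality "win" disagreements more cheaply than the other, which is how the relative trust in PET versus CT enters the energy.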

3 The Graph Optimization Method

This section presents our graph-based algorithm for the PET-CT co-segmentation problem, which achieves a globally optimal solution with respect to the energy function EPET−CT defined in Eq. (1). The algorithm runs in low-order polynomial time by computing a minimum-cost s-t cut in a constructed graph G(V, E). Each variable fu ∈ fP (resp., fu′ ∈ fC) corresponds to exactly one node in VP (resp., VC). To simplify the notation, in graph G we also use u (resp., u′) to denote the node corresponding to voxel u (resp., u′) in IP (resp., IC). In addition, a graph node zu,u′ ∈ VP−C is introduced for every variable f(u,u′) ∈ fP−C. Since our goal is to formulate the co-segmentation problem as a minimum-cost s-t cut problem, two terminal nodes, a source s and a sink t, are added. Hence, the graph node set is V = VP ∪ VC ∪ VP−C ∪ {s, t}. As in Boykov and Funka-Lea's graph cuts method [2], we introduce t-edges and n-edges for the sub-node sets VP and VC separately, and additional d-edges between the corresponding nodes of VP−C and VP, and of VP−C and VC.

(1) t-Edges are used to incorporate the data term of the MRF segmentation energy (i.e., the data terms in Eqs. (2) and (4)). For each node u in VP, we put an edge from s to u with edge cost du(fu = 1) and an edge from u to t with edge cost du(fu = 0). As in the graph cuts method [2], two subsets F and B of seed voxels are manually identified, with F containing the foreground seeds and B the background seeds. For any voxel in F (B), we add hard constraints to make sure it is foreground (background) in the final result. Besides being used for the hard constraints, F and B are also used to compute the Gaussian


Mixture Model (GMM) representing the intensity distribution of each voxel in IP. Each of the background and foreground GMMs is a full-covariance Gaussian mixture with K components (K = 15 in our study). The negative log-likelihoods of a voxel being background or foreground obtained by this method are used in du(fu). Denote by Iu the intensity value of voxel u in IP. The values of du(fu) are computed as in the following table.

du(·)   | u ∈ F | u ∈ B | u ∉ F ∪ B
fu = 0  | +∞    | 0     | −λ1 log Pr(Iu | 'background')
fu = 1  | 0     | +∞    | −λ2 log Pr(Iu | 'foreground')

In the same way, we introduce t-edges for each node u′ in VC.

(2) n-Edges are used to enforce the smoothness term of the MRF segmentation energy (i.e., the smoothness terms in Eqs. (2) and (4)). For each voxel pair (u, v) in the PET image IP with (u, v) ∈ NP (recall that NP is the neighborhood setting of IP), we add two n-edges in G, one from the node u ∈ VP to the node v ∈ VP and the other in the opposite direction from v to u. The cost of each edge is c(u, v), with

c(u, v) = \begin{cases} \lambda_3 e^{-\theta_1 \|I_u - I_v\|}, & (u, v) \in N_P \\ 0, & \text{otherwise} \end{cases}  (7)

The n-edges are introduced for the sub-node set VC in the same way with the neighborhood setting NC.

(3) d-Edges are used to penalize the segmentation difference between the PET and CT images. Recall that for each corresponding pair of nodes (u, u′) with u ∈ VP and u′ ∈ VC, we introduce a node zu,u′ ∈ VP−C. We put two edges between u and zu,u′, one from u to zu,u′ and the other from zu,u′ to u, each with cost φ1(u, u′). These d-edges penalize the case that fu ≠ fu′ but f(u,u′) = fu′. Similarly, two edges between zu,u′ and u′ are added, each with cost φ2(u, u′), penalizing the case that fu ≠ fu′ but f(u,u′) = fu. Both φ1(u, u′) and φ2(u, u′) are computed as follows:

\varphi_1(u, u') = \lambda_4 e^{-\theta_2 \|I_u - I_{u'}\|}, \qquad \varphi_2(u, u') = \lambda_5 e^{-\theta_3 \|I_u - I_{u'}\|}.  (8)

We thus finish the construction of the graph G from the input PET and CT images IP and IC.
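The seed-driven GMM data term can be sketched with scikit-learn's GaussianMixture standing in for the paper's full-covariance mixture. The helper below and its defaults are an illustrative assumption (in particular, K must not exceed the number of seed samples, and the seed intensities are treated as 1-D).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_data_term(intensities, fg_seed_vals, bg_seed_vals, K=15,
                  lam1=1.0, lam2=1.0, seed=0):
    """Negative log-likelihood data costs d_u(f_u) for non-seed voxels.

    Returns an array of shape (2, n): row 0 = cost of labeling background,
    row 1 = cost of labeling foreground, as in the table above.
    """
    def fit(vals):
        gm = GaussianMixture(n_components=K, covariance_type='full',
                             random_state=seed)
        return gm.fit(np.reshape(vals, (-1, 1)))

    bg, fg = fit(bg_seed_vals), fit(fg_seed_vals)
    x = np.reshape(intensities, (-1, 1))
    d0 = -lam1 * bg.score_samples(x)   # -lambda1 * log Pr(I_u | background)
    d1 = -lam2 * fg.score_samples(x)   # -lambda2 * log Pr(I_u | foreground)
    return np.stack([d0, d1])
```

Seed voxels themselves bypass this and receive the hard 0/+∞ costs from the table, which in a capacitated graph are realized with a sufficiently large finite capacity.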
Fig. 2 shows an example construction of the graph. For 3-D images, we use the 6-neighborhood system for both NP and NC (this can easily be extended to any other form of neighborhood system). We next show that the minimum-cost s-t cut C∗ = (S∗, S̄∗) defines a segmentation minimizing the objective energy function in Eq. (1). Note that for a corresponding pair of voxels (u, u′) in the PET and CT, if fu = fu′, then no d-edge between u and zu,u′, or between zu,u′ and u′, can be in C∗. Thus, we only consider s-t cuts not including that kind of d-edge, namely, admissible cuts. It is easy to see that any finite admissible s-t cut C = (S, S̄) defines a feasible configuration for fP ∪ fC ∪ fP−C (i.e., a co-segmentation). We


Fig. 2. Graph construction. Three types of edges are used. For each node u ∈ VP ∪ VC, the edge cost of s → u is du(fu = 1) and the edge cost of u → t is du(fu = 0). For two neighboring nodes (u, v) ∈ NP ((u, v) ∈ NC), the edge cost of u → v is c(u, v). For a node u ∈ VP and its correspondence u′ ∈ VC, the edge costs for u → zu,u′ and zu,u′ → u are φ1(u, u′); the edge costs for u′ → zu,u′ and zu,u′ → u′ are φ2(u, u′). Refer to the text for details.

can also show that the total cost w(C) of the cut C equals the total energy EPET−CT of the segmentation. Divide the s-t cut C into three disjoint sub-cuts: ((S ∩ VP) ∪ {s}, (S̄ ∩ VP) ∪ {t}), ((S ∩ VC) ∪ {s}, (S̄ ∩ VC) ∪ {t}), and the d-edges in C. Using an argument similar to that in Boykov and Funka-Lea's graph cuts method [2], we can prove that the total cost of the sub-cut ((S ∩ VP) ∪ {s}, (S̄ ∩ VP) ∪ {t}) equals the MRF segmentation energy E(fP) for the PET, and that the total cost of the sub-cut ((S ∩ VC) ∪ {s}, (S̄ ∩ VC) ∪ {t}) equals the MRF segmentation energy E(fC) for the CT. We now calculate the total cost of the d-edges in C, denoted Cd. For any edge (a, b) ∈ Cd (i.e., a ∈ S and b ∈ S̄), four cases need to be considered: Case 1) a ∈ VP and b ∈ VP−C; Case 2) a ∈ VP−C and b ∈ VC; Case 3) a ∈ VC and b ∈ VP−C; and Case 4) a ∈ VP−C and b ∈ VP. Assume that the corresponding voxel pair is (u, u′). For Case 1), a ∈ VP and b ∈ VP−C, the cost of the edge (a, b) is w(a, b) = w(u, zu,u′) = φ1(u, u′). In this case, fu = 1 and f(u,u′) = fu′ = 0. Based on Eq. (5), γu,u′(fu, f(u,u′), fu′) = φ1(u, u′); that is, w(a, b) = γu,u′(fu, f(u,u′), fu′). With a similar argument, we can prove that w(a, b) = γu,u′(fu, f(u,u′), fu′) for the other three cases. Putting all four cases together, the total cost of Cd is

w(C_d) = \sum_{(a,b) \in C_d} w(a, b) = \sum_{u \in V_P,\, u' \in V_C} \gamma_{u,u'}(f_u, f_{(u,u')}, f_{u'}),

noting that γu,u′(fu, f(u,u′), fu′) = 0 if fu = f(u,u′) = fu′. Hence, the total cost of a finite admissible s-t cut equals the total energy of its corresponding segmentation. Therefore, the optimal co-segmentation solution can be found by computing a minimum s-t cut in G [2].
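As a concrete illustration of the construction, the sketch below solves a toy 1-D version of the co-segmentation with NetworkX's minimum cut standing in for the max-flow library the paper uses; the node naming, the helper, and the fixed per-pair edge costs are our assumptions, not the authors' code.

```python
import networkx as nx

def co_segment_1d(d_pet, d_ct, eps, phi1, phi2):
    """Toy 1-D PET-CT co-segmentation by a single minimum s-t cut.

    d_pet[i] / d_ct[i] = (d_u(f=0), d_u(f=1)) for voxel i; eps is the n-edge
    cost between adjacent voxels; phi1/phi2 are the d-edge costs.  Source
    side of the cut = background (f=0), sink side = foreground (f=1).
    """
    n = len(d_pet)
    G = nx.DiGraph()
    for side, d in (('P', d_pet), ('C', d_ct)):
        for i in range(n):
            u = (side, i)
            G.add_edge('s', u, capacity=d[i][1])  # t-edge, paid when f_u = 1
            G.add_edge(u, 't', capacity=d[i][0])  # t-edge, paid when f_u = 0
            if i > 0:                             # n-edges, both directions
                G.add_edge((side, i - 1), u, capacity=eps)
                G.add_edge(u, (side, i - 1), capacity=eps)
    for i in range(n):                            # d-edges through z_i
        z = ('Z', i)
        for a, b, cost in ((('P', i), z, phi1), (z, ('P', i), phi1),
                           (('C', i), z, phi2), (z, ('C', i), phi2)):
            G.add_edge(a, b, capacity=cost)
    _, (src_side, _) = nx.minimum_cut(G, 's', 't')
    label = lambda s: [0 if (s, i) in src_side else 1 for i in range(n)]
    return label('P'), label('C')
```

On a voxel pair whose PET evidence says foreground while the CT evidence says background, the cheaper of φ1 and φ2 decides which modality wins, exactly as in Eq. (5).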

4 Experimental Methods

Implementation and Parameter Settings. Our algorithm was implemented in ISO C++ on a standard PC with a 2.46 GHz Intel Core 2 Duo processor and 4 GB of memory, running a 64-bit Windows system. The max-flow library [2] was used as the optimization tool. The PET image was registered to the CT image using the Elastix [13] tools. After registration, the PET and CT have the same resolution, and each voxel u ∈ VP and its correspondence u′ ∈ VC have the same location in PET and CT, respectively. In our experiments, we used the following parameter settings. The data term coefficients were λ1 = λ2 = 1. For the smoothness term coefficients, we set λ3 = 50 and θ1 = θ2 = θ3 = 1. The coefficients in the co-segmentation term were set to λ4 = 50 and λ5 = 1. Note that by fixing λ4 and λ5 to constants, we do not fully explore the power of our model in Eq. (5).

Data Collection. We used FDG-PET-CT images of a group of 16 patients with head-and-neck cancer (HNC) for the present study. The reconstructed matrix size for each CT slice was 512×512, with a voxel size of 0.97×0.97×2 mm. For the PET images, the reconstructed matrix size was 168×168, with a voxel size of 3.39×3.39×2 mm. The PET images were co-registered to the CT images and interpolated to the same voxel size so that there was a one-to-one correspondence between PET and CT. A radiation oncologist with expertise in HNC manually segmented the primary tumors in the 16 PET-CT images as the ground truth.

Evaluation Strategy. Segmentation performance was evaluated using the Dice similarity coefficient (DSC) and the median Hausdorff distance (HD). A DSC of 0 means that the segmentations have no overlap at all, while 1 indicates a perfect match. A small median HD value indicates an accurate segmentation, while a large value indicates that the segmentation method is not accurate. All DSC and median HD values were computed in 3-D.
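The two evaluation metrics can be sketched as follows. The quantile-based "median HD" variant is one plausible reading of the paper's metric, not a statement of the authors' exact definition; the function names are ours.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice(a, b):
    """Dice similarity coefficient between two binary masks (any shape)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b, q=1.0):
    """Symmetric Hausdorff-type distance between two point sets (n, dim).

    q=1.0 gives the classical maximum; a lower quantile (e.g. q=0.5) gives
    a median-style variant that is robust to boundary outliers.
    """
    d = cdist(pts_a, pts_b)
    da = np.quantile(d.min(axis=1), q)   # directed distance A -> B
    db = np.quantile(d.min(axis=0), q)   # directed distance B -> A
    return max(da, db)
```

For 3-D masks, pts_a and pts_b would be the physical coordinates of the segmentation boundary voxels, scaled by the voxel spacing.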
We compared three methods on the 16 PET-CT datasets: (1) the proposed PET-CT co-segmentation method (Co-Seg); (2) the graph cuts method using only PET (PET-Seg) [2]; and (3) the graph cuts method using only CT (CT-Seg) [2]. For each dataset, we manually identified a set of foreground and a set of background seeds and used them for all three methods.

5 Results and Discussion

The results are summarized in Fig. 3, showing the DSCs and median HDs for the three methods. Our method obtained a tumor segmentation accuracy of DSC = 0.86±0.051 on average, which was higher than that of the PET-Seg method (0.78±0.045) and the CT-Seg method (0.51±0.12). The median Hausdorff distance mostly ranged between 5 and 7 mm. The average value was 6.4±0.56 mm, much smaller than those of PET-Seg (7.6±0.97 mm) and CT-Seg (11.6±2.73 mm). We note that dataset No. 3 had a relatively low accuracy, due to the limited quality of its ground truth.


Fig. 3. DSC and median HD results for the three methods (Co-Seg, PET-Seg, CT-Seg) on the 16 test images: (a) DSC results; (b) median HD (mm) results.

Fig. 4. A tumor segmentation example. (a) A 2-D slice of the 3-D PET image with seeds highlighted; the red line and the green line mark background and foreground seeds, respectively. (b) The results in the PET image; the manual segmentation is shown in orange and our result in green. (c) The results in the CT image. (d) A 3-D representation of the tumor (green).

Fig. 5. Segmentation result comparisons. (a) The manual segmentation results; the foreground seeds are shown in green and the background seeds in red. (b) The results of our co-segmentation method (Co-Seg). (c) The results of the graph cuts method using only the PET image (PET-Seg). (d) The results of the graph cuts method using only the CT image (CT-Seg).


Fig. 4 shows a typical tumor segmentation result produced by our method. Visual comparison with the independent standard shows that our method matches the ground truth well. Fig. 5 illustrates the segmentation results on one head-and-neck test dataset for comparing the three methods. The results demonstrate the high accuracy of our Co-Seg method compared to the independent standard. In most cases, the results using only PET had smaller segmentation volumes than the co-segmentation results: due to the low resolution of PET, the voxels along the tumor boundary suffer from the partial volume effect, so the tumor volume cannot be fully identified using PET datasets alone. On the other hand, though the CT datasets have high resolution, their low contrast led to the largest segmentation errors among the three methods. Our co-segmentation approach successfully incorporates the information from both PET and CT. Another issue to discuss is the role of co-registration as a pre-processing step. Currently, we assume a one-to-one correspondence between PET and CT after registration; however, registration errors will affect the results. To overcome this problem, introducing a higher-order term may reduce such errors, and we will investigate this in future work. The running time of our method includes two parts: (1) PET-CT registration; and (2) PET-CT co-segmentation. For each image of size 512×512×100, the execution time for registration was about 60 s. In the PET-CT co-segmentation process, we used a region of interest (ROI) to reduce the computation time; typically, the ROI size was about 200×200×60, and the co-segmentation took about 20 s. We note that our current code is not fully optimized and there is much room for improvement.

6 Conclusion and Future Work

In this paper, we have proposed a novel method that exploits the strengths of both the PET and CT systems for tumor segmentation. The tumor in the PET and CT images is concurrently segmented, with each modality aware of the information provided by the other. Our method was tested on 16 patient datasets with challenging head-and-neck tumors. The results showed high accuracy on real clinical data compared to those obtained by the method using only PET or only CT image data. In the future, one interesting direction is to extend our idea of using dual-modality PET-CT images to [14], in which the authors used surface context to improve tumor segmentation in CT images only. Additionally, we plan to further improve the accuracy by studying adaptive parameter selection, and to investigate the use of atlas-based segmentation methods to reduce human interaction.

Acknowledgements. This research was supported in part by the NSF grants CCF-0830402 and CCF-0844765, and the NIH grants R01 EB004640 and K25 CA123112.


References
1. Ford, E., Kinahan, P., Hanlon, L., et al.: Tumor delineation using PET in head and neck cancers: threshold contouring and lesion volumes. Medical Physics 33 (2006) 4280
2. Boykov, Y., Funka-Lea, G.: Graph cuts and efficient N-D image segmentation. International Journal of Computer Vision 70 (2006) 109–131
3. Han, D., Wu, X., Sonka, M.: Optimal multiple surfaces searching for video/image resizing - a graph-theoretic approach. In: IEEE 12th International Conference on Computer Vision, ICCV 2009. (2009) 1026–1033
4. Song, Q., Wu, X., Liu, Y., Smith, M., Buatti, J., Sonka, M.: Optimal graph search segmentation using arc-weighted graph for simultaneous surface detection of bladder and prostate. Medical Image Computing and Computer-Assisted Intervention - MICCAI 2009 (2009) 827–835
5. Song, Q., Wu, X., Liu, Y., Haeker, M., Sonka, M.: Simultaneous searching of globally optimal interacting surfaces with shape priors. In: CVPR. (2010)
6. Han, D., Bayouth, J., Bhatia, S., Sonka, M., Wu, X.: Motion artifact reduction in 4D helical CT: graph-based structure alignment. Medical Computer Vision: Recognition Techniques and Applications in Medical Imaging (MICCAI MCV) (2011) 63–73
7. Yu, H., Caldwell, C., Mah, K., et al.: Automated radiation targeting in head-and-neck cancer using region-based texture analysis of PET and CT images. International Journal of Radiation Oncology, Biology and Physics 75 (2009) 618–625
8. Baardwijk, A., Bosmans, G., Boersma, L., et al.: PET-CT-based auto-contouring in non-small-cell lung cancer correlates with pathology and reduces interobserver variability in the delineation of the primary tumor and involved nodal volumes. International Journal of Radiation Oncology, Biology and Physics 68 (2007)
9. Yu, H., Caldwell, C., Mah, K., Mozeg, D.: Coregistered FDG PET/CT-based textural characterization of head and neck cancer for radiation treatment planning. IEEE Transactions on Medical Imaging 28 (2009) 374–383
10. Riegel, A., Berson, A., Destian, S., et al.: Variability of gross tumor volume delineation in head-and-neck cancer using CT and PET/CT fusion. International Journal of Radiation Oncology, Biology and Physics 65 (2006)
11. Potesil, V., Huang, X., Zhou, X.: Automated tumor delineation using joint PET/CT information. In: Proc. SPIE International Symposium on Medical Imaging: Computer-Aided Diagnosis. Volume 6514. (2007)
12. Xia, Y., Wen, L., Eberl, S., Fulham, M., Feng, D.: Segmentation of dual modality brain PET/CT images using the MAP-MRF model. In: IEEE 10th Workshop on Multimedia Signal Processing. (2008) 107–110
13. Klein, S., Staring, M., Murphy, K., Viergever, M., Pluim, J.: elastix: a toolbox for intensity-based medical image registration. IEEE Transactions on Medical Imaging 29 (2010) 196–205
14. Song, Q., Chen, M., Bai, J., Sonka, M., Wu, X.: Surface-region context in optimal multi-object graph-based segmentation: robust delineation of pulmonary tumors. IPMI 2011 (2011)


Beyond Globally Optimal: Focused Learning for ... - Alex Beutel
Apr 3, 2017 - that all movies or apps have a good opportunity to be sur- faced? Recommender .... side information about the items [34, 4, 20, 33], users' social networks [23], multi-task learning to model review text [11, ...... In RecSys, 2016.

Globally Optimal Surfaces by Continuous ... - Research at Google
other analysis techniques that practitioners need only define an appropriate mea- sure of 'goodness' and then optimise ... stereo matching yielding improved spatial consistency at the cost of additional computation [12]. ... additional user interacti

Causal Video Segmentation Using Superseeds and Graph Matching
advantage of the proposed approach over some recently reported works. Keywords: Causal video segmentation · Superseeds · Spatial affinity ·. Graph matching.

Kidney segmentation using graph cuts and pixel ...
May 23, 2013 - duced in the energy function of the standard graph cut via pixel labeling. Each pixel is assigned a ... As a statistical alternative, non-parametric ... graph G contains two types of edges: the neighborhood links/n- links (N) and the .

Abdominal Multi-Organ Segmentation of CT Images ... - Springer Link
Graduate School of Information Science and Engineering, Ritsumeikan University,. 1-1-1, Nojihigashi, .... the segmented region boundaries Bs of the “stable” organs, we estimate b. ∗ by b. ∗. = argmin .... International Journal of Computer As-

Texture Detection for Segmentation of Iris Images - CiteSeerX
Asheer Kasar Bachoo, School of Computer Science, University of Kwa-Zulu Natal, ..... than 1 (called the fuzzification factor). uij is the degree of membership of xi.

Segmentation of Mosaic Images based on Deformable ...
in this important application domain, from a number of points of view including ... To the best of our knowledge, there is only one mosaic-oriented segmentation.

Segmentation of Mosaic Images based on Deformable ...
Count error: Count(S(I),TI ) = abs(|TI |−|S(I)|). |TI |. (previously proposed1 for the specific problem). 1Fenu et al. 2015. Bartoli et al. (UniTs). Mosaic Segmentation ...

Image Annotation Using Bi-Relational Graph of Images ...
data graph and the label graph as subgraphs, and connect them by an ...... pletely correct, a big portion of them are (assumed to be) correctly predicted.

A Hierarchy of Self-Renewing Tumor-Initiating Cell Types in ...
Apr 13, 2010 - growth (i.e., both fractions generated expandable neurospheres ..... residing in the CD133+ sort window are reported even though these two.

Availability in Globally Distributed Storage Systems - USENIX
*Now at Dept. of Industrial Engineering and Operations Research. Columbia University the datacenter environment. We present models we derived from ...