SEGMENTATION OF VESSELS USING WEIGHTED LOCAL VARIANCES AND AN ACTIVE CONTOUR MODEL

by WAI KONG LAW

A Thesis Submitted to The Hong Kong University of Science and Technology in Partial Fulfillment of the Requirements for the Degree of Master of Philosophy in Computer Science

August 2006, Hong Kong

Copyright © by Wai Kong Law 2006

Authorization

I hereby declare that I am the sole author of the thesis.

I authorize the Hong Kong University of Science and Technology to lend this thesis to other institutions or individuals for the purpose of scholarly research.

I further authorize the Hong Kong University of Science and Technology to reproduce the thesis by photocopying or by other means, in total or in part, at the request of other institutions or individuals for the purpose of scholarly research.

WAI KONG LAW


SEGMENTATION OF VESSELS USING WEIGHTED LOCAL VARIANCES AND AN ACTIVE CONTOUR MODEL by WAI KONG LAW

This is to certify that I have examined the above M.Phil. thesis and have found that it is complete and satisfactory in all respects, and that any and all revisions required by the thesis examination committee have been made.

DR. ALBERT C. S. CHUNG, THESIS SUPERVISOR

PROFESSOR LIONEL NI, ACTING HEAD OF DEPARTMENT
Department of Computer Science and Engineering
25 August 2006

ACKNOWLEDGMENTS

I would like to thank my advisor, Dr. Albert C. S. Chung, for supervising my research. I would like to thank the K.S. Lo Foundation for financial support. I would like to thank Dr. Simon Yu from the Prince of Wales Hospital, Hong Kong, for supplying clinical data. I would like to thank Vincent Yuan, who helped to generate experimental results. I would like to thank Wilbur C.K. Wong for discussions of my work.


TABLE OF CONTENTS

Title Page
Authorization Page
Signature Page
Acknowledgments
Table of Contents
List of Figures
List of Tables
Abstract
Chapter 1  Introduction
    1.1  Literature Reviews
    1.2  The proposed method
Chapter 2  Methodology
    2.1  Weighted Local Variance
    2.2  Active Contour Model
    2.3  Level Set Representation
    2.4  Implementation
Chapter 3  Experiments
    3.1  Edge orientation estimation
    3.2  Image Segmentation
        3.2.1  Synthetic Image Segmentation
        3.2.2  Real Image Segmentation Comparison
        3.2.3  Manually Initialized Real Image Segmentation
Chapter 4  Discussion and Future Extension
    4.1  Active Contour Models
    4.2  Extension for 3D Image Volume
Chapter 5  Conclusions
References

LIST OF FIGURES

2.1  Top: Gσ=4. Middle: g1,θ(x, y). Bottom: g2,θ(x, y).
2.2  An example of the polar space for a pixel. K = 24; α is found according to V1, V2, ..., V24. The numbers and the gray lines represent the orientations θk. The black crosses are the points (√Varθk, θk), where k ∈ {1, 2, ..., 24}. The solid line and the dotted line are the straight lines (r, α1) and (r, α2) respectively.
3.1  (a) From top to bottom, a clear horizontal edge and the image corrupted using Gaussian noise with SNR = ∞, 10, 5, 2, 1. (b) Plots of WLVs and filtering responses of the first derivatives of Gaussian along different orientations. The orientation estimation results are marked as circles and crosses for WLV and the first derivatives of Gaussian respectively.
3.2  Top: A 128 × 128 synthetic image. Bottom: The angular discrepancies between the estimated orientation and the reference orientation. Zero angular difference at the left side of the x-axis represents a correct estimation (no discrepancy); π/2 means the estimated orientation is perpendicular to the reference orientation. "Frequency" on the y-axis represents the number of occurrences of a particular orientation discrepancy. A high number of occurrences at low discrepancy implies better performance.
3.3  Top: A 128 × 128 synthetic vessel with width around 6 pixels and intensity decaying from 1 (at the endings) to 1/3 (in the middle). Bottom: Values of Q calculated using K = 24, ρ = 0.0001, σ1 = 2 and σ2 = 1.6.
3.4  (a) The synthetic vessel corrupted by additive Gaussian noise, σ = 0.3. (b) Manually selected initial contour. Intensity re-scaled for better visualization of the initial contour. (c) Resultant contour of FLUX, r = {5, 6, 7}, image pre-processed with a Gaussian filter, Gσ=1. (d) Resultant contour of WLV, K = 24, σ1 = 7 and σ2 = 5.6.
3.5  (a) A 256 × 256 DSA image. (b) The lines of interest where intensity profiles are plotted in (c–f). (c) The intensity profile plotted along line c in (b) from left to right. (d–f) The intensity profiles plotted along the corresponding lines in (b) from top to bottom.
3.6  Top row: Initial contours. Bottom row: Final results. (a) ACWE, manually selected initial contour, µ = 0.001 · 255², λ1 = λ2 = 1, ν = 0, h = 1. (b) ACEI, initial contour obtained automatically as in [1], σ1 = 2, σ2 = 10, µ = 0.0015. (c) FLUX, r = {1, 2, 3, 4}, image pre-processed with Gσ=2; initial contour obtained automatically from regions with the highest 5% inward flux, further smoothed under curvature flow for 500 steps. (d) The proposed method, σ = 2, ρ = 0.0001, ε = 0.1; initial contour obtained automatically by smoothing the term E using Gσ=1 and choosing the regions with the highest 5% value of E, further smoothed under curvature flow for 500 steps.
3.7  (a) A 128 × 128 retinal angiogram. (b) The lines of interest where intensity profiles are plotted in (c–f). (c–f) The intensity profiles plotted along the corresponding lines in (b) from left to right.
3.8  Top row: Initial contours. Bottom row: Final results. (a) ACWE, manually selected initial contour, µ = 0.004 · 255², λ1 = λ2 = 1, ν = 0, h = 1. (b) ACEI, manually selected initial contour, σ1 = 1.5, µ = 0.0015. (c) FLUX, r = {2, 3, 4}, image pre-processed with Gσ=1; initial contour obtained automatically from regions with the highest 5% inward flux, further smoothed under curvature flow for 2000 steps. (d) The proposed method, σ = 1.5, ρ = 0.0001, ε = 0.1; initial contour obtained automatically by smoothing the term E and choosing the regions with the highest 5% value of E, further smoothed under curvature flow for 500 steps.
3.9  (a) A 100 × 100 retinal angiogram. (b) The position where intensity profiles are plotted in (c, d). (c, d) The intensity profiles plotted from top to bottom.
3.10 Top row: Initial contours. Bottom row: Final results. (a) ACWE, manually selected initial contour, µ = 0.004 · 255², λ1 = λ2 = 1, ν = 0, h = 1. (b) ACEI, manually selected initial contour. (c) FLUX, image pre-processed with Gσ=1; initial contour obtained automatically from regions with the highest 5% inward flux, further smoothed under curvature flow for 1000 steps.
3.11 Top row: Initial contours. Bottom row: Final results. (a) FLUX with r = 1, 2, 3, image pre-processed with Gσ=0.8; initial contour obtained automatically from regions with the highest 5% inward flux, further smoothed under curvature flow for 200 steps. (b) ACEI, image pre-processed with Gσ=0.8, manually selected initial contour. (c) ACWE with µ = 0.00001 · 255², λ1 = λ2 = 1, ν = 0, h = 1, manually selected initial contour.
3.12 Result of the proposed method using σx = 1.6 and σy = 0.8. Left: Initial contour obtained automatically from regions with the highest 5% field value, further smoothed under curvature flow for 200 steps. Middle: Intermediate step. Right: Final result.
3.13 Top row: Initial contours. Bottom row: Final results. (a) FLUX with r = 4, 5, 6, 7, image pre-processed with Gσ=3; initial contour obtained automatically from regions with the highest 5% inward flux, further smoothed under curvature flow for 200 steps. (b) ACEI, initial contour obtained automatically using the heuristic approach presented in [1] with σ1 = 3, σ2 = 10. (c) ACWE with µ = 0.4 · 255², λ1 = λ2 = 1, ν = 0, h = 1, manually selected initial contour.
3.14 Result of the proposed method using σx = σy = 3 in (a) and σx = 1, σy = 3 in (b). Left: Initial contour obtained automatically from regions with the highest 5% field value, further smoothed under curvature flow for 200 steps. Middle: Intermediate step. Right: Final result.
3.15 Top row: Initial contours. Bottom row: Final results. (a) FLUX with r = 1, 2, 3, image pre-processed with Gσ=0.8; initial contour obtained automatically from regions with the highest 10% inward flux, further smoothed under curvature flow for 200 steps. (b) ACEI, initial contour obtained automatically using the heuristic approach presented in [1] with σ1 = 0.8 and σ2 = 10. (c) ACWE with µ = 0.00001 · 255², λ1 = λ2 = 1, ν = 0, h = 1, manually selected initial contour.
3.16 Result of the proposed method using σx = 1.6 and σy = 0.8. Left: Initial contour obtained automatically from regions with the highest 10% field value, further smoothed under curvature flow for 200 steps. Middle: Two intermediate steps; the contour is propagating through the narrow and dim segments. Right: Final result.
3.17 (a) A 100 × 100 retinal angiogram. (b) Manually selected initial contours. Intensity re-scaled for better visualization of the initial contours. (c) Resultant contour of WLV, K = 24, ρ = 0.0001, σ1 = 1.2 and σ2 = 0.96.
3.18 (a) A 128 × 128 DSA image. (b) Manually selected initial contour. Intensity re-scaled for better visualization of the initial contour. (c) Resultant contour of WLV, K = 24, ρ = 0.0001, σ1 = 6 and σ2 = 4.8.
3.19 (a) A 120 × 120 DSA image. (b) Manually selected initial contours. Intensity re-scaled for better visualization of the initial contours. (c) Resultant contours of WLV, K = 24, ρ = 0.0001, σ1 = 1.5 and σ2 = 1.2.

LIST OF TABLES

3.1  Average absolute acute angular differences (in radians) between the estimated orientation and the orientation of the horizontal edge.

SEGMENTATION OF VESSELS USING WEIGHTED LOCAL VARIANCES AND AN ACTIVE CONTOUR MODEL

by WAI KONG LAW

Lo Kwee-Seong Medical Image Analysis Laboratory
Department of Computer Science and Engineering
The Hong Kong University of Science and Technology


ABSTRACT

Performing segmentation of vasculature with blurry and low contrast boundaries in noisy images is a challenging problem. This thesis presents a novel approach to segmenting blood vessels using weighted local variances and an active contour model. In this work, the object boundary orientation is estimated locally as the orientation that minimizes the weighted local variance. Such estimation is less sensitive to noise than other common approaches. The edge clearness is measured by a ratio of weighted local variances obtained along different orientations; this ratio is independent of the edge intensity contrast and capable of locating weak boundaries. Integrating the orientation and clearness of edges, an active contour model is employed to evolve contours so that the contour tangent direction matches the edge orientation. The proposed method is validated on two synthetic images and two real cases. It is experimentally shown that our method is suitable for dealing with noisy images that consist of structures having blurry and low contrast boundaries, such as blood vessels.


CHAPTER 1
INTRODUCTION

Active contour models are widely used in shape recovery, visual tracking and image segmentation; blood vessel segmentation, for instance, is one of their applications in medical image analysis. These models progressively discover target regions by propagating moving curves. The dynamics of the moving curves can be initialized according to image statistics or by human interaction, which makes them versatile enough to handle different applications. Active contour models offer the ability to bridge physics, geometry and image analysis. Such analysis commonly involves extracting information to distinguish different regions. In medical image segmentation, to separate vasculature from the image background, researchers commonly utilize the image gradient as a criterion to label blood vessel boundaries.

1.1 Literature Reviews

In the classical snakes [2] framework, a moving parametric contour is driven by minimizing the energy

E(C) = ∫_0^1 { α(C′(s))² + β(C″(s))² − λ(∇I(C(s)))² } ds,   (1.1)

where C(s) is an evolving curve parameterized by s, I is the image intensity, and α, β and λ are positive parameters that balance curve smoothness (internal energy) against the potential derived from image gradients (external energy). Classical snakes have a short interaction range, which requires the initial contour to be placed near the object boundaries. To provide long-range interaction, Xu et al. proposed the Gradient Vector Flow (GVF) method [3], in which the external force is extended to the whole image domain. The parametric contour is evolved by the vector field minimizing

E(v) = ∫∫ µ(ux² + uy² + vx² + vy²) + |∇f|² |v − ∇f|² dxdy,

where f = |∇I(x, y)|² represents the edge map of an image I and v(x, y) = (u(x, y), v(x, y))ᵀ denotes the flow vector, which is obtained over the whole image domain using the diffusion-based partial differential equations

µ∇²u − (u − fx)(fx² + fy²) = 0,
µ∇²v − (v − fy)(fx² + fy²) = 0.

These two equations can be viewed as diffusion processes. They create a competition among the forces exerted by the image gradient at different locations. The GVF method outperforms the classical snakes [2] because the diffusion processes give GVF a long interaction range between boundaries and moving contours. Apart from parametric contours, Malladi et al. [4] proposed the level set framework for modeling moving curves. The level set formulation handles merging and splitting of contours naturally. It defines a moving curve C as the zero level of a level set function, C = {(x, y) | φ(x, y) = 0}, where φ(x, y) is the level set function. Grounded in a variational framework, contours evolve according to

∂φ/∂t = g(∇I(x, y)) ( ∇·(∇φ/|∇φ|) + v ) |∇φ|,   (1.2)

where v is a constant that keeps the level set expanding or shrinking depending on its sign. In this formulation, the curve evolves in its normal direction with speed proportional to the value of g(∇I(x, y)). The function g(∇I(x, y)) is known as the edge detector; it gives small values that halt the moving contour at edges. One of the main ideas in [4] is the advection term, which keeps the front of the level set function expanding (or contracting) with speed controlled by the edge detector,

g(∇I(x, y)) = 1 / ( 1 + |∇Gσ ∗ I(x, y)|ᵖ ),  p ≥ 1.   (1.3)

This formulation keeps the contours exploring the image until they eventually halt on the object boundaries, where the edge detector gives small values.
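For concreteness, the edge detector of Equation 1.3 can be sketched in a few lines. This is only an illustration under our own assumptions (function name, default σ and p), not the thesis implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_detector(image, sigma=1.0, p=2):
    """g of Eq. 1.3: close to 1 in homogeneous regions and
    small where the smoothed image gradient is large."""
    # Gradient of the Gaussian-smoothed image via central differences.
    gy, gx = np.gradient(gaussian_filter(image.astype(float), sigma))
    return 1.0 / (1.0 + np.hypot(gx, gy) ** p)
```

On a step edge, g drops towards 0 at the boundary and stays near 1 elsewhere, which is exactly what halts the contour in Equation 1.2.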

However, these methods are not suitable for elongated or low contrast objects such as blood vessels in the brain. For example, the GVF method favors conceptual edges and usually discards narrow regions rather than including them in the segmentation results. Also, the edge detector relies on a high image gradient magnitude to halt the moving contours, and can fail to detect low contrast boundaries. To deal with this problem, Vasilevskiy et al. proposed flux maximizing geometric flows for image segmentation [5]. Different from the above methods, object boundaries are detected by incorporating both the direction and the magnitude of the image gradient. The contour motion is governed by

Ct = div(V(x, y)) N,   (1.4)

where V(x, y) is the gradient vector field of an image and N is the normal direction of the curve C. The contour evolution is guided by the direction perpendicular to the image gradient, and the method does not fail when the gradient magnitude is small or when object structures are elongated and thin. Along the same line, Xiang et al. introduced an elastic model [1] for segmentation of thin concave or convex structures. This method also integrates the magnitude and direction of the image gradient in a similar fashion, but in [1] the image gradient magnitude is extended to the whole image domain rather than being locally defined as in [5]. The dynamics of an active contour is defined by minimizing the energy

E(C) = (1/2) ∫ w · w dxdydz,   (1.5)

subject to the constraint

∇ × w = δC t,   (1.6)

where w is a three dimensional vector field and δC is the Dirac delta function, which is zero everywhere except on the curve C. The term δC t is approximated by δ(z) · ( ∂(Gσ ∗ I)/∂y, −∂(Gσ ∗ I)/∂x, 0 )ᵀ, such that C can be attracted towards the object boundaries by minimizing the energy above. On the other hand, without considering the image gradient, Chan and Vese suggested performing image segmentation by solving the minimal partition problem

in [6]. The segmentation result is the minimizer of the energy functional

F(c1, c2, C) = µ · Length(C) + ν · Area(inside(C))
             + λ1 ∫_inside(C) |I(x, y) − c1|² dxdy
             + λ2 ∫_outside(C) |I(x, y) − c2|² dxdy,   (1.7)

where µ ≥ 0, ν ≥ 0, λ1 ≥ 0 and λ2 ≥ 0 are fixed parameters, and c1 and c2 are the average intensity values of the pixels inside and outside the contour C respectively. This approach is capable of dealing with low contrast objects, blurry edges and noisy images, which cause failure in many gradient-based segmentation methods. Furthermore, choosing a large value for µ in (1.7) encourages this method to link disconnected boundaries through conceptual edges. Besides, it is reasonable to assume that vasculature has a tubular shape. Lorigo et al. presented an arbitrary codimension framework in [7], in which the dynamics of contours is driven by

Ct = λ(∇C, ∇²C) + ρ(∇C, ∇I) (g′/g) ∇C · H (∇I/‖∇I‖),   (1.8)

where g is a weighting factor and H is the Hessian matrix. Using the heuristic factor ρ(∇C, ∇I), the above formulation modifies the geometric flow and encourages tubular segmentation results. Another technique for segmenting tubular structures, "Shape Driven Flow" [8], was proposed by Nain et al. In that work, a term named the "ball filter" is introduced in addition to a region-based flow; it suppresses contour evolution when segmented regions become too wide and deviate from tubular shapes. On the other hand, to handle low contrast, narrow tubular vasculature, Yan and Kassim applied a "capillary force" to perform segmentation in [9]. The main purpose of the capillary force is to pull contours inside thin and low contrast vessels. Due to background noise and the overlapping of different structures, which commonly exist in medical images, the intensity values of vascular structures and the background vary from region to region. Minimizing an intensity-based energy functional can then lead to a situation in which bright regions of the background and dark portions of vessels are assigned to the same object.

Although the approaches in [5] and [1] are robust to intensity variation of objects and background, they are confused by the fluctuating gradient of object boundaries in such cases. The locally defined flux cannot recover weak edges that are longer than the radius of the target object. Similarly, the elastic model is insensitive to the small gradients of weak edges. This can lead to contour leakage. Besides, noise also spuriously generates intensity gradients across thin objects, creating small gaps (discontinuities) in those narrow structures. The approaches above do not encode information about contour continuity, and tend to regard single objects with small gaps as separate, disconnected structures. Furthermore, making assumptions about vessel appearance, as in [7, 8, 9], loses the ability to handle abnormal or twisting vessels.


1.2 The proposed method

In this work [10, 11], we propose the use of weighted local variance to extract edge information for an active contour model that performs vascular segmentation. The weighted local variance is general enough to handle junctions and twisted vessels, not only tubular structures. Also, it does not depend on the absolute intensity contrast between the background and the target object, so the proposed active contour model grounded on it is capable of segmenting structures with varying contrast. Although intensity contrast is an important feature in many methods, it is occasionally misleading when the contrast between vessel and background is not constant, which is a common situation in medical images. Two major problems caused by contrast variation are contour discontinuity and leakage. In Chapter 2, we elaborate the procedure of applying weighted local variance (WLV) to extract contrast edge information, including edge orientation and edge magnitude. This is followed by implementation details, proofs of contrast independence, and the formulation of the proposed active contour model, which is based on that edge information. In Chapter 3, we present experimental results obtained from different data sets, including both synthetic data and real clinical cases; comparisons are performed to validate the performance of the proposed method, and examples of manually initialized segmentation are given in the last section of that chapter. Finally, we discuss future extensions in Chapter 4 and conclude the thesis in Chapter 5.


CHAPTER 2
METHODOLOGY

In this chapter, the use of weighted local variance (WLV) is introduced for extracting edge information in an image, including edge orientation and clearness. Given the edge information, segmentation of vessels is performed with an active contour model, which minimizes the angular difference between the contours and the object boundaries.

2.1 Weighted Local Variance

To extract edge information based on weighted local variance (WLV), including edge orientation and clearness, we first consider the smoothed Heaviside function

Hθ(x, y) = { 0                                   if tθ(x, y) ≤ −ε,
             (1/2)·( sin(π·tθ(x, y)/(2ε)) + 1 )  if |tθ(x, y)| < ε,
             1                                   if tθ(x, y) ≥ ε,        (2.1)

where tθ(x, y) = x cos θ + y sin θ, the orientation θ represents the gradient direction of the Heaviside function, and ε is a small constant. This function creates a straight line oriented along θ⊥, where θ⊥ = θ + π/2, smoothly separating a region into two half planes, one with values of 1 and the other with values of 0. At each pixel (x, y), WLV is calculated within a small local region using two half-sided Gaussian kernels,

g1,θ(x, y) = Gσ(x, y) · Hθ(x, y),
g2,θ(x, y) = Gσ(x, y) · (1 − Hθ(x, y)),   (2.2)

where Gσ is a Gaussian kernel with detection scale equal to σ. Each half plane, as defined and separated by the Heaviside function (Equation 2.1), is multiplied with a Gaussian kernel. The scale parameter of the Gaussian kernel, σ, determines the size of the objects to be detected. In our application, segmentation of vasculature, the value of σ should be set roughly smaller than the widths of the vessels. Examples of g1,θ and g2,θ with ε = 0.1, σ = 4, and different orientations θ = 0, π/8, π/4, 3π/8, π/2, 5π/8, 3π/4, 7π/8

Figure 2.1: Top: Gσ=4. Middle: g1,θ(x, y). Bottom: g2,θ(x, y).

These kernels are illustrated in Figure 2.1. Using the half-sided Gaussian kernels in Equation 2.2, given an arbitrary orientation θ, the WLV at pixel (x, y) is defined as

Varθ(x, y) = ∫ { g′1,θ(u, v) · (I(x + u, y + v) − µ1,θ(x, y))²
             + g′2,θ(u, v) · (I(x + u, y + v) − µ2,θ(x, y))² } dudv,   (2.3)

where I(x, y) represents the intensity at (x, y), and g′1,θ and g′2,θ are the normalized versions of g1,θ and g2,θ respectively,

g′1,θ(x, y) = g1,θ(x, y) / ∫ g1,θ(u, v) dudv   and   g′2,θ(x, y) = g2,θ(x, y) / ∫ g2,θ(u, v) dudv,   (2.4)

and µ1,θ(x, y) and µ2,θ(x, y) are the weighted intensity averages of the corresponding half planes separated by a straight line along orientation θ⊥,

µ1,θ(x, y) = ∫ g′1,θ(u, v) · I(x + u, y + v) dudv,
µ2,θ(x, y) = ∫ g′2,θ(u, v) · I(x + u, y + v) dudv.   (2.5)
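Assuming the definitions above, the half-sided kernels and the WLV at a single pixel can be sketched as follows. The discrete grid radius, the function names and the default parameters are our own illustrative choices, not the thesis code:

```python
import numpy as np

def half_gaussian_kernels(sigma, theta, eps=0.1, radius=None):
    """Normalized half-sided Gaussian kernels g'1,theta and g'2,theta
    (Eqs. 2.1, 2.2 and 2.4), sampled on a square grid."""
    r = radius or int(np.ceil(3 * sigma))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    G = np.exp(-(x**2 + y**2) / (2 * sigma**2))          # Gaussian kernel
    t = x * np.cos(theta) + y * np.sin(theta)            # t_theta(x, y)
    H = np.where(t <= -eps, 0.0,                         # smoothed Heaviside
        np.where(t >= eps, 1.0,
                 0.5 * (np.sin(np.pi * t / (2 * eps)) + 1)))
    g1, g2 = G * H, G * (1 - H)
    return g1 / g1.sum(), g2 / g2.sum()

def wlv(patch, sigma, theta, eps=0.1):
    """Weighted local variance (Eq. 2.3) at the centre of `patch`."""
    g1, g2 = half_gaussian_kernels(sigma, theta, eps,
                                   radius=patch.shape[0] // 2)
    mu1 = (g1 * patch).sum()                             # Eq. 2.5
    mu2 = (g2 * patch).sum()
    return (g1 * (patch - mu1)**2 + g2 * (patch - mu2)**2).sum()
```

For a vertical step-edge patch, the WLV computed with the separator aligned to the edge (θ = 0) is smaller than with the separator perpendicular to it (θ = π/2), as the text describes.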

The WLV, Varθ(x, y), is the weighted sum of squared differences between the intensities of the neighboring pixels around (x, y) and their corresponding weighted intensity averages, µ1,θ or µ2,θ. For an ideal edge, where the intensities of the pixels in each half plane covered by the non-zero entries of g′1,θ and g′2,θ are constant, the following condition is satisfied:

I(x + u, y + v) = { µ1,θ(x, y) if g′1,θ(u, v) > 0,
                    µ2,θ(x, y) if g′2,θ(u, v) > 0,      (2.6)


Figure 2.2: An example of the polar space for a pixel. K = 24; α is found according to V1, V2, ..., V24. The numbers and the gray lines represent the orientations θk. The black crosses are the points (√Varθk, θk), where k ∈ {1, 2, ..., 24}. The solid line and the dotted line are the straight lines (r, α1) and (r, α2) respectively.

The value of Varθ computed at a position over such an edge, with θ perpendicular to the ideal edge, is zero. For a non-ideal real edge, WLV can be viewed as a weighted squared error term measuring how far the real edge deviates from an ideal edge satisfying Equation 2.6. A large deviation implies the edge is unclear and noisy. This explains why WLV increases with the intensity fluctuation in the local region when the pixel intensities in each half plane are not constant; conversely, WLV approaches zero at locations closer to an ideal edge. On the other hand, at a point on an edge, a small value of WLV along a particular orientation θ implies that the pixels are well partitioned into two groups by a straight line along the orientation perpendicular to θ. WLV varies as θ changes and attains its minimum when the straight line is aligned with the edge, so that θ is perpendicular to the edge. Therefore, we formulate the relation between WLV and the edge orientation ω(x, y) at (x, y) as

ω(x, y) = arg min_{θ∈[0,π)} {Varθ(x, y)} + π/2
        = arg min_{θ∈[0,π)} { ∫ { g′1,θ(u, v) · (I(x + u, y + v) − µ1,θ(x, y))² + g′2,θ(u, v) · (I(x + u, y + v) − µ2,θ(x, y))² } dudv } + π/2.   (2.7)

As we shall see in Chapter 3, estimation of edge orientation using WLV is less sensitive to noise. Our method is different from other conventional operators such as the first derivative of Gaussian, Laplacian operators or quadrature filters [12]. When estimating the edge orientation, noisy pixels can boost the values of WLV along all orientations, but they have little impact on finding the orientation that minimizes WLV. This is because pixels are weighted isotropically in the calculation of WLV; the noisy pixels therefore affect the values of WLV by roughly the same amount regardless of the kernel orientation. This holds unless the noisy pixels form a group and create an edge with higher contrast than the object boundary. The edge orientation ω(x, y) can be estimated using the relation stated in Equation 2.7. However, this equation cannot be solved analytically. In this thesis, we propose to discretize θ as θk = kπ/K, k ∈ {1, 2, ..., K}, and estimate ω(x, y) in a

continuous fashion. A very accurate estimation requires θ to be discretized at a high angular resolution. However, probing the WLV at a high angular resolution for a discrete estimation is computationally expensive. Without using a high angular resolution, we set K = 24 in this work. To estimate the continuous edge orientation ω(x, y) at a pixel, we first employ a polar space to observe the relationship between the WLVs obtained along all discrete orientations. The WLVs are represented as points with polar coordinates (√Varθk, θk) in polar space (see Figure 2.2). When the orientations θk are close to the perpendicular direction of the edge (i.e. θk + π/2 is the edge orientation), the values of WLV become smaller than those parallel to the edge. As shown in Figure 2.2, the points along these orientations are packed closely to the origin, and sparsely along the other orientations. To capture this relation, we use the sum of squared perpendicular distances from the points (√Varθk, θk) to a straight line passing through the origin along an orientation α. The distance is defined as

D(x,y)(α) = Σ_{k=1}^{K} Varθk(x, y) cos²(θk − α).   (2.8)

Therefore, the edge orientation can be computed using the above equation, which approximates Equation 2.7,

ω(x, y) = arg max_{α∈[0,π)} {D(x,y)(α)} = arg max_{α∈[0,π)} Σ_{k=1}^{K} Varθk(x, y) cos²(θk − α).   (2.9)

The solution of Equation 2.9 is obtained by computing the zeros of the first derivative of D with respect to the orientation α,

tan 2α = ( Σ_{k=1}^{K} Varθk(x, y) · sin 2θk ) / ( Σ_{k=1}^{K} Varθk(x, y) · cos 2θk ),  α ∈ [0, π).   (2.10)

Solving the above equation, two solutions α1 and α2 can be found analytically for each pixel. The angles α1 and α2 are orthogonal to each other; they represent the edge and gradient orientations respectively. A graphical example of solving for α is given in Figure 2.2. In this example, in terms of perpendicular distance, the points are packed closer to the line along the orientation α1. This implies that the variances along the orientation α2 are smaller; hence α2 is the gradient orientation and α1 is the edge orientation in this example. Apart from the edge orientation estimation, the WLVs along different orientations also reflect the confidence of the estimated orientation. For instance, the difference between the WLV values along the edge orientation and the gradient orientation should be large for a sharp straight edge; in contrast, that difference should be small for a noisy edge. Therefore, it is natural to measure edge clearness using the sum of squared perpendicular distances along the edge and gradient orientations. We define the ratio

M(x, y) = |D(x,y)(α1) − D(x,y)(α2)| / ( D(x,y)(α1) + D(x,y)(α2) + ρ ),   (2.11)

where ρ is a small constant that prevents singularity in homogeneous regions, where the WLVs are zero along all orientations. Equation 2.11 measures the ratio between the difference and the sum of the sum of squared perpendicular distances along the edge and gradient orientations. As the WLV values vary largely across orientations for sharp and clear edges, the ratio M (Equation 2.11) gives a large value there. On the other hand, a pixel with a small value of the ratio M belongs to noise or an unclear edge.
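Given WLVs sampled at the K discrete orientations θk = kπ/K, the closed-form solution of Equation 2.10 and the clearness ratio of Equation 2.11 can be sketched as below. The function names and the use of `arctan2` to pick the stationary orientation are our own illustrative choices:

```python
import numpy as np

def edge_orientations(var_thetas):
    """Solve Eq. 2.10: tan 2a = sum(V_k sin 2t_k) / sum(V_k cos 2t_k).
    Returns the two orthogonal stationary orientations (a1, a2) in [0, pi);
    which one is the edge orientation is decided by comparing D (Eq. 2.9)."""
    K = len(var_thetas)
    thetas = np.pi * np.arange(1, K + 1) / K        # theta_k = k*pi/K
    num = np.sum(var_thetas * np.sin(2 * thetas))
    den = np.sum(var_thetas * np.cos(2 * thetas))
    a = 0.5 * np.arctan2(num, den) % np.pi          # one stationary point
    return a, (a + np.pi / 2) % np.pi               # the orthogonal solution

def D(var_thetas, alpha):
    """Sum of squared perpendicular distances (Eq. 2.8)."""
    K = len(var_thetas)
    thetas = np.pi * np.arange(1, K + 1) / K
    return np.sum(var_thetas * np.cos(thetas - alpha) ** 2)

def clearness(var_thetas, rho=1e-4):
    """Edge clearness ratio M of Eq. 2.11."""
    a1, a2 = edge_orientations(var_thetas)
    d1, d2 = D(var_thetas, a1), D(var_thetas, a2)
    return abs(d1 - d2) / (d1 + d2 + rho)
```

For an idealized WLV profile such as Varθ = sin²(θ − β), one of the two returned orientations equals β, and M is large; for a flat profile (homogeneous region) M is near zero, as the text predicts.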

An important property is that the ratio M does not depend on the intensity contrast. This can be illustrated by considering two arbitrary image patches I and J which have different intensity contrast and brightness but are related by I = c · J + b, where the constants c and b represent the differences in intensity contrast and brightness respectively. Hence,

µ^I_{1,θ}(x, y) = ∫ (c · J(x + u, y + v) + b) · g′1,θ(u, v) dudv = c · µ^J_{1,θ}(x, y) + b,

and similarly,

µ^I_{2,θ}(x, y) = ∫ (c · J(x + u, y + v) + b) · g′2,θ(u, v) dudv = c · µ^J_{2,θ}(x, y) + b.

Therefore,

Var^I_θ(x, y) = ∫ { g′1,θ(u, v) · (c · J(x + u, y + v) + b − c · µ^J_{1,θ}(x, y) − b)²
              + g′2,θ(u, v) · (c · J(x + u, y + v) + b − c · µ^J_{2,θ}(x, y) − b)² } dudv
              = c² · Var^J_θ(x, y).   (2.12)

It follows that D^I_{(x,y)}(α) = c² · D^J_{(x,y)}(α). The constant term c is finally

2.2

Active Contour Model

In the previous section, we derive a WLV approach to estimating edge orientation and clearness of edges. We now formulate an active contour model which employs the estimated edge orientation and clearness for performing segmentation on vascular images. Let C(l) = [x(l) y(l)]T be a closed curve parameterized 12

by the length parameter l, where 0 ≤ l ≤ 1. The tangent direction of this curve is denoted as γ(l), where −π ≤ γ(l) < π, ∀l. As such, the optimal contours should minimize the angular discrepancies between ω(C(l)) and γ(C(l)) along the curve. However, the edge direction is ambiguous along two directions, ω(C(l)) and ω(C(l)) − π. Such ambiguity can be eliminated by comparing the average intensity of the local regions in two different sides of the edge. The edge direction is defined as ω(C(l)) −

π · (S(C(l)) − 1), 2

(2.13)

where S(x, y) = sign(µ1,ω(x,y) (x, y) − µ2,ω(x,y) (x, y)). The desired resultant curve is designed to align in the same direction as the object boundaries, which maximizes the following functional, I cos{γ(C(l)) − ω(C(l)) +

F (C) = I

C

=

π · (S(C(l)) − 1)}dl, 2

S(C(l)) · {cos(γ(C(l))) cos(ω(C(l))) + sin(γ(C(l))) sin(ω(C(l)))}dl. C

(2.14) since cos(γ(C(l))) =

∂ x(l) ∂l

and sin(γ(C(l))) =

I F (C) =

S(C(l)) · {cos(ω(C(l))) C

∂ y(l), ∂l

∂ ∂ x(l) + sin(ω(C(l))) y(l)}dl. ∂l ∂l

We make this functional favor clear edges by giving large weights for those boundaries having high value in Equation 2.11, I FM (C) =

S(C(l)) · M (C(l)) · {cos(ω(C(l))) C

∂ ∂ x(l) + sin(ω(C(l))) y(l)}dl. ∂l ∂l (2.15)

Applying Green’s theorem, we have

F_M(C) = ∫∫_{Inside(C)} { ∂/∂x (S(x, y) M(x, y) sin(ω(x, y))) − ∂/∂y (S(x, y) M(x, y) cos(ω(x, y))) } dx dy.    (2.16)
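The Green's theorem step above can be sanity-checked numerically on simple fields. The sketch below (numpy assumed) uses P = xy and Q = x² on the unit square, illustrative choices rather than the thesis fields, and compares the boundary circulation against the area integral of ∂Q/∂x − ∂P/∂y:

```python
import numpy as np

# Green's theorem: the circulation of (P, Q) around a closed curve equals
# the integral of dQ/dx - dP/dy over the enclosed region.
P = lambda x, y: x * y
Q = lambda x, y: x ** 2

def seg(x0, y0, x1, y1, n=2000):
    """Line integral of P dx + Q dy along a straight segment (midpoint rule)."""
    t = np.linspace(0.0, 1.0, n + 1)
    x, y = x0 + (x1 - x0) * t, y0 + (y1 - y0) * t
    xm, ym = 0.5 * (x[:-1] + x[1:]), 0.5 * (y[:-1] + y[1:])
    return np.sum(P(xm, ym) * np.diff(x) + Q(xm, ym) * np.diff(y))

# Counter-clockwise boundary of the unit square [0,1] x [0,1].
circulation = (seg(0, 0, 1, 0) + seg(1, 0, 1, 1)
               + seg(1, 1, 0, 1) + seg(0, 1, 0, 0))

# Area integral of dQ/dx - dP/dy = 2x - x = x over the square is 1/2.
print(round(circulation, 4))  # 0.5
```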

2.3 Level Set Representation

We utilize an active contour approach to maximize the functional in Equation 2.16. Using the zero level of a level set surface [14] to represent the moving contours, the motion of the level set surface is φ_t = E|∇φ|, where E is the scalar speed of the curve evolution in its normal direction. We optimize Equation 2.16 by gradient descent, E = δF_M(C)/δC, which gives

E(x, y) = ∂/∂x (S(x, y) M(x, y) sin(ω(x, y))) − ∂/∂y (S(x, y) M(x, y) cos(ω(x, y))).    (2.17)

As a result, the term E represents the rate of change of the edge information, including the change of orientation and clearness. It has large values inside objects and near clear, straight boundaries. Therefore, the initial contour can be chosen by thresholding the proportion of regions that have the highest E values. In practice, we smooth the term E slightly before performing the thresholding, and smooth the resultant contours obtained from thresholding using a curvature flow, to ensure the initial seeds are located only in the major vessels and not in noisy regions.
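As a sketch of this initialization step (numpy assumed; the speed field, smoothing scale, and 5% threshold fraction below are illustrative, and the curvature-flow regularization of the seeds is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative speed field E: high values along a synthetic "vessel" band,
# plus background noise (stand-in for Equation 2.17 on a real image).
E = 0.1 * rng.standard_normal((64, 64))
E[30:34, 8:56] += 1.0

def gaussian_blur(img, sigma):
    """Separable Gaussian smoothing with reflective padding (minimal sketch)."""
    r = int(3 * sigma)
    k = np.exp(-np.arange(-r, r + 1) ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img, ((r, r), (0, 0)), mode="reflect")
    img = np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 0, pad)
    pad = np.pad(img, ((0, 0), (r, r)), mode="reflect")
    return np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 1, pad)

E_smooth = gaussian_blur(E, sigma=1.0)

# Initial seeds: the highest 5% of the smoothed speed values.
threshold = np.quantile(E_smooth, 0.95)
seeds = E_smooth >= threshold
print(seeds.mean())  # roughly 0.05 of the pixels are selected
```

Smoothing before thresholding suppresses isolated noisy maxima, so the selected seeds concentrate on the coherent high-speed band rather than on background noise.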

2.4 Implementation

In our implementation, WLV is computed along 24 discrete orientations, i.e. K = 24 in Equation 2.8 and Equation 2.10. This requires calculating WLVs 24 times at each pixel. To speed up the calculation, the integration in Equation 2.3 is reformulated in the form of convolution, which can be computed in the Fourier domain efficiently,

Var_θ(x, y) = ∫ {g′_{1,θ}(u, v) · (I(x + u, y + v) − µ_{1,θ}(x, y))² + g′_{2,θ}(u, v) · (I(x + u, y + v) − µ_{2,θ}(x, y))²} du dv.

Since g′_1 and g′_2 are sum-to-one,

Var_θ(x, y) = ∫ {g′_{1,θ}(u, v) · I²(x + u, y + v) − µ²_{1,θ}(x, y) + g′_{2,θ}(u, v) · I²(x + u, y + v) − µ²_{2,θ}(x, y)} du dv.

Flipping g′_1 and g′_2 and writing the integrals as convolutions,

Var_θ(x, y) = ∫ {g′_{2,θ}(u, v) · I²(x − u, y − v) − µ²_{2,θ}(x, y) + g′_{1,θ}(u, v) · I²(x − u, y − v) − µ²_{1,θ}(x, y)} du dv
            = {g′_{1,θ}(x, y) + g′_{2,θ}(x, y)} ∗ I²(x, y) − {g′_{1,θ}(x, y) ∗ I(x, y)}² − {g′_{2,θ}(x, y) ∗ I(x, y)}².    (2.18)
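The identity in Equation 2.18 can be checked numerically. The sketch below (numpy assumed) uses circular, wrap-around boundary handling so that FFT-based correlation is exact; the image, window size, and the left/right split of the Gaussian window are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32
I = rng.random((N, N))

# Illustrative sum-to-one half-windows standing in for g'_{1,theta} and
# g'_{2,theta} (left and right halves of a Gaussian window).
r = 4
u, v = np.mgrid[-r:r + 1, -r:r + 1]
gauss = np.exp(-(u**2 + v**2) / (2 * 2.0**2))
g1 = np.where(u < 0, gauss, 0.0); g1 /= g1.sum()
g2 = np.where(u > 0, gauss, 0.0); g2 /= g2.sum()

def corr_fft(g, img):
    """Correlation sum_{u,v} g(u,v) * img(x+u, y+v), wrap-around, via FFT."""
    K = np.zeros_like(img)
    for du in range(-r, r + 1):
        for dv in range(-r, r + 1):
            K[du % N, dv % N] = g[du + r, dv + r]  # kernel centered at (0,0)
    return np.real(np.fft.ifft2(np.conj(np.fft.fft2(K)) * np.fft.fft2(img)))

mu1, mu2 = corr_fft(g1, I), corr_fft(g2, I)

# Equation (2.18): one correlation of I^2 minus the squared local means.
var_fft = corr_fft(g1 + g2, I**2) - mu1**2 - mu2**2

# Direct evaluation of the weighted variance at one pixel for comparison.
x0, y0 = 10, 7
var_direct = 0.0
for du in range(-r, r + 1):
    for dv in range(-r, r + 1):
        pix = I[(x0 + du) % N, (y0 + dv) % N]
        var_direct += g1[du + r, dv + r] * (pix - mu1[x0, y0]) ** 2
        var_direct += g2[du + r, dv + r] * (pix - mu2[x0, y0]) ** 2

print(np.isclose(var_direct, var_fft[x0, y0]))  # True
```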

Therefore, the integration in Equation 2.3 is formulated as convolution and calculated as multiplication in the Fourier domain. The running time of calculating WLVs along K different orientations is reduced from O(KN²) to O(KN log N) for an image having N pixels. Finally, we normalize E(x, y) in Equation 2.17 using a sigmoid function,

E′(x, y) = 2 / (1 + e^{−E(x,y)/σ_E}) − 1,    (2.19)

where σ_E = {(1/N) Σ E²(x, y) − ((1/N) Σ E(x, y))²}^{1/2} is the standard deviation of E(x, y). This function makes the contour evolve at a fairly constant speed in the regions having large magnitudes of E, and converge slowly and stably towards the zero-crossing boundaries of E.
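Equation 2.19 is a standard sigmoid squashing; a minimal sketch (numpy assumed, the speed field is illustrative) shows that E′ is bounded in (−1, 1) and sign-preserving, which is what yields the constant-speed/slow-convergence behavior described above:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative speed field (stand-in for E of Equation 2.17).
E = 5.0 * rng.standard_normal((64, 64))

sigma_E = np.sqrt(np.mean(E**2) - np.mean(E)**2)   # std of E, as in the text
E_norm = 2.0 / (1.0 + np.exp(-E / sigma_E)) - 1.0  # Equation (2.19)

# E' stays in (-1, 1), keeps the sign of E, and saturates for |E| >> sigma_E.
print(E_norm.min() > -1.0, E_norm.max() < 1.0)
print(np.all(np.sign(E_norm) == np.sign(E)))
```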


CHAPTER 3

EXPERIMENTS

In this chapter, we study the accuracy of orientation estimation by weighted local variance (WLV) and show the segmentation results obtained by the active contour model using WLV. Two synthetic images are utilized to illustrate the robustness of WLV-based orientation estimation against noise. In addition, using one synthetic image, six digital subtraction angiograms and four retinal angiograms, the proposed active contour model has been compared with three other active contour approaches.

3.1 Edge Orientation Estimation

The first experiment used a horizontal edge, as shown in Figure 3.1a. By adding different levels of Gaussian noise (no noise, SNR = 10, 5, 2 and 1) to this image, we measured the values of √Var_θ at the center pixel (x1, y1) along 512 discrete orientations θ, ranging from 0 to π (Figure 3.1b). The angles that minimize √Var_θ at different noise levels are marked in the figure. The result is compared with the first derivatives of Gaussian, which are commonly used for detecting intensity discontinuities [15, 16]. Considering the relation between the filtering responses, I ∗ G_y = (I ∗ G)_y and I ∗ G_x = (I ∗ G)_x, the orientation estimation by the first derivatives of Gaussian can be viewed as the orientation of the gradient vector obtained from a smoothed image. Hence, such information is also widely utilized in different active contour models, such as [5, 1]. The first derivatives of

            WLV          First derivatives of Gaussian
No noise    0            0
SNR=10      4.90 × 10⁻⁵  0.045001
SNR=5       0.000413     0.081156
SNR=2       0.025881     0.119140
SNR=1       0.056744     0.123655

Table 3.1: Average absolute acute angular differences (in radians) between the estimated orientation and the orientation of the horizontal edge.


Figure 3.1: (a) From top to bottom, a clear horizontal edge and the image corrupted by Gaussian noise with SNR = ∞, 10, 5, 2, 1. (b) Plots of WLVs and the filtering responses of the first derivatives of Gaussian along different orientations. The orientation estimation results are marked as circles and crosses for WLV and the first derivatives of Gaussian respectively.

Gaussian along the x-direction and y-direction are given by

G_x = −(x / (2πσ⁴)) e^{−(x² + y²)/(2σ²)} and G_y = −(y / (2πσ⁴)) e^{−(x² + y²)/(2σ²)}

respectively. We convolved the image with these two filters, and their responses are denoted as A_0(x1, y1) and A_{π/2}(x1, y1). The filtering responses at other orientations θ were synthesized by A_θ = A_0 cos θ + A_{π/2} sin θ. The edge orientation maximizing the absolute filtering response magnitude was computed using arctan((I ∗ G_y)/(I ∗ G_x)) in the quadrant [0, π).
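A minimal sketch of this estimation on a synthetic horizontal edge (numpy assumed; the kernel radius and image size are illustrative). For a step edge along the y-direction, the x-response vanishes by symmetry and the estimated gradient orientation is π/2:

```python
import numpy as np

sigma = 2.0
r = int(4 * sigma)
y, x = np.mgrid[-r:r + 1, -r:r + 1]
G = np.exp(-(x**2 + y**2) / (2 * sigma**2))
Gx = -x / (2 * np.pi * sigma**4) * G   # first derivative of Gaussian, x-direction
Gy = -y / (2 * np.pi * sigma**4) * G   # first derivative of Gaussian, y-direction

# Synthetic horizontal edge: intensity steps from 0 to 1 along y.
N = 33
I = np.zeros((N, N))
I[N // 2:, :] = 1.0

def response_at_center(kernel, img):
    """Convolution response at the central pixel (direct evaluation)."""
    c = N // 2
    total = 0.0
    for j in range(-r, r + 1):       # y offset
        for i in range(-r, r + 1):   # x offset
            total += kernel[j + r, i + r] * img[c - j, c - i]
    return total

A0 = response_at_center(Gx, I)   # x-response: ~0 for a horizontal edge
A90 = response_at_center(Gy, I)  # y-response: positive across the step

# Gradient orientation folded into [0, pi).
theta = np.arctan2(A90, A0) % np.pi
print(abs(A0) < 1e-10, A90 > 0, np.isclose(theta, np.pi / 2))
```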

The scale of the Gaussian functions for both methods was set to 2. In Figure 3.1b, we can see how noise affects the edge orientation estimation by WLV and the first derivatives of Gaussian. The estimated orientations according to A_θ are inaccurate in the noisy images. In contrast, although the values of Var_θ are increased in the noisy images, the estimation using WLV remains accurate. As mentioned in the previous section, noisy pixels can boost the values of WLV along all orientations, but this has little impact on the accuracy of estimating the orientation that minimizes Var_θ. To further investigate the performance of WLV, the same experiment was repeated 1000 times. The absolute acute angular discrepancies |θ_estimated − θ_truth| were measured each time for both methods. The average values are listed in Table 3.1. WLV yields lower discrepancies in all noisy cases. This shows that the orientation estimation by WLV is more accurate than that by the first derivatives of Gaussian.

The performance of WLV was then examined using a more complicated synthetic image (Figure 3.2). This image contains tubular structures with different widths and intensity values (0.2 for the gray regions and 1 for the white regions). It also contains curved boundaries, junctions and corners. In this experiment, WLV was compared with the first derivatives of Gaussian and the "Orientation Tensor" (OT) [12]. For the OT-based method, the edge orientation is estimated by performing an eigendecomposition of an orientation tensor, which is calculated from the filter responses of three complex-valued quadrature filters. The bandwidth and center frequency were set to 2 and π/(2√2) according to [12]. The scale parameter of the Gaussian function used by WLV and the first derivatives of Gaussian was 2. The edge orientation estimates of the three methods were measured at the positions of the object boundaries. The comparison was performed on images corrupted by Gaussian noise with different values of the standard deviation σ. The histograms of |θ_reference − θ_estimated| for the different noisy images are shown in Figure 3.2. θ_reference was obtained according to the intensity gradient on a smoothed version of the synthetic image in the quadrant [0, π) (smoothing by a Gaussian filter with scale parameter equal to 1). For the image without noise (σ = 0 in Figure 3.2), all methods show only small discrepancies, which are caused by differences in handling corners and junctions. Comparing the results between the noisy images and the image without noise, more occurrences of large discrepancies are observed for the noisy images. The orientation estimation by WLV is more accurate than the other two methods, as WLV has generally lower discrepancies.
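The absolute acute angular discrepancy used above must respect the π-periodicity of undirected edge orientations (an edge at angle t is the same edge at t + π). A small helper (numpy assumed; the function name is ours) could look like:

```python
import numpy as np

def acute_angular_difference(theta_a, theta_b):
    """Absolute acute angle between two undirected orientations.

    Orientations are pi-periodic, so the raw difference is folded
    into the range [0, pi/2].
    """
    d = np.abs(theta_a - theta_b) % np.pi
    return np.minimum(d, np.pi - d)

# Nearly identical orientations across the pi wrap-around:
print(np.isclose(acute_angular_difference(0.01, np.pi - 0.01), 0.02))
# Perpendicular orientations give the maximum discrepancy pi/2:
print(np.isclose(acute_angular_difference(0.0, np.pi / 2), np.pi / 2))
```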

3.2 Image Segmentation

3.2.1 Synthetic Image Segmentation

Apart from noise robustness, it is interesting to reveal the importance of contrast-independent edge information using a synthetic image. In this experiment, a synthetic vessel (Figure 3.3) is employed. The contrast between this vessel and the image background varies in different segments of the vessel. As shown in Figure 3.3, the value of Q is independent of the intensity contrast between the vessel and the background. In this figure, the values of σ1 and σ2 used are set to be small for better visualization of the edge locations. In practice, σ1 should be set roughly larger than the radius of the vessels. As mentioned before, the ratio between σ1 and σ2 controls the sensitivity of this method for detecting straight edges. For vasculature, vessel walls mainly consist of elongated edges, with some junctions and endings. We find that the performance of the proposed method is best when σ2/σ1 = 0.8 for vascular images. Therefore, the ratio between σ2 and σ1 is fixed to 0.8 in all the experiments.

In Figure 3.4, we compare the segmentation performance between the proposed active contour model based on WLV and "Flux Maximizing Geometric Flows" (FLUX) using a corrupted image of the synthetic vessel. The initial contour is manually placed to demonstrate the difference between these two methods in handling vessels with varying contrast. In Figure 3.4c, the contour of FLUX cannot propagate through the low contrast segment. WLV has no difficulty with this vessel (Figure 3.4d) even as the contrast drops.

3.2.2 Real Image Segmentation Comparison

The proposed active contour model using WLV is applied to real images. A DSA image (Figure 3.5a) obtained from the Department of Diagnostic Radiology and Organ Imaging, Prince of Wales Hospital, Hong Kong, and a retinal angiogram (Figure 3.7a) provided by the DRIVE database [17] were selected for the experiments. In the images, four intensity profiles along different lines of interest are plotted to illustrate the intensity variations in the vessels and background, for understanding the behaviors of the different approaches. Two comparisons were performed between the proposed method and three different approaches: (1) "Active Contours without Edges" (ACWE) [6], (2) "A New Active Contour Method based on Elastic Interaction" (ACEI) [1] and (3) "Flux Maximizing Geometric Flows" (FLUX) [5].

Figure 3.5 shows a DSA image which consists of a vertical vessel with limited background noise. The intensity inside the vessel drops significantly at several positions (see Figures 3.5b-f for the lines of interest and their corresponding intensity profiles). Because of the low level of background noise, none of the approaches exhibits leakage. However, ACWE (Figure 3.6a) only selects the brightest segment of the vessel because of the large intensity variations in the vessel. For ACEI, the resultant contour (Figure 3.6b) halts at two positions (Figures 3.5c and 3.5f). For FLUX, several positions inside the vessel were recognized as vessel boundaries (Figure 3.6c), and the disconnected contours cannot be merged to enclose the entire vessel. In contrast, WLV does not prefer the relatively weak edges across the vessel. Therefore, the proposed method can handle the intensity variations inside the vessel, and the contour can propagate through the dim regions to capture the whole vessel (Figure 3.6d).

The second experiment involves a retinal angiogram, as shown in Figure 3.7a. The background intensity increases along the line of interest shown in Figure 3.7c. Figure 3.7f shows an intensity drop inside a small vessel branch. The results of ACWE are shown in Figure 3.8a. ACWE selects the bright background region on the right side of the image as vessel region, and several dim vessel portions are excluded. For ACEI, the contour initialization, as mentioned in [1], mixes up the vessel and background regions in the right part of the image. Thus, two initial contours were placed manually inside the vessels. Leakage occurred at several positions (Figure 3.8b), especially in the right portion of the image, where the boundary contrast is small (see Figures 3.7b, 3.7d and 3.7e for the lines of interest and their corresponding intensity profiles). For FLUX, there is no leakage (Figure 3.8c), so its performance is better than those of ACEI and ACWE. However, its contour halts at the position marked by line f in Figure 3.7b (see Figure 3.7f for its intensity profile). For the proposed method, neither leakage nor undesired discontinuity is found, and both vessels are successfully captured

(Figure 3.8d).

In the following three experiments, we show the intermediate steps of the evolving curves to illustrate how the contours evolve along low contrast vessels. A retinal angiogram (Figure 3.11) is the first example showing intermediate evolving steps. In this image, the background intensity is generally lower in the bottom-left, top-left and top-right regions; the intensity contrast therefore varies from region to region. Since the ACWE method partitions the image into a high-intensity group and a low-intensity group, it cannot give a satisfactory result: the low-intensity vessel is excluded from the contour while the high-intensity background is included (Figure 3.11c). On the other hand, ACEI tends to ignore weak edges when strong edges are present. Therefore, the contour is guided by noise and leaks through the blurred boundaries at the bottom of the image (Figure 3.11b). We manually placed the initial contour of ACEI inside the blood vessel, as the heuristic approach in [1] cannot locate the vessel position in this low contrast situation. FLUX, on the other hand, selects the initial contour correctly and indicates the side vessel as well (Figure 3.11a). In contrast, our method favors a smooth contour and keeps the branches connected without leakage. Figure 3.12 shows that our method locates the main vessel correctly, and can handle the intensity variations in the object and background regions because the minimal variance is calculated in a localized manner. It also avoids leakages in the low contrast regions since edges are extended along their directions.

In Figure 3.13, we show a DSA in which the intensity of the object drops significantly at two positions (the Y-shaped structure and the circular structure in the middle of the image). Similar to the results shown previously, ACWE could not capture the objects in the dim regions (Figure 3.13c). Conversely, as shown in Figure 3.13a, the contour of FLUX halts where the gradient along the vessel is similar to the gradient of the object boundaries. ACEI captures the vessel but the result is noisy (Figure 3.13b), although it is the best result obtained among different combinations of parameters: the contours followed the noisy regions attached to the vessels rather than the weak vessel boundaries. Increasing either the σ of the Gaussian filter or the curvature term, as the authors suggest in [1], does not help, and results in the contour halting at the dim or narrow parts. In contrast, our method extends boundaries along their directions to recover the discontinued boundaries over dim and tiny segments. Thus, the contour can propagate through the dim and narrow regions (Figure 3.14).

The last example showing intermediate evolving steps (Figure 3.15) is based on a vessel with a dim portion. It aims to examine the ability of the different approaches to connect a gap whose size is comparable to the object width. FLUX, ACEI and ACWE cannot merge the contours across the portion with low intensity values (Figures 3.15a, b and c). The value of σ of the Gaussian filter used in ACEI and FLUX cannot be too large; otherwise, they cannot handle the narrow branch in the right portion of the image. ACWE fails to detect the narrow branch for different values of µ because of the significant intensity variation. Therefore, we only show the result with a large value of µ, in which the contour does not halt far away from the vessel boundaries, as shown in Figure 3.15c. It shows that ACWE cannot recognize the vessel as a single object. The proposed method, in contrast, is able to connect the top and bottom portions of the vessel (Figure 3.16a). For this image, we demonstrate the results obtained using different values of σx. The main purpose is to show the flexibility of the proposed method, which is capable of controlling the scale of the detected boundaries. Using a small value of σx, the target region is identified as separate objects (Figure 3.16b). With a large value of σx, our method is able to recognize the whole vessel as a single object (Figure 3.16a).

3.2.3 Manually Initialized Real Image Segmentation

When the image is very noisy or the vessels are not fully connected, it is helpful to manually place seeds as the initialization of the active contour model. In this section, we demonstrate segmentation results obtained by manual initialization. Two DSA images and one retinal angiogram are utilized as three examples of segmentation involving limited manual interaction.

In Figure 3.17, we show a region of interest in a retinal angiogram. The contrast between the vessels and the background generally drops from the top region to the bottom region. The automatic initialization scheme cannot locate initial seeds inside the vessels, which leads to serious contour leakage. Instead, manual initialization gives a good segmentation result (Figure 3.17c).

With the DSA image in Figure 3.18, we would like to show that manual initialization does not affect the ability of the active contour model to pass through contrast-varying regions. In this example, a single initial seed is manually placed inside the upper portion of the vessel. This portion is connected with the lower portion of the vessel through a low contrast segment. As shown in Figure 3.18c, the active contour has no difficulty propagating inside the low contrast segment, and the whole vessel is recovered.

The last example is a DSA that consists of disconnected branches (Figure 3.19), with the initial contours manually placed inside the branches. It shows the results obtained on disconnected vessels: in Figure 3.19, several disconnected branches are selected by the proposed method. Moreover, as one of the characteristics of the level set representation of active contours, excessive use of initial seeds does not affect the final segmentation result, because two or more evolving contours inside one connected region simply merge.



Figure 3.2: Top: A 128 × 128 synthetic image. Bottom: The angular discrepancies between the estimated orientation and the reference orientation. Zero angular difference at the left side of the x-axis represents a correct estimation (no discrepancy), π/2 means the estimated orientation is perpendicular to the reference orientation. ”Frequency” in the y-axis represents the number of occurrences of a particular orientation discrepancy. High number of occurrences in low discrepancy implies better performance.


Figure 3.3: Top: A 128 × 128 synthetic vessel with a width of around 6 pixels and an intensity decay from 1 (at the endings) to 1/3 (in the middle). Bottom: Values of Q calculated using K = 24, ρ = 0.0001, σ1 = 2 and σ2 = 1.6.


Figure 3.4: (a) The synthetic vessel corrupted by additive Gaussian noise, σ = 0.3. (b) Manually selected initial contour. Intensity re-scaled for better visualization of the initial contour. (c) Resultant contour of FLUX, r = {5, 6, 7}, image pre-processed with a Gaussian filter, Gσ=1. (d) Resultant contour of WLV, K = 24, σ1 = 7 and σ2 = 5.6.



Figure 3.5: (a) A 256 × 256 DSA image. (b) Showing the lines of interest where intensity profiles are plotted in (c-f). (c) The intensity profile is plotted along line c in (b) from left to right. (d-f) The intensity profiles are plotted along the corresponding lines in (b) from top to bottom.


Figure 3.6: Top row: Initial contours. Bottom row: Final results. (a) ACWE, manually selected initial contour, µ = 0.001 · 255², λ1 = λ2 = 1, ν = 0, h = 1. (b) ACEI, initial contour obtained automatically as in [1], σ1 = 2, σ2 = 10, µ = 0.0015. (c) FLUX, r = {1, 2, 3, 4}, image pre-processed with Gσ=2, initial contour obtained automatically from the regions with the highest 5% inward flux, which is further smoothed under curvature flow for 500 steps. (d) The proposed method, σ = 2, ρ = 0.0001, ε = 0.1, initial contour obtained automatically by smoothing the term E using Gσ=1 and choosing the regions with the highest 5% values of E, which is further smoothed under curvature flow for 500 steps.



Figure 3.7: (a) A 128 × 128 retinal angiogram. (b) Showing the lines of interest where intensity profiles are plotted in (c-f). (c-f) The intensity profiles are plotted along the corresponding lines in (b) from left to right.


Figure 3.8: Top row: Initial contours. Bottom row: Final results. (a) ACWE, manually selected initial contour, µ = 0.004 · 255², λ1 = λ2 = 1, ν = 0, h = 1. (b) ACEI, manually selected initial contour, σ1 = 1.5, µ = 0.0015. (c) FLUX, r = {2, 3, 4}, image pre-processed with Gσ=1, initial contour obtained automatically from the regions with the highest 5% inward flux, which is further smoothed under curvature flow for 2000 steps. (d) The proposed method, σ = 1.5, ρ = 0.0001, ε = 0.1, initial contour obtained automatically by smoothing the term E and choosing the regions with the highest 5% values of E, which is further smoothed under curvature flow for 500 steps.



Figure 3.9: (a) A 100 × 100 retinal angiogram. (b) Showing the position where intensity profiles are plotted in (c,d). (c,d) The intensity profiles are plotted from top to bottom.


Figure 3.10: Top row: Initial contours. Bottom row: Final results. (a) ACWE, manually selected initial contour, µ = 0.004 · 255², λ1 = λ2 = 1, ν = 0, h = 1. (b) ACEI, manually selected initial contour. (c) FLUX, image pre-processed with Gσ=1, initial contour obtained automatically from the regions with the highest 5% inward flux, which is further smoothed under curvature flow for 1000 steps.


Figure 3.11: Top row: Initial contours. Bottom row: Final results. (a) FLUX with r = 1, 2, 3, image pre-processed with Gσ=0.8, initial contour obtained automatically from the regions with the highest 5% inward flux, which is further smoothed under curvature flow for 200 steps. (b) ACEI, image pre-processed with Gσ=0.8, manually selected initial contour. (c) ACWE with µ = 0.00001 · 255², λ1 = λ2 = 1, ν = 0, h = 1, manually selected initial contour.

Figure 3.12: Result of the proposed method using σx = 1.6 and σy = 0.8. Left: Initial contour obtained automatically from regions with the highest 5% field value which is further smoothed under curvature flows for 200 steps. Middle: Intermediate step. Right: Final result.


Figure 3.13: Top row: Initial contours. Bottom row: Final results. (a) FLUX with r = 4, 5, 6, 7, image pre-processed with Gσ=3, initial contour obtained automatically from the regions with the highest 5% inward flux, which is further smoothed under curvature flow for 200 steps. (b) ACEI, initial contour obtained automatically using the heuristic approach presented in [1] with σ1 = 3, σ2 = 10. (c) ACWE with µ = 0.4 · 255², λ1 = λ2 = 1, ν = 0, h = 1, manually selected initial contour.


Figure 3.14: Result of the proposed method using σx = σy = 3 in (a) and σx = 1, σy = 3 in (b). Left: Initial contour obtained automatically from the regions with the highest 5% field value, which is further smoothed under curvature flow for 200 steps. Middle: Intermediate step. Right: Final result.


Figure 3.15: Top row: Initial contours. Bottom row: Final results. (a) FLUX with r = 1, 2, 3, image pre-processed with Gσ=0.8, initial contour obtained automatically from the regions with the highest 10% inward flux, which is further smoothed under curvature flow for 200 steps. (b) ACEI, initial contour obtained automatically using the heuristic approach presented in [1] with σ1 = 0.8 and σ2 = 10. (c) ACWE with µ = 0.00001 · 255², λ1 = λ2 = 1, ν = 0, h = 1, manually selected initial contour.

Figure 3.16: Result of the proposed method using σx = 1.6 and σy = 0.8. Left: Initial contour obtained automatically from regions with the highest 10% field value which is further smoothed under curvature flows for 200 steps. Middle: Two intermediate steps, the contour is propagating through the narrow and dim segments. Right: Final result.


Figure 3.17: (a) A 100 × 100 retinal angiogram. (b) Manually selected initial contours. Intensity re-scaled for better visualization of the initial contours. (c) Resultant contour of WLV, K = 24, ρ = 0.0001, σ1 = 1.2 and σ2 = 0.96.


Figure 3.18: (a) A 128 × 128 DSA image. (b) Manually selected initial contour. Intensity re-scaled for better visualization of the initial contour. (c) Resultant contour of WLV, K = 24, ρ = 0.0001, σ1 = 6 and σ2 = 4.8.


Figure 3.19: (a) A 120 × 120 DSA image. (b) Manually selected initial contours. Intensity re-scaled for better visualization of the initial contours. (c) Resultant contours of WLV, K = 24, ρ = 0.0001, σ1 = 1.5 and σ2 = 1.2.


CHAPTER 4

DISCUSSION AND FUTURE EXTENSION

4.1 Active Contour Models

The proposed active contour model based on WLV is relatively simple: it has no regularization term and makes use of no prior knowledge of the target object. It is possible to add these components to formulate a more sophisticated active contour model with better performance. On the other hand, if we consider a vector field [u v]^T which integrates both the edge orientation and the clearness measure of WLV, [u v]^T = S · M · [cos(ω + π/2) sin(ω + π/2)]^T, where ω + π/2 is the gradient direction, the energy functional in Equation 2.16 becomes ∫_{Inside(C)} (∂u/∂x + ∂v/∂y) dx dy. This yields an interesting relation to the work in [18] on edge integration based on the Laplacian operation, where the vector field [u v]^T mentioned above can be utilized as a substitute for the vector field of the image gradient. Subsequently, it is possible to formulate the WLV-based edge information as a vector field and incorporate it into other works which employ the image gradient, such as [18, 19, 5, 1].
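The divergence integrand ∂u/∂x + ∂v/∂y can be evaluated numerically with finite differences. The sketch below (numpy assumed) checks an np.gradient-based divergence against an analytic field u = x², v = y², an illustrative choice rather than the WLV field:

```python
import numpy as np

# Central-difference estimate of the divergence du/dx + dv/dy on a grid.
xs = np.linspace(-1.0, 1.0, 201)
ys = np.linspace(-1.0, 1.0, 201)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, ys, indexing="ij")  # axis 0 is x, axis 1 is y

u, v = X**2, Y**2
du_dx = np.gradient(u, h, axis=0)
dv_dy = np.gradient(v, h, axis=1)
div = du_dx + dv_dy

# Analytic divergence is 2x + 2y; central differences are exact for
# quadratics away from the boundary rows/columns.
interior = (slice(1, -1), slice(1, -1))
print(np.allclose(div[interior], (2 * X + 2 * Y)[interior]))  # True
```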

4.2 Extension for 3D Image Volume

The main barrier to extending the presented work to 3D is that the optimization procedure stated in Equation 2.7 is difficult to solve in 3D space. The main reason is that, for 3D orientation estimation, two rotation parameters are needed to represent an orientation. The associated partial differential equation has a set of solutions that includes many local extrema and stationary points, of which only two orientations are the preferred solutions. However, the preferred solutions are not easy to identify within the whole solution set. If the equation can be solved, the whole work can be extended to 3D image volumes.

CHAPTER 5

CONCLUSIONS

In this thesis, we have presented the use of the weighted local variance (WLV) for extracting edge information, including edge orientation and clearness, to perform active contour based vessel segmentation. The edge orientation estimation using WLV is demonstrated to be robust to noise in the synthetic data experiments. Using the estimated edge information, the active contour based segmentation has been validated using real clinical data, including angiograms and digital subtraction angiography (DSA) images. The proposed method has been compared with three related published approaches. The results show that our method is suitable for handling vasculature with blurry and low contrast boundaries in noisy images.


REFERENCES

[1] Y. Xiang, A. C. S. Chung, and J. Ye, "A new active contour method based on elastic interaction," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005, vol. 1, pp. 452–457.

[2] M. Kass, A. Witkin, and D. Terzopoulos, "Snakes: Active contour models," International Journal of Computer Vision, 1988, vol. 1, pp. 321–331.

[3] C. Xu and J. Prince, "Snakes, shapes, and gradient vector flow," IEEE Transactions on Image Processing, 1998, vol. 7, pp. 359–369.

[4] R. Malladi, J. Sethian, and B. Vemuri, "Shape modeling with front propagation: A level set approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, 1995, vol. 17, pp. 158–175.

[5] A. Vasilevskiy and K. Siddiqi, "Flux maximizing geometric flows," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002, vol. 24, pp. 1565–1578.

[6] T. F. Chan and L. A. Vese, "Active contours without edges," IEEE Transactions on Image Processing, 2001, vol. 10, pp. 266–277.

[7] L. M. Lorigo, O. D. Faugeras, W. E. L. Grimson, R. Keriven, R. Kikinis, A. Nabavi, and C. F. Westin, "CURVES: Curve evolution for vessel segmentation," Medical Image Analysis, 2001, vol. 5, pp. 195–206.

[8] D. Nain, A. Yezzi, and G. Turk, "Vessel segmentation using a shape driven flow," in International Conference on Medical Image Computing and Computer Assisted Intervention, 2004, pp. 51–59.

[9] P. Yan and A. A. Kassim, "MRA image segmentation with capillary active contour," in International Conference on Medical Image Computing and Computer Assisted Intervention, 2005, pp. 51–58.

[10] W. K. Law and A. C. S. Chung, "Minimal weighted local variance as edge detection for active contour models," in The Seventh Asian Conference on Computer Vision, 2006, pp. 622–632.

[11] W. K. Law and A. C. S. Chung, "Segmentation of vessels using weighted local variances and an active contour model," in IEEE Computer Society Workshop on Mathematical Methods in Biomedical Image Analysis, 2006.

[12] G. Granlund and H. Knutsson, Signal Processing for Computer Vision, Kluwer Academic Publishers, 1995.

[13] M. Gokmen and A. K. Jain, "λτ-space representation of images and generalized edge detector," IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997, vol. 19, pp. 545–563.

[14] S. Osher and J. Sethian, "Fronts propagating with curvature dependent speed: Algorithms based on Hamilton–Jacobi formulations," Journal of Computational Physics, 1988, vol. 79, pp. 12–49.

[15] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986, vol. 8, pp. 679–698.

[16] W. Y. Ma and B. S. Manjunath, "EdgeFlow: A technique for boundary detection and image segmentation," IEEE Transactions on Image Processing, 2000, vol. 9, pp. 1375–1388.

[17] J. J. Staal, M. D. Abramoff, M. Niemeijer, M. A. Viergever, and B. van Ginneken, "Ridge-based vessel segmentation in color images of the retina," IEEE Transactions on Medical Imaging, 2004, vol. 23, pp. 501–509.

[18] D. Marr and E. Hildreth, "Theory of edge detection," Proceedings of the Royal Society of London, Series B, 1980, vol. 207, pp. 187–217.

[19] R. Kimmel and A. M. Bruckstein, "Regularized Laplacian zero crossings as optimal edge integrators," International Journal of Computer Vision, 2003, vol. 53, no. 3, pp. 225–243.
