Skeleton Extraction Using SSM of the Distance Transform
Quannan Li, Xiang Bai, and Wenyu Liu
Department of Electronics and Information Engineering, Huazhong University of Science and Technology, Wuhan, 430074, China
Email: [email protected], [email protected], [email protected]

Abstract

This paper proposes a novel approach to skeletonization for both binary and gray-scale images based on the Skeleton Strength Map (SSM) calculated from the Euclidean distance transform. The distance transform is first computed from the boundary edges, and its gradient vector field is then calculated. After that, isotropic diffusion is performed on the gradient vector field and the SSM is computed from the diffused field. The SSM has two advantages that make it useful for skeletonization: 1) for pixels away from the center of the object, the SSM value decays very fast; 2) the SSM provides an effective way of smoothing boundary noise. For binary images, a visual skeleton can be obtained directly from the SSM by selecting local maxima and connecting them with Dijkstra's algorithm. For gray-scale images, where a complete contour is usually unavailable, we compute the SSM from edge images instead. Even for cluttered images, a clean skeleton can be obtained with the aid of high-level information. Experiments show that our method achieves good performance on both binary and gray-scale images.

Key words: Skeleton, skeletonization, Skeleton Strength Map (SSM), isotropic vector diffusion, incomplete boundaries

1. Introduction

The skeleton, or medial axis, is an important compact shape descriptor in computer vision since it preserves both the topological and the geometrical information of an object [7]. It has played an important role in object representation and recognition, in areas such as image retrieval and computer graphics,
character recognition, image processing, robot mapping, network coverage, and the analysis of biomedical shape. Typical skeletonization approaches can be categorized into four types [15]: thinning and boundary propagation [6, 19, 20, 21], geometric methods such as algorithms based on the Voronoi diagram [9, 28, 29], algorithms based on the distance transform [12, 17, 23], and algorithms based on general-field functions [2, 14, 16]. Besides these, several other kinds of algorithms for binary image skeletonization exist. Siddiqi et al. [32] measure the average outward flux of the vector field that underlies the Hamiltonian system and combine the flux measurement with a homotopy-preserving thinning process applied on a discrete lattice. This approach leads to a robust and accurate algorithm for computing skeletons in 2D as well as 3D. However, the error in calculating the flux is limited by the pixel resolution and is also proportional to the curvature of the boundary evolution front, which makes the exact location of endpoints difficult to find. An analysis of the system using the Hamilton-Jacobi equations of classical mechanics has shown how the skeleton can be detected using the divergence of the distance map of the object boundary [33]. Torsello and Hancock [34] overcome this problem by taking into account variations of density due to boundary curvature and eliminating the curvature contribution to the error. Aslan and Tari [3] present an unconventional approach to shape recognition using unconnected skeletons at a coarse level. This approach leads to stable skeletons in the presence of boundary deformations; however, the obtained skeletons do not contain small shape details or an explicit topological structure.

The algorithms most related to our work are those based on the Euclidean distance transform, which extract the skeleton by detecting ridges on the distance transform surface [12, 17, 23]. These algorithms ensure accurate localization of skeleton points but guarantee neither connectivity nor completeness, since the extracted skeletal branches may be disconnected and may not represent all significant visual parts. In addition, all of the algorithms above share a common requirement that the complete contour of an object must be known in advance. For gray-scale images, however, a complete and robust contour is often unavailable due to the difficulty of segmentation, and the distance transform is undefined. It is therefore difficult to apply the conventional algorithms to gray-scale images. To cope with this issue, several segmentation-free skeletonization algorithms have been proposed [18, 35, 38]. In [18], a computational algorithm is proposed to compute a pseudo-distance map directly from
the original image using a nonlinear governing equation. It can extract the skeleton of narrow objects efficiently without losing information, but it fails for wider objects. Anisotropic vector diffusion is used to address this bias problem in [38], but that algorithm also fails to extract the skeleton of wide objects. In [4], Tari et al. propose a method that extracts skeletons from a set of level-set curves of the edge strength map, and in [13] the authors propose an algorithm for the case in which the objects are consistently brighter or darker than the background. Tek et al. [36] show that an orientation-sensitive distance propagation function can be used to extract symmetries from fragmented contours by labelling skeletal points according to whether or not they represent the collision of consistently oriented boundary fronts. In [30, 31], the authors propose a scale-space method which extracts "cores" from the ridges of a medialness function in scale space. Recently, Adluru et al. proposed a contour grouping method based on a skeletal model and local symmetry, which can not only group the contours of objects but also find their skeletal paths [1]. However, their method is time consuming and the positions of both the contour and the skeleton are not sufficiently stable.

In this paper, we introduce a skeletonization algorithm based on the Skeleton Strength Map (SSM). Preliminary versions of this paper appeared as [22] and [24]. The SSM is computed via isotropic diffusion of the gradient vector field of the distance transform. It has two properties that the distance transform does not have. First, the SSM value decays very fast at pixels far away from the center of the object; second, the SSM is very robust to boundary noise and yields a robust skeleton. Moreover, even without a complete contour, we show that the SSM extends well to gray-scale images and can be integrated efficiently with the high-level information produced by the algorithm of [27].

The rest of the paper is organized as follows. In Section 2, we introduce the definition and computation of the SSM. In Sections 3 and 4, we describe the skeletonization approaches for binary images and gray-scale images respectively. In Section 5, we show experimental results and give some discussion. Conclusions and future work are given in Section 6.

2. Skeleton Strength Map (SSM)

In this section we describe what the SSM is and how it is computed.
2.1. Isotropic diffusion of the gradient vector field of the Euclidean distance transform

The distance transform dt(r) is defined as the distance of an interior point r to the nearest boundary point [8]; throughout this paper, "distance transform" denotes the Euclidean distance transform. It is a scalar field, and its gradient vector can be computed easily. However, instead of dt(r) we work with

f(r) = 1 - \| \nabla G_\delta(r) * dt(r) \|,

where G_\delta(r) is a Gaussian kernel, \delta is its standard deviation, and * is the convolution operator. f(r) can be viewed as an inverted version of the smoothed gradient magnitude of dt(r). The main advantage of using f(r) instead of dt(r) is that the relative value between a skeleton point and its neighbors is significantly larger for f(r) than for dt(r), and f(r) provides an effective way to filter the distortions caused by boundary noise (this will be shown below). Therefore, we work with the gradient vector field of f(r):

(u_0, v_0) = \nabla f = \left( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y} \right).    (1)

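As a concrete illustration, the computation of f(r) and of the initial field (u_0, v_0) of Eq. (1) can be sketched with standard NumPy/SciPy routines as below; the function name, the binary-mask input, and the particular value of the smoothing parameter are illustrative assumptions, not part of the paper.

    import numpy as np
    from scipy.ndimage import distance_transform_edt, gaussian_filter

    def inverted_gradient_magnitude(mask, sigma=2.5):
        """Compute f(r) = 1 - ||grad(G_sigma * dt)(r)|| and its gradient field.

        `mask` is a binary object mask (nonzero inside the object); `sigma`
        plays the role of the paper's delta.  Returns f and the initial
        vector field (u0, v0) = grad(f) of Eq. (1).
        """
        dt = distance_transform_edt(mask)      # Euclidean distance to the boundary
        dt_s = gaussian_filter(dt, sigma)      # G_delta * dt
        gy, gx = np.gradient(dt_s)             # gradient of the smoothed distance map
        f = 1.0 - np.hypot(gx, gy)             # f is large near ridges, small elsewhere
        v0, u0 = np.gradient(f)                # (u0, v0) = (df/dx, df/dy)
        return f, u0, v0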
After the gradient vector field of f(r) has been computed, isotropic diffusion is performed on it. The diffusion process is governed by a partial differential equation set as in [37]:

\frac{du}{dt} = \mu \nabla^2 u - (u - f_x)(f_x^2 + f_y^2)
\frac{dv}{dt} = \mu \nabla^2 v - (v - f_y)(f_x^2 + f_y^2)    (2)

Here, \mu is the regularization parameter (set to 0.07 in our experiments), and u, v are the two components of the diffused gradient vector field of f(r). Initializing u, v with (u_0, v_0) = \nabla f = (\partial f / \partial x, \partial f / \partial y), the partial differential equation set (2) can be solved. We use isotropic diffusion here because it makes the vectors propagate towards the actual location of the skeleton points, which is important for obtaining an accurate skeleton. Moreover, it is an effective way of smoothing noise, which makes the extracted skeleton robust and stable under boundary noise. The effect of the smoothing is demonstrated by the experimental results in Fig. 1.

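A minimal sketch of solving Eq. (2) with an explicit iterative scheme is given below; the step size, the iteration count, and the use of scipy.ndimage.laplace for the Laplacian are assumptions made for illustration.

    import numpy as np
    from scipy.ndimage import laplace

    def diffuse_vector_field(u0, v0, mu=0.07, n_iter=200, step=0.5):
        """Isotropic (GVF-style) diffusion of the gradient field, Eq. (2).

        u0, v0 are the components of grad(f); mu follows the paper (0.07).
        n_iter and step are assumed numerical parameters of the scheme.
        """
        fx, fy = u0.copy(), v0.copy()
        mag2 = fx**2 + fy**2                  # data-attachment weight (f_x^2 + f_y^2)
        u, v = u0.copy(), v0.copy()
        for _ in range(n_iter):
            u = u + step * (mu * laplace(u) - (u - fx) * mag2)
            v = v + step * (mu * laplace(v) - (v - fy) * mag2)
        return u, v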
2.2. Computation of the SSM

To localize skeleton points from the diffused gradient vector field, we compute a Skeleton Strength Map (SSM) from it. The value of the SSM at each point indicates the probability of that point being a skeleton point: the higher the value, the more likely the point lies on the skeleton. Skeleton points are located where two or more vectors confront each other, so the SSM is computed by adopting the formula from [2]:

SSM(r) = \max\left( 0, \sum_{r' \in N(r)} \frac{g(r') \cdot (r - r')}{\| r - r' \|} \right),    (3)

where N(r) denotes the eight-neighborhood of r and g(r') is the diffused gradient vector (u, v) at r'. Each of the eight neighbors of r projects its vector onto the unit vector pointing from r' to r; the SSM value at r is the sum of these projections if the sum is positive, and zero otherwise. The intuition is that if all neighbors of r have gradient vectors pointing towards it, the SSM value at r is high and r is likely to be a skeleton point. A sample SSM can be found in Fig. 1(b).

Fig. 1(a) and Fig. 1(b) show the distance transform and the SSM of a hand respectively. Compared with the distance transform, one major advantage can be observed: the SSM value decays very quickly at pixels departing from the center of the object. This advantage can be attributed to the isotropic diffusion and the computation of the SSM, and it is helpful for detecting the skeleton. Another observation is that the distance transform value is not large at narrow places such as the five fingers, whereas after diffusion the SSM value at these places is intensified significantly.

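Equation (3) can be evaluated by shifting the diffused field so that each of the eight neighbours is aligned with the centre pixel, as in the following sketch; the wrap-around behaviour of np.roll at the image border is an implementation shortcut, not part of the method.

    import numpy as np

    def skeleton_strength_map(u, v):
        """Skeleton Strength Map of Eq. (3) from the diffused field (u, v).

        u and v are the x- and y-components of the diffused gradient field.
        Each 8-neighbour's vector is projected onto the unit vector pointing
        from the neighbour towards the centre pixel; negative sums are clipped.
        """
        ssm = np.zeros_like(u)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                norm = np.hypot(dc, dr)
                # vector field values at the neighbour r' = r + (dr, dc)
                un = np.roll(u, (-dr, -dc), axis=(0, 1))
                vn = np.roll(v, (-dr, -dc), axis=(0, 1))
                # projection of g(r') onto the unit vector (r - r') / ||r - r'||
                ssm += (un * (-dc) + vn * (-dr)) / norm
        return np.maximum(ssm, 0.0)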
Figure 1: Results for a hand. (a) shows its distance transform, (b) the computed SSM, (c) the SSM thinned by non-maximum suppression, (d) the local maxima detected from (c), (e) the skeleton obtained from (d), and (f) the skeleton obtained directly from the distance transform.


2.3. Non-maximum suppression

After the SSM has been computed, we use non-maximum suppression to thin it. Non-maximum suppression is an efficient thinning technique that has been applied in edge detection, e.g., in the Canny operator [11]. For each position (x, y) of the SSM, two responses SSM(x', y') and SSM(x'', y'') are computed by linear interpolation at the adjacent positions (x', y') and (x'', y''), which are the intersection points of the line passing through (x, y) with orientation Θ(x, y) and the square defined by the diagonal points of the 8-neighborhood (see Fig. 2). If the response SSM(x, y) at (x, y) is greater than both SSM(x', y') and SSM(x'', y''), i.e., it is a local maximum along that direction, it is retained; otherwise, it is discarded.

The effect of non-maximum suppression can be seen in Fig. 1(c). Compared with the SSM in Fig. 1(b), the SSM values have been significantly thinned after non-maximum suppression, which makes the localization of skeleton points much easier.

Figure 2: Illustration of non-maximum suppression. (x', y') and (x'', y'') are the two intersection points; if SSM(x, y) is greater than SSM(x', y') and SSM(x'', y''), then it is a local maximum and is retained. Θ(x, y) is chosen as the direction of the gradient vector at (x, y).

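A simplified sketch of this thinning step is given below, taking Θ(x, y) from the diffused field (u, v) and using bilinear interpolation for the two off-grid responses; the helper name and the handling of near-zero vectors are assumptions.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def non_maximum_suppression(ssm, u, v):
        """Keep only SSM values that are local maxima along the vector direction.

        The direction Theta(x, y) is taken from the diffused vector (u, v);
        the two responses on either side are obtained by bilinear interpolation,
        as in Canny-style edge thinning.
        """
        rows, cols = np.indices(ssm.shape)
        norm = np.hypot(u, v) + 1e-12
        dx, dy = u / norm, v / norm                 # unit direction Theta(x, y)
        # responses at (x, y) +/- one step along Theta, bilinearly interpolated
        fwd = map_coordinates(ssm, [rows + dy, cols + dx], order=1, mode='nearest')
        bwd = map_coordinates(ssm, [rows - dy, cols - dx], order=1, mode='nearest')
        keep = (ssm >= fwd) & (ssm >= bwd)
        return np.where(keep, ssm, 0.0)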
3. Skeletonization of binary images

For binary images, we compute the SSM and thin it with non-maximum suppression, which yields an SSM that is well suited for skeletonization. We then detect the local maxima of the thinned SSM and connect them using the shortest path algorithm (Dijkstra's algorithm) on f(r). An example of skeletonization of a binary image is shown in Fig. 3.

3.1. Local maxima detection

Definition 1: A local maximum of the SSM is a point r whose SSM value SSM(r) satisfies

SSM(r) \geq \max_{r' \in N(r)} SSM(r').    (4)

To make the skeleton more robust, we also exclude pixels whose SSM value satisfies SSM(r) \leq T. Notice that, by the definition of local maxima, if equality is achieved some local maxima may be connected to each other and form connected segments, while others may be isolated.

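Definition 1 together with the threshold T can be implemented with a 3 x 3 maximum filter, as sketched below; the value of T is a user-chosen noise threshold and the default here is only illustrative.

    import numpy as np
    from scipy.ndimage import maximum_filter

    def local_maxima(ssm, T=0.1):
        """Local maxima of the SSM (Definition 1) with the noise threshold T.

        A pixel is kept if its SSM value is no smaller than all of its
        8-neighbours and exceeds the assumed threshold T.
        """
        neighbourhood_max = maximum_filter(ssm, size=3)   # max over the 3x3 window
        return (ssm >= neighbourhood_max) & (ssm > T)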
Figure 3: (a) is the binary mask of a horse, (b) shows the distance transform of (a), (c) is the SSM computed from (b), (d) is the SSM thinned by non-maximum suppression, (e) shows the local maxima, and (f) is the final skeleton.

In [22], a set of critical points is further selected from the connected segments of local maxima and then connected to form the final skeleton. In [24], we introduced non-maximum suppression to thin the SSM and found that the connected segments do not affect the process of connecting local maxima into the final skeleton. In this paper we therefore retain the local maxima and connect them using a shortest path algorithm. Our definition of local maxima is very similar to the "ridge" of the distance transform. On one hand, like the ridge of the distance transform, it locates the skeleton accurately; on the other hand, being defined on the SSM, the local maxima offer additional advantages such as robustness to boundary noise and the property that they correspond to the significant visual parts of the object. Therefore, the obtained skeleton contains branches representing all significant visual parts. An example can be found in Fig. 1(d): the local maxima detected from the SSM include points at the lower-left part of the hand and the obtained skeleton is complete (Fig. 1(e)), whereas the skeleton extracted directly from the distance transform (i.e., by detecting its ridge, Fig. 1(f)) misses the branch at the lower-left part of the hand.


3.2. Local maxima connection

To connect the local maxima, a distance measure is defined on the surface of the function f(r).

Definition 2: Given an 8-connected path R = {r_1, r_2, ..., r_n}, its gradient length is defined as

|R|_G = \sum_{i=1}^{n} |f(r_i)|.

The gradient distance between two points r and r' is defined as the minimum over the gradient lengths of all 8-connected paths connecting them. The 8-connected path with the smallest gradient distance is called a gradient path; it corresponds to a geodesic path on the surface defined by f(r) and is computed with Dijkstra's shortest path algorithm. We obtain the skeleton by connecting the local maxima with gradient paths: we choose the point with the maximum distance transform value as the center of the skeleton and iteratively connect all the other local maxima to it until every local maximum is connected.

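A sketch of computing a single gradient path with Dijkstra's algorithm is given below, using |f(r_i)| as the cost of entering pixel r_i in accordance with Definition 2; start and goal are (row, column) pixel indices, and the function name is an assumption.

    import heapq
    import numpy as np

    def gradient_path(f, start, goal):
        """Shortest 8-connected path between two pixels under Definition 2.

        The cost of entering a pixel r_i is |f(r_i)|, so the returned path
        minimises the gradient length |R|_G; a plain Dijkstra sketch.
        """
        h, w = f.shape
        cost = np.abs(f)
        dist = np.full((h, w), np.inf)
        prev = {}
        dist[start] = cost[start]
        heap = [(cost[start], start)]
        while heap:
            d, (r, c) = heapq.heappop(heap)
            if (r, c) == goal:
                break
            if d > dist[r, c]:
                continue
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nr, nc = r + dr, c + dc
                    if (dr or dc) and 0 <= nr < h and 0 <= nc < w:
                        nd = d + cost[nr, nc]
                        if nd < dist[nr, nc]:
                            dist[nr, nc] = nd
                            prev[(nr, nc)] = (r, c)
                            heapq.heappush(heap, (nd, (nr, nc)))
        # backtrack from the goal to recover the gradient path
        path, node = [goal], goal
        while node != start:
            node = prev[node]
            path.append(node)
        return path[::-1]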

4. Skeletonization of gray-scale images

In this section, we extract skeletons from gray-scale images by extending the SSM to them. For cluttered images, we show that integrating high-level information, such as the probability maps obtained by [27], helps to extract the skeletons. The computation of the SSM is natural for binary images; for gray-scale images, however, the situation is very different because we do not have a complete contour and the definition of dt(r) for binary images no longer holds. To compute the SSM of a gray-scale image, we therefore compute it from the image's boundaries. Since we cannot identify whether a pixel is inside or outside the object, we treat all non-boundary pixels as object pixels and boundary pixels as background and then compute the distance transform. There are many methods for extracting boundaries; in this paper we choose the Canny operator [11]. We believe that recent edge detectors such as the Berkeley Edge Detector [25] and BEL [26] would be even more helpful for obtaining clean boundaries, but here we use the boundary edges obtained from Canny to show that the SSM is stable even when the boundary edges are incomplete and noisy. We also take a step towards utilizing high-level information [27] to help identify object pixels and compute the skeleton.

Figure 4: Effect of noise smoothing

4.1. Noise smoothing of boundaries

The skeleton is easily influenced by noise because it is a local quantity. Since we aim to extract the skeleton of gray-scale images from boundaries, we should first filter boundary noise. We use two heuristics to measure the significance of edge segments: 1) long boundary segments are more significant than short ones; 2) if a short boundary segment is close to a long segment, it may still contribute to skeleton branches, so a short segment is treated as more significant if its endpoints are close to the endpoints of a long segment. To implement these heuristics, let S = {s_i | 1 <= i <= N} be the set of all boundary segments and L(s_i) the length of s_i. We partition S into two subsets S_1 = {s'_i | 1 <= i <= N_1} and S_2 = {s''_i | 1 <= i <= N_2} (N_1 + N_2 = N) containing the segments longer and shorter than a threshold T_1, respectively. For each segment s''_i in S_2, we compute the distances from its endpoints to all endpoints of segments in S_1; if the minimum distance is greater than a certain threshold, we treat the segment as noise and remove it. This process is straightforward, and an example is shown in Fig. 4: Fig. 4(b) shows the boundary of Fig. 4(a) obtained by the Canny operator, and Fig. 4(c) shows the boundaries after removing insignificant segments. As the figure shows, removing short segments whose endpoints are distant from the endpoints of long segments does not affect the overall shape of the horse.

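The two heuristics can be sketched as follows, assuming the edge map has already been traced into ordered point lists (one array of (row, column) points per segment); the thresholds T1 and T2 are illustrative values.

    import numpy as np

    def filter_short_segments(segments, T1=50, T2=20):
        """Noise filtering of boundary segments using the heuristics of Sec. 4.1.

        `segments` is assumed to be a list of (n_i, 2) arrays of ordered edge
        points (e.g. traced from a Canny edge map); T1 splits long from short
        segments and T2 is the assumed endpoint-distance threshold.
        """
        long_segs = [s for s in segments if len(s) >= T1]
        short_segs = [s for s in segments if len(s) < T1]
        # endpoints of all long segments, stacked for distance queries
        long_ends = np.vstack([s[[0, -1]] for s in long_segs]) if long_segs else np.empty((0, 2))
        kept = list(long_segs)
        for s in short_segs:
            if len(long_ends) == 0:
                continue                       # no long segment to be close to
            d = np.linalg.norm(long_ends[None, :, :] - s[[0, -1]][:, None, :], axis=2)
            if d.min() <= T2:                  # close to a long segment: keep it
                kept.append(s)
        return kept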
Figure 5: An example of skeletonization from boundaries

4.2. SSM computation of gray-scale images

To compute the SSM, the boundary image is first inverted and the Euclidean distance transform is computed (shown in Fig. 5(a)). We then compute the SSM and perform non-maximum suppression as described in Section 2 (Fig. 5(b)). A small problem remains for gray-scale images: as Fig. 5(c) shows, there are some high SSM values caused by the boundaries themselves. They are easy to remove since the boundary is known in advance: around each boundary pixel we take a window of size k x k and set the SSM values within that window to 0 (k = 3 generally suffices). Fig. 5(d) shows the SSM after these boundary-induced values are removed. By applying hysteresis thresholding, we can then obtain the final skeleton (Fig. 5(e, f) show the skeleton and the skeleton overlaid on the original image). The skeleton obtained in this way is not connected since, as mentioned before, we have no position information about the object.

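Putting the pieces together for incomplete boundaries, the following sketch reuses the helpers sketched in Sections 2 and 2.3 (diffuse_vector_field, skeleton_strength_map, non_maximum_suppression); the hysteresis thresholds and the normalisation of the SSM are assumptions made for illustration.

    import numpy as np
    from scipy.ndimage import distance_transform_edt, gaussian_filter, binary_dilation, label

    def skeleton_from_edges(edges, sigma=2.5, k=3, low=0.2, high=0.5):
        """Skeleton from an incomplete edge map (Sec. 4.2).

        `edges` is a boolean Canny edge image.  Edge pixels are treated as
        background for the distance transform; SSM values inside a k x k
        window around every edge pixel are set to zero; low/high are assumed
        hysteresis thresholds on the normalised SSM.
        """
        dt = distance_transform_edt(~edges)            # distance to the nearest edge pixel
        dt_s = gaussian_filter(dt, sigma)
        gy, gx = np.gradient(dt_s)
        f = 1.0 - np.hypot(gx, gy)
        v0, u0 = np.gradient(f)
        u, v = diffuse_vector_field(u0, v0)            # Eq. (2), sketched earlier
        ssm = skeleton_strength_map(u, v)              # Eq. (3), sketched earlier
        ssm = non_maximum_suppression(ssm, u, v)       # thinning, sketched earlier
        near_edge = binary_dilation(edges, structure=np.ones((k, k)))
        ssm[near_edge] = 0.0                           # drop responses caused by the edges themselves
        ssm = ssm / (ssm.max() + 1e-12)
        # hysteresis: keep weak responses only if connected to a strong one
        weak, strong = ssm > low, ssm > high
        lab, _ = label(weak)
        return np.isin(lab, np.unique(lab[strong]))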
Figure 6: An example of skeletonization with the aid of a body map. (a) is the boundary extracted by the algorithm of [27], (b) is the boundary after noise filtering, (c) is the body map obtained by [27], (d) is the average probability computed from the body map, (e) shows the SSM, (f) is the SSM with the boundary-induced values removed, (g) shows the local maxima detected from (f), and (h) is the final skeleton.

4.3. Skeletonization with the aid of probability maps

As mentioned above, the boundary alone is not sufficient to determine the region of the object and therefore not sufficient to compute a connected skeleton for gray-scale images. With some high-level information that roughly indicates the object location, however, the skeleton can be computed more effectively. In [27], Tu proposes an algorithm called auto-context that learns an integrated low-level and context model and produces good boundaries together with a probability map (also called a body map) that indicates the probability of each pixel belonging to the object (Fig. 6(a, c) show the boundary and the body map obtained by the algorithm of [27], respectively). We illustrate that our method has the potential to obtain a good skeleton with the information from a body map, a form of high-level information. We again filter the noise using the heuristics of Section 4.1 and compute the SSM from the boundary as described in Section 2.2 (shown in Fig. 6(e)).

Instead of simply thresholding the SSM, we can obtain a better skeleton because the body map roughly identifies the object. For each pixel p, we use the average probability of its neighbors to quantify the probability that p lies inside the object: we construct a circle with radius r around p and compute the average probability over this circular neighborhood (shown in Fig. 6(d), r = 6). By multiplying the SSM with this blurred body map, we suppress the SSM values at pixels outside the object (shown in Fig. 6(f)). With the aid of the body map we can tell whether a pixel is inside or outside the object, and so we can select the local maxima and connect them in the same way as in the binary case. Fig. 6(h) shows the result of connecting all the local maxima of Fig. 6(g). The extracted skeleton is connected, complete, and clean, i.e., without spurious branches.

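The body-map weighting can be sketched as a convolution with a disc-shaped averaging kernel followed by a pointwise product, as below; the kernel construction and the function name are assumptions.

    import numpy as np
    from scipy.ndimage import convolve

    def weight_ssm_by_body_map(ssm, body_map, radius=6):
        """Suppress SSM responses outside the object using a body map (Sec. 4.3).

        `body_map` holds per-pixel object probabilities (e.g. from auto-context
        [27]).  Each pixel is replaced by the average probability inside a disc
        of the given radius, and the SSM is multiplied by this blurred map.
        """
        # disc-shaped averaging kernel of the chosen radius
        yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        disc = (xx**2 + yy**2 <= radius**2).astype(float)
        disc /= disc.sum()
        blurred = convolve(body_map.astype(float), disc, mode='nearest')
        return ssm * blurred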

5. Experiments

In this section, we evaluate the performance of the SSM on both binary and gray-scale images. The results show that complete and robust skeletons of binary images can be generated from the SSM, and that connected skeletons of gray-scale images can be obtained from the SSM by integrating the body map from [27].

5.1. Results on binary images

Figure 7: (a) is the binary mask of a camel, (b) is the thinned SSM of (a), (c) is the skeleton extracted from (b), and (d) is the skeleton extracted directly from the distance transform.

In Fig. 1, we have already shown that our algorithm obtains a more complete skeleton from the SSM than from the distance transform directly. Another example supporting the superiority of the proposed algorithm is provided in Fig. 7.

Figure 8: The algorithm's robustness to boundary noise

Figure 9: Comparative study of the algorithm on several camels with large intra-class variations

The skeleton obtained directly from the distance transform misses some important branches, such as the two branches of the camel's humps, which makes the topology of the camel incomplete. By contrast, our algorithm generates the two hump branches as well as the branch corresponding to the camel's ear.

Fig. 8 provides two examples showing the insensitivity of the proposed method to boundary noise. Both stars have substantial noise on their contours; however, the two skeletons obtained are very similar and preserve the topological and geometric structure of the stars with no spurious branches generated by the noise. This demonstrates the robustness of our algorithm (for both stars the parameter δ is set to 2.5).

The stability of our algorithm in the presence of large intra-class shape variations is demonstrated in Fig. 9. Although these camels have different poses and differ significantly from each other, the obtained skeletons have the same global structure.

Figure 10: Comparison of the proposed algorithm with the algorithm in [34]

In Fig. 10, we compare the proposed algorithm with the method described in [34], a state-of-the-art algorithm. The first row shows the skeletons of four different objects (a wrench, a fish, a hat, and a plane) obtained by the proposed algorithm, and the second row shows the results from [34]. Our algorithm generates comparable skeletons for the wrench, the fish, and the hat. For the plane, we argue that our result is even better than that of [34], because the two branches of the skeleton of [34] circled by the red line are spurious, and our algorithm avoids them.

5.2. Results on gray-scale images

5.2.1. Results from the SSM of Canny edges

Two examples of skeletons computed from the boundaries of gray-scale images are provided in Fig. 11. Column (a) shows the edge images obtained by the Canny operator. Although there are many gaps, e.g., on the back of the dog and on the feet of the horse, a good SSM can still be computed, as shown in column (b). This illustrates that, even without a closed contour, a good skeleton can be computed as long as the main contour segments of the obtained boundary are retained. Column (c) shows the skeleton (in red) obtained by thresholding the SSM. Compared to the skeletons obtained by [38] (column (d)), two advantages are achieved.

Figure 11: Skeleton results based on the SSM computed from the Canny edges

One is that the skeletons obtained by our method are more symmetric with respect to the objects' boundaries than those of [38], e.g., the torsos of the dog and the horse. The medialness of the proposed algorithm is attributed to its use of the boundaries, whereas the algorithm in [38] fails to locate the skeleton points accurately when the object is wide. The other advantage is that our results are more complete than those of [38]: for example, the skeletons of the dog and the horse obtained by [38] lack branches corresponding to the four limbs, while our algorithm generates these branches and preserves the main branches.

5.2.2. Results on cluttered images integrating body maps

Fig. 12 illustrates that we can obtain good skeletons from cluttered images (the horse images used for skeletonization are from the image dataset used in [10]) when integrating body maps. Column (a) shows the edge images: there is a lot of noise and clutter, as well as many gaps along the boundary. With the aid of the body maps shown in column (b), the connected skeletons in column (c) can be computed. The bottom row of Fig. 12 shows one of the worst results. We include it to illustrate the reliance of the skeleton on the quality of the body map.

Figure 12: Some skeleton results on gray-scale images with the aid of body maps. Column (a) shows the boundaries, column (b) shows the body maps, and column (c) shows the final skeletons.


Figure 13: Mesh plots for a hand. On the left is the mesh of the distance transform, in the middle the mesh of the SSM, and on the right the mesh of the SSM thinned by non-maximum suppression

When the quality of the body map is poor, it is difficult to identify whether pixels are inside or outside the object, and the result is not acceptable.

5.3. Discussion

Properties of the SSM: We have concluded that the SSM value decays at pixels away from the center of the object and that the relative value between skeleton pixels and non-skeleton pixels is larger for the SSM than for the distance transform. This is demonstrated in Fig. 13. The distance transform of pixels inside the hand is greater than zero everywhere, while after diffusion all pixels except those near the skeleton points have SSM value zero (middle of Fig. 13). After non-maximum suppression, the SSM values of some pixels are suppressed further (right of Fig. 13). Another observation is that the SSM rearranges the increasing trend of the distance transform: in the lower-left part of the hand, the distance transform increases monotonically from the boundary to the center.

Figure 14: The skeleton results for the star obtained by varying the parameter δ

That is why no local maxima can be detected at the lower-left part from the distance transform. For the SSM, this monotonically increasing property no longer holds, and local maxima can now be detected in the lower-left part. The computation of the SSM also provides an effective way to filter boundary noise: in Fig. 14, by varying δ from 1 to 1.5 to 2.5, increasingly robust skeletons are generated for the left star of Fig. 8.

Skeletonization of gray-scale images: The experiments have demonstrated that the SSM can be readily extended to gray-scale images and that good skeletons can be computed even when the boundaries have many gaps, clutter and noise (Fig. 6, Fig. 11 and Fig. 12). By integrating high-level information, pixels can roughly be identified as inside or outside the objects and a connected skeleton can be obtained. A limitation of the SSM on gray-scale images should be noted: the results rely heavily on the quality of the boundaries and of the body map. For objects with strongly varying texture, the quality of the boundaries will be poor and so will the SSM; if the quality of the body map is poor, it will be hard to distinguish pixels inside and outside the object, and the skeleton will be poor as well. However, the results illustrate that utilizing high-level information is a promising direction for skeletonization of gray-scale images, and we expect that the results will improve as high-level information and boundary detectors develop. This is illustrated by Fig. 15. The two giraffes (from the ETHZ dataset [39]) both have textures that make Canny fail to extract clear boundary edges, but the Berkeley Edge Detector [25] obtains good boundaries, and the skeletons extracted from these boundaries are very good.

Figure 15: The skeletons of two giraffes from the boundaries obtained by the Berkeley Edge Detector [25]. The left figures are the boundaries obtained by the Berkeley Edge Detector, and the right figures are the results of the proposed algorithm


This also suggests a new direction of using learned shape and appearance cues for detecting the skeletons of objects in cluttered images.

6. Conclusion

In this paper, we have developed a novel approach to skeletonization based on the Skeleton Strength Map (SSM). The SSM is calculated from the Euclidean distance transform and is very useful for skeletonization of both binary and gray-scale images. We have also taken a step towards utilizing high-level information in the skeletonization of gray-scale images. The results show the ability of the SSM to obtain stable, complete skeletons for binary images, as well as connected skeletons for gray-scale images by integrating the body map. Our future work is to further investigate the use of high-level information for the skeletonization of gray-scale images and to extend the SSM to 3D skeletonization.

Acknowledgements

We would like to thank Zhuowen Tu for providing the probability maps for the horse images. This work was supported by a grant from the Ph.D. Programs Foundation of the Ministry of Education of China (No. 20070487028).

References

[1] N. Adluru, L.J. Latecki, R. Lakämper, T. Young, X. Bai and A. Gross, Contour grouping based on local symmetry, ICCV, 2007.
[2] N. Ahuja and J. Chuang, Shape representation using a generalized potential field model, IEEE Trans. PAMI 19(2) (1997) 169–176.
[3] C. Aslan and S. Tari, An axis based representation for recognition, ICCV, 2005, pp. 1339–1346.
[4] S. Tari, J. Shah, and H. Pien, Extraction of shape skeletons from grayscale images, CVIU 66 (1997) 133–146.


[5] X. Bai, L.J. Latecki, and W. Liu, Skeleton pruning by contour partitioning with discrete curve evolution, IEEE Trans. PAMI 29(3) (2007) 449–462.
[6] G. Bertrand and Z. Aktouf, A three-dimensional thinning algorithm using subfields, Proc. SPIE Vision Geometry, vol. 2356, 1995, pp. 113–124.
[7] H. Blum, Biological shape and visual science (Part I), J. Theoretical Biology 38 (1967) 205–287.
[8] G. Borgefors, Distance transformations in digital images, Computer Vision, Graphics, and Image Processing 34 (1986) 344–371.
[9] J.W. Brandt and V.R. Alazi, Continuous skeleton computation by Voronoi diagram, CVGIP: Image Understanding 55(3) (1992) 329–338.
[10] E. Borenstein, E. Sharon and S. Ullman, Combining top-down and bottom-up segmentation, Proc. IEEE Workshop on Perceptual Organization in Computer Vision (POCV), 2004.
[11] J. Canny, A computational approach to edge detection, IEEE Trans. PAMI 8(6) (1986) 679–698.
[12] W.P. Choi, K.M. Lam, and W.C. Siu, Extraction of the Euclidean skeleton based on a connectivity criterion, Pattern Recognition 36(3) (2003) 721–729.
[13] D.H. Chung and G. Sapiro, Segmentation-free skeletonization of gray-scale images via PDEs, ICIP, 2000, pp. 927–930.
[14] J. Chuang, C. Tsai, and M.-C. Ko, Skeletonization of three-dimensional object using generalized potential field, IEEE Trans. PAMI 22(11) (2000) 1241–1251.
[15] N.D. Cornea, D. Silver, and P. Min, Curve-skeleton properties, applications, and algorithms, IEEE Trans. Visualization and Computer Graphics 13(3) (2007) 520–548.


[16] N.D. Cornea, M.F. Demirci, D. Silver, A. Shokoufandeh, S.J. Dickinson, and P.B. Kantor, 3D object retrieval using many-to-many matching of curve-skeletons, Proc. Shape Modeling International, 2005.
[17] Y. Ge and J.M. Fitzpatrick, On the generation of skeletons from discrete Euclidean distance maps, IEEE Trans. PAMI 18(11) (1996) 1055–1066.
[18] J.H. Jang and K.S. Hong, A pseudo-distance map for the segmentation-free skeletonization of gray-scale images, ICCV, 2001, pp. 18–23.
[19] T.Y. Kong and A. Rosenfeld, Digital topology: introduction and survey, Computer Vision, Graphics, and Image Processing 48(3) (1989) 357–393.
[20] T.Y. Kong, A.W. Roscoe, and A. Rosenfeld, Concepts of digital topology, Topology and its Applications 46(3) (1992) 219–262.
[21] L. Lam, S.W. Lee, and C.Y. Suen, Thinning methodologies - a comprehensive survey, IEEE Trans. PAMI 14(9) (1992) 869–885.
[22] L.J. Latecki, Q. Li, X. Bai, and W. Liu, Skeletonization using SSM of the distance transform, ICIP, 2007, pp. 349–352.
[23] F. Leymarie and M. Levine, Simulating the grassfire transform using an active contour model, IEEE Trans. PAMI 14(1) (1992) 56–75.
[24] Q. Li, X. Bai, and W. Liu, Skeletonization of gray-scale images from incomplete boundaries, ICIP, 2008, to appear.
[25] D. Martin, C. Fowlkes, and J. Malik, Learning to detect natural image boundaries using local brightness, color and texture cues, IEEE Trans. PAMI 26(5) (2004) 530–549.
[26] P. Dollar, Z. Tu, and S. Belongie, Supervised learning of edges and object boundaries, CVPR, 2006.
[27] Z. Tu, Auto-context and its application for high-level vision, CVPR, 2008.
[28] R. Ogniewicz, A multiscale MAT from Voronoi diagrams: the skeleton-space and its application to shape description and decomposition, Aspects of Visual Form Processing, 1994.

[29] R. Ogniewicz and O. Kübler, Hierarchic Voronoi skeletons, Pattern Recognition 28(3) (1995) 343–359.
[30] S.M. Pizer, D. Eberly, D.S. Fritsch, and B.S. Morse, Zoom-invariant vision of figural shape: The mathematics of cores, CVIU 69 (1998) 55–71.
[31] B.S. Morse, S.M. Pizer, D.T. Puff, and C. Gu, Zoom-invariant vision of figural shape: Effects on cores of image disturbances, CVIU 69 (1998) 72–86.
[32] K. Siddiqi, S. Bouix, A. Tannenbaum, and S.W. Zucker, The Hamilton-Jacobi skeleton, ICCV, 1999, pp. 828–864.
[33] K. Siddiqi, A. Shokoufandeh, S.J. Dickinson, and S.W. Zucker, Shock graphs and shape matching, IJCV 35(1) (1999) 13–32.
[34] A. Torsello and E.R. Hancock, Correcting curvature-density effects in the Hamilton-Jacobi skeleton, IEEE Trans. Image Processing 15(4) (2006) 877–891.
[35] H. Qiu and E.R. Hancock, Grey scale image skeletonisation from noise-damped vector potential, ICPR, 2004, pp. 839–842.
[36] H. Tek, P.A. Stoll, and B.B. Kimia, Shocks from images: Propagation of orientation elements, CVPR, 1997, pp. 839–845.
[37] C. Xu and J.L. Prince, Snakes, shapes, and gradient vector flow, IEEE Trans. Image Processing 7(3) (1998) 359–369.
[38] Z. Yu and C. Bajaj, A segmentation-free approach for skeletonization of gray-scale images via anisotropic vector diffusion, CVPR, 2004, pp. 18–23.
[39] V. Ferrari, T. Tuytelaars, and L. Van Gool, Object detection by contour segment networks, ECCV, 2006, pp. 14–28.

