
Automatic Intensity-Pair Distribution for Image Contrast Enhancement

Nai-Ching Wang and Shen-Chuan Tai
Department of Electrical Engineering, National Cheng Kung University
No. 1, University Road, Tainan 701, Taiwan, R.O.C.
q3696407@mail.ncku.edu.tw, sctai@mail.ncku.edu.tw

Abstract—Intensity-pair distribution was recently proposed for image contrast enhancement. The algorithm exposes several parameters that let users control the enhancement. With a proper combination of parameters it provides satisfying contrast enhancement without noise amplification, an unnatural look, or the other drawbacks common in previous work, but no effective method of parameter selection was given. In this paper we propose an effective criterion, based on the human visual system, for deciding on a proper combination of parameters; with such a combination, the algorithm can be used to full advantage. Parameter selection can be automated by applying the proposed criterion iteratively. We present experimental results comparing contrast enhancement obtained with proper and improper combinations of parameters.

Index terms: automation, intensity-pair, contrast enhancement, histogram

I. INTRODUCTION

Contrast enhancement plays a very important role in image processing and is widely used in many applications [1]-[4]. With this technique, the visual quality of an image can be greatly increased. Various linear and nonlinear gray-level transformation functions exist for contrast enhancement. One of the most popular techniques is histogram equalization (HE), which has been regarded as the ancestor of many contrast enhancement algorithms [5]-[9]. HE uses the intensity distribution of the pixels of an image to compute a transformation function and redistributes intensities according to that function. Despite its low computational complexity, this simple algorithm has several drawbacks. Because it takes only global information into account and has no parameter to control the strength of enhancement, it may over-enhance or reduce contrast in some parts of the image, making the processed image look unnatural.

To eliminate some drawbacks of HE, [8] and [10] proposed bi-histogram equalization (BHE). In this method the histogram of the image is split into two parts at the intensity mean, and HE is applied to each part independently. This reduces over-enhancement, but the unnatural look of the processed image remains a problem.


Another method, discussed in [9] and [11], is adaptive histogram equalization (AHE). This approach takes local information into account by using overlapped blocks. It achieves better contrast enhancement than HE, but at a much higher computational cost.

In this paper we aim to make the best use of the method proposed in [12] and present an algorithm for automatic parameter selection. Section 2 briefly reviews intensity-pair distribution for image contrast enhancement. Section 3 introduces an algorithm for automatic parameter selection of intensity-pair distribution, and experimental results are shown in Section 4. Conclusions are drawn in Section 5.

II. EXISTING APPROACH

This section gives a brief overview of image contrast enhancement based on intensity-pair distribution. The following steps, described in [12], construct a mapping function for contrast enhancement. Let i(x,y) be the intensity of the pixel at (x,y).

Step1: Scan the pixels of the given image and compute the differences between i(x,y) and its 4 neighbors i(x-1,y), i(x-1,y-1), i(x,y-1), and i(x+1,y-1):

d1 = |i(x,y) - i(x-1,y)|,
d2 = |i(x,y) - i(x-1,y-1)|,
d3 = |i(x,y) - i(x,y-1)|,
d4 = |i(x,y) - i(x+1,y-1)|.

Step2: Compute expansion and anti-expansion forces. The accumulated expansion force Fe and the accumulated anti-expansion force Fa are both vectors with 256 elements, all initialized to 0. Each local difference di computed in Step1 defines a corresponding vector vi of length 256: the elements of vi indexed between the two intensity values that produce di are set to 1, and the others to 0. For example, the elements between v1[i(x,y)] and v1[i(x-1,y)] are 1 and the others are 0.

Fe[k] = Fe[k] + vi[k]   if di >= th,   i = 1, 2, 3, 4,   k = 0, ..., 255

Fig. 1. Original image of Baboon.

Fa[k] = Fa[k] + vi[k]   if di < th,   i = 1, 2, 3, 4,   k = 0, ..., 255

where th is the parameter that distinguishes edges from flat regions.

Fig. 2. Original image of Peppers.

Step3: Compute the net expansion force Fn:

Fn[k] = Fe[k] - g x Fa[k],   k = 0, ..., 255

where g (typically g = 0.1) is a parameter controlling the magnitude of the anti-expansion forces.

Step4: To reduce large net expansion forces, magnitude mapping is performed:

Fn'[k] = M(Fn[k])

where M(x) = x^(1/M0), and M0 is a parameter controlling the magnitude mapping.

Step5: Compute the cumulative sum Fc of Fn'[k] and normalize it to get the expansion function f.

Step6: The final mapping function m is the weighted average of the expansion function f and the original mapping function (the identity function):

m[k] = (1 - a) x k + a x f[k]

where a is a parameter controlling the strength of contrast enhancement.
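For concreteness, the six steps above can be transcribed into a short NumPy sketch. This is a reading of the description rather than the implementation of [12]: the net-force formula is reconstructed as Fe - g x Fa, the normalization of f to [0, 255] and the clamping of negative net forces are assumptions of ours, and the default parameter values (including th) are merely illustrative.

import numpy as np

def intensity_pair_mapping(img, th=20, g=0.1, m0=1.4, a=0.8):
    """Build the intensity mapping of Steps 1-6 (parameter defaults are illustrative)."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    Fe = np.zeros(256)  # accumulated expansion force
    Fa = np.zeros(256)  # accumulated anti-expansion force

    # Steps 1-2: compare each pixel with its 4 neighbors i(x-1,y), i(x-1,y-1),
    # i(x,y-1), i(x+1,y-1) and accumulate a unit force over the intensity
    # interval spanned by each intensity pair.
    neighbours = [(0, -1), (-1, -1), (-1, 0), (-1, 1)]  # (dy, dx) offsets for img[y, x]
    for y in range(1, h):
        for x in range(1, w - 1):
            p = img[y, x]
            for dy, dx in neighbours:
                q = img[y + dy, x + dx]
                lo, hi = (p, q) if p <= q else (q, p)
                if hi - lo >= th:
                    Fe[lo:hi + 1] += 1  # elements of v_i between the two intensities are 1
                else:
                    Fa[lo:hi + 1] += 1

    # Step 3: net expansion force, reconstructed as Fe - g*Fa and clamped at zero
    # as an extra safeguard of our own.
    Fn = np.maximum(Fe - g * Fa, 0.0)

    # Step 4: magnitude mapping M(x) = x^(1/M0) to tame very large forces.
    Fn = Fn ** (1.0 / m0)

    # Step 5: cumulative sum, normalized here to [0, 255], gives the expansion function f.
    Fc = np.cumsum(Fn)
    f = 255.0 * Fc / Fc[-1] if Fc[-1] > 0 else np.arange(256, dtype=float)

    # Step 6: blend the expansion function with the identity mapping.
    k = np.arange(256, dtype=float)
    m = (1.0 - a) * k + a * f
    return np.clip(np.round(m), 0, 255).astype(np.uint8)

# Usage: remap an 8-bit grayscale image through the lookup table
# enhanced = intensity_pair_mapping(gray)[gray]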

III. PROPOSED ALGORITHM

From Section 2 it is clear that the algorithm proposed in [12] provides several parameters. For the same overall strength of contrast enhancement, different combinations of parameters distribute that strength differently. Improper combinations produce large variations in the distribution: some portions of the processed image are enhanced a lot while others are enhanced little. Moreover, if the strength of contrast enhancement concentrates on certain parts of the histogram, other parts may get compressed and some details are lost. We also find that a proper combination of parameters yields excellent contrast enhancement and more detail. As a result, we introduce a criterion that selects a combination of parameters making the strength of contrast enhancement as uniformly distributed as possible.

First we compute a per-pixel contrast measure that corresponds to the human visual system (HVS). Since most everyday vision occurs at suprathreshold levels, we focus on human contrast discrimination sensitivity for suprathreshold vision. From [13]-[14] we draw the simple conclusion that the local contrast of an image is proportional to its local gradient. In other words,

C ∝ ∂I/∂x

where I is the intensity of the image and C is the contrast. From Weber's law we also know that



Fig. 3. (a) Baboon enhanced with g=0.1, M0=0.8, a=0.75, S=0.647. (b) Baboon enhanced with the proposed combination of parameters: g=0.1, M0=1.4, a=0.95, S=0.483. (c) Distribution of Cs(x,y) for (a). (d) Distribution of Cs(x,y) for (b). (Both (c) and (d) are shifted by 50 for easier observation.)

C = ΔL / L

where ΔL is the luminance difference between the current pixel and the background, and L is the luminance of the background. Using the above, we can define the local contrast as

C(x,y) = (1/2) (∂I(x,y)/∂x + ∂I(x,y)/∂y) / I(x,y)      (1)

where C(x,y) is the contrast at (x,y) and I(x,y) is the intensity at (x,y).
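A direct NumPy transcription of (1) might look as follows; the choice of a forward-difference gradient, the absolute values on the derivatives, and the small eps that guards against division by zero at I(x,y) = 0 are assumptions of ours, not specified in the paper.

import numpy as np

def local_contrast(img, eps=1e-6):
    """Local contrast C(x,y) of (1): half the sum of the gradient magnitudes over the intensity."""
    I = np.asarray(img, dtype=np.float64)
    # Forward differences as a simple discrete gradient; the last row/column is padded
    # by replication so the output has the same shape as the input.
    dIdx = np.abs(np.diff(I, axis=1, append=I[:, -1:]))
    dIdy = np.abs(np.diff(I, axis=0, append=I[-1:, :]))
    return 0.5 * (dIdx + dIdy) / (I + eps)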



Fig. 4. (a) Peppers enhanced with g=0.1, M0=0.4, a=0.64, S=0.618. (b) Peppers enhanced with the proposed combination of parameters: g=0.1, M0=2.0, a=0.8, S=0.176. (c) Distribution of Cs(x,y) for (a). (d) Distribution of Cs(x,y) for (b). (Both (c) and (d) are shifted by 50 for easier observation.)

With this definition of contrast, we can now define the strength of contrast enhancement for a single pixel. Since the contrast discrimination threshold follows Weber's law, we define the strength of contrast enhancement as

Cs(x,y) = ΔC(x,y) / C(x,y) = (C'(x,y) - C(x,y)) / C(x,y)      (2)

where C(x,y) is the contrast of the original image, C'(x,y) is the contrast of the processed image, and Cs(x,y) is the strength of contrast enhancement. From this definition we know that if Cs(x,y) is constant, the strength of contrast enhancement is the same at every pixel for human visual perception. On the other hand, if Cs(x,y) varies a lot,


over-enhancement and contrast reduction may occur. Under certain circumstances C(x,y) is zero, which would make Cs(x,y) infinite. The proposed algorithm ignores such pixels, because C(x,y) is zero only in flat regions, where contrast should not be enhanced. To measure the variation of the strengths of contrast enhancement over the pixels, we adopt a widely used statistic, the standard deviation S: the smaller the standard deviation of Cs(x,y), the more uniform its distribution. The criterion for parameter selection is therefore to find the combination of parameters that satisfies the requested average increase in contrast and has the smallest standard deviation. In most cases, a larger strength of contrast enhancement comes with a larger variation in its distribution, so we must fix the average strength of contrast enhancement before we can compare the standard deviations produced by different combinations of parameters.
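Assuming the local_contrast helper sketched earlier, the selection statistic described here reduces to a few lines; the mean is returned alongside S because the average strength is needed to compare parameter combinations fairly.

import numpy as np

def enhancement_strength_stats(original, enhanced):
    """Mean and standard deviation S of Cs(x,y) in (2), ignoring flat pixels where C(x,y) = 0."""
    C = local_contrast(original)    # C(x,y) of the original image, eq. (1)
    Cp = local_contrast(enhanced)   # C'(x,y) of the processed image
    mask = C > 0                    # flat regions are excluded, as described above
    Cs = (Cp[mask] - C[mask]) / C[mask]
    return Cs.mean(), Cs.std()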


Summary of the automatic algorithm

Input: an image, the requested average strength of contrast enhancement, and the sets of possible parameters.
Step1: Compute C(x,y) by (1).
Step2: Enhance the contrast of the input image by intensity-pair distribution with an available combination of parameters.
Step3: Compute C'(x,y) by (1) and Cs(x,y) by (2).
Step4: Compute the standard deviation of Cs(x,y).
Repeat Step2 to Step4 until all combinations of possible parameters have been tested, and record the combination that minimizes the standard deviation of Cs(x,y).
Output: a combination of parameters, or failure, meaning that no combination achieves the requested average strength of contrast enhancement.
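The summary amounts to an exhaustive search over a user-supplied parameter grid. Below is a sketch that assumes the intensity_pair_mapping and enhancement_strength_stats helpers from the earlier sketches, a purely illustrative grid (the paper does not prescribe one; the figure captions only report a few sample settings), and an 8-bit grayscale input; for the color images used in the experiments, a luminance channel or per-channel processing would be needed.

import itertools
import numpy as np

def select_parameters(img, target_strength, tol=0.025,
                      g_set=(0.1,),
                      m0_set=(0.4, 0.8, 1.4, 2.0),
                      a_set=tuple(np.arange(0.05, 1.0, 0.05))):
    """Exhaustive search for the (g, M0, a) whose mean Cs matches the requested
    average strength (within +/- tol) with the smallest standard deviation.
    Returns None on failure."""
    best, best_std = None, np.inf
    for g, m0, a in itertools.product(g_set, m0_set, a_set):
        lut = intensity_pair_mapping(img, g=g, m0=m0, a=a)          # Steps 1-6
        mean_cs, std_cs = enhancement_strength_stats(img, lut[img])  # eq. (1)-(2)
        if abs(mean_cs - target_strength) <= tol and std_cs < best_std:
            best, best_std = (g, m0, a), std_cs
    return best

# Example: params = select_parameters(gray, target_strength=0.975)

If th is held fixed, the force histograms Fe and Fa depend only on the image, so in practice they can be accumulated once and only Steps 3-6 repeated for each (g, M0, a), which keeps the exhaustive search inexpensive.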

IV. EXPERIMENTAL RESULTS

The first test image is shown in Fig. 1. For this test image we set the average strength of contrast enhancement to 97.5±2.5%. As can be seen, Fig. 3(a) suffers from over-enhancement. It has a larger standard deviation than Fig. 3(b) and therefore gets over-enhanced and loses some details. As expected, a larger variation in the distribution of the strength of contrast enhancement causes over-enhancement in some regions and contrast reduction in others. The result obtained with the proposed combination of parameters is shown in Fig. 3(b); it looks very natural and provides the same average strength of contrast enhancement without over-enhancement or loss of detail. Comparing Fig. 3(c) with (d), we also notice that Fig. 3(c) has greater intensity values near edges, which makes other parts of the histogram get compressed. This is consistent with the discussion in the previous sections.

The second test image is shown in Fig. 2. For this test image we set the average strength of contrast enhancement to 50±2.5%. Over-enhancement, contrast reduction, and loss of detail occur in Fig. 4(a); it looks very unnatural and its colors differ from the original image. In contrast, Fig. 4(b) contains neither over-enhancement nor loss of detail and shows satisfying contrast enhancement. Comparing Fig. 4(c) and (d), we can easily see that the strengths of contrast enhancement of the pixels in Fig. 4(d) are distributed much more uniformly than those in Fig. 4(c).

The two tests, performed on color images, show that a random selection of parameters can produce contrast enhancement with many of the drawbacks seen in previous work, whereas a suitable selection results in pleasant contrast enhancement. These observations are also consistent with the values computed by the proposed algorithm.

V. CONCLUSIONS

In this paper we have introduced an effective method for finding an appropriate combination of the parameters of the intensity-pair distribution algorithm. With the recommended combination of parameters, very satisfying contrast enhancement is obtained. The proposed algorithm also automates image contrast enhancement while keeping the strength of contrast enhancement controllable. The combinations of parameters proposed in Section 4 were found automatically. Experimental results on color images show that our approach is quite effective.

REFERENCES

[1] S. C. Pei, Y. C. Zeng, and C. H. Chang, "Virtual restoration of ancient Chinese paintings using color contrast enhancement and lacuna texture synthesis," IEEE Trans. Image Processing, vol. 13, pp. 416-429, 2004.
[2] W. A. Wahab, S. H. Chin, and E. C. Tan, "Novel approach to automated fingerprint recognition," IEE Proc. Vision, Image and Signal Processing, vol. 145, 1998, pp. 160-166.
[3] A. Torre, A. M. Peinado, J. C. Segura, J. L. Perez-Cordoba, M. C. Benitez, and A. J. Rubio, "Histogram equalization of speech representation for robust speech recognition," IEEE Trans. Speech Audio Processing, vol. 13, no. 3, pp. 355-366, May 2005.
[4] S. M. Pizer, "The medical image display and analysis group at the University of North Carolina: reminiscences and philosophy," IEEE Trans. Medical Imaging, vol. 22, no. 1, pp. 2-10, Jan. 2003.
[5] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed. Reading, MA: Addison-Wesley, 1992.
[6] A. K. Jain, Fundamentals of Digital Image Processing. Englewood Cliffs, NJ: Prentice-Hall, 1989.
[7] J. Zimmerman, S. Pizer, E. Staab, E. Perry, W. McCartney, and B. Brenton, "Evaluation of the effectiveness of adaptive histogram equalization for contrast enhancement," IEEE Trans. Medical Imaging, vol. 7, no. 4, pp. 304-312, Dec. 1988.
[8] Y. T. Kim, "Contrast enhancement using brightness preserving bi-histogram equalization," IEEE Trans. Consumer Electronics, vol. 43, no. 1, pp. 1-8, Feb. 1997.
[9] Y. K. Kim, J. K. Paik, and B. S. Kang, "Contrast enhancement system using spatially adaptive histogram equalization with temporal filtering," IEEE Trans. Consumer Electronics, vol. 44, no. 1, pp. 82-86, Feb. 1998.
[10] K. Wongsritong, K. Kittayaruasiriwat, F. Cheevasuvit, K. Dejhan, and A. Somboonkaev, "Contrast enhancement using multi-peak histogram equalization with brightness preserving," in Proc. IEEE Asia-Pacific Conference on Circuits and Systems, Chiangmai, Thailand, Nov. 1998, pp. 455-458.
[11] J. Y. Kim, L. S. Kim, and S. H. Hwang, "An advanced contrast enhancement using partially overlapped sub-block histogram equalization," IEEE Trans. Circuits and Systems for Video Technology, vol. 11, no. 4, pp. 475-484, Apr. 2001.
[12] T.-C. Jen, B. Hsieh, and S.-J. Wang, "Image contrast enhancement based on intensity-pair distribution," in Proc. IEEE International Conference on Image Processing (ICIP 2005), Genova, Italy, Sept. 2005, pp. 913-916.
[13] F. A. A. Kingdom and P. Whittle, "Contrast discrimination at high contrasts reveals the influence of local light adaptation on contrast processing," Vision Research, vol. 36, no. 6, pp. 817-829, 1996.
[14] E. Peli, "Contrast in complex images," Journal of the Optical Society of America A, vol. 7, no. 10, pp. 2032-2040, 1990.
