
IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 18, NO. 8, AUGUST 2009

Improved Dot Diffusion by Diffused Matrix and Class Matrix Co-Optimization

Jing-Ming Guo, Member, IEEE, and Yun-Fu Liu

Abstract—Dot diffusion is an efficient approach that utilizes block-wise, parallel-oriented processing to generate halftones. However, its block-wise processing degrades image quality much more significantly than error diffusion does. In this work, four types of filters of various sizes are employed in co-optimization procedures with class matrices of size 8 × 8 and 16 × 16 to improve the image quality. The optimal diffused weighting and diffused area are determined through simulations. Many well-known halftoning methods, including direct binary search (DBS), error diffusion, ordered dithering, and prior dot diffusion methods, are also included for comparison. Experimental results show that the proposed dot diffusion achieves quality close to some forms of error diffusion, and superior to the well-known Jarvis and Stucki error diffusion and Mese's dot diffusion. Moreover, the inherent parallel processing advantage of dot diffusion is preserved, yielding higher execution efficiency than both DBS and error diffusion.

Index Terms—Digital halftoning, direct binary search, dot diffusion, error diffusion, ordered dithering.

I. INTRODUCTION

Digital halftoning [1], [2] is a technique for converting grayscale images into binary images. These binary images resemble the original images when viewed from a distance due to the low-pass filtering nature of the human visual system (HVS). The technique is widely used in computer printouts, printed books, newspapers, and magazines, as they are mostly constrained to the black-and-white format (with and without ink). Another major application of digital halftoning is color quantization with a restricted color palette. Halftoning methods include ordered dithering [1], dot diffusion [3], [4], error diffusion [5]-[16], and direct binary search (DBS) [17]-[23]. Among these, dot diffusion offers a good tradeoff between image quality and processing efficiency. Although some researchers have proposed an error-diffusion-based approach that possesses the same parallel processing property as dot diffusion, the resulting efficiency is still lower than that of dot diffusion. Detailed comparisons are provided at the end of Section IV.

Manuscript received October 14, 2007; revised March 24, 2009. First published June 23, 2009; current version published July 10, 2009. This work was supported by the National Science Council, R.O.C., under Contract NSC 96-2221-E-124-MY2. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Gaurav Sharma. The authors are with the Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan, R.O.C. (e-mail: [email protected]; [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TIP.2009.2021318

Ordered dithering is a parallel processing method, and is generally divided into clustered-dot and dispersed-dot halftone screens. The image quality produced by ordered dithering is inferior to that of DBS, error diffusion, and dot diffusion, since the error induced during the halftoning procedure is retained in each halftone pixel. In contrast, error diffusion avoids this, as the quantized error is compensated by neighboring pixels; error-diffused halftones therefore generally enjoy the pleasant-looking blue-noise property [24]. However, error diffusion lacks the advantage of parallel processing, and thus its processing efficiency is inferior to that of ordered dithering. DBS is currently the most powerful halftoning method for generating halftones, but its time-consuming iterative nature makes it difficult to realize in commercial printing devices. Dot diffusion reaps the benefits of parallel processing through a diffused weighting and a class matrix. Prior studies on dot diffusion focused on class matrix optimization (Knuth [3] and Mese [4]); however, the diffused weighting in these works is fixed without any optimization procedure. This study considers three issues, namely the diffused weighting, the diffused area, and the training set, in order to improve image quality. The proposed method approximates error diffusion halftoning while maintaining parallel processing capability.

The rest of this paper is organized as follows. Section II introduces prior error diffusion and dot diffusion methods. Section III describes the proposed dot diffusion method. Finally, Section IV shows the experimental results and Section V draws the conclusions.

II. OVERVIEW OF ERROR DIFFUSION AND DOT DIFFUSION

This study presents an improved dot diffusion approach with quality approximating that of traditional error diffusion. To facilitate the understanding of the differences between error diffusion and dot diffusion, an overview of error diffusion is provided in Section II-A. In Section II-B, the dot diffusion methods proposed by Knuth and Mese are briefly introduced, and their major weaknesses and differences are highlighted.

A. Traditional Error Diffusion

Figs. 1(a) and 2 show the scan path and processing flow chart of error diffusion, respectively. Here, we numerically define 255 as a white pixel and 0 as a black pixel. The variable x(i,j) denotes the current input pixel value, e(i,j) denotes the diffused error accumulated from the neighboring processed pixels, b(i,j) denotes the binary output, w(m,n) denotes the error kernel, and v(i,j) denotes the

1057-7149/$25.00 © 2009 IEEE Authorized licensed use limited to: National Taiwan Univ of Science and Technology. Downloaded on August 31, 2009 at 03:14 from IEEE Xplore. Restrictions apply.


Fig. 1. Scan order of error diffusion and diffused direction of dot diffusion. (a) Raster scan order of standard error diffusion. (b) Diffused direction of Knuth's 8 × 8 class matrix.

Fig. 2. Error diffusion flow chart.

Fig. 3. LMS-trained human visual filter (7 × 7).

modified gray output, while e(i,j) denotes the difference between the modified gray output v(i,j) and the binary output b(i,j). The relationships of these variables are organized as follows:

    v(i,j) = x(i,j) + Σ_{(m,n)∈R} w(m,n) · e(i+m, j+n),  where e(i,j) = v(i,j) − b(i,j),   (1)

    b(i,j) = 0 if v(i,j) < 128;  b(i,j) = 255 if v(i,j) ≥ 128,   (2)

where R denotes the support region of the error kernel, restricted to the already processed neighbors.
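As a concrete illustration of (1) and (2), the following is a minimal sketch of serial error diffusion using the Floyd-Steinberg kernel of [5]; the function name and plain-Python style are ours, not the paper's:

```python
# Serial error diffusion following (1) and (2).
# Floyd-Steinberg kernel: the quantized error is split 7/16, 3/16,
# 5/16, 1/16 among the right, lower-left, lower, lower-right neighbors.
KERNEL = [(0, 1, 7 / 16), (1, -1, 3 / 16), (1, 0, 5 / 16), (1, 1, 1 / 16)]

def error_diffuse(gray):
    """gray: 2-D list with values in [0, 255]; returns a 0/255 halftone."""
    h, w = len(gray), len(gray[0])
    v = [row[:] for row in gray]           # modified gray output v(i, j)
    out = [[0] * w for _ in range(h)]
    for i in range(h):                     # raster scan order, Fig. 1(a)
        for j in range(w):
            out[i][j] = 255 if v[i][j] >= 128 else 0     # threshold (2)
            e = v[i][j] - out[i][j]                      # quantized error
            for di, dj, wt in KERNEL:                    # diffuse per (1)
                if 0 <= i + di < h and 0 <= j + dj < w:
                    v[i + di][j + dj] += e * wt
    return out
```

A constant mid-gray input then halftones to an alternating dot pattern whose average tone stays close to the input, since the quantized error is compensated by the not-yet-processed neighbors.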

B. Traditional Dot Diffusion

The main differences between dot diffusion and error diffusion are the processing order, the diffused weighting, and the diffused direction. Suppose an original image is divided into nonoverlapping blocks of the same size as the class matrix, as shown in Fig. 1(b) (here 8 × 8). The class matrix is used to determine the processing order within a block. The flow chart of dot diffusion is the same as that in Fig. 2, while (1) is modified as follows:

    v(i,j) = x(i,j) + Σ_{(m,n)∈R} w(m,n) · e(i+m, j+n) / s(i+m, j+n),   (3)

where the sum runs over the already processed neighbors, and the variable w(m,n) denotes the diffused weighting (suppose the support region R is of size 3 × 3) as arranged below:

        1 2 1
    w = 2 C 2    (4)
        1 2 1
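A minimal sketch of the block-wise process in (3) and (4) follows. The 4 × 4 class matrix below is an arbitrary illustrative processing order, not Knuth's matrix nor the co-optimized matrices of Table I; out-of-image neighbors are simply dropped:

```python
# Dot diffusion sketch following (3) and (4).
CLASS4 = [[ 0,  8,  2, 10],       # illustrative class matrix only
          [12,  4, 14,  6],
          [ 3, 11,  1,  9],
          [15,  7, 13,  5]]
# 3 x 3 weighting of (4): 2 for vertical/horizontal, 1 for diagonal.
W = {(di, dj): (2 if di == 0 or dj == 0 else 1)
     for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)}

def dot_diffuse(gray, cm=CLASS4):
    h, w = len(gray), len(gray[0])
    m = len(cm)
    v = [row[:] for row in gray]
    out = [[0] * w for _ in range(h)]
    # All pixels sharing one class value could be binarized in parallel.
    for k in range(m * m):
        for i in range(h):
            for j in range(w):
                if cm[i % m][j % m] != k:
                    continue
                out[i][j] = 255 if v[i][j] >= 128 else 0
                e = v[i][j] - out[i][j]
                # error diffuses only to unprocessed (higher-class) neighbors
                nbrs = [(i + di, j + dj, wt) for (di, dj), wt in W.items()
                        if 0 <= i + di < h and 0 <= j + dj < w
                        and cm[(i + di) % m][(j + dj) % m] > k]
                s = sum(wt for _, _, wt in nbrs)   # s(i, j) in (3)
                if s == 0:      # a "baron": the error cannot be diffused
                    continue
                for ni, nj, wt in nbrs:
                    v[ni][nj] += e * wt / s
    return out
```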

Fig. 4. Flow chart of the proposed co-optimization procedure.
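The co-optimization flow of Fig. 4 amounts to a greedy accept-if-better loop over class-matrix swaps and diffused-weighting adjustments. The harness below is a much-simplified sketch, not the authors' implementation: it uses random rather than exhaustive swaps, a single hypothetical "corner" weight, and a caller-supplied evaluate function standing in for the HVS-weighted PSNR over the training images; all names are ours:

```python
import random

def co_optimize(class_matrix, weights, evaluate,
                deltas=(0.0, 0.05, -0.05), iters=200):
    """Greedy swap-and-adjust loop sketched after Fig. 4.

    evaluate(cm, w) must return a quality score; only changes that
    improve the score are kept, mirroring the accept-if-better flow."""
    weights = dict(weights)
    best = evaluate(class_matrix, weights)
    n = len(class_matrix)
    flat = [v for row in class_matrix for v in row]
    for _ in range(iters):
        a, b = random.randrange(n * n), random.randrange(n * n)
        flat[a], flat[b] = flat[b], flat[a]          # swap two members
        improved = False
        for d in deltas:                             # switch the weighting
            trial = dict(weights)
            trial["corner"] = max(0.0, trial["corner"] + d)
            cm = [flat[i * n:(i + 1) * n] for i in range(n)]
            score = evaluate(cm, trial)
            if score > best:
                best, weights, improved = score, trial, True
                break
        if not improved:
            flat[a], flat[b] = flat[b], flat[a]      # revert the swap
    return [flat[i * n:(i + 1) * n] for i in range(n)], weights, best
```

Because changes are only accepted when the score improves, the returned score is never worse than the starting configuration's.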

The variable C denotes the pixel currently being processed. Notably, the error can only diffuse to the neighboring pixels whose associated members in the class matrix have a higher value than the current pixel's own associated value; these are the pixels that have yet to be binarized. The variable s(i,j) is the summation of the diffused weights corresponding to those unprocessed pixels. An example shown in Fig. 1(b), using Knuth's 8 × 8 class matrix, demonstrates the concepts introduced above. The values of s(i,j) associated with the members with values 0 and 7 in the class matrix are 12 and 7, respectively. The parallel processing property of dot diffusion can be appreciated from Fig. 1(b), where the pixels associated with the


Fig. 5. Diffused weighting of various sizes for class matrices of size 8 × 8 (a)-(d) and 16 × 16 (e)-(h). (a) 3 × 3, (b) 5 × 5, (c) 7 × 7, (d) 9 × 9, (e) 3 × 3, (f) 5 × 5, (g) 7 × 7, (h) 9 × 9.

same value in the class matrix can be processed concurrently. Hence, if the class matrix is of size 8 × 8, a dot-diffused image can be obtained in 64 time units. The processing order within the class matrix plays a significant role in the quality of the reconstructed image. Knuth's optimization approach attempts to reduce the number of barons (coefficients in the class matrix with no higher value surrounding them) and near-barons (coefficients with only one higher value surrounding them). The concept is straightforward, since a baron leads to a nondiffusible quantized error, and a near-baron only allows the quantized error to diffuse in one direction. However, Knuth's method does not consider the nature of the human visual system (HVS) in its optimization procedure. To resolve this problem, the optimization procedure in Mese's method includes an HVS model, (5) and (6), to optimize the order within the class matrix. Mese's work adopted a fixed set of coefficients for this model, and the single tone 16 was employed in the optimization to develop the final class matrix. Although Mese's class matrix provides excellent reconstructed halftones, as will be shown in Section IV, we believe it can be further improved for the following reasons. First, the diffused weighting and diffused area are not carefully optimized in Mese's approach. Second, Mese simply adopted the single tone 16 in the class matrix optimization, which causes difficulties in rendering image regions with other tones using the devised class matrix.

TABLE I
CLASS MATRICES OBTAINED BY THE PROPOSED CO-OPTIMIZATION PROCEDURE. (a) CLASS MATRIX OF SIZE 8 × 8 AND DIFFUSED WEIGHTING OF SIZE 3 × 3. (b) CLASS MATRIX OF SIZE 16 × 16 TRAINED WITH DIFFUSED WEIGHTING OF SIZE 3 × 3

III. IMPROVED DOT DIFFUSION USING OPTIMIZED DIFFUSED WEIGHTING AND CLASS MATRIX

As mentioned, two dot diffusion parameters, the diffused weighting and the diffused area, play important roles in the class matrix optimization. An experiment is carried out using filters of different sizes as well as varying diffused weightings, and the filters are co-optimized with the class matrix. Although the size of the class matrix can be varied, parallel processing efficiency declines as the class matrix grows. To preserve the benefit of parallel processing, this study develops optimized class matrices of size 8 × 8 and 16 × 16 with their corresponding diffused weightings.

    PSNR = 10 · log10 { P · Q · 255² / Σ_{i=1}^{P} Σ_{j=1}^{Q} [ Σ_{(m,n)∈R} lms(m,n) · (x(i+m, j+n) − b(i+m, j+n)) ]² }   (7)
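The quality measure in (7) can be sketched directly. In the example below, a uniform averaging filter stands in for the LMS-trained filter of Fig. 3 (the real filter would be trained as described in Section III-A); the function name is ours:

```python
import math

def hvs_psnr(gray, halftone, lms):
    """HVS-weighted PSNR in the form of (7).

    gray, halftone: same-size 2-D lists (values in [0, 255]);
    lms: 2-D list holding the perceptual filter (e.g. 7 x 7)."""
    P, Q = len(gray), len(gray[0])
    r = len(lms) // 2
    err = 0.0
    for i in range(P):
        for j in range(Q):
            d = 0.0
            # perceived difference: filter applied to (gray - halftone)
            for m in range(-r, r + 1):
                for n in range(-r, r + 1):
                    ii, jj = i + m, j + n
                    if 0 <= ii < P and 0 <= jj < Q:
                        d += lms[m + r][n + r] * (gray[ii][jj] - halftone[ii][jj])
            err += d * d
    if err == 0.0:
        return float("inf")      # identical perceived images
    return 10 * math.log10(P * Q * 255 ** 2 / err)
```

Unlike plain PSNR, the perceptual filter lets a checkerboard-like binary pattern score well against a constant mid-gray, since its local weighted average matches the gray level.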


Fig. 6. Halftone images obtained with class matrices of size 8 × 8 and 16 × 16, trained using different diffused areas and diffused weightings. (a) Original grayscale image. (b) Class matrix of size 8 × 8 and diffused weighting of size 3 × 3, (c) 5 × 5, (d) 7 × 7, and (e) 9 × 9. (f) Class matrix of size 16 × 16 and diffused weighting of size 3 × 3, (g) 5 × 5, (h) 7 × 7, and (i) 9 × 9. (All printed at 100 dpi.) (b) PSNR = 33.33 dB; (c) PSNR = 33.29 dB; (d) PSNR = 33.13 dB; (e) PSNR = 32.84 dB; (f) PSNR = 34.13 dB; (g) PSNR = 33.84 dB; (h) PSNR = 33.69 dB; (i) PSNR = 33.47 dB.

Fig. 7. Diffused weighting of size 3 × 3 for class matrices of size 8 × 8 and 16 × 16. (a) For the class matrix of size 8 × 8 and (b) 16 × 16; (c) diffused weighting in power fashion.

Fig. 8. Average PSNR using eight images obtained by the proposed class matrices of size 8 × 8 and 16 × 16.

A. Performance Evaluation

The performance evaluation employed in this work is described as follows. For an image of size P × Q, the quality of a halftone image is defined as in (7), where x denotes the original grayscale image, b denotes the corresponding halftone image, lms(m,n) denotes the LMS-trained coefficient at position (m,n), and R denotes the support region of the LMS-trained filter. The size of R is fixed at 7 × 7 when measuring the quality of halftone images. The LMS-trained filter can be obtained from psychophysical experiments [25]. The other way to derive it is to use a training set of pairs of grayscale images and good halftone results of them. Error diffusion, ordered dithering, and direct binary


Fig. 9. Dot-diffused patterns obtained by the proposed method and Mese's method with class matrix of size 8 × 8. (a) Grayscale 16, proposed method. (b) Grayscale 16, Mese's method. (c) Grayscale 33, proposed method. (d) Grayscale 33, Mese's method. (e) Grayscale 81, proposed method. (f) Grayscale 81, Mese's method. (g) Grayscale 116, proposed method. (h) Grayscale 116, Mese's method. (All printed at 100 dpi.) (a) PSNR = 29.90 dB; (b) PSNR = 34.23 dB; (c) PSNR = 35.43 dB; (d) PSNR = 27.27 dB; (e) PSNR = 38.15 dB; (f) PSNR = 30.29 dB; (g) PSNR = 36.05 dB; (h) PSNR = 29.49 dB.

search (DBS) can be used to produce the set. This work adopts the LMS procedure to produce the filter as follows:

    x̃(i,j) = Σ_{(m,n)∈R} lms(m,n) · x(i+m, j+n),   (8)

    b̃(i,j) = Σ_{(m,n)∈R} lms(m,n) · b(i+m, j+n),   (9)

    e(i,j) = x̃(i,j) − b̃(i,j),   (10)

    e²(i,j) = [x̃(i,j) − b̃(i,j)]²,   (11)

    lms_opt(m,n) = lms(m,n) − μ · e(i,j) · [x(i+m, j+n) − b(i+m, j+n)],   (12)

where lms_opt denotes the optimum LMS coefficient, e(i,j) denotes the error between x̃(i,j) and b̃(i,j), and μ denotes the adjusting parameter used to control the convergence speed of the LMS optimization procedure. Some other quality evaluation methods can be found in [26] and [27]. The trained LMS filter is shown in Fig. 3. Notably, this filter exhibits some basic human visual system characteristics: 1) the diagonal directions have less sensitivity than the vertical and horizontal directions, and 2) the center portion has the highest sensitivity, which decreases away from the center. Cooperating with the trained LMS filter, (7) may not be a perfect measurement of image quality, but it is more appropriate than the traditional PSNR criterion.

B. Diffused Weighting and Class Matrix Co-Optimization

To conduct the diffused weighting and class matrix co-optimization, some constraints on the diffused weighting must be met. 1) Coefficients nearer to the center of the diffusion matrix have higher values.


2) Coefficients with the same Euclidean distance to the center of the diffusion matrix have the same values. 3) The nearest vertical and horizontal coefficients are fixed at 1. The first constraint accommodates human vision characteristics. The second and third constraints reduce the number of possible coefficient combinations in the diffused weighting. For example, if the diffusion matrix is of size 3 × 3, only the single value shared by the four corners needs to be optimized.

Instead of individually optimizing the diffused weighting and the class matrix, the two key components are co-optimized. During the class matrix optimization, each member in the class matrix is successively swapped with one of the other 63 members (assuming the class matrix is of size 8 × 8) and applied to the eight testing images. Each swapping involves switching all potential diffused weightings, where each potential weighting is obtained by adjusting its previous value by a small step. The quality evaluation approach introduced in Section III-A is employed to evaluate the average PSNRs (before and after swapping and switching) of the corresponding dot-diffused halftone images. Only the combination that achieves the highest PSNR is selected, and the above procedures are repeated until swapping in the class matrix and switching in the diffused weighting no longer improve the PSNR. Comparatively, Mese's method simply adopts the single tone 16 for class matrix training, making it difficult to render image regions with other tones. In contrast, this study adopts eight different natural images in its training procedure, allowing the optimized class matrix and diffused weighting to adapt more adequately to the different tones in an image. The steps of the optimization procedure are detailed below; the corresponding flow chart is illustrated in Fig. 4.
Step 1) Give an initial class matrix (Mese's class matrix is employed).
Step 2) Employ four initial filters of sizes 3 × 3, 5 × 5, 7 × 7, and 9 × 9 as diffused weightings with different diffused areas in the testing.
Step 3) Suppose the members within the class matrix are ordered as a 1-D sequence. Successively swap each member in the class matrix with one of the other 63 members (given a class matrix of size 8 × 8).
Step 4) Generate potential diffused weightings by adjusting all coefficient values. During the diffused weighting generation, the nearest vertical and horizontal coefficients are fixed at 1, and the coefficients with the same Euclidean distance to the center of the diffusion matrix are kept at the same value.
Step 5) Evaluate the average PSNR of the dot-diffused halftone images using the class matrices and diffused weightings obtained from Steps 3 and 4, with the LMS filter of size 7 × 7 as the HVS filter, as indicated in (7).
Step 6) The swapped class matrix and the switched diffused weighting that lead to the highest reconstructed image quality are used as the new class matrix and diffused


Fig. 10. PSNR versus grayscale: performance comparison between the proposed method and Mese's method.

weighting. Otherwise, the swapped members are returned to their original positions in the class matrix.
Step 7) Select another member of the class matrix, and then perform Steps 4 to 6.
Step 8) If no swapping and switching improve the resulting quality of the reconstructed dot-diffused image, terminate the optimization procedure. Otherwise, repeat Steps 3-7.

Fig. 5 shows the eight diffused weightings obtained from the above procedure. The coefficient values are interpolated to fill each floating-point location. Since the center value is unused, it is fixed at zero throughout the optimization procedure. Table I shows the final convergent class matrices obtained with the optimization procedure above. The table does not include the 8 × 8 and 16 × 16 class matrices optimized with diffused areas of size 5 × 5, 7 × 7, and 9 × 9, as these do not yield results superior to the ones obtained with the filter of size 3 × 3. The reason the diffused area of size 3 × 3 performs best is that it violates causality the least.

IV. EXPERIMENTAL RESULTS

Eight different testing images are used to test the performance of the proposed algorithm. Equation (7) is adopted to evaluate the PSNR, using the LMS-trained filter of size 7 × 7 shown in Fig. 3. The best diffused weighting and its corresponding diffused area are first identified. Fig. 6(b)-(i) shows the dot-diffused images processed by the eight optimized class matrices. Among these, Fig. 6(b) has the maximum PSNR of 33.33 dB for the class matrix of size 8 × 8, and Fig. 6(f) has the maximum PSNR


Fig. 11. Dot-diffused images obtained from a continuous ramp map. (a) Original grayscale image. (b) Class matrix of size 8 × 8 used in the proposed method and (c) Mese's method [4]. (d) Class matrix of size 16 × 16 used in the proposed method and (e) Mese's method. (All printed at 200 dpi.) (b) PSNR = 35.10 dB; (c) PSNR = 32.40 dB; (d) PSNR = 35.39 dB; (e) PSNR = 35.08 dB.
of 34.13 dB for the class matrix of size 16 × 16. The best class matrices are both obtained with a diffused area of size 3 × 3. Fig. 7 shows the exact diffused weightings, where the variable x denotes the pixel currently being processed. Since the diffused


Fig. 12. Dot-diffused images of a natural image. (a) Proposed method with class matrix of size 16 × 16. (b) Mese's method [4] with class matrix of size 16 × 16. (Original part printed at 300 dpi and the enlarged part at 150 dpi.) (a) PSNR = 34.69 dB; (b) PSNR = 34.01 dB.

weighting is co-optimized with the class matrix, the optimal class matrix is obtained simultaneously with the optimal diffused weighting. The two optimized class matrices of size 8 × 8 and 16 × 16, associated with the two optimized diffused weightings in Fig. 7(a) and (b), are shown in Table I(a) and (b), respectively. Although we set constraints in the optimization by fixing the four vertical and horizontal elements at 1 and forcing the four corner elements to share the same floating-point value, the reconstructed image quality is still improved by this co-optimization strategy, as shown in Fig. 6. To further reduce the burden of using floating-point diffused weightings, we use weightings that follow the power relationship shown in Fig. 7(c); notably, the class matrices are still the ones in Table I(a) and (b). The reconstructed dot-diffused results are surprising in that they are only slightly inferior to the results from co-optimization, as shown in Fig. 8 for the diffused area of size 3 × 3. The reason the image quality of the power-fashion filter degrades rapidly is that, as the filter size increases, the differences between the power-fashion coefficients and the optimal floating-point coefficients increase as well. The variables CM and DW denote the class matrix and diffused weighting, respectively. It is apparent that the image quality decreases as the diffused area increases. This can be explained by the fact that the optimization does not guarantee a global optimum, and as the search space grows, the probability of getting stuck in local optima increases. Hence, the class matrices shown in Table I(a) and (b), which are co-optimized with the diffused weighting of size 3 × 3, are adopted for the proposed dot diffusion. In fact, a better result can be obtained by modifying Step 8 and adding an extra Step 9 to the co-optimization procedure, as below.

Step 8) If no swapping and switching improve the resulting quality of the reconstructed dot-diffused image, record the temporary locally optimized diffused weighting and class matrix. Otherwise, repeat Steps 3-7.
Step 9) Randomly permute the convergent coefficients of the class matrix obtained in Step 8, and repeat Steps 3-9 until a satisfactory result is obtained.

It took us about two months to finish one round (Steps 1-8) using the original co-optimization version. Readers interested in generating a better result may adopt the modified co-optimization version (Steps 1-9) above. Another observation is that the optimized diffused weighting matrices for the 8 × 8 and 16 × 16 class matrices are different. Since this work proposes a new dot diffusion by co-optimizing the diffusion matrix and class matrix simultaneously, it is not surprising to obtain different diffusion matrices for class matrices of different sizes. However, as indicated above, the diffusion matrices and class matrices obtained in this work are unlikely to be globally optimal, and it is difficult to provide a precise rationale for the experimental results, since the globally optimal results might well have identical diffusion matrices for the 8 × 8 and 16 × 16 class matrices.

A series of experiments is conducted to compare Mese's and the proposed dot diffusion. In Mese's method, the single tone 16 is utilized for optimization; in contrast, eight natural images are employed for co-optimization in this work. Fig. 9 shows four patterns with grayscales 16, 33, 81, and 116 for performance comparison. As expected, Mese's method achieves better image quality at tone 16. However, the proposed method is superior to Mese's method at tones 33, 81, and 116. Fig. 10 shows


Fig. 13. Performance comparisons between various methods. (a) DBS [23]. (b) Floyd [5], (c) Jarvis [6], (d) Stucki [7], (e) Ostromoukhov [8], (f) Shiau [11], (g) Li [13], (h) Knuth [3], (i) Mese [4] with class matrix of size 8 × 8, (j) Mese [4] with class matrix of size 16 × 16, (k) Classical-4 clustered-dot dithering, (l) Bayer-5 dispersed-dot dithering. (All printed at 100 dpi.) (a) PSNR = 39.8 dB; (b) PSNR = 34.24 dB; (c) PSNR = 28.1 dB; (d) PSNR = 28.87 dB; (e) PSNR = 34.87 dB; (f) PSNR = 33.68 dB; (g) PSNR = 38.08 dB; (h) PSNR = 29.22 dB; (i) PSNR = 30.24 dB; (j) PSNR = 33.19 dB; (k) PSNR = 20.15 dB; (l) PSNR = 29.65 dB.

the results of testing all grayscales ranging from 0 to 255. The proposed method outperforms Mese's method by and large: the average PSNR values of the proposed method and Mese's method using the class matrix of size 8 × 8 are 34.63 and 32.46 dB, respectively, and the average PSNRs using the class matrix of size 16 × 16 are 35.58 and 34.71 dB, respectively. Another comparison is conducted with the ramp map, as shown in Fig. 11.


Fig. 14. Performance comparisons among different halftone techniques, covering DBS, error diffusion, dot diffusion, and ordered dithering. (a) Average PSNR. (b) Average computational time, where the vertical axis is logarithmically compressed.

The results of this comparison are consistent with those of the other experiments: the PSNR values of the proposed method and Mese's method are 35.10 and 32.40 dB using the class matrix of size 8 × 8, and 35.39 and 35.08 dB using the class matrix of size 16 × 16. Another natural image of higher resolution, of size 512 × 512, is also used to compare the performance of the proposed method and Mese's method with the class matrix of size 16 × 16. Fig. 12 shows the result, revealing that the proposed method produces a better blue-noise distribution than Mese's method, as can be appreciated from the enlarged parts. Fig. 13 shows the halftone results obtained by various halftoning methods, namely error diffusion by Floyd [5], Jarvis [6], Stucki [7], Ostromoukhov [8], Shiau [11], and Li [13]; dot diffusion by Knuth [3] and Mese [4]; ordered dithering [1] with

Classical-4 clustered-dot dithering and Bayer-5 dispersed-dot dithering; and DBS [23]. Fig. 14 shows the comparisons of average image quality and processing efficiency of the above halftone techniques. Fig. 14(a) clearly indicates that the proposed dot diffusion has image quality close to that of error diffusion and far better than that of ordered dithering. Although the proposed method has lower quality than some error diffusion methods and DBS, its better processing efficiency, a result of parallel processing, makes it preferable to serial error diffusion or iteration-based DBS when speed matters. The experimental results are shown in Fig. 14(b), which was produced on a computer with the Windows XP Professional Edition SP2 operating system, an Intel Core 2 CPU at 2.13 GHz, and 1.98 GB of RAM, using 100 test images of size 512 × 512. On average, the processing efficiency of dot diffusion is substantially higher than that of error diffusion, and higher still relative to DBS.


Fig. 15. Two types of processing order for block interlaced pinwheel error diffusion (BIPED) [14] with blocks of size 8 × 8. (a) Inward spiral. (b) Outward spiral.

Tone-Dependent Error Diffusion (TDED) [13] produces halftones with quality approaching that of DBS. Based on TDED, two other extended approaches, called Block Interlaced Pinwheel Error Diffusion (BIPED) [14] and Serial Block-Based Tone-Dependent Error Diffusion (SBB-TDED) [15], were proposed to improve the processing efficiency and reduce the on-chip memory requirement of error diffusion, respectively. SBB-TDED provides excellent image quality, approximating DBS results; however, it does not have the parallel processing advantage of dot diffusion. On the other hand, BIPED divides the image into inward blocks and outward blocks to achieve a parallel processing advantage. However, the inward and outward scan paths are conducted successively; in other words, the outward block diffusions must be completed before the inward blocks are processed. Hence, it requires processing time twice that of dot diffusion. One example of the inward and outward spirals is shown in Fig. 15. The processing order of BIPED has to follow these two types of spirals, so the processing order must be specified, which is similar to the function of the class matrix in dot diffusion. Moreover, BIPED requires storing 128 sets of diffused weightings and thresholds, whereas dot diffusion simply requires one set of diffused weighting and threshold. In addition, BIPED needs to store a 128 × 128 DBS pattern to deal with the checkerboard-like patterns caused by limit-cycle behavior at the mid-tone and quarter-tone levels. Hence, the memory consumption of BIPED is much higher than that of dot diffusion; notably, the same memory consumption is also required in SBB-TDED. Based on the discussions above, the proposed dot diffusion retains the advantages of better parallel processing and lower memory consumption.
Although the proposed dot diffusion has these advantages, we would like to highlight that BIPED and SBB-TDED provide excellent image quality. If the application is quality-oriented, the BIPED and SBB-TDED approaches are still strongly recommended.

V. CONCLUSION

This study proposes an improved dot diffusion that preserves parallel processing capability while reducing the disparity in image quality with error diffusion. To bridge this disparity, co-optimized diffused weighting and class matrix are employed. Four different diffused areas are tested, and the experimental results show that the area of size 3 × 3 yields the best


results. During optimization, natural images are used to provide more objective results for different tones. Results also indicate that the proposed method has better quality than Mese's method in most tones, since tone 16 is the only tone level utilized in Mese's training set. The proposed method is also compared with various halftoning methods, including direct binary search (DBS), error diffusion, ordered dithering, and previous dot diffusion. The comparisons show that the proposed dot diffusion achieves quality close to some error diffusion methods. Moreover, since the proposed dot diffusion preserves the important parallel processing advantage, it provides higher execution efficiency than DBS or error diffusion. We therefore conclude that the proposed dot diffusion method can contribute significantly to practical printing and color quantization applications. Nonetheless, the proposed approach still has two disadvantages.
1) Periodic patterns: Although the proposed approach improves dot diffusion quality compared to traditional approaches, some periodic patterns can still be perceived when a large area of the image has a monotonic grayscale. This problem may be solved with more complex strategies, such as devising tone-dependent diffused weightings or involving DBS patterns at some grayscales, as used in TDED; these two ideas are left for future work.
2) Hardware implementation of floating-point diffused weighting: This issue has been significantly eased, since during the optimization procedure the vertical and horizontal weightings are maintained at 1 and the four diagonal weightings share the same value.
Moreover, we observed that if the floating-point diffused weightings are replaced with power-of-two coefficients (2 for the vertical and horizontal directions and 1 for the diagonals), while the class matrix devised from the floating-point diffused weightings is retained, the resulting dot-diffused image still attains excellent image quality.
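To make the mechanism concrete, the following Python sketch performs dot diffusion with an arbitrary class matrix and the hardware-friendly integer weights just mentioned (2 for the vertical and horizontal neighbors, 1 for the diagonals). This is an illustrative sketch only: the function name `dot_diffuse` is hypothetical, and the paper's co-optimized 8 × 8 class matrix is not reproduced here, so any simple class matrix passed in serves merely as a stand-in.

```python
import numpy as np

def dot_diffuse(img, class_matrix):
    """Binarize a grayscale image (values in [0, 255]) by dot diffusion.

    Pixels are processed in the order given by `class_matrix`, which is
    tiled over the image. The quantization error of each pixel is diffused
    only to 8-neighbors whose class entry is larger (i.e., not yet
    processed), using integer weights 2 (vertical/horizontal) and 1
    (diagonal), normalized by the sum of the eligible weights.
    """
    h, w = img.shape
    bh, bw = class_matrix.shape
    out = np.zeros((h, w), dtype=np.uint8)
    work = img.astype(np.float64).copy()

    # Neighbor offsets with their diffusion weights (2: vert/horz, 1: diag).
    neigh = [(-1, -1, 1.0), (-1, 0, 2.0), (-1, 1, 1.0),
             (0, -1, 2.0),                (0, 1, 2.0),
             (1, -1, 1.0),  (1, 0, 2.0),  (1, 1, 1.0)]

    # Group in-block offsets by class value; all pixels sharing a class
    # value are independent and could be processed in parallel.
    order = {}
    for r in range(bh):
        for c in range(bw):
            order.setdefault(int(class_matrix[r, c]), []).append((r, c))

    for cls in sorted(order):
        for br, bc in order[cls]:
            for y in range(br, h, bh):
                for x in range(bc, w, bw):
                    out[y, x] = 255 if work[y, x] >= 128 else 0
                    err = work[y, x] - out[y, x]
                    # Collect neighbors that are still unprocessed.
                    targets = []
                    for dy, dx, wt in neigh:
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w and
                                class_matrix[ny % bh, nx % bw] > cls):
                            targets.append((ny, nx, wt))
                    total = sum(wt for _, _, wt in targets)
                    if total > 0:
                        for ny, nx, wt in targets:
                            work[ny, nx] += err * wt / total
    return out
```

For instance, `dot_diffuse(img, np.arange(64).reshape(8, 8))` tiles a deliberately simplistic raster-order class matrix over the image; the visual quality of the result depends entirely on how well the class matrix is designed, which is precisely what the co-optimization in this work addresses.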

REFERENCES

[1] R. Ulichney, Digital Halftoning. Cambridge, MA: MIT Press, 1987.
[2] D. L. Lau and G. R. Arce, Modern Digital Halftoning. New York: Marcel Dekker, 2001.
[3] D. E. Knuth, “Digital halftones by dot diffusion,” ACM Trans. Graph., vol. 6, no. 4, Oct. 1987.
[4] M. Mese and P. P. Vaidyanathan, “Optimized halftoning using dot diffusion and methods for inverse halftoning,” IEEE Trans. Image Process., vol. 9, no. 4, pp. 691–709, Apr. 2000.
[5] R. W. Floyd and L. Steinberg, “An adaptive algorithm for spatial gray scale,” in Proc. SID 75 Digest, Soc. Information Display, 1975, pp. 36–37.
[6] J. F. Jarvis, C. N. Judice, and W. H. Ninke, “A survey of techniques for the display of continuous-tone pictures on bilevel displays,” Comput. Graph. Image Process., vol. 5, pp. 13–40, 1976.
[7] P. Stucki, MECCA-A Multiple-Error Correcting Computation Algorithm for Bilevel Image Hardcopy Reproduction, IBM Res. Lab., Zurich, Switzerland, Res. Rep. RZ1060, 1981.
[8] V. Ostromoukhov, “A simple and efficient error-diffusion algorithm,” in Proc. SIGGRAPH, 2001, pp. 567–572.
[9] L. Velho and J. M. Gomes, “Digital halftoning with space filling curves,” Comput. Graph., vol. 25, no. 4, pp. 81–90, Jul. 1991.
[10] K. T. Knox, “Introduction to digital halftones,” in Recent Progress in Digital Halftoning, R. Eschbach, Ed. IS&T, 1994, pp. 30–33.

Authorized licensed use limited to: National Taiwan Univ of Science and Technology. Downloaded on August 31, 2009 at 03:14 from IEEE Xplore. Restrictions apply.


[11] J. N. Shiau and Z. Fan, “A set of easily implementable coefficients in error diffusion with reduced worm artifacts,” Proc. SPIE, vol. 2658, pp. 222–225, 1996.
[12] R. Eschbach, “Reduction of artifacts in error diffusion by means of input-dependent weights,” J. Electron. Imag., vol. 2, no. 4, pp. 352–358, 1993.
[13] P. Li and J. P. Allebach, “Tone-dependent error diffusion,” IEEE Trans. Image Process., vol. 13, no. 2, pp. 201–215, Feb. 2004.
[14] P. Li and J. P. Allebach, “Block interlaced pinwheel error diffusion,” J. Electron. Imag., vol. 14, Apr.–Jun. 2005.
[15] T. C. Chang and J. P. Allebach, “Memory efficient error diffusion,” IEEE Trans. Image Process., vol. 12, no. 11, pp. 1352–1366, Nov. 2003.
[16] B. L. Evans, V. Monga, and N. Damera-Venkata, “Variations on error diffusion: Retrospectives and future trends,” in Proc. SPIE/IS&T Color Imaging: Processing, Hardcopy, and Applications, Santa Clara, CA, Jan. 2003, vol. 5008, pp. 371–389.
[17] Q. Lin and J. P. Allebach, “Color FM screen design using DBS algorithm,” in Proc. SPIE, 1998, vol. 3300, pp. 353–361.
[18] A. U. Agar and J. P. Allebach, “Model-based color halftoning using direct binary search,” IEEE Trans. Image Process., vol. 14, no. 12, pp. 1945–1959, Dec. 2005.
[19] F. A. Baqai and J. P. Allebach, “Halftoning via direct binary search using analytical and stochastic printer models,” IEEE Trans. Image Process., vol. 12, no. 1, pp. 1–15, Jan. 2003.
[20] M. Analoui and J. P. Allebach, “Model based halftoning using direct binary search,” in Proc. SPIE, Human Vision, Visual Processing, Digital Display III, San Jose, CA, Feb. 1992, vol. 1666, pp. 96–108.
[21] D. J. Lieberman and J. P. Allebach, “Model based direct binary search halftone optimization with a dual interpretation,” in Proc. Int. Conf. Image Processing, Oct. 1998, vol. 2, pp. 44–48.
[22] J. P. Allebach and Q. Lin, “FM screen design using DBS algorithm,” in Proc. Int. Conf. Image Processing, Sep. 1996, vol. 1, pp. 549–552.
[23] D. J. Lieberman and J. P. Allebach, “Efficient model based halftoning using direct binary search,” in Proc. IEEE Int. Conf. Image Processing, Oct. 1997, vol. 1, pp. 775–778.
[24] R. A. Ulichney, “Dithering with blue noise,” Proc. IEEE, vol. 76, no. 1, pp. 56–79, Jan. 1988.
[25] J. Mannos and D. Sakrison, “The effects of a visual fidelity criterion on the encoding of images,” IEEE Trans. Inf. Theory, vol. IT-20, pp. 526–536, 1974.
[26] Z. Wang and A. C. Bovik, “A universal image quality index,” IEEE Signal Process. Lett., vol. 9, no. 3, pp. 81–84, Mar. 2002.


[27] N. Damera-Venkata, T. D. Kite, W. S. Geisler, B. L. Evans, and A. C. Bovik, “Image quality assessment based on a degradation model,” IEEE Trans. Image Process., vol. 9, no. 4, pp. 636–650, Apr. 2000.

Jing-Ming Guo (M’06) was born in Kaohsiung, Taiwan, R.O.C., on November 19, 1972. He received the B.S.E.E. and M.S.E.E. degrees from the National Central University, Taoyuan, Taiwan, in 1995 and 1997, respectively, and the Ph.D. degree from the Institute of Communication Engineering, National Taiwan University, Taipei, Taiwan, in 2004. From 1998 to 1999, he was an Information Technique Officer with the Chinese Army. From 2003 to 2004, he was granted the National Science Council scholarship for advanced research from the Department of Electrical and Computer Engineering, University of California, Santa Barbara. He is currently an Associate Professor with the Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei. His research interests include multimedia signal processing, multimedia security, digital halftoning, and digital watermarking. Dr. Guo is a member of the IEEE Signal Processing Society. He received the Research Excellence Award in 2008, the Acer Dragon Thesis Award in 2005, the Outstanding Paper Awards from IPPR, Computer Vision and Graphic Image Processing in 2005 and 2006, and the Outstanding Faculty Award in 2002 and 2003.

Yun-Fu Liu was born in Hualien, Taiwan, R.O.C., in 1984. He received the B.S.E.E. degree from Jin Wen University of Science and Technology, Taipei, Taiwan, in 2007, and the M.S.E.E. degree from Chang Gung University, Taoyuan, Taiwan, in 2009. He is currently pursuing the Ph.D. degree in the Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei. His research interests include intelligent transportation systems, digital halftoning, and digital watermarking.

