A New Temporal-Constraint-Based Algorithm by Handling Temporal Qualities for Video Enhancement

Jun Xie¹, Weiyao Lin¹, Hongxiang Li², Ning Xu¹, Hongyu Gao¹, Lining Zhang¹

¹ Institute of Image Communication and Information Processing, Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
² Department of Electrical and Computer Engineering, North Dakota State University, Fargo, ND 58108, USA

Abstract—Video enhancement plays an important role in many applications. However, most existing enhancement methods only focus on the spatial quality within a frame, while the temporal quality of the enhanced video is often not guaranteed. In this paper, a new algorithm is proposed for video enhancement. The proposed algorithm introduces new temporal constraints and combines them with the spatial constraints such that both the spatial and temporal qualities of the video can be improved. Two strategies are proposed for including the temporal constraints. Experimental results demonstrate the effectiveness of the proposed algorithm.

I. INTRODUCTION AND RELATED WORKS

Video enhancement is of increasing importance in many applications such as medical services, display processing, texture analysis, surveillance, and scientific visualization [2,9-12]. Much research has been done on video or image enhancement [1-10]. Many works perform enhancement based on histogram equalization (HE) so that the contrast of the image can be improved [1,3-6]. Since pure HE may often lead to unnatural effects in the image, various HE-modification methods have been proposed which introduce spatial constraints to reduce these unnatural effects [3-6]. Other works [2,10] select visually appealing images as examples and perform enhancement based on these example images. However, most existing enhancement algorithms only focus on improving the spatial quality within a single frame or image. They are not suitable for enhancing videos since temporal consistency among frames is not guaranteed. Some works have been proposed for enhancing videos in specific applications [9,10]. For example, Liu et al. [10] propose a learning-based method for video conferencing where frames share the same tone mapping function if their backgrounds do not change by much. Although this method can achieve good temporal quality in video-conferencing scenarios, it cannot be extended to other scenarios since many videos have backgrounds or contents that change frequently. Furthermore, Mangiat et al. [9] use two cameras with different exposures to create high-dynamic-range frames. However, this method has specific system requirements and thus cannot be easily used in other scenarios. Therefore, it is still desirable to develop a more generalized enhancement algorithm which can handle the temporal qualities of various videos.

In this paper, a new Temporal-Constraint-Based (TCB) algorithm is proposed for video enhancement. The proposed TCB algorithm introduces new temporal constraints and combines them with the spatial constraints such that both the spatial and the temporal qualities of videos are improved. Two different strategies are proposed for defining and integrating the temporal constraints. The proposed algorithm provides a generalized way of handling the temporal qualities of a video. Experimental results demonstrate the effectiveness of our proposed algorithm.

The rest of the paper is organized as follows: Section II discusses the basic idea of our algorithm. The proposed TCB algorithm is described in detail in Section III. Section IV shows the experimental results and Section V concludes the paper.

II. THE BASIC IDEA OF THE PROPOSED METHOD

As mentioned, most existing works [2-8] cannot effectively handle the temporal consistencies in a video. Although these methods can achieve proper visual quality in each frame, the qualities among different frames (i.e., temporal qualities) may vary. This inconsistency may become severe when the algorithm adaptively adopts different enhancement parameters for different frames or when the color histograms of the original video change quickly. Therefore, new algorithms are needed for handling temporal consistency. For ease of description, we will discuss the idea of our algorithm based on the HE Modification-based (HEM) method [5]. However, it should be noted that the idea of our algorithm is general and can be extended to other enhancement algorithms [2-4,6-9]. The HEM method can be described as in Eqn. (1). Instead of using the histogram distribution to construct the tone mapping function directly, the method formulates a weighted sum of two objectives as:

\min_{h} \left( \| h - e \|^{2} + \lambda \| h - u \|^{2} \right)    (1)

where u is the uniform distribution, h is the desired color histogram and e is the color histogram of the original image. λ is a parameter balancing the importance between u and e [5]. From Eqn. (1), the desired h can be achieved by:

h = \frac{1}{1+\lambda} \, e + \frac{\lambda}{1+\lambda} \, u    (2)
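For concreteness, the Eqn. (2) blend can be sketched in Python/NumPy as below. This is our illustration rather than the authors' implementation; the function name and the 256-bin grayscale histogram are assumptions.

import numpy as np

def hem_histogram(frame, lam=2.0, bins=256):
    # Desired histogram of Eqn. (2): a weighted blend of the frame's own
    # normalized histogram e and the uniform distribution u.
    e, _ = np.histogram(frame, bins=bins, range=(0, bins))
    e = e.astype(np.float64) / e.sum()
    u = np.full(bins, 1.0 / bins)
    return (1.0 / (1.0 + lam)) * e + (lam / (1.0 + lam)) * u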

Based on Eqn. (2), a tone mapping function can be calculated which enhances the original image according to the desired color histogram h [5]. The basic idea of the HEM method is that by introducing an additional spatial constraint (i.e., ||h − u||²), the unnatural effects of the traditional HE method [1] can be effectively reduced. However, the HEM method is still a spatial method which does not consider the temporal continuity among frames. In order to handle temporal consistency, we can extend Eqn. (1) by simply including an additional temporal constraint, as in Eqn. (3):

\min_{h} \left( \| h - e \|^{2} + \lambda \| h - u \|^{2} + \gamma \| h - h_{t-1} \|^{2} \right)    (3)

where h_{t-1} is the desired color histogram of the previous frame t−1 and γ is another balancing parameter which controls the importance of the temporal constraint. Note that Eqn. (3) is only one example of applying our proposed idea for including temporal constraints. In practice, a more generalized form can be used. For example, various enhancement algorithms [1-4,7-10] can be utilized in place of the HEM method, other temporal constraints can be defined instead of only comparing with the previous-frame histogram, and other constraint-combination methods can be applied besides simple addition. Therefore, based on the above discussion, we propose a new Temporal-Constraint-Based (TCB) algorithm for video enhancement. The proposed TCB algorithm is described in detail in the following section.

III. THE TEMPORAL-CONSTRAINT-BASED VIDEO ENHANCEMENT ALGORITHM

The framework of the proposed TCB algorithm is shown in Fig. 1.

Fig. 1 The framework of the proposed TCB algorithm. (Block diagram: the current frame is enhanced under the spatial constraints used by previous methods together with the proposed temporal constraints, yielding the enhanced frame.)

In Fig. 1, when the current frame is input to the system, video enhancement is performed by including both the spatial and the temporal constraints so that both the spatial and temporal visual qualities can be improved. Note that the main difference between our TCB algorithm and previous algorithms is that the TCB algorithm introduces temporal constraints for handling the temporal visual qualities, while previous algorithms only include spatial constraints. As mentioned, there could be many ways to define the temporal constraints and to combine them with the spatial constraints. In this paper, we propose two strategies for defining and integrating the temporal constraints, which are described in the following. Again, we describe our strategies based on the HEM method, although the proposed strategies can be easily extended to other enhancement algorithms. Furthermore, we only describe the process for a single channel in this section. For color videos with three channels, the same process can be performed on each channel independently.

A. Two-step-bAsed Tone adjustment (TAT) strategy

The Two-step-bAsed Tone adjustment (TAT) strategy integrates the temporal constraints in two steps. In the first step, a pre-processing step is performed to make the color tones of the current frame more consistent with the previously enhanced frames.

This pre-processing can be performed by a tone mapping on the current frame. The tone mapping function can be calculated from the constraints in Eqn. (4) and Eqn. (5).

f(L) = L'    (4)

f(x) = \begin{cases} 0, & x \le 0 \\ a x^{2} + b x, & 0 < x < 255 \\ 255, & x \ge 255 \end{cases}    (5)

where f(x) is the tone mapping function, and a and b are two parameters that can be derived from Eqns. (4) and (5). L and L' are the original average intensity and the desired average intensity, respectively; they can be calculated by Eqn. (6) and Eqn. (7).

L = \frac{1}{N} \sum_{x,y} I(x,y)    (6)

where I(x, y) is the pixel value at (x, y) and N is the total number of pixels in the frame.

L' = \frac{1}{1+\beta} \, L + \frac{\beta}{1+\beta} \, L_{avg\_past}    (7)

where L is the original average intensity of the current frame and L_avg_past is the average intensity of the previously enhanced frames. β is a parameter balancing the importance between L and L_avg_past. Fig. 2 shows two examples of the calculated tone mapping function. From Fig. 2, we can see that the color tones of the current frame can be stretched brighter or darker based on the already-enhanced results of the previous frames.

Fig. 2 Tone mapping curves associated with previous frames
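As an illustration only (not the authors' code), the first TAT step can be sketched in Python/NumPy as follows. The boundary condition f(255) = 255 used to pin down a and b is our reading of the third case of Eqn. (5), and the function name is ours.

import numpy as np

def tat_step1(frame, l_avg_past, beta=1.5):
    # Pre-process the current frame so that its average intensity moves toward
    # the average intensity of the previously enhanced frames (Eqns. (4)-(7)).
    frame = frame.astype(np.float64)
    L = frame.mean()                                               # Eqn. (6); assumes 0 < L < 255
    L_target = L / (1.0 + beta) + beta * l_avg_past / (1.0 + beta)  # Eqn. (7)

    # Solve for a and b from f(L) = L' (Eqn. (4)) together with the assumed
    # boundary condition f(255) = 255 from Eqn. (5):
    #   a*L^2   + b*L   = L'
    #   a*255^2 + b*255 = 255
    A = np.array([[L ** 2, L], [255.0 ** 2, 255.0]])
    a, b = np.linalg.solve(A, np.array([L_target, 255.0]))

    mapped = a * frame ** 2 + b * frame                            # middle case of Eqn. (5)
    return np.clip(mapped, 0.0, 255.0)                             # outer cases of Eqn. (5)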

In the second step of the TAT strategy, another set of spatial and temporal constraints is included in order to enhance the spatial quality of the frame while keeping the temporal consistency at the same time. In the experiments of this paper, the included spatial and temporal constraints are the same as in Eqn. (3). Based on Eqn. (3), the final desired color histogram can be calculated by Eqn. (8).

h = \frac{1}{1+\lambda+\gamma} \, e + \frac{\lambda}{1+\lambda+\gamma} \, u + \frac{\gamma}{1+\lambda+\gamma} \, h_{t-1}    (8)

where e here is the color histogram of the tone-mapped current frame from the first step. It should also be noted that λ and γ should be set to relatively small values since the priority in this step is to improve the spatial visual quality of the frame.
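A minimal sketch (ours) of the Eqn. (8) blend used in the second TAT step; the bin count and function name are assumptions, and h_prev is taken to be the normalized desired histogram of the previous frame.

import numpy as np

def tat_step2_histogram(frame, h_prev, lam=2.0, gamma=3.0, bins=256):
    # Desired histogram of Eqn. (8): blend the tone-mapped frame's histogram
    # with the uniform distribution and the previous frame's desired histogram.
    e, _ = np.histogram(frame, bins=bins, range=(0, bins))
    e = e.astype(np.float64) / e.sum()
    u = np.full(bins, 1.0 / bins)
    z = 1.0 + lam + gamma
    return e / z + (lam / z) * u + (gamma / z) * h_prev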

From the discussion above, we can see that the temporal constraint in the first step (i.e., the tone mapping pre-processing) is the key to improving the temporal qualities of a video, while the temporal constraint in the second step guarantees temporal consistency during the spatial enhancement process.

B. Entropy-based Adaptive (EA) strategy

The Entropy-based Adaptive (EA) strategy improves the temporal qualities by integrating the temporal constraints into the balancing parameter of the spatial constraint. The enhancement process of the EA strategy can be described by Eqn. (9):

\min_{h} \left( \| h - e \|^{2} + \lambda_{EA} \| h - u \|^{2} \right)    (9)

where λ_EA is the balancing parameter with the temporal constraints embedded, and the other parameters are the same as in Eqn. (2). λ_EA can be calculated by Eqn. (10).

\lambda_{EA} = \max\left( \arg\min_{\lambda} \left| E(t) - E(t-1) \right|, \; LB \right)    (10)


where E(t) is the entropy of frame t, which can be calculated as follows:

E = -\sum_{k} p(k) \log p(k)    (11)

In Eqn. (11), p(k) is the histogram value at bin k. Note that a lower bound LB is defined in Eqn. (10) to ensure that the temporal constraint remains effective in controlling the temporal consistency.
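The search implied by Eqn. (10) can be sketched as follows (our illustration; the candidate grid for λ, the value of LB, and the choice of evaluating E(t) on the candidate Eqn. (9) solution are assumptions):

import numpy as np

def entropy(p, eps=1e-12):
    # Eqn. (11): entropy of a normalized histogram p.
    p = p[p > eps]
    return float(-np.sum(p * np.log(p)))

def ea_lambda(e, entropy_prev, LB=0.5, grid=np.linspace(0.1, 10.0, 100)):
    # Eqn. (10): choose the lambda whose Eqn. (9) solution has entropy closest
    # to the previous frame's entropy, then clamp it to the lower bound LB.
    u = np.full(e.shape, 1.0 / e.size)
    best_lam, best_diff = grid[0], np.inf
    for lam in grid:
        h = (e + lam * u) / (1.0 + lam)        # closed-form solution of Eqn. (9)
        diff = abs(entropy(h) - entropy_prev)
        if diff < best_diff:
            best_lam, best_diff = lam, diff
    return max(best_lam, LB)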

After the desired histogram h is calculated, a tone mapping is performed on the original frame to obtain the final enhanced frame. The tone mapping function can be described as in Eqn. (12) [5]:

T[n] = \left\lfloor (2^{B} - 1) \sum_{j=0}^{n} p[j] + 0.5 \right\rfloor    (12)

where p[j] is the value of the j-th bin of the histogram normalized from h, B is the number of bits used to represent the pixel values, and n ∈ [0, 2^B − 1]. From Eqns. (9)-(12), we can see that the EA strategy embeds the temporal constraints in the balancing parameter λ_EA such that the HE-enhanced frames are shifted to a relatively stable intensity level. Thus, the visual quality of each frame is kept similar and the temporal discontinuity can be greatly reduced. Compared with the TAT strategy, the EA strategy is more straightforward and has lower computational complexity. Therefore, it can be used in applications where computational complexity is a major concern.
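Both strategies finish by turning the desired histogram h into a tone mapping through Eqn. (12). A minimal sketch (ours), assuming 8-bit integer pixel values:

import numpy as np

def tone_map_from_histogram(frame, h, B=8):
    # Eqn. (12): cumulative mapping built from the histogram normalized from h,
    # with the +0.5 acting as rounding.
    p = h.astype(np.float64) / h.sum()
    T = np.floor((2 ** B - 1) * np.cumsum(p) + 0.5).astype(np.int64)
    return T[frame.astype(np.int64)]           # look up each pixel through T[n]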

IV. EXPERIMENTAL RESULTS

In this section, we show experimental results of our proposed TCB algorithm. Note that our algorithm is most effective in the following two application scenarios: (a) The original video is flickering or temporally inconsistent, and the proposed algorithm can effectively reduce these temporal inconsistencies. This scenario is significant because many practical video sequences contain visually discontinuous frames, for example, sudden brightness changes when an object moves into a shadow, lightning affecting the background, or unstable sequences captured by low-battery cameras. (b) The original video sequence is temporally consistent but has low spatial quality. After applying previous enhancement methods, the spatial quality may be improved but the temporal consistency decreases. In this case, our TCB algorithm can also be effective by introducing temporal constraints.

Based on the above application scenarios, various experiments were performed. However, due to limited space, we only show results for the first scenario in this paper. Fig. 3 shows the results of one experiment. In Fig. 3, (a)-(e) are frames of an original video. This video has inconsistent temporal qualities since it was captured in lightning weather. Fig. 3 (f)-(j) are the enhancement results of the HEM method [5], and Fig. 3 (k)-(o) are the enhancement results of our TCB algorithm with the TAT strategy, where the parameters β, λ, γ in Eqns. (7) and (8) are set to 1.5, 2, and 3, respectively.

Fig. 3 (a)-(e) Original video sequence, (f)-(j) Enhanced video sequence by HEM, (k)-(o) Enhanced video sequence by our TCB algorithm with the TAT strategy. ((a)-(e) are the 3rd, 4th, 7th, 11th and 38th frames of the original video.)

Comparing the sequences in Fig. 3, the effect of our TCB algorithm is apparent. In the first group (i.e., (a)-(e)), the illumination flickers noticeably across frames due to the unstable weather condition. Although the HEM method can properly improve the spatial quality of each frame (Fig. 3 (f)-(j)), the temporal inconsistency among frames remains.

Compared with the HEM method, the proposed TCB algorithm can effectively improve both the spatial quality and the temporal consistency of the video. We can see from Fig. 3 (k)-(o) that the temporal inconsistency is properly eliminated and the contrast of each frame is also tuned to become visually appealing. Fig. 4 shows the corresponding histograms of the 3rd and 4th frames of this sequence, where Fig. 4 (a)-(b) correspond to the original sequence, (c)-(d) correspond to the results enhanced by the HEM method, and (e)-(f) correspond to the results of the proposed TCB algorithm. From Fig. 4, we can see that due to the lightning weather condition, there is a clear distribution shift between the histograms of the original sequence (i.e., (a) and (b)). When enhanced by the HEM algorithm, the distribution shift is still obvious in (c) and (d). However, when enhanced by our TCB algorithm, the histogram distributions of the two consecutive frames are tuned to have similar shapes. This also demonstrates the ability of our algorithm to handle temporal consistency.

Fig. 4 Histograms of two continuous frames for different methods. ((a), (c), (e): results for the 3rd frame; (b), (d), (f): results for the 4th frame; (a)-(b): the original video frames; (c)-(d): results by the HEM method; (e)-(f): results by the proposed TCB algorithm.)

Fig. 5 (a)-(d): Original video sequence; (e)-(h): Enhanced video sequence by our TCB algorithm with the EA strategy.



Furthermore, Fig. 5 shows another experiment with our algorithm. In Fig. 5, (a)-(d) are the original frames of a video sequence. In this sequence, since the camera moves quickly along different roads, the light and shade in the background change quickly, which leads to unsatisfactory temporal quality. Fig. 5 (e)-(h) are the results enhanced by our TCB algorithm with the EA strategy. We can see from Fig. 5 that after the TCB algorithm is applied, the enhanced video becomes more appealing in each frame. Besides, the temporal inconsistency due to poor illumination is also eliminated.

V. CONCLUSION

In this paper, we propose a new Temporal-Constraint-Based algorithm for video enhancement. By introducing temporal constraints during enhancement, the temporal qualities of a video can be improved together with the spatial qualities. Two different strategies are proposed to define and integrate the temporal constraints. Experimental results demonstrate the effectiveness of the proposed algorithm.

Acknowledgements

This work is supported in part by the following grants: Chinese national 973 grants (2010CB731401 and 2010CB731406), Chinese national 863 program grant (2009AA01Z331), National Science Foundation of USA grant (1032567), and National Science Foundation of China grants (60632040, 60902073, 60928003, 60973067, and 61001146).

References

[1] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed. Englewood Cliffs, NJ: Prentice-Hall, 2002.
[2] W.-C. Chiou and C.-T. Hsu, "Region-Based Color Transfer from Multi-Reference with Graph-Theoretic Region Correspondence Estimation," ICIP, pp. 501-504, 2009.
[3] S.-D. Chen and A. Ramli, "Minimum mean brightness error bi-histogram equalization in contrast enhancement," IEEE Transactions on Consumer Electronics, vol. 49, no. 4, pp. 1310-1319, Apr. 2003.
[4] Q. Wang and R. Wang, "Fast Image/Video Contrast Enhancement Based on WTHE," IEEE 8th Workshop on Multimedia Signal Processing, pp. 338-343, Oct. 2006.
[5] T. Arici, S. Dikbas, and Y. Altunbasak, "A Histogram Modification Framework and Its Application for Image Contrast Enhancement," IEEE Transactions on Image Processing, vol. 18, no. 9, pp. 1921-1935, Sept. 2009.
[6] J. A. Stark, "Adaptive Image Contrast Enhancement using Generalizations of Histogram Equalization," IEEE Transactions on Image Processing, vol. 9, no. 5, pp. 889-896, May 2000.
[7] L. Jin, S. Satoh, and M. Sakauchi, "A novel adaptive image enhancement algorithm for face detection," Proceedings of the 17th International Conference on Pattern Recognition (ICPR), vol. 4, pp. 843-848, Aug. 2004.
[8] E. P. Bennett and L. McMillan, "Video Enhancement Using Per-Pixel Virtual Exposures," ACM Transactions on Graphics, vol. 24, no. 3, pp. 845-852, July 2005.
[9] S. Mangiat and J. Gibson, "Automatic scene relighting for video conferencing," IEEE International Conference on Image Processing, pp. 2781-2784, Nov. 2009.
[10] Z. Liu, C. Zhang, and Z. Zhang, "Learning-Based Perceptual Image Quality Improvement for Video Conferencing," IEEE International Conference on Multimedia and Expo, pp. 1035-1038, July 2007.
[11] W. Lin, M.-T. Sun, R. Poovendran, and Z. Zhang, "Group event detection with a varying number of group members for video surveillance," IEEE Transactions on Circuits and Systems for Video Technology, no. 8, pp. 1057-1067, 2010.
[12] W. Lin, M.-T. Sun, R. Poovendran, and Z. Zhang, "Human activity recognition for video surveillance," ISCAS, pp. 2737-2740, 2008.

Mar 14, 2006 - See application ?le for complete search history. (Us). (56) ... Play and/0r apply ..... As shoWn, there are numerous variations on the theme that.