OE Letters

Image-based fusion for video enhancement of night-time surveillance

Yunbo Rao,a Weiyao Lin,b and Leiting Chena

aUniversity of Electronic Science and Technology of China, School of Computer Science and Engineering, Chengdu, China, 611731
bShanghai Jiao Tong University, Institute of Image Communications and Information Processing, Department of Electronic Engineering, Shanghai, China, 200240
E-mail: [email protected]

Abstract. In this paper, a novel image-based fusion video enhancement algorithm is proposed for night-time video surveillance applications, combining illumination fusion with moving-object region fusion. The proposed algorithm fuses high-quality day-time and night-time background frames with low-quality night-time videos. To improve the perceived quality of the moving objects, a moving-object region fusion method is proposed. Experimental results show the effectiveness of the proposed algorithm. © 2010 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.3520553]

Subject terms: video enhancement; illumination fusion; image-based fusion.

Paper 100440LR received Oct. 6, 2010; revised manuscript received Oct. 25, 2010; accepted for publication Mar. 11, 2010; published online Dec. 29, 2010.

1 Introduction

Video surveillance is often more important in dark environments, since many activities of interest occur in the dark.1,2 Video enhancement plays a key part in night-time video surveillance, so that the objects or activities of interest can be clearly monitored. The problem of video enhancement for low-quality video has become increasingly acute.3 The goal of enhancement4 is to improve the visual appearance of the video, or to provide a "better" representation for further processing, such as analysis, detection, segmentation, and recognition. However, it is still a challenging problem for night-time video applications. With traditional algorithms, when the background is enhanced, the contrast often remains low, or the noise is greatly amplified. Since day-time videos are also available in many surveillance applications, there have been many attempts to take advantage of this by combining the day-time and night-time scenes to enhance the night-time videos.5–7 However, since the day-time and night-time videos contain different moving objects, and the background conditions may also differ, it is not easy to produce good results with fusion. To further improve the enhancement quality, it is desirable to give larger weight to the moving objects. However, accurate moving-foreground extraction is difficult, especially in low-contrast and noisy videos. Reference 8 used a multi-color background model per pixel to enhance the foreground moving objects. However, that method suffers from slow learning at the beginning, especially in busy scenes, and it cannot distinguish moving shadows from moving objects. In addition, the model does not update with time and therefore often fails in outdoor environments where the scene lighting changes frequently. Reference 9 presented a method that improves this adaptive background mixture model.

In this paper, a new method is proposed to fuse video frames from high-quality day-time and night-time backgrounds with low-quality night-time video frames. With the proposed algorithm, day-time and night-time images are combined to provide a much better enhanced background. In order to enhance the moving objects as well, we also propose a moving-object region fusion method for improving the sharpness of the moving objects.

0091-3286/2010/$25.00 © 2010 SPIE

Optical Engineering

2 The Proposed Algorithms

The detailed procedure of the proposed method is shown in Fig. 1. Note that in Fig. 1, the "day-time image" and the "night-time image" are the high-quality images used to provide a better enhanced background, while the night-time video is the actual low-quality input that we need to enhance. In the following, we describe the algorithm.

2.1 Illumination Segmentation

We first decouple an input color image f(x, y) into an intensity layer I(x, y) and a color layer C(x, y); our algorithm mainly processes the intensity layer I(x, y). Then, following Retinex theory,10 the intensity layer of the day-time and night-time background images is separated into an illumination layer L(x, y) and a reflectance layer R(x, y). It is assumed that the available luminance data in the image is the product of illumination and reflectance, so the input image I(x, y) is represented as

I(x, y) = R(x, y) × L(x, y)    (1)
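The decomposition of Eq. (1) can be sketched in a few lines (a minimal pure-Python sketch on toy single-channel data; the illumination estimate is assumed given here, and the function names are illustrative, not the authors' code):

```python
# Minimal sketch of the Eq. (1) model, I(x, y) = R(x, y) * L(x, y).
# Pure-Python, single-channel toy example; in practice the intensity layer
# would come from a real color-space split (e.g., the V channel of HSV).

def reflectance(intensity, illumination, eps=1e-6):
    """Recover R(x, y) = I(x, y) / L(x, y) given an illumination estimate."""
    return [[i / max(l, eps) for i, l in zip(irow, lrow)]
            for irow, lrow in zip(intensity, illumination)]

def recompose(reflectance_layer, illumination_layer):
    """Rebuild the intensity layer as the product R(x, y) * L(x, y)."""
    return [[r * l for r, l in zip(rrow, lrow)]
            for rrow, lrow in zip(reflectance_layer, illumination_layer)]

I_layer = [[0.2, 0.4], [0.6, 0.8]]   # toy intensity image
L_layer = [[0.5, 0.5], [0.5, 0.5]]   # toy (smooth) illumination estimate

R_layer = reflectance(I_layer, L_layer)
I_back = recompose(R_layer, L_layer)  # equals I_layer up to rounding
```

The `eps` guard only avoids division by zero in fully dark regions; the estimation of L(x, y) itself is discussed next.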

The illumination L(x, y) is assumed to be contained in the low-frequency components of the image, while the reflectance R(x, y) mainly represents the high-frequency components. The Gaussian low-pass filtered result of the intensity image is used as the estimate of the illumination. The filtering process is a 2D discrete convolution with a Gaussian kernel, which can be expressed mathematically as11

L(x, y) = Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} I(m, n) G(m + x, n + y)    (2)

where G is the 2D Gaussian function with size M × N. The Gaussian kernel G is defined as

G(x, y) = q · exp[−(x² + y²) / c²]    (3)

where q is a normalization factor satisfying

Σ_x Σ_y q · exp[−(x² + y²) / c²] = 1    (4)

120501-1

December 2010/Vol. 49(12)
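The illumination estimation of Eqs. (2)–(4) can be sketched as follows (a minimal pure-Python sketch; the kernel size, the clamped border handling, and the test image are illustrative assumptions, not taken from the paper):

```python
import math

def gaussian_kernel(size, c=3.0):
    """Gaussian kernel of Eq. (3), normalized so its weights sum to 1, Eq. (4)."""
    half = size // 2
    raw = [[math.exp(-(x * x + y * y) / (c * c))
            for x in range(-half, half + 1)]
           for y in range(-half, half + 1)]
    q = 1.0 / sum(sum(row) for row in raw)   # normalization factor q
    return [[q * v for v in row] for row in raw]

def estimate_illumination(intensity, kernel):
    """Eq. (2): 2D discrete convolution of the intensity with the Gaussian
    kernel. Border pixels are handled by clamping coordinates (our choice)."""
    h, w = len(intensity), len(intensity[0])
    k = len(kernel) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-k, k + 1):
                for dx in range(-k, k + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += intensity[yy][xx] * kernel[dy + k][dx + k]
            out[y][x] = acc
    return out

kern = gaussian_kernel(5, c=3.0)
flat = [[0.5] * 8 for _ in range(8)]          # constant toy image
L_est = estimate_illumination(flat, kern)     # stays ~0.5 everywhere
```

Because the kernel weights sum to 1, a constant image passes through the filter unchanged, which is a quick sanity check on the normalization of Eq. (4).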

The scale constant c in Eq. (3) is commonly chosen in the range 2 to 5; here, c is set to 3.

Fig. 1 A block diagram of the proposed algorithm.

2.2 Enhanced Background

We adopt a weighted-average image-fusion algorithm to enhance the night-time background using the illumination images L(x, y). The proposed fusion equation is as follows:

B_L(x, y) = α · N_L(x, y) + (1 − α) · D_L(x, y)    (5)

where B_L(x, y) is the final background illumination image, N_L(x, y) is the night-time illumination image, and D_L(x, y) is the day-time illumination image. The weight α is in the range [0, 1]. In our algorithm, α is determined empirically from the global means of the illumination images N_L(x, y) and D_L(x, y), based on image-enhancement experiments.

2.3 Enhanced (Night-time) Video

Due to the low contrast, we cannot clearly extract moving objects from the dark background. We therefore propose an enhanced-video step to facilitate the extraction of moving objects. A tone-mapping approach is used to enhance the video frames and to separate an image into details and large-scale features. More specifically, a nonlinear tone-mapping function is used to attenuate image details and to adjust the contrast of large-scale features,12 as in Eq. (6):

m(x, ψ) = log[(x / x_Max)(ψ − 1) + 1] / log(ψ)    (6)

where x_Max sets the white level of the input illumination and ψ controls the attenuation profile. This mapping function exhibits a characteristic similar to traditional gamma correction.

2.4 Motion Segmentation

After enhancing the night-time video, motion detection is performed to extract the foreground moving objects,9 as in Eq. (7). Each pixel in the scene is modeled by a mixture of K Gaussian distributions. The probability that a certain pixel has value X_N at time N can be written as

p(X_N) = Σ_{j=1}^{K} w_j η(X_N; θ_j)    (7)

where w_k is the weight parameter of the kth Gaussian component and η(X_N; θ_k) is the normal distribution of the kth component, represented by

η(X; θ_k) = (2π)^{−D/2} |Σ_k|^{−1/2} exp[−(1/2)(X − μ_k)^T Σ_k^{−1}(X − μ_k)]    (8)

where μ_k is the mean and Σ_k = σ_k² I is the covariance of the kth component. The K distributions are ordered by the fitness value w_k/σ_k, and the first B distributions are used as the model of the scene background, where B is estimated as

B = arg min_b ( Σ_{j=1}^{b} w_j > T )    (9)

The threshold T is the minimum fraction of the data accounted for by the background model. Under this method, a pixel is detected as a foreground pixel if it is more than 2.5 standard deviations away from all of the B background distributions.

2.5 Final Fusion and Enhancement

After obtaining the weighting-based fused background image and the motion-detected video frames, we perform the final video enhancement by combining illumination fusion with moving-object region fusion. The proposed combination of illumination and region fusion is expressed as

F_L(x, y) = β M(x, y) + γ N_L(x, y) + (1 − γ) B_L(x, y)    (10)

where F_L(x, y) is the final illumination image, M(x, y) is the motion-detection video frame, N_L(x, y) is the night-time illumination image, and B_L(x, y) is the enhanced background illumination of Eq. (5). β and γ are the weights for these input images, determined as follows:

β = 1 and γ = 1    if M(x, y) + N_L(x, y) + B_L(x, y) ≥ 1
β = 0 and γ = K    if M(x, y) + N_L(x, y) + B_L(x, y) < 1    (11)

When M(x, y) + N_L(x, y) + B_L(x, y) < 1, it is assumed that there is no moving object at the current pixel. Then the coefficient β = 0, and γ can be decided by the illumination

Fig. 2 Enhancing a traffic night video. (a) A low quality night-time video frame. (b) A frame from the final result of the proposed algorithm.
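The tone-mapping function of Eq. (6) in Sec. 2.3 can be sketched as follows (a minimal sketch; the values of ψ and the white level are illustrative, not the paper's tuned settings):

```python
import math

def tone_map(x, x_max, psi):
    """Eq. (6): m(x, psi) = log((x / x_max) * (psi - 1) + 1) / log(psi).
    x_max is the white level of the input illumination; psi > 1 controls
    the attenuation profile (larger psi boosts dark values more)."""
    return math.log((x / x_max) * (psi - 1.0) + 1.0) / math.log(psi)

# The mapping fixes the endpoints, m(0) = 0 and m(x_max) = 1, and lifts
# mid-tones above the identity line, similar to gamma correction.
dark = tone_map(64, 255, psi=50)   # a dark pixel gets brightened
```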

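The background-model selection of Eq. (9) in Sec. 2.4 can be sketched as follows (a minimal sketch of the component-ordering step only; the weights, variances, and threshold below are toy values, and the full per-pixel mixture update of Ref. 9 is not reproduced):

```python
def background_components(weights, sigmas, T=0.7):
    """Eq. (9): order components by fitness w_k / sigma_k and keep the
    first B components whose cumulative weight first exceeds threshold T."""
    order = sorted(range(len(weights)),
                   key=lambda k: weights[k] / sigmas[k], reverse=True)
    total, chosen = 0.0, []
    for k in order:
        chosen.append(k)
        total += weights[k]
        if total > T:
            break
    return chosen

# Toy mixture: the high-weight, low-variance components model the background.
bg = background_components([0.5, 0.3, 0.2], [1.0, 1.0, 2.0], T=0.7)
```

A pixel that matches none of the returned components (within 2.5 standard deviations) would then be flagged as foreground, as described in the text.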

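The weighted fusions of Secs. 2.2 and 2.5 can be sketched per pixel as follows (a minimal sketch assuming illumination values normalized to [0, 1] rather than the paper's [0, 255] range; α and the toy pixel values are illustrative):

```python
def fuse_background(n_l, d_l, alpha):
    """Eq. (5): B_L = alpha * N_L + (1 - alpha) * D_L, per pixel."""
    return alpha * n_l + (1.0 - alpha) * d_l

def fuse_final(m, n_l, b_l, k=0.4):
    """Eqs. (10)-(11): F_L = beta*M + gamma*N_L + (1 - gamma)*B_L, with the
    weights switched by the motion test M + N_L + B_L >= 1.  k = 0.4 follows
    the text (gamma = K = 0.4 at background pixels)."""
    if m + n_l + b_l >= 1.0:      # moving object assumed at this pixel
        beta, gamma = 1.0, 1.0
    else:                         # background pixel
        beta, gamma = 0.0, k
    f_l = beta * m + gamma * n_l + (1.0 - gamma) * b_l
    return min(f_l, 1.0)          # clip, as the paper clips at 255

# Toy pixels: motion-mask value, night illumination, day illumination.
b = fuse_background(0.2, 0.8, alpha=0.5)   # enhanced background pixel
fg = fuse_final(1.0, 0.2, b)               # pixel flagged as moving
bg = fuse_final(0.0, 0.2, b)               # static background pixel
```

Note that at motion pixels (γ = 1) the background term drops out of Eq. (10) entirely, so the output there is driven by the motion frame and the night-time illumination.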

images N_L(x, y) and B_L(x, y). In our experiments, we set γ = K = 0.4. On the other hand, if M(x, y) + N_L(x, y) + B_L(x, y) ≥ 1, it is assumed that there are moving objects at the current pixel, and in this case both β and γ are set to 1. Furthermore, to keep F_L(x, y) within the illumination range [0, 255], a pixel is set to 255 if its value exceeds 255.

3 Final Experimental Results and Conclusions

In this paper, we propose a novel algorithm to enhance night-time video. We focus on addressing two key issues for video enhancement: (1) illumination-based fusion for enhancing the background image, and (2) moving-object region fusion for improving the sharpness of the moving objects. Figures 2 and 3 show the experimental results. The results demonstrate that the proposed algorithm uses the color resources (i.e., color levels) more efficiently and that it is robust and effective.

Fig. 3 (a) Original low-quality night-time video frame and its histogram. (b) The enhanced result of the proposed algorithm and its histogram.

Acknowledgments

This work is partly supported by the National High-Tech Program 863 of China (Grants No. 2007AA010407 and 2009GZ0017), the National Research Program of China (Grant No. 9140A06060208DZ0207), the National Science Foundation of China (61001146), and the China Scholarships Council.

References


1. W. Lin, M.-T. Sun, R. Poovendran, and Z. Zhang, "Human activity recognition for video surveillance," Proc. ISCAS, pp. 2737–2740 (2008).
2. W. Lin, M.-T. Sun, R. Poovendran, and Z. Zhang, "Group event detection with a varying number of group members for video surveillance," IEEE Trans. Circuits Syst. Video Technol. 20(8), 1057–1067 (2010).
3. X. Dong, Y. Pang, and J. Wen, "Fast efficient algorithm for enhancement of low lighting video," SIGGRAPH 2010, Los Angeles, California, July 25–29, 2010.
4. S. S. Agaian, B. Silver, and K. A. Panetta, "Transform coefficient histogram-based image enhancement algorithms using contrast entropy," IEEE Trans. Image Process. 16(3), 741–758 (2007).
5. M. H. Asmare, V. S. Asirvadam, L. Iznita, A. Fadzil, and M. Hani, "Image enhancement by fusion in contourlet transform," Int. J. Electr. Eng. Inf. 2(1) (2010).
6. A. Ilie, R. Raskar, and J. Yu, "Gradient domain context enhancement for fixed cameras," Int. J. Pattern Recognit. Artif. Intell. 19(4), 533–549 (2005).
7. A. Yamasaki, H. Takauji, S. Kaneko, T. Kanade, and H. Ohki, "Denighting: enhancement of nighttime images for a surveillance camera," Proc. ICPR, San Diego, CA, USA (2008).
8. C. Stauffer and W. E. L. Grimson, "Learning patterns of activity using real-time tracking," IEEE Trans. Pattern Anal. Mach. Intell. 22(8), 747–757 (2000).
9. P. KaewTraKulPong and R. Bowden, "An improved adaptive background mixture model for real-time tracking with shadow detection," Proc. 2nd European Workshop on Advanced Video Based Surveillance Systems (AVBS01) (2001).
10. E. H. Land and J. J. McCann, "Lightness and retinex theory," J. Opt. Soc. Am. 61, 1–11 (1971).
11. C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," Proc. ICCV, pp. 836–846 (1998).
12. F. Durand and J. Dorsey, "Fast bilateral filtering for the display of high-dynamic-range images," ACM Trans. Graph. 21(3), 257–266 (2002).


