IJRIT International Journal of Research in Information Technology, Volume 2, Issue 5, May 2014, Pg: 443-449

International Journal of Research in Information Technology (IJRIT)

www.ijrit.com

ISSN 2001-5569

Compression Artifacts Removal on Contrast Enhanced Video

Jean P Johny, Reshma S, Roopa Gokul and Sira Salim

Abstract— In this paper, we put forward a novel framework for the removal of artifacts from contrast-enhanced video. The video, enhanced using the gamma correction method, is further filtered to remove compression artifacts. A compression artifacts removal algorithm that adapts to the artifact visibility level of the input video signal is used. The artifact visibility is determined per frame by the ratio of the accumulated gradient on the block edges to that of the remaining area. The filtering of each video frame is optimized using a least mean square mechanism trained on pairs of target images and decompressed images of quality similar to the input frame.

Index Terms— Gamma correction, Adaptive filters, Compression artifacts removal, Least Mean Square Optimization

I. INTRODUCTION

Video enhancement comprises a collection of techniques that seek to improve the visual appearance of a video or to convert it to a form better suited for analysis by a human or a machine. Contrast enhancement is essential to improve the quality of videos captured under extreme lighting conditions, such as excessively bright or dark environments, which produce low contrast and a low dynamic range in shadowed areas. The principal objective of video enhancement is to modify the attributes of the video to make it more suitable for a given task and a specific observer. During this process, one or more attributes of the video are modified. One of the most popular global contrast enhancement techniques is histogram equalization (HE) [1]. It flattens and stretches the dynamic range of a frame's histogram and results in overall contrast improvement. Efficient contrast enhancement using adaptive gamma correction with weighting distribution is an effective way to modify the histograms and enhance contrast in videos. Gamma correction techniques make up a family of general histogram modification (HM) techniques obtained simply by using a varying adaptive parameter γ. The simple form of the transform-based gamma correction (TGC) is derived by

T(l) = l_max (l / l_max)^γ    (1)

where l_max is the maximum intensity of the input and the intensity l of each pixel in the input frame is transformed to T(l). However, when the contrast is directly modified by gamma correction with a fixed parameter, different frames exhibit the same change in intensity. Fortunately, the probability density of each intensity level in a frame can be calculated to solve this problem. The probability density function (pdf) can be approximated by

pdf(l) = n_l / (M × N)    (2)

where n_l is the number of pixels that have intensity l and M × N is the total number of pixels in a frame. The cumulative distribution function (cdf) is based on the pdf and is formulated as:

cdf(l) = Σ_{k=0}^{l} pdf(k)    (3)

After the cdf of the frame is obtained from Equation (3), traditional histogram equalization (THE) directly uses the cdf as a transformation curve, expressed by:

T(l) = l_max · cdf(l)    (4)
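As an aside for readers who prefer code, Equations (1)–(4) can be sketched in a few lines of Python/NumPy (our own illustration; the function and variable names are not from the paper):

    import numpy as np

    def gamma_correct(frame, gamma, l_max=255):
        # Transform-based gamma correction, Eq. (1): T(l) = l_max * (l / l_max)^gamma
        return np.round(l_max * (frame / l_max) ** gamma).astype(np.uint8)

    def histogram_equalize(frame, l_max=255):
        # pdf, Eq. (2): n_l / (M*N)
        pdf = np.bincount(frame.ravel(), minlength=l_max + 1) / frame.size
        # cdf, Eq. (3): running sum of the pdf
        cdf = np.cumsum(pdf)
        # THE transformation curve, Eq. (4), applied as a look-up table
        curve = np.round(l_max * cdf).astype(np.uint8)
        return curve[frame]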

Some problems may arise when these enhanced videos are compressed. Coding/video compression standards have evolved from JPEG to MPEG-2/4 to the latest H.264/AVC. These block-transform-based codecs divide the image or video frame into non-overlapping blocks (usually of size 8 × 8 pixels) and apply the discrete cosine transform (DCT) to them. At high or medium compression rates, the coarse quantization results in various noticeable coding artifacts, such as blocking, ringing and mosquito artifacts. Among the coding artifacts, blockiness, which appears as discontinuities along block boundaries, is the most annoying. Elimination of coding artifacts in the enhanced video therefore remains a critical and indispensable step.


Most coding artifact reduction techniques based on postprocessing, e.g. [2, 3], are designed through heuristic tuning and testing, which takes a lot of time and is not always effective. Recently, classification-based trained filters (TF) have been proposed for optimally removing digital coding artifacts [3, 4]. The local image characteristics can be classified using local structure or activity information. For each class, the optimized filter coefficients are obtained by an off-line training process, which trains on pairs of target images and their degraded versions, the latter acting as the source. The methods introduced in [3, 4] produce promising results when the quality of the test sequence is similar to that of the source sequences used during training, because a fixed level of compression is adopted for degrading the target images. We propose to train the algorithm on a range of compression levels and to select the most suitable set of filter coefficients for the test sequence. To do that, a quality (or blockiness) metric is required to indicate the quality level of the test sequence.

The rest of this paper is organized as follows. Section II describes the previous works. Section III describes the proposed methodology in detail. Finally, the conclusion is given in Section IV.

II. LITERATURE SURVEY

Several methods for contrast enhancement have been proposed in recent years. Arici et al. proposed a histogram modification framework and its application for image contrast enhancement in 2009 [1]. The presented framework employs carefully designed penalty terms to adjust the various aspects of contrast enhancement. Hence, the contrast of the image or video can be improved without introducing visual artifacts that decrease the visual quality of an image and cause it to have an unnatural look. Even though it does not produce artifacts as histogram equalization and weighted threshold HE do, its time complexity is worse. Kim et al. proposed a new histogram equalization method, called RSWHE (Recursively Separated and Weighted Histogram Equalization), for brightness preservation and image contrast enhancement [5]. The essential idea of RSWHE is to segment an input histogram into two or more sub-histograms recursively, to modify the sub-histograms by means of a weighting process based on a normalized power law function, and to perform histogram equalization on the weighted sub-histograms independently. This approach produces better image quality and preserves more of the information the image holds, but it shows unnatural high intensities in some images. Later, Lee et al. proposed a power-constrained contrast enhancement algorithm for emissive displays based on histogram equalization (HE) [6]. They first propose a log-based histogram modification scheme to reduce the overstretching artifacts of the conventional HE technique. Then, they develop a power-consumption model for emissive displays and formulate an objective function that consists of a histogram-equalizing term and a power term. By minimizing the objective function based on convex optimization theory, the algorithm achieves contrast enhancement and power saving simultaneously. However, this approach decreases the overall brightness of the input image, reduces the contrast for infrequent input-pixel values, and sacrifices details in some images.
On the artifact removal side, as discussed in Section I, postprocessing techniques [2, 3] and classification-based trained filters [3, 4] perform well only when the compression level of the test sequence is close to the fixed level used to degrade the training material, which motivates training over a range of compression levels and selecting coefficients with a blockiness metric. In this work we put forward a novel framework for the removal of artifacts from the enhanced video. First, we enhance the video using gamma correction with weighting distribution [7]. Then we apply a quality adaptive algorithm based on trained filters [8] to remove the artifacts.

III. PROPOSED METHODOLOGY

To enhance the video sequence, our proposed method uses adaptive gamma correction. First, the video sequence is converted into frames. These frames are processed using the AGCWD method [7]. To remove the artifacts, a quality adaptive algorithm based on trained filters is applied.

1. Proposed Algorithm

1. The original input video is converted into frames.
2. The first incoming frame is stored directly in the frame storage.
3. This frame is then used to generate a mapping curve for the AGCWD method.
4. For subsequent incoming video frames, an entropy model is used to measure the difference in information content between two successive frames.
5. When the absolute difference between the current entropy H and the previous H exceeds a threshold Th, the frame storage is updated with the incoming frame and the transformation curve is also modified (a code sketch of this update loop is given after the list).
6. The modified frame is used to generate the enhanced video.



7. To remove the artifacts, a quality adaptive algorithm based on trained filters is applied.
8. During training, multiple LUTs are obtained for different degradation levels.
9. The quality of an input video frame is designated by a blockiness visibility metric.
10. The most suitable LUT for that quality level is then chosen for filtering the input frame.
11. The final video is generated from the filtered frames.
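As an illustration of steps 1–6, the following Python sketch (our own, not the authors' code; the entropy threshold value and the build_curve helper are assumptions, the latter standing for an AGCWD mapping-curve builder such as the one sketched in Section III.2) shows the per-frame entropy check that decides when the transformation curve is rebuilt:

    import numpy as np

    def frame_entropy(gray):
        # Shannon entropy of the intensity histogram: the information-content measure (step 4)
        p = np.bincount(gray.ravel(), minlength=256) / gray.size
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def enhance_video(frames, build_curve, threshold=0.05):
        # frames: list of 8-bit single-channel frames; build_curve(frame) returns a
        # 256-entry mapping curve (e.g. the AGCWD curve of Section III.2)
        curve = build_curve(frames[0])        # steps 2-3: store the first frame, build the curve
        prev_h = frame_entropy(frames[0])
        enhanced = []
        for f in frames:
            h = frame_entropy(f)
            if abs(h - prev_h) > threshold:   # step 5: update storage and transformation curve
                curve = build_curve(f)
                prev_h = h
            enhanced.append(curve[f])         # step 6: apply the current mapping curve
        return enhanced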

2. AGCWD method

To enhance the video sequence, our proposed method uses adaptive gamma correction. In our method a hybrid HM method is used, which combines the traditional gamma correction (TGC) and traditional histogram equalization (THE) methods. The adaptive gamma correction (AGC) is formulated as follows:

T(l) = l_max (l / l_max)^γ = l_max (l / l_max)^(1 − cdf(l))    (5)



The AGC method can progressively increase the low intensities while avoiding a significant decrement of the high intensities. Furthermore, the weighting distribution (WD) function is applied to slightly modify the statistical histogram and lessen the generation of adverse effects. The WD function is formulated as:

pdf_w(l) = pdf_max ( (pdf(l) − pdf_min) / (pdf_max − pdf_min) )^α    (6)

where α is the adjusted parameter, pdf_max is the maximum pdf of the statistical histogram, and pdf_min is the minimum pdf. The modified cdf is:

cdf_w(l) = Σ_{k=0}^{l} pdf_w(k) / Σ pdf_w    (7)

where the sum of pdf_w is calculated as follows:

Σ pdf_w = Σ_{l=0}^{l_max} pdf_w(l)    (8)

Finally, the gamma parameter based on the cdf_w of Equation (7) is modified as follows:

γ = 1 − cdf_w(l)    (9)

The method is applied to the video frames in the HSV color model. In the HSV color model, the hue (H) and the saturation (S) represent the color content, while the value (V) represents the luminance intensity. A color video frame can therefore be enhanced by preserving H and S and enhancing only V; the AGCWD method is applied to the V component for color contrast enhancement. Fig. 1 shows the overall AGCWD method.
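To make Equations (5)–(9) concrete, the following is a minimal sketch of the AGCWD mapping applied to the V channel (our own illustration, not the authors' implementation; the value α = 0.5 and the use of OpenCV for the color-space conversion are assumptions):

    import numpy as np
    import cv2  # assumed available for the BGR <-> HSV conversion

    def agcwd_curve(v_channel, alpha=0.5, l_max=255):
        # pdf, Eq. (2), and weighting distribution, Eq. (6)
        pdf = np.bincount(v_channel.ravel(), minlength=l_max + 1) / v_channel.size
        pdf_w = pdf.max() * ((pdf - pdf.min()) / (pdf.max() - pdf.min() + 1e-12)) ** alpha
        # weighted cdf, Eqs. (7)-(8)
        cdf_w = np.cumsum(pdf_w) / pdf_w.sum()
        # adaptive gamma per intensity, Eqs. (5) and (9): T(l) = l_max * (l / l_max)^(1 - cdf_w(l))
        l = np.arange(l_max + 1)
        return np.round(l_max * (l / l_max) ** (1.0 - cdf_w)).astype(np.uint8)

    def enhance_frame(bgr_frame):
        # Enhance only the V component of the HSV representation, as described above.
        hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
        h, s, v = cv2.split(hsv)
        v_enh = agcwd_curve(v)[v]
        return cv2.cvtColor(cv2.merge([h, s, v_enh]), cv2.COLOR_HSV2BGR)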

Fig. 1: Flowchart of the AGCWD method.

3. Quality Adaptive Trained Filters

The algorithm is composed of two parts: the offline training process and the run-time filtering process. Fig. 2 shows the proposed training process. Uncompressed images are used as target images and are degraded at different degradation levels; for coding artifacts reduction, the degradation is compression. The compressed images are referred to as degraded images. For each degradation level, each pixel in the degraded images is then classified based on that pixel's neighborhood.


The classification method used here is Adaptive Dynamic Range Coding (ADRC), which represents the structure information of a local region, coupled with a complexity measure.

Fig. 2: The training process of the proposed method.

The ADRC code of each pixel x_i in an observation aperture is defined as:

ADRC(x_i) = 0, if V(x_i) < V_avg;  1, otherwise    (10)

where V(x_i) is the value of pixel x_i and V_avg is the average of all the pixel values in the aperture. The ADRC code of an image kernel is the concatenation of the ADRC codes of all the pixels in that kernel. All the pixels and their neighborhoods belonging to a specific class, together with their corresponding pixels in the target images, are accumulated, and the optimal coefficients per class are obtained by statistically minimizing the Mean Square Error (MSE). The Least Mean Square (LMS) optimization for a certain degradation level is discussed below. Let x_D,c and x_T,c be the apertures of the degraded images and the target images for a particular class c, respectively. Then the filtered pixel x_F,c can be obtained from the desired optimal coefficients as follows:

x_F,c(j) = Σ_{k=1}^{n} w_c(k) · x_D,c(j, k)    (11)

where w_c(k), k ∈ {1, ..., n}, are the desired coefficients and n is the number of pixels in the aperture. The summed square error between the filtered pixels and the target pixels is:

e_c² = Σ_{j=1}^{m_c} ( x_T,c(j) − x_F,c(j) )²    (12)

where m_c represents the number of pixels belonging to class c. To minimize e_c², the partial derivatives with respect to the coefficients are set to zero:

∂e_c² / ∂w_c(k) = 0,  k = 1, ..., n    (13)

By solving the above system of equations using Gaussian elimination, we obtain the optimal coefficients as follows:

w_c = R_c⁻¹ p_c    (14)

where R_c = Σ_j x_D,c(j) x_D,c(j)^T is the autocorrelation matrix accumulated from the degraded apertures of class c, and p_c = Σ_j x_T,c(j) x_D,c(j) is the corresponding cross-correlation vector with the target pixels.


The LMS optimized coefficients for each class are then stored in a look-up table (LUT) for future use. For different degradation levels, several LUTs are obtained after training. Fig. 3 shows the filtering process of the algorithm.

Fig. 3: The filtering process of the proposed method.

The quality of each input image is first evaluated using a quality metric. Then, the most suitable LUT is selected for that image according to its quality score. The optimized coefficients are retrieved from that LUT to filter the image based on pixel classification.
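The following self-contained Python sketch illustrates Equations (10)–(14) and the run-time filtering step (our own illustration under assumed details: a 3×3 aperture, grayscale inputs, and a small regularization term added before solving the normal equations):

    import numpy as np

    def adrc_class(aperture):
        # Eq. (10): threshold each aperture pixel against the aperture mean,
        # then concatenate the resulting bits into a single class index.
        bits = (aperture.ravel() >= aperture.mean()).astype(np.uint8)
        return int("".join(map(str, bits)), 2)

    def train_lut(degraded, target, k=1):
        # One LUT for one degradation level: class -> LMS-optimal coefficient vector.
        n = (2 * k + 1) ** 2
        acc = {}
        h, w = degraded.shape
        for y in range(k, h - k):
            for x in range(k, w - k):
                ap = degraded[y - k:y + k + 1, x - k:x + k + 1].astype(np.float64)
                v = ap.ravel()
                R, p = acc.setdefault(adrc_class(ap), [np.zeros((n, n)), np.zeros(n)])
                R += np.outer(v, v)            # accumulate R_c, Eq. (14)
                p += v * float(target[y, x])   # accumulate p_c, Eq. (14)
        # Solve the normal equations per class (Gaussian elimination via np.linalg.solve)
        return {c: np.linalg.solve(R + 1e-6 * np.eye(n), p) for c, (R, p) in acc.items()}

    def filter_frame(frame, lut, k=1):
        # Run-time filtering: classify each pixel, fetch its coefficients, apply Eq. (11).
        out = frame.astype(np.float64).copy()
        h, w = frame.shape
        for y in range(k, h - k):
            for x in range(k, w - k):
                ap = frame[y - k:y + k + 1, x - k:x + k + 1].astype(np.float64)
                coeffs = lut.get(adrc_class(ap))
                if coeffs is not None:
                    out[y, x] = coeffs @ ap.ravel()
        return np.clip(np.round(out), 0, 255).astype(np.uint8)

In the full scheme, one such LUT would be trained per degradation level, and the blockiness metric of the next subsection would select which LUT to apply to a given input frame.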

4. Artifact Visibility Metric

For an input video frame, a quality metric is required to indicate which LUT is the most appropriate. For compression artifacts removal, an artifact strength measure can be used as the quality metric. Since blocking artifacts are the most noticeable, a blockiness metric, introduced in [2], is adopted to measure the artifact level of a video frame. The visibility of a block edge is determined by the contrast between the local gradient and the average gradient of the adjacent pixels. An efficient algorithm for detecting the grid position and estimating block edge visibility is applied, since block discontinuities can be spotted as edges that stand out from the spatial activity in their vicinity. The detection of vertical and horizontal block edges is similar. To express the similarity between the local gradient and its spatial neighbors, we introduce the normalized horizontal gradient G_h,norm as the ratio of the absolute gradient to the average gradient calculated over N adjacent pixels to the left and to the right:

G_h,norm(i, j) = |g_h(i, j)| / ( (1 / 2N) Σ_{k=1}^{N} ( |g_h(i − k, j)| + |g_h(i + k, j)| ) )    (15)

where g_h denotes the horizontal luminance gradient and i and j denote the pixel and line position, respectively. The normalized gradients are accumulated over the frame into the horizontal accumulator S_h; the presence of blocking artifacts results in pronounced maxima in S_h:

S_h(i) = Σ_j G_h,norm(i, j)    (16)

The visual strength of the blocking artifacts can be determined by averaging S_h over the block edge and intermediate positions. The Blocking Strength (BS) for the whole frame is then defined as:

BS_h = S̄_h(BE) / S̄_h(INT)    (17)


where S̄_h(BE) and S̄_h(INT) denote the average value of S_h at the block edge and intermediate positions, respectively. The BS parameter is defined for both the horizontal and vertical directions. Figure 4(a) shows the image "branch" encoded at a bit-rate of 2 Mb/s, and Fig. 4(b) shows the horizontal accumulator S_h.
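A rough Python sketch of this horizontal blockiness measure follows (our own illustration; the neighborhood size N = 3, the 8-pixel grid, and the assumption that the block grid is aligned with the first column are ours):

    import numpy as np

    def blocking_strength_h(gray, block=8, n_neigh=3):
        # |g_h(i, j)|: horizontal luminance gradient magnitude
        g = np.abs(np.diff(gray.astype(np.float64), axis=1))
        # Average gradient over n_neigh pixels to the left and right (denominator of Eq. (15))
        kernel = np.ones(2 * n_neigh + 1)
        kernel[n_neigh] = 0.0
        kernel /= 2 * n_neigh
        local_avg = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, g)
        g_norm = g / (local_avg + 1e-6)                  # Eq. (15)
        s_h = g_norm.sum(axis=0)                         # Eq. (16): accumulate over all lines
        # Eq. (17): mean accumulator at block-edge columns over the remaining columns
        cols = np.arange(s_h.size)
        edge = (cols % block) == (block - 1)             # assumes the grid starts at column 0
        return float(s_h[edge].mean() / (s_h[~edge].mean() + 1e-6))

Transposing the frame and repeating the computation gives the vertical BS; the frame-level score would then select among the trained LUTs.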

Fig. 4: (a) the image "branch" encoded at a bit-rate of 2 Mb/s; (b) the horizontal accumulator S_h.

IV. CONCLUSION

In this paper, we put forward a novel framework for the removal of artifacts from contrast-enhanced video. The video, enhanced using the gamma correction method, is further filtered to remove compression artifacts. A compression artifacts removal algorithm that adapts to the artifact visibility level of the input video signal is used. The artifact visibility is determined per frame by the ratio of the accumulated gradient on the block edges to that of the remaining area. The filtering of each video frame is optimized using a least mean square mechanism trained on pairs of target images and decompressed images of quality similar to the input frame.

REFERENCES

1. T. Arici, S. Dikbas, and Y. Altunbasak, "A histogram modification framework and its application for image contrast enhancement," IEEE Trans. Image Process., vol. 18, no. 9, pp. 1921–1935, Sep. 2009.
2. I. Kirenko, R. Muijs, and L. Shao, "Coding artifact reduction using non-reference block grid visibility measure," Proc. IEEE International Conference on Multimedia and Expo, Toronto, Canada, July 2006.
3. L. Shao, "Unified compression artifacts removal based on adaptive learning on activity measure," Digital Signal Processing, doi:10.1016/j.dsp.2006.10.007.
4. M. Zhao, R. E. J. Kneepkens, P. M. Hofman, and G. de Haan, "Content adaptive image de-blocking," Proc. IEEE International Symposium on Consumer Electronics, Sep. 2004.
5. M. Kim and M. G. Chung, "Recursively separated and weighted histogram equalization for brightness preservation and contrast enhancement," IEEE Trans. Consum. Electron., vol. 54, no. 3, pp. 1389–1397, Aug. 2008.
6. Chulwoo Lee, Chul Lee, Young-Yoon Lee, and C. Kim, "Power-constrained contrast enhancement for emissive displays based on histogram equalization," IEEE Trans. Image Process., vol. 21, pp. 80–93, Jan. 2012.
7. S. Huang, F. Cheng, and Y.-S. Chiu, "Efficient contrast enhancement using adaptive gamma correction with weighting distribution," IEEE Trans. Image Process., vol. 22, no. 3, pp. 1032–1041, Mar. 2013.
8. L. Shao, J. Wang, I. Kirenko, and G. de Haan, "Quality adaptive trained filters for compression artifacts removal," pp. 897–900.


Jean P Johny, BTech in Computer Science and Engineering from Federal Institute of Science and Technology, Mookkannoor. Currently doing MTech at Federal Institute of Science and Technology, Mookkannoor. Her research interests are Image Processing and Visual cryptography.

Reshma S, BTech in Computer Science and Engineering from Federal Institute of Science and Technology, Mookkannoor. Currently doing MTech at Federal Institute of Science and Technology, Mookkannoor. Her research interests are Image Processing, Data structures and algorithms.

Roopa Gokul, BTech in Computer Science and Engineering from Viswajyothi College of Engineering and Technology, Vazhakulam. Currently doing MTech at Federal Institute of Science and Technology, Mookkannoor. Her research interests are Image Processing and Database Management Systems.

Sira Salim, BTech in Computer Science and Engineering from METS School of Engineering, Mala, Thrissur. Currently doing MTech at Federal Institute of Science and Technology, Mookkannoor. Her research interests are Image Processing and Data Mining.
