A Smart Browsing System with Colour Image Enhancement for Surveillance Videos


International Journal on Recent and Innovation Trends in Computing and Communication, Volume 3, Issue 3, ISSN: 2321-8169, pp. 1268-1273

Kirti K. Kamble, Electronics Department, AISSMS College of Engineering, Pune, India. [email protected]

Prof. S. P. Bhosale, Electronics Department, AISSMS College of Engineering, Pune, India.

Abstract— Surveillance cameras have been widely installed in large cities to monitor and record human activities for different applications.

Since surveillance cameras often record all events 24 hours a day, it takes a huge workforce to watch surveillance videos and search for specific targets; a system that helps the user quickly look for targets of interest is therefore in high demand. This paper proposes a smart surveillance video browsing system with colour image enhancement. The basic idea is to collect all of the moving objects, which carry the most significant information in surveillance videos, and to construct a corresponding compact video by tuning the positions of these moving objects. The compact video rearranges the spatiotemporal coordinates of the moving objects to enhance the compression, but the temporal relationships among moving objects are still kept. The compact video can preserve the essential activities involved in the original surveillance video. This paper presents the details of the browsing system and the approach to producing the compact video from a source surveillance video. At the end we get a compact video with high resolution.

Keywords— Input video, image segmentation, background subtraction, filter, pre-processing, indexing, colour image enhancement, adaptive filter.

I. INTRODUCTION

Surveillance cameras are widely installed in large cities to monitor and record human activities in both indoor and outdoor environments. To efficiently utilize surveillance videos, how to extract valuable information from hundreds of hours of video becomes an important task. An intuitive method is to retrieve relevant segments according to the user's queries in surveillance videos. Unfortunately, it is still difficult to automatically understand the user's intentions and the video contents. In their work, Cheng-Chieh Chiang, Ming-Nan and Huei-Fang Yang et al. [1] introduced a quick browsing system for surveillance video. The basic idea is to collect all of the moving objects, which carry the most significant information in surveillance videos, and to construct a corresponding compact video by tuning the positions of these moving objects. The compact video rearranges the spatiotemporal coordinates of the moving objects to enhance the compression, but the temporal relationships among moving objects are still kept. The compact video can preserve the essential activities involved in the original surveillance video. Their paper presents the details of the browsing system and the approach to producing the compact video from a source video. In their work, Shizheng Wang, Jianwei Yang, Yanyun Zhao, Anni Cai and Stan Z. Li et al. [2] proposed a framework for efficient storing and scalable browsing of surveillance video based on video synopsis. The framework employs a novel synopsis analysis scheme named detail-based video synopsis to generate a set of object flags for storing and browsing the surveillance video synopsis. The main contributions of that work are: 1) highlighting important contents of surveillance video; 2) improving the storage efficiency of the original video and the synopsis video; 3) realizing multi-scale scalable browsing of the synopsis video while preserving essential information.

In their work, Y. Pritch, A. Rav-Acha and S. Peleg et al. [3] introduced nonchronological video synopsis and indexing. They proposed dynamic video synopsis to shorten videos by defining an energy function that describes the activities of moving objects in a video. The energy function is minimized to optimally compress the corresponding behaviors of the moving objects to form the video synopsis. Their method can achieve a very large compression ratio in video representation, but at the cost of destroying the temporal relationships among objects. It may be difficult to focus on the correct targets when the user looks for subjects of interest in surveillance videos.

In their work, Rohit Nair and Benny Bing et al. [4] observed that video surveillance systems are becoming increasingly popular due to the emergence of high-speed wireless Internet (such as WiMAX and LTE), bandwidth-efficient video compression schemes (such as H.264), and low-cost, high-resolution IP video cameras. Their work presents two applications of an advanced surveillance system, specifically suspicious activity detection and human fall detection, for both indoor and outdoor environments. The implemented prototype captures and analyzes live high-definition (HD) video that is streamed from a remote camera.

In their work, Lijing Zhang and Yingli Liang et al. [5] proposed motion human detection based on background subtraction. Based on the results of moving object detection research on video sequences, they proposed a new method to detect moving objects using background subtraction. First, a reliable background updating model is established based on statistics, and a dynamic optimization threshold method is used to obtain a more complete moving object. Then, morphological filtering is introduced to eliminate the noise and solve the background disturbance problem. Finally, contour projection analysis is combined with shape analysis to remove the effect of shadow, and the moving human body is accurately and reliably detected. The experimental results show that the proposed method runs quickly and accurately and is fit for real-time detection.


In their work, Fei Hui and Xiang-mo Zhao et al. [6] described a morphology method for a moving body tracking system. They established a simple technique for dynamic human body tracking based on image sequences and presented a novel method of template matching and predictive tracking for a moving body against a cluttered background. The method exploits the fact that the moving human body changes in approximately rigid parts.

In their work, Xinghao Ding, Xinxin Wang and Quan Xiao et al. [7] proposed colour image enhancement with a human visual system based adaptive filter, considering the adaptive characteristics of the human visual system. The new algorithm is divided into three major parts: obtaining the luminance image and background image, adaptive adjustment, and colour restoration. Unlike traditional colour image enhancement algorithms, the adaptive filter in this algorithm takes colour information into consideration. The algorithm recognizes the importance of colour information in colour image enhancement and utilizes colour space conversion to obtain much better visibility. Using the adaptive filter can overcome the inaccurate estimation of the background image found in traditional techniques.

In their work, Meylan L. and Susstrunk S. et al. [8] proposed high dynamic range image rendering with a retinex-based adaptive filter. Retinex is an effective technique for colour image enhancement which can produce a very good enhanced result, but the enhanced image has colour distortion and the calculation is complex. In their work, Li Tao and Vijayan K. Asari et al. [9] proposed a robust image enhancement technique for improving image visual quality in shadowed scenes. The algorithm can enhance a colour image without distortion, but the edges of the colour image could not be handled well. The algorithm uses a Gaussian filter to estimate the background image; the Gaussian kernel function is isotropic, which leads to inaccurate estimation of the background image, resulting in the halo phenomenon.

In their work, Wang Shou-jue, Ding Xing-hao, Liao Ying-hao and Guo Dong-hui et al. [10] proposed a novel bio-inspired algorithm for colour image enhancement. Considering the above two algorithms, a new bio-inspired colour image enhancement algorithm was proposed by the authors. The algorithm is based on the bilateral filter and has a much better effect than the two algorithms mentioned above. However, the image still exhibits the halo phenomenon at the edges in spite of the improvement. Both the distance and luminance information of pixels are considered in the bilateral filter, instead of only the distance information as in the Gaussian filter, but the colour information is still not taken into consideration.

In their work, Ms Jyoti J. Jadhav et al. [11] note that moving object detection and tracking have been widely used in diverse disciplines such as intelligent transportation systems, airport security systems, video surveillance applications, and so on. Their paper presents moving object detection and tracking using reference background subtraction. In this method, a static camera is used, the first frame of the video is directly considered as the reference background frame, and this frame is subtracted from the current frame to detect the moving object; a threshold value T is then set. If the pixel difference is greater than the threshold T, the pixel is determined to belong to the moving object; otherwise it is treated as a background pixel. But a fixed threshold is suitable only for an ideal condition and is not suitable for a complex environment with lighting changes. So in this paper we use a dynamic optimization threshold method to obtain more complete moving objects. This method can effectively eliminate the impact of light changes.

II. PROBLEM DEFINITION

Surveillance is very useful to governments [1] and law enforcement to maintain social control, recognize and monitor threats, and prevent or investigate criminal activity. Surveillance cameras such as these are installed by the millions in many countries, and are nowadays monitored by automated computer programs instead of humans. Surveillance cameras are widely installed in large cities to monitor and record human activities in both indoor and outdoor environments. To efficiently utilize surveillance videos, how to extract valuable information from hundreds of hours of video becomes an important task. An intuitive method is to retrieve relevant segments according to the user's queries in surveillance videos.

III. NEED OF QUICK BROWSING SYSTEM

The number of surveillance cameras is increasing fast. Heathrow airport in London has 5,000 surveillance cameras, and the number of surveillance cameras will increase worldwide by more than 40% per year in the next five years. Due to the temporal nature of the data, a storage space consumption problem arises: a typical assignment of 2-16 cameras with 7 or 30 days of recording at 2-10 Mb/min amounts to roughly 1.5 GB per day per camera, or about 20-700 GB in total. A data management and data retrieval problem therefore occurs. The London bombing video backtracking experience showed that manual browsing of millions of hours of digitized video from thousands of cameras proved impossible within a time-sensitive period.

IV. PRESENT THEORY & PRACTICES

Presently, dynamic video synopsis [3] is used to shorten videos by defining an energy function that describes the activities of moving objects in a video. The energy function is minimized to optimally compress the corresponding behaviors of the moving objects to form the video synopsis. This method can achieve a very large compression ratio in video representation, but at the cost of destroying the temporal relationships among objects. It may be difficult to focus on the correct targets when the user looks for subjects of interest in surveillance videos.

Fig.1 Video Synopsis
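The storage figures quoted in Section III can be sanity-checked with a short back-of-the-envelope calculation. The sketch below is illustrative only: the 1.5 GB per day per camera rate and the 2-16 camera, 7-30 day ranges are taken from the text above, while the function name and the assumption of a constant recording rate are hypothetical conveniences.

```python
# Back-of-the-envelope storage estimate for a small surveillance installation.
# The 1.5 GB/day/camera figure and the 2-16 camera, 7-30 day ranges come from
# the discussion above; a constant recording rate is an assumption.

GB_PER_DAY_PER_CAMERA = 1.5

def total_storage_gb(cameras, days, gb_per_day=GB_PER_DAY_PER_CAMERA):
    """Raw storage needed to retain `days` of footage from `cameras` cameras."""
    return cameras * days * gb_per_day

print(total_storage_gb(2, 7))    # ~21 GB, close to the quoted ~20 GB lower bound
print(total_storage_gb(16, 30))  # ~720 GB, close to the quoted ~700 GB upper bound
```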



Fig.2 Video Synopsis Image

V. PROPOSED SYSTEM

This project, following [1], presents a quick surveillance video browsing system. The basic idea is to collect all of the moving objects, which carry the most significant information in surveillance videos, and to construct a corresponding compact video by tuning the positions of these moving objects. The compact video rearranges the spatiotemporal coordinates of the moving objects to enhance the compression, but the temporal relationships among moving objects are still kept by using background subtraction. The compact video can preserve the essential activities involved in the original surveillance video. We can easily find a suspected target by using the quick browsing system, and colour image enhancement using an adaptive filter is applied to increase the clarity of the image. The block diagram of the project is shown in Figure 3. For each short-time segment from the surveillance videos, a background model can be constructed under the assumptions of a fixed camera view and unchanged lighting, and the corresponding background images are generated. We employ this background model to compute the difference between the current image and the background image [4], in order to eliminate all identical frames. The compact video is the collection of all compact frames. The compact video not only compactly represents a copious surveillance video but also preserves all essential components of the moving objects that appeared in the source video. Using our system, the user can spend only several minutes watching the compact video instead of hours monitoring a large number of surveillance videos. The remainder of this paper details the background subtraction method (Section VI), the colour image enhancement algorithm (Section VII), observations (Section VIII) and conclusions (Section IX).
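As a rough illustration of this pipeline, the sketch below separates a video into frames, subtracts a background frame, and keeps only frames that contain significant motion before writing them back out as a compact video. The file names, the motion thresholds, the fixed output frame rate and the use of the first frame as the background are illustrative assumptions; the actual system builds and updates a proper background model as described in Section VI.

```python
import cv2
import numpy as np

# Sketch of the browsing pipeline: frame separation, pre-processing,
# background subtraction and frame suppression.  "input.avi", the thresholds,
# the 25 fps output rate and the first-frame background are assumptions.
cap = cv2.VideoCapture("input.avi")
ok, first = cap.read()
if not ok:
    raise SystemExit("could not read input.avi")
background = cv2.GaussianBlur(cv2.cvtColor(first, cv2.COLOR_BGR2GRAY), (5, 5), 0)

kept_frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    diff = cv2.absdiff(gray, background)             # background subtraction
    motion_pixels = np.count_nonzero(diff > 25)      # assumed per-pixel threshold
    if motion_pixels > 0.01 * diff.size:             # frame suppression: keep motion frames
        kept_frames.append(frame)
cap.release()

# Reconstruct the compact video from the kept frames only.
if kept_frames:
    h, w = kept_frames[0].shape[:2]
    out = cv2.VideoWriter("compact.avi", cv2.VideoWriter_fourcc(*"XVID"), 25, (w, h))
    for frame in kept_frames:
        out.write(frame)
    out.release()
```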


Fig.3 Block Diagram of Smart Browsing System with Colour Image Enhancement (real-time video → frame separation → pre-processing → background subtraction → frame suppression → reconstructed video → colour image enhancement → enhanced video → user interface)

VI. BACKGROUND SUBTRACTION METHOD

The background subtraction method [5] is a common method of motion detection. It is a technique that uses the difference between the current image and the background image to detect the motion region, and it is generally able to provide data that includes object information. The key to this method lies in the initialization and update of the background image; the effectiveness of both will affect the accuracy of the detection results. Therefore, this paper uses an effective method to initialize the background and to update the background in real time.

A. Background image initialization:

There are many ways to obtain the initial background image: for example, taking the first frame as the background directly, taking the average pixel brightness of the first few frames as the background, or using a background image sequence

without moving objects in the foreground to estimate the background model parameters, and so on. Among these methods, the time-average method is the most commonly used for establishing an initial background. However, this method cannot deal with background regions (especially regions of frequent movement) that have shadow problems, while taking the median over continuous multiple frames can resolve this problem simply and effectively. So the median method is selected in this paper to initialize the background. The expression is as follows:

B_0(x, y) = median{ f_1(x, y), f_2(x, y), …, f_n(x, y) }

where B_0(x, y) is the initial background, f_i(x, y) is the gray value of pixel (x, y) in the i-th frame, and n is the total number of frames selected.
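A minimal NumPy sketch of this median initialization is shown below; the function name, the stacking of grayscale frames into a single array and the synthetic example data are illustrative assumptions.

```python
import numpy as np

def initialize_background(frames):
    """Median-based background initialization.

    `frames` is an (n, H, W) array of n grayscale frames; the pixel-wise
    median over these frames gives the initial background B0(x, y).
    """
    return np.median(frames, axis=0).astype(np.uint8)

# Example with synthetic data: 25 noisy frames of a static scene.
rng = np.random.default_rng(0)
frames = np.clip(120 + rng.normal(0, 5, size=(25, 240, 320)), 0, 255)
b0 = initialize_background(frames)
print(b0.shape, b0.dtype)   # (240, 320) uint8
```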

B. Background Update:

For the background model [6] to better adapt to light changes, it needs to be updated in real time, so as to accurately extract the moving object. In the detection of the


 __________________________________________________________  ________________________________________ ________________________________ _______________________________ _______________________  ______   moving object, the pixels judged as belonging to the moving object maintain the original background gray values, not be updated. For the pixels which are judged to be the background, we update the background model according to following rules:

                 

  Where is the pixel gray value in the current are respectively the background frame. value of the current frame and the next frame. As the camera is fixed, the background model can remain relatively stable in the long period of time. Using this method can effectively avoid the unexpected phenomenon of the background, such as the sudden appearance of something in the background which is not included in the original background. Moreover by the update of pixel gray value of the background, the impact  brought  broug ht by light, weather weather and other changes in the external external environment can be effectively adapted. Input Video

Frame Separation 

Image Sequence

The current  frame image 

Background Frame image 

Background Subtraction 

Background Update  
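The selective update rule above can be written as a short helper. The update rate alpha, its default value and the Boolean foreground mask are assumptions made explicit here; the source only states that moving-object pixels keep their old background values while background pixels are refreshed.

```python
import numpy as np

def update_background(background, frame, foreground_mask, alpha=0.05):
    """Selective background update.

    Pixels flagged as moving object (foreground_mask == True) keep the old
    background value; background pixels are blended towards the current
    frame with an assumed update rate `alpha`.
    """
    bg = background.astype(np.float32)
    blended = alpha * frame.astype(np.float32) + (1.0 - alpha) * bg
    return np.where(foreground_mask, bg, blended).astype(np.uint8)

# Typical use: threshold the difference image to get the mask, then refresh.
# mask = np.abs(frame.astype(int) - background.astype(int)) > 25
# background = update_background(background, frame, mask)
```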

Fig.4 Flowchart of background subtraction (input video → frame separation → image sequence → current frame image and background frame image → background subtraction → background update → moving object → pre-processing → shape analysis)

Fig.5 Background image

Fig.6 Current image

Fig.7 Difference & pre-processing image

VII. COLOUR IMAGE ENHANCEMENT ALGORITHM

The proposed algorithm [7] consists of three major parts: (1) obtaining the luminance image and the background image, (2) adaptive adjustment, and (3) colour restoration. Firstly, we get the luminance image and the background image using colour space conversion, and then adaptively adjust the luminance image to compress the dynamic range of the colour image and enhance the local contrast. The number of intensity levels the human eye can identify at one time is small, so the high dynamic range of the image is intended to be compressed. Contrast enhancement can improve important visual details so that we can get an image with better visibility. Finally, we obtain the enhanced colour image after a linear colour restoration process. The process of colour image enhancement is shown in Fig.8. The luminance image of the original colour image is obtained first, and from it we get the background image through adaptive filtering. Then we adaptively adjust in both the global and local range to obtain the locally enhanced image I_E(x, y) through the adaptive regulation function; after index transformation and colour restoration of I_E(x, y) we get the enhanced colour image.

A. Obtain Luminance Image and Background Image:

The colour images we usually see are mostly in the RGB colour space, which employs the red, green and blue primary colours to produce other colours. In the RGB colour space, other colours are synthesized from the three primary colours, which is not effective in some cases. Consequently, we use another colour


space, the YUV colour space, instead of the RGB colour space in the proposed algorithm. The importance of using the YUV colour space is that its brightness image Y and its chroma images U and V are separate. Y stands for the luminance, and U and V are the colour components, which constitute the colour information of the colour image. If we remove the U and V images, the original colour image becomes a gray image. The intensity of the pixel at (x, y) is the Y value at that point. Subjective luminance is a logarithmic function of the light intensity entering the human eye [10]. We take the logarithm of the original luminance image and then normalize it to get the subjective luminance L(x, y):

L(x, y) = log(Y(x, y) + 1) / log(256)

where Y(x, y) is the Y value of the pixel (x, y) in the YUV space. We use formula (3) to get the background image.
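A small sketch of this step is shown below. The log(Y+1)/log(256) normalisation constant, the input file name and the use of OpenCV's bilateral filter as a stand-in for the paper's adaptive filter are assumptions; the paper's own filter additionally takes the U and V channels into account, as discussed next.

```python
import cv2
import numpy as np

def luminance_and_background(bgr):
    """Return the subjective luminance image and an estimated background image.

    The YUV conversion follows the text; the log normalisation and the
    bilateral filter standing in for the adaptive filter are assumptions.
    """
    yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)
    y = yuv[:, :, 0].astype(np.float32)
    luminance = np.log(y + 1.0) / np.log(256.0)      # subjective luminance in [0, 1]
    # Edge-preserving smoothing as a stand-in for the adaptive filter.
    background = cv2.bilateralFilter(luminance, 9, 0.1, 15)
    return luminance, background

img = cv2.imread("frame.png")                        # assumed input image path
if img is not None:
    L, B = luminance_and_background(img)
```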




N(x, y) represents the pixel at (x, y); the filter has a scale parameter for the pixel filtering and a distance parameter. In the classical filter, N(x, y) is the intensity of the pixel (x, y). But the pixel (x, y) of a colour image in fact has three values, namely the Y, U and V values, and we usually overlook this colour information when filtering. In this paper, when we get the background image, we take all three values into consideration. It means that N(x, y) has three components: Y is the luminance value, and U and V are the colour values. Therefore, to obtain the background image, we modify formula (3) according to the Y, U and V values at the pixel (x, y).

B. Adaptive Adjustment:

The image the human eye sees is related to the contrast between the image and its background image [7]. We enhance the image by making use of the relationship between the image and its background image. We use formula (1) for the adaptive adjustment, in which a is an intensity coefficient determined from the cumulative distribution function (CDF) of the luminance image, r(x, y) is the ratio between the background image and the intensity image, and a and b are constants that we can adjust to achieve good adjustment results. Here g is the grayscale level at which the CDF of the intensity image reaches 0.1. If more than 90% of all pixels have intensity higher than 190, a is 1; when 10% of all pixels have intensity lower than 60, a is 0; otherwise a changes linearly between 0 and 1.
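The piecewise rule for the coefficient a can be coded directly from the description above. The helper below is a sketch: taking g (the 10th-percentile gray level) as the variable of the linear interpolation, and the interpolation endpoints of 60 and 190, are assumptions based on that description, and the way a and b then enter the adjustment formula of [7] is not reproduced here.

```python
import numpy as np

def intensity_coefficient(intensity):
    """Coefficient `a` driven by the intensity histogram, per the text:
    a = 1 when more than 90% of pixels are brighter than 190,
    a = 0 when 10% of pixels are darker than 60,
    and a varies linearly in between.  Using the 10th-percentile gray
    level g as the interpolation variable is an assumption.
    """
    g = np.percentile(np.asarray(intensity).ravel(), 10)  # CDF reaches 0.1 at g
    if g > 190:
        return 1.0
    if g < 60:
        return 0.0
    return float((g - 60.0) / (190.0 - 60.0))
```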

 

        



 

C. Colour Restoration:

Through index transformation of I_E(x, y) we can get the enhanced intensity image. Subsequently, we use colour restoration to obtain the enhanced colour image, which is based on a linear process of the original colour image:

R'(x, y) = λ_R · R(x, y) · I_E(x, y) / Y(x, y)
G'(x, y) = λ_G · G(x, y) · I_E(x, y) / Y(x, y)
B'(x, y) = λ_B · B(x, y) · I_E(x, y) / Y(x, y)

where I_E(x, y) is the enhanced intensity at (x, y); R(x, y), G(x, y) and B(x, y) are the colour values of the pixel (x, y) in the original colour image; R'(x, y), G'(x, y) and B'(x, y) are the R, G, B values of the enhanced colour image; and λ_R, λ_G, λ_B are the corresponding scale parameters.

Transforming the RGB colour image into the YUV colour space, we can directly get the luminance image. Passing the YUV colour image through the adaptive filter, the background image can be obtained. Fig.8 shows an example of the luminance image and the background image.

Fig.8 Colour image enhancement

Fig.8 Original Image

Fig.9 Enhanced Colour Image
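A sketch of the linear colour restoration is given below; the unit scale parameters, the clipping to the 8-bit range and the small epsilon guarding the division are assumptions, and the enhanced intensity image I_E is assumed to come from the adaptive adjustment step.

```python
import numpy as np

def restore_colour(bgr, y, i_e, scales=(1.0, 1.0, 1.0)):
    """Linear colour restoration: each channel is rescaled by I_E / Y.

    `bgr` is the original 8-bit colour image, `y` its luminance (Y) channel
    and `i_e` the enhanced intensity image on the same scale as `y`.
    The unit scale parameters and the clipping are assumptions.
    """
    eps = 1e-6                                       # avoid division by zero
    ratio = i_e.astype(np.float32) / (y.astype(np.float32) + eps)
    out = np.empty_like(bgr, dtype=np.float32)
    for c, lam in enumerate(scales):                 # one scale per channel
        out[:, :, c] = lam * bgr[:, :, c].astype(np.float32) * ratio
    return np.clip(out, 0, 255).astype(np.uint8)
```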

VIII. OBSERVATIONS

Here some observations are given from the experimental study of one real-time video. The normalized correlation between the input video, the output video and the colour image enhanced video is mentioned below. There is a change in the number of frames, the length of the video and the size of the video: compared to the input video, the size and length of the output video are smaller, and the input and output videos differ considerably in every respect. After colour image enhancement there is no change in length or size between the output video and the colour image enhanced video.
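The normalized correlation mentioned above can be computed frame-by-frame with a short helper; treating it as the zero-mean normalized cross-correlation of two equal-sized grayscale frames is an assumption, since the text does not spell out the exact definition it uses.

```python
import numpy as np

def normalized_correlation(a, b):
    """Zero-mean normalized cross-correlation of two equal-sized grayscale
    frames, returning a value in [-1, 1]; this definition is an assumption."""
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom > 0 else 0.0
```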

TABLE I. OBSERVATIONS

Video                         | Type of video          | No. of frames | Length of video | Size of video
Input video                   | AVI Video File (.avi)  | 200           | 13 sec          | 219 MB
Output video                  | AVI Video File (.avi)  | 34            | 2 sec           | 42.4 MB
Colour image enhanced video   | AVI Video File (.avi)  | 34            | 2 sec           | 42.4 MB

IX. CONCLUSIONS

This work is based on a background subtraction algorithm. It reduces the video length of a surveillance video. A reliable background model is established for finding the motion between two frames, so we can easily collect the moving objects which carry the most significant information, and in pre-processing the noise is suppressed. A threshold is set for the difference between the background image and the current image: if the difference is greater than the threshold, those frames are kept and the other frames are discarded, and the video is then reconstructed from the kept frames. This video has a very small length and requires less memory for storage. Colour image enhancement using an adaptive filter is then applied to the reconstructed video. This algorithm is simple and it reduces the halo effect. Finally, we get a short video with colour image enhancement.

REFERENCES

[1] Cheng-Chieh Chiang, Ming-Nan, Huei-Fang Yang, "A Quick Browsing System for Surveillance Videos," MVA2011 IAPR Conference on Machine Vision Applications, Nara, Japan, June 13-15, 2011.
[2] Shizheng Wang, Jianwei Yang, Yanyun Zhao, Anni Cai, Stan Z. Li, "A Surveillance Video Analysis and Storage Scheme for Scalable Synopsis Browsing," 2011 IEEE International Conference on Computer Vision Workshops, 2011.
[3] Y. Pritch, A. Rav-Acha, and S. Peleg, "Nonchronological Video Synopsis and Indexing," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 11, pp. 1971-1984, 2008.
[4] P. De Camp, G. Shaw, R. Kubat, and D. Roy, "An Immersive System for Browsing and Visualizing Surveillance Video," in Proceedings of the ACM International Conference on Multimedia, 2010.
[5] Lijing Zhang, Yingli Liang, "Motion Human Detection Based on Background Subtraction," 2010 Second International Workshop on Education Technology and Computer Science, IEEE, 2010.
[6] Axel Beaugendre, Hiroyoshi Miyano, Eiki Ishidera, Satoshi Goto, "Human Tracking System for Automatic Video Surveillance with Particle Filters," IEEE, 2010.
[7] Xinghao Ding, Xinxin Wang, Quan Xiao, "Colour Image Enhancement with a Human Visual System Based Adaptive Filter," IEEE, 2010.
[8] Meylan L., Susstrunk S., "High Dynamic Range Image Rendering with a Retinex-Based Adaptive Filter," IEEE Transactions on Image Processing, 2006, 15(9): 2820-2830.
[9] Li Tao, Vijayan K. Asari, "A Robust Image Enhancement Technique for Improving Image Visual Quality in Shadowed Scenes," Proceedings of the 4th International Conference on Image and Video Retrieval, Springer, Berlin, 2005, vol. 3568, pp. 395-404.
[10] Wang Shou-jue, Ding Xing-hao, Liao Ying-hao, Guo Dong-hui, "A Novel Bio-inspired Algorithm for Colour Image Enhancement," Acta Electronica Sinica, 2008, vol. 36, no. 10, pp. 1970-1973 (in Chinese).
[11] Ms Jyoti J. Jadhav, "Moving Object Detection and Tracking for Video," International Journal of Engineering Research and General Science, vol. 2, issue 4, June-July 2014, ISSN 2091-2730.
