Please use this identifier to cite or link to this item: http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/2518
Title: MsEDNet: Multi-Scale deep saliency learning for moving object detection
Authors: Patil, P. W.
Murala, S.
Dhall, A.
Chaudhary, S.
Keywords: Histogram
Background estimation
CNN
Encoder-Decoder network
Foreground detection
Issue Date: 27-Aug-2021
Abstract: Moving object detection (separating foreground from background) is an important problem in computer vision. Most work on this problem is based on background subtraction. However, these approaches cannot handle scenarios with infrequent object motion, illumination changes, shadows, camouflage, etc. To overcome these limitations, a robust and compact two-stage method for moving object detection (MOD) is proposed here. In the first stage, a background image is estimated from several input frames using a temporal histogram technique and is used to generate a saliency map. In the second stage, a multi-scale encoder-decoder network learns multi-scale semantic features of the estimated saliency for foreground extraction. The encoder extracts multi-scale features from the multi-scale saliency map; the decoder is designed to learn the mapping from low-resolution multi-scale features to the high-resolution output frame. To assess the efficacy of the proposed MsEDNet, experiments are conducted on two benchmark MOD datasets: change detection (CDnet-2014) [1] and Wallflower [2]. Precision, recall, and F-measure are used as performance metrics for comparison with existing state-of-the-art methods. Experimental results show a significant improvement in detection accuracy and a reduction in execution time compared to state-of-the-art MOD methods.
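The abstract's first stage estimates a static background from a frame history via a temporal histogram. The paper does not give implementation details, so the sketch below is only one plausible reading: for each pixel, bin its intensity history into a histogram and take the centre of the most populated bin as the background value. The function name, bin count, and grayscale assumption are all hypothetical.

```python
import numpy as np

def estimate_background(frames, num_bins=32):
    """Per-pixel temporal-histogram background estimate (illustrative sketch).

    frames: array of shape (T, H, W), uint8 grayscale intensities.
    For each pixel, the T intensity values are binned into `num_bins`
    histogram bins; the centre of the most populated bin is taken as
    that pixel's background intensity.
    """
    frames = np.asarray(frames)
    t, h, w = frames.shape
    bin_width = 256 // num_bins
    # Histogram bin index of every pixel in every frame: shape (T, H, W)
    bins = (frames // bin_width).astype(np.int64)
    background = np.empty((h, w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            # Count how often each bin occurs in this pixel's history
            counts = np.bincount(bins[:, i, j], minlength=num_bins)
            # Report the centre of the dominant bin
            background[i, j] = int((np.argmax(counts) + 0.5) * bin_width)
    return background
```

A saliency map for a new frame could then be formed, for example, as the absolute difference between the frame and this estimated background before being fed to the encoder-decoder stage.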
URI: http://localhost:8080/xmlui/handle/123456789/2518
Appears in Collections:Year-2019

Files in This Item:
File: Full Text.pdf (939.92 kB, Adobe PDF)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.