dc.description.abstract |
Moving object detection (separating the foreground from the background) is an
important problem in computer vision. Most existing approaches to this problem
are based on background subtraction. However, these approaches are unable to
handle scenarios with infrequently moving objects, illumination changes,
shadows, camouflage, and similar effects. To overcome these limitations, a
robust and compact two-stage method for moving object detection (MOD) is
proposed here. In the first stage, to generate the saliency map, the background
image is estimated from several input frames using a temporal histogram
technique.
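As a minimal sketch of how a temporal histogram over several frames can yield a
background estimate, the snippet below takes the per-pixel histogram mode as the
background value; the function name estimate_background, the bin count, the
grayscale input, and the difference-based saliency map are illustrative
assumptions rather than details taken from the paper.

import numpy as np

def estimate_background(frames, n_bins=32):
    # frames: uint8 array of shape (T, H, W) holding T grayscale frames.
    frames = np.asarray(frames)
    T, H, W = frames.shape
    # Quantise intensities into n_bins bins and count hits per pixel over time.
    bin_idx = (frames.astype(np.int32) * n_bins) // 256        # (T, H, W)
    hist = np.zeros((n_bins, H, W), dtype=np.int32)
    for b in range(n_bins):
        hist[b] = (bin_idx == b).sum(axis=0)
    # The most populated bin per pixel approximates the stationary background.
    mode_bin = hist.argmax(axis=0)                              # (H, W)
    return ((mode_bin * 256 + 128) // n_bins).astype(np.uint8)  # bin centres

# A saliency map could then be an absolute difference to the current frame:
# saliency = np.abs(frame.astype(np.int16) - background.astype(np.int16))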
In the second stage, a multiscale encoder-decoder network (MsEDNet) is used to
learn multiscale semantic features of the estimated saliency map for foreground
extraction. The encoder extracts multiscale features from the multiscale
saliency map, while the decoder is designed to learn the mapping from the
low-resolution multiscale features to the high-resolution output frame.
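The abstract does not specify MsEDNet's actual layer configuration, so the
following PyTorch sketch only illustrates the general idea of a multiscale
encoder-decoder for foreground extraction; the three input scales, channel
widths, and feature-fusion scheme are assumptions made for illustration, not
the authors' architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscaleEncoderDecoder(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        # One small encoder branch per assumed scale of the saliency map.
        self.enc = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            )
            for _ in range(3)
        )
        # Decoder fuses the branches and maps them to a full-resolution mask.
        self.dec = nn.Sequential(
            nn.Conv2d(3 * channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 1),
        )

    def forward(self, saliency):
        # saliency: (N, 1, H, W) single-channel saliency map.
        h, w = saliency.shape[-2:]
        feats = []
        for i, branch in enumerate(self.enc):
            # Encode the saliency map at full, half, and quarter resolution.
            s = saliency if i == 0 else F.interpolate(
                saliency, scale_factor=1 / 2 ** i, mode="bilinear",
                align_corners=False)
            f = branch(s)
            # Upsample low-resolution features back to the output resolution.
            feats.append(F.interpolate(f, size=(h, w), mode="bilinear",
                                       align_corners=False))
        return torch.sigmoid(self.dec(torch.cat(feats, dim=1)))

Under these assumptions, a call such as
MultiscaleEncoderDecoder()(torch.rand(1, 1, 240, 320)) returns a same-sized
foreground probability map.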
To evaluate the efficacy of the proposed MsEDNet, experiments are conducted on
two benchmark MOD datasets: change detection (CDnet-2014) [1] and
Wallflower [2]. Precision, recall, and F-measure are used as performance
metrics for comparison with existing state-of-the-art methods.
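For reference, these metrics follow their standard definitions in terms of true
positives (TP), false positives (FP), and false negatives (FN), typically
counted at the pixel level of the detected foreground masks:

\[
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
\mathrm{F\text{-}measure} = \frac{2\,\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
\]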
Experimental results show a significant improvement in detection accuracy and a
reduction in execution time compared to the state-of-the-art MOD methods. |
en_US |