Abstract:
In video frame segmentation, many existing deep networks and contemporary approaches achieve remarkable performance under the assumption that only the foreground is moving
and the background is stationary. However, in the presence of infrequent foreground motion, sudden illumination changes,
bad weather, and dynamic backgrounds, accurate segmentation of foreground objects remains a challenging task. Generative adversarial network (GAN)-based training has shown fruitful
results in various fields such as image-to-image style transfer, image
enhancement, semantic segmentation, and image super-resolution.
The limited results of hand-crafted approaches for moving
object segmentation (MOS) and the robustness of adversarial training
inspired us to propose a novel approach for this task. In this context, an end-to-end generative adversarial network with two generators and a recurrent technique
is proposed for MOS, named RMS-GAN. The proposed
RMS-GAN incorporates foreground probability knowledge through a residual- and weight-sharing-based recurrent technique
for accurate segmentation. The recurrent technique captures
the temporal dependence between successive video frames,
which is essential for video processing applications.
Furthermore, a cascaded architecture of two generators is proposed to enhance the spatial coherence
of the foreground probability map produced by the first generator. The effectiveness of the
proposed approach is evaluated both qualitatively and quantitatively on three benchmark video datasets for MOS. Experimental
analysis shows that the proposed network outperforms
existing state-of-the-art methods on these datasets.