Please use this identifier to cite or link to this item: http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/1854
Full metadata record
DC Field                  Value                                              Language
dc.contributor.author     Patil, P. W.                                       -
dc.contributor.author     Dudhane, A.                                        -
dc.contributor.author     Murala, S.                                         -
dc.date.accessioned       2021-06-19T09:36:27Z                               -
dc.date.available         2021-06-19T09:36:27Z                               -
dc.date.issued            2021-06-19                                         -
dc.identifier.uri         http://localhost:8080/xmlui/handle/123456789/1854  -
dc.description.abstract   In video frame segmentation, many existing deep networks and contemporary approaches achieve remarkable performance under the assumption that only the foreground is moving and the background is stationary. However, in the presence of infrequent motion of foreground objects, sudden illumination changes in the background, bad weather, and a dynamic background, accurate segmentation of foreground objects is a challenging task. Generative adversarial network (GAN) based training has shown promising results in fields such as image-to-image style transfer, image enhancement, semantic segmentation, and image super-resolution. The limited results of hand-crafted approaches for moving object segmentation (MOS) and the robustness of adversarial training inspired us to propose a novel approach for MOS. In this context, an end-to-end generative adversarial network with two generators and a recurrent technique, named RMS-GAN, is proposed for MOS. The proposed RMS-GAN incorporates foreground probability knowledge through a residual, weight-sharing-based recurrent technique for accurate segmentation. The recurrent technique captures the temporal dependence between successive video frames, which is important for video processing applications. In addition, a cascaded architecture of two generators is proposed to enhance the spatial coherence of the foreground probability map produced by the first generator. The effectiveness of the proposed approach is evaluated both qualitatively and quantitatively on three benchmark video datasets for MOS. Experimental analysis shows that the proposed network outperforms existing state-of-the-art methods on all three benchmark datasets.   en_US
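The abstract describes a cascaded pipeline in which a first generator produces a coarse foreground probability map (fed back recurrently across frames) and a second generator refines it for spatial coherence. The following is a minimal NumPy sketch of that data flow only; the generator functions, weight shapes, and frame sizes are hypothetical stand-ins, not the actual RMS-GAN layers or training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 8  # toy frame size

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical stand-ins for the two generators: a single per-pixel weight
# vector each, playing the role of the convolutional layers in the paper.
w_g1 = rng.normal(scale=0.1, size=(4, 1))  # input: 3-ch frame + 1-ch previous map
w_g2 = rng.normal(scale=0.1, size=(4, 1))  # input: 3-ch frame + 1-ch coarse map

def generator_1(frame, prev_map):
    """Coarse foreground probability map; prev_map is the recurrent feedback."""
    x = np.concatenate([frame, prev_map], axis=-1)  # (H, W, 4)
    return sigmoid(x @ w_g1)                        # (H, W, 1), values in (0, 1)

def generator_2(frame, coarse_map):
    """Cascaded refinement of the coarse map for spatial coherence."""
    x = np.concatenate([frame, coarse_map], axis=-1)
    return sigmoid(x @ w_g2)

def segment_video(frames):
    prev_map = np.zeros((H, W, 1))  # no foreground evidence before the first frame
    masks = []
    for frame in frames:
        coarse = generator_1(frame, prev_map)   # stage 1: coarse probability map
        refined = generator_2(frame, coarse)    # stage 2: cascaded refinement
        masks.append(refined)
        prev_map = refined                      # recurrence: feed map to next step
    return masks

frames = [rng.random((H, W, 3)) for _ in range(5)]
masks = segment_video(frames)
```

The recurrent feedback (`prev_map`) is what lets each frame's prediction exploit the temporal continuity between successive frames, while the two-stage cascade mirrors the coarse-then-refine design the abstract attributes to the two generators.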
dc.language.iso           en_US                                              en_US
dc.subject                Generative adversarial networks                    en_US
dc.subject                motion                                             en_US
dc.subject                recurrent                                          en_US
dc.subject                video frame segmentation                           en_US
dc.title                  End-to-End recurrent generative adversarial network for traffic and surveillance applications   en_US
dc.type                   Article                                            en_US
Appears in Collections: Year-2020

Files in This Item:
File           Description    Size       Format
Fulltext.pdf                  4.17 MB    Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.