Please use this identifier to cite or link to this item:
http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/2007
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Patil, P. W. | - |
dc.contributor.author | Biradar, K. M. | - |
dc.contributor.author | Dudhane, A. | - |
dc.contributor.author | Murala, S. | - |
dc.date.accessioned | 2021-07-04T10:38:36Z | - |
dc.date.available | 2021-07-04T10:38:36Z | - |
dc.date.issued | 2021-07-04 | - |
dc.identifier.uri | http://localhost:8080/xmlui/handle/123456789/2007 | - |
dc.description.abstract | Moving object segmentation (MOS) in videos is a highly demanding task for security-based applications such as automated outdoor video surveillance. Most existing techniques proposed for MOS depend heavily on fine-tuning a model on the first frame(s) of the test sequence or on complicated training procedures, which limits the practical serviceability of the algorithms. In this paper, an inherent correlation learning-based edge extraction mechanism (EEM) and a dense residual block (DRB) are proposed for discriminative foreground representation. The multi-scale EEM module provides efficient foreground edge-related information (with the help of the encoder) to the decoder through skip connections at each subsequent scale. Further, the responses of the optical flow encoder stream and the last EEM module are embedded in the bridge network. The bridge network comprises multi-scale residual blocks with dense connections to learn effective and efficient foreground-relevant features. Finally, to generate accurate and consistent foreground object maps, a decoder block is proposed with skip connections from the respective multi-scale EEM module feature maps and the down-sampled response of the previous frame's output. Notably, the proposed network does not require any pre-trained models or fine-tuning of the parameters on the initial frame(s) of the test video. The performance of the proposed network is evaluated under different configurations, namely disjoint, cross-data, and global training-testing. An ablation study is conducted to analyse each module of the proposed network. To demonstrate the effectiveness of the proposed framework, a comprehensive analysis on four benchmark video datasets is conducted. Experimental results show that the proposed approach outperforms state-of-the-art methods for MOS. (An illustrative sketch of a dense residual block follows this metadata record.) | en_US |
dc.language.iso | en_US | en_US |
dc.title | An End-to-End Edge Aggregation Network for Moving Object Segmentation | en_US |
dc.type | Article | en_US |
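The abstract above describes a bridge network built from dense residual blocks (DRBs), i.e. multi-scale residual blocks with dense connections. The record does not give the paper's actual layer configuration, so the following is only a minimal sketch, assuming PyTorch; the class name `DenseResidualBlock` and the parameters `channels`, `growth`, and `n_layers` are hypothetical choices for illustration. It shows the general pattern the abstract names: dense connectivity inside the block (each convolution sees the concatenation of all earlier feature maps) and a residual connection around it.

```python
import torch
import torch.nn as nn


class DenseResidualBlock(nn.Module):
    """DRB-style block: dense connectivity inside, residual connection outside.

    Channel widths and depth are illustrative assumptions, not values taken
    from the paper.
    """

    def __init__(self, channels: int = 64, growth: int = 32, n_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            in_ch += growth  # dense connectivity: each layer sees all earlier outputs
        # 1x1 convolution fuses the densely concatenated features back to `channels`
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        # residual connection: add the block input to the fused dense features
        return x + self.fuse(torch.cat(features, dim=1))


if __name__ == "__main__":
    block = DenseResidualBlock()
    y = block(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32])
```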
Appears in Collections: Year-2020
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
Fulltext.pdf | | 984.79 kB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.