Please use this identifier to cite or link to this item:
http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/4308
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Patil, P. | - |
dc.contributor.author | Singh, J. | - |
dc.contributor.author | Hambarde, P. | - |
dc.contributor.author | Kulkarni, A. | - |
dc.contributor.author | Chaudhary, S. | - |
dc.contributor.author | Murala, S. | - |
dc.date.accessioned | 2022-12-15T10:02:18Z | - |
dc.date.available | 2022-12-15T10:02:18Z | - |
dc.date.issued | 2022-12-15 | - |
dc.identifier.uri | http://localhost:8080/xmlui/handle/123456789/4308 | - |
dc.description.abstract | Automated video analysis is in high demand for security applications, where detecting moving objects, i.e., moving object segmentation (MOS), is a core task. We therefore propose an effective solution: a spatio-temporal squeeze-and-excitation mechanism (SqEm) based multi-level feature-sharing encoder-decoder network for MOS. The SqEm module is proposed to extract prominent foreground edge information from spatio-temporal features. Further, a multi-level feature-sharing residual decoder module is proposed, which combines the respective SqEm features with the previous output features for accurate and consistent foreground segmentation. To handle the foreground/background class-imbalance issue, we propose a region-of-interest-based edge loss. Extensive experimental analysis is conducted on three databases. The results and an ablation study demonstrate the robustness of the proposed network for unseen video understanding over SOTA methods. | en_US |
dc.language.iso | en_US | en_US |
dc.title | Robust unseen video understanding for various surveillance environments | en_US |
dc.type | Article | en_US |
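
The abstract above centers on a spatio-temporal squeeze-excitation (SqEm) module. The paper's own implementation is not reproduced here; the following is a minimal PyTorch sketch of a standard squeeze-and-excitation block adapted to fuse features from two consecutive frames, under stated assumptions. The class and argument names (`SpatioTemporalSE`, `current`, `previous`, `reduction`) and the additive two-frame fusion are illustrative choices, not taken from the paper.

```python
# A minimal sketch (not the authors' code) of a squeeze-and-excitation
# block over fused spatio-temporal features. Assumes 4-D feature maps
# of shape (batch, channels, height, width); all names are hypothetical.
import torch
import torch.nn as nn


class SpatioTemporalSE(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # "Squeeze": global average pooling collapses each channel
        # map to a single descriptor.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # "Excitation": a bottleneck MLP learns per-channel weights
        # in [0, 1] from the pooled descriptors.
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, current: torch.Tensor, previous: torch.Tensor) -> torch.Tensor:
        # Fuse current- and previous-frame features (assumed additive
        # fusion) so channel weights can reflect temporal change,
        # i.e., moving regions.
        x = current + previous
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        # Reweight the fused features channel-wise.
        return x * w


# Usage example with arbitrary shapes:
se = SpatioTemporalSE(channels=64)
cur = torch.randn(2, 64, 56, 56)
prev = torch.randn(2, 64, 56, 56)
out = se(cur, prev)  # shape (2, 64, 56, 56)
```

The design intuition, consistent with the abstract's claim, is that channel reweighting lets the network emphasize channels that respond strongly to frame-to-frame change, which is where moving-object edges live; how the paper actually fuses the temporal features may differ from the additive scheme assumed here.
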
Appears in Collections: | Year-2022 |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
Full Text.pdf | | 3.16 MB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.