dc.description.abstract |
Most prevailing algorithms rely heavily on additional pre-trained modules trained for other tasks or on complicated training procedures, or they neglect inter-frame spatio-temporal structural dependencies. Moreover, how well existing methods generalize to completely unseen data is difficult to assess. In particular, outdoor videos suffer from adverse atmospheric conditions such as poor visibility and inclement weather. In this letter,
a novel end-to-end multi-scale temporal edge aggregation (MTPA) network with adversarial learning is proposed for scene-dependent and scene-independent object segmentation. The MTPA module extracts comprehensive spatio-temporal features from the current and reference frames. These MTPA features guide the corresponding decoder stages through skip connections. To obtain accurate and consistent foreground object(s), the previous frame's output is fed back at the matching scale, together with the corresponding MTPA features, at each decoder input. The performance of the proposed method is evaluated on the CDnet-2014 and LASIESTA video datasets. The proposed method outperforms existing state-of-the-art methods in both scene-dependent and scene-independent analysis. |
en_US |
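
The abstract describes decoder stages that fuse scale-matched MTPA skip features with the previous frame's segmentation output. The following is a minimal, hypothetical sketch of one such fusion stage, assuming a PyTorch-style implementation; the class name DecoderStage, channel sizes, and fusion layers are illustrative assumptions and are not taken from the paper.

# Hypothetical sketch: one decoder stage fusing upsampled decoder features,
# MTPA skip features at the same scale, and the previous frame's output mask.
# All names and sizes below are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderStage(nn.Module):
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        # +1 input channel for the single-channel previous-frame mask feedback
        self.fuse = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch + 1, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, mtpa_skip, prev_mask):
        # Upsample coarse decoder features to the skip-connection resolution
        x = F.interpolate(x, size=mtpa_skip.shape[-2:], mode="bilinear",
                          align_corners=False)
        # Resize the previous frame's segmentation output to the same scale
        prev = F.interpolate(prev_mask, size=mtpa_skip.shape[-2:],
                             mode="bilinear", align_corners=False)
        # Concatenate and fuse the three inputs
        return self.fuse(torch.cat([x, mtpa_skip, prev], dim=1))

# Usage example with illustrative tensor shapes
stage = DecoderStage(in_ch=64, skip_ch=32, out_ch=32)
x = torch.randn(1, 64, 30, 40)      # coarse decoder features
skip = torch.randn(1, 32, 60, 80)   # MTPA features at this scale
prev = torch.rand(1, 1, 240, 320)   # previous-frame foreground mask
out = stage(x, skip, prev)          # -> (1, 32, 60, 80)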