Abstract:
Scene changes that typically occur in real-world settings degrade anomaly detection performance over the long run, yet most existing methods ignore the challenge of temporal concept drift in video surveillance. In this paper, we propose an unsupervised end-to-end framework for adaptive scene-level anomaly detection. We employ multivariate Gaussian mixtures for adaptive scene learning: the mixture represents the distribution of normal and abnormal events observed so far and adapts to gradual scene changes. We introduce a Mahalanobis distance-based contribution factor to update the mixture parameters on the arrival of each new event.
We present a detailed discussion and experiments to determine the optimal local and global temporal context. Existing public anomaly detection datasets are too short (at most 1.5 hours) to evaluate adaptive approaches; therefore, we also collected a dataset of 10 hours of continuous footage. On this dataset, we achieve a promising performance of 85.14% AUC and 21.26% EER.
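
To make the adaptation step concrete, the sketch below illustrates one way a Mahalanobis distance-based contribution factor can drive online updates of a multivariate Gaussian mixture. The class name AdaptiveGaussianMixture, the learning rate lr, and the softmax-style weighting of negative squared distances are illustrative assumptions, not the paper's exact formulation.

import numpy as np

class AdaptiveGaussianMixture:
    """Minimal sketch of an adaptively updated multivariate Gaussian mixture."""

    def __init__(self, means, covariances, weights, lr=0.05):
        self.means = np.asarray(means, dtype=float)       # shape (K, D)
        self.covs = np.asarray(covariances, dtype=float)  # shape (K, D, D)
        self.weights = np.asarray(weights, dtype=float)   # shape (K,)
        self.lr = lr  # adaptation rate (assumed hyperparameter)

    def mahalanobis_sq(self, x):
        # Squared Mahalanobis distance of event x to every mixture component.
        d2 = np.empty(len(self.means))
        for k, (mu, cov) in enumerate(zip(self.means, self.covs)):
            diff = x - mu
            d2[k] = diff @ np.linalg.solve(cov, diff)
        return d2

    def update(self, x):
        # Adapt mixture parameters on the arrival of a new event x.
        x = np.asarray(x, dtype=float)
        d2 = self.mahalanobis_sq(x)
        # Contribution factor (assumed form): components closer to the new
        # event, in the Mahalanobis sense, absorb more of the update.
        contrib = np.exp(-0.5 * d2)
        contrib /= contrib.sum()
        for k in range(len(self.means)):
            eta = self.lr * contrib[k]
            diff = x - self.means[k]
            self.means[k] = self.means[k] + eta * diff
            self.covs[k] = (1.0 - eta) * self.covs[k] + eta * np.outer(diff, diff)
        self.weights = (1.0 - self.lr) * self.weights + self.lr * contrib
        return float(d2.min())  # smallest distance as a simple anomaly score

# Toy usage: a two-component mixture over 2-D event features.
gmm = AdaptiveGaussianMixture(
    means=[[0.0, 0.0], [5.0, 5.0]],
    covariances=[np.eye(2), np.eye(2)],
    weights=[0.5, 0.5],
)
score = gmm.update(np.array([0.3, -0.2]))

Here the smallest Mahalanobis distance is returned as a simple anomaly score for the incoming event; the paper's actual scoring and thresholding may differ.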