Please use this identifier to cite or link to this item:
http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/3620
Title: | MTGLS: multi-task gaze estimation with limited supervision |
Authors: | Ghosh, S.; Hayat, M.; Dhall, A.; Knibbe, J. |
Keywords: | Biometrics; Face Processing; Human-Computer Interaction; Few-shot; Large-scale Vision Applications; Semi- and Un-supervised Learning; Transfer |
Issue Date: | 14-Jul-2022 |
Abstract: | Robust gaze estimation is a challenging task, even for deep CNNs, due to the non-availability of large-scale labeled data. Moreover, gaze annotation is a time-consuming process and requires specialized hardware setups. We propose MTGLS: a Multi-Task Gaze estimation framework with Limited Supervision, which leverages abundantly available non-annotated facial image data. MTGLS distills knowledge from off-the-shelf facial image analysis models and learns strong feature representations of human eyes, guided by three complementary auxiliary signals: (a) the line of sight of the pupil (i.e. pseudo-gaze) defined by the localized facial landmarks, (b) the head pose given by Euler angles, and (c) the orientation of the eye patch (left/right eye). To overcome inherent noise in the supervisory signals, MTGLS further incorporates a noise distribution modelling approach. Our experimental results show that MTGLS learns highly generalized representations which consistently perform well on a range of datasets. Our proposed framework outperforms the unsupervised state-of-the-art on the CAVE dataset (by ∼6.43%) and even supervised state-of-the-art methods on the Gaze360 dataset (by ∼6.59%). |
URI: | http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/3620 |
Appears in Collections: | Year-2022 |
Files in This Item:
File | Description | Size | Format
---|---|---|---
Full Text.pdf | | 7.29 MB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
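The abstract above describes a multi-task setup in which an eye-patch encoder is trained against three noisy auxiliary signals (pseudo-gaze from facial landmarks, head-pose Euler angles, and left/right eye-patch orientation) distilled from off-the-shelf models. The following is a minimal sketch of such a multi-task head and combined loss, assuming a PyTorch-style model; the architecture, module names, input size, and loss weights are illustrative assumptions and are not taken from the paper (in particular, the paper's noise distribution modelling is omitted here).

```python
# Illustrative sketch only: a multi-task eye-representation model with three
# auxiliary heads, loosely following the abstract. All sizes and weights are
# assumptions, not values from the MTGLS paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTaskEyeNet(nn.Module):
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        # Small CNN encoder for a 3x64x64 eye patch (architecture is assumed).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Three auxiliary heads, each supervised by an off-the-shelf model's output.
        self.pseudo_gaze_head = nn.Linear(feat_dim, 2)  # pseudo-gaze (yaw, pitch) from landmarks
        self.head_pose_head = nn.Linear(feat_dim, 3)    # head-pose Euler angles
        self.eye_side_head = nn.Linear(feat_dim, 2)     # left vs. right eye patch

    def forward(self, eye_patch: torch.Tensor) -> dict:
        z = self.encoder(eye_patch)
        return {
            "pseudo_gaze": self.pseudo_gaze_head(z),
            "head_pose": self.head_pose_head(z),
            "eye_side": self.eye_side_head(z),
        }


def multitask_loss(pred: dict, target: dict, w=(1.0, 1.0, 1.0)) -> torch.Tensor:
    """Weighted sum of the three auxiliary losses (weights are illustrative)."""
    l_gaze = F.l1_loss(pred["pseudo_gaze"], target["pseudo_gaze"])
    l_pose = F.l1_loss(pred["head_pose"], target["head_pose"])
    l_side = F.cross_entropy(pred["eye_side"], target["eye_side"])
    return w[0] * l_gaze + w[1] * l_pose + w[2] * l_side


if __name__ == "__main__":
    model = MultiTaskEyeNet()
    x = torch.randn(4, 3, 64, 64)  # batch of eye patches
    targets = {
        "pseudo_gaze": torch.randn(4, 2),
        "head_pose": torch.randn(4, 3),
        "eye_side": torch.randint(0, 2, (4,)),
    }
    loss = multitask_loss(model(x), targets)
    loss.backward()
    print(float(loss))
```

After pre-training on such auxiliary signals, the learned eye representations would typically be fine-tuned or linearly probed on a small amount of labeled gaze data; that transfer step is not shown here.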