INSTITUTIONAL DIGITAL REPOSITORY

Automatic eye gaze estimation using geometric and texture-based networks

dc.contributor.author Jyoti, S.
dc.contributor.author Dhall, A.
dc.date.accessioned 2019-05-14T13:39:15Z
dc.date.available 2019-05-14T13:39:15Z
dc.date.issued 2019-05-14
dc.identifier.uri http://localhost:8080/xmlui/handle/123456789/1226
dc.description.abstract Eye gaze estimation is an important problem in automatic human behavior understanding. This paper proposes a deep learning based method for inferring the eye gaze direction. The method is based on an ensemble of networks that captures both geometric and texture information. First, a Deep Neural Network (DNN) is trained on geometric features extracted from facial landmark locations. Second, for the texture-based features, three Convolutional Neural Networks (CNN) are trained on the patches around the left eye, the right eye, and both eyes combined, respectively. Finally, the information from the four channels is fused by concatenation, and dense layers are trained to predict the final eye gaze. Experiments are performed on two publicly available datasets: Columbia eye gaze and TabletGaze. The extensive evaluation shows the superior performance of the proposed framework. We also evaluate the performance of the recently proposed swish activation function against the Rectified Linear Unit (ReLU) for eye gaze estimation. en_US
dc.language.iso en_US en_US
dc.title Automatic eye gaze estimation using geometric and texture-based networks en_US
dc.type Article en_US
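
The following is a minimal sketch of the four-stream fusion architecture described in the abstract above, written with the Keras functional API. It is not the authors' released code: the input shapes, layer widths, landmark feature dimension (68 landmarks x 2 coordinates), and the 2-D gaze output are illustrative assumptions.

import tensorflow as tf
from tensorflow.keras import layers, Model

def swish(x):
    # Swish activation, x * sigmoid(x), compared against ReLU in the paper.
    return x * tf.nn.sigmoid(x)

def eye_cnn(name, input_shape=(36, 60, 3)):
    # Texture stream: a small CNN over an eye patch (left, right, or both eyes).
    inp = layers.Input(shape=input_shape, name=f"{name}_patch")
    x = layers.Conv2D(32, 3, activation=swish, padding="same")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation=swish, padding="same")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(128, activation=swish)(x)
    return inp, x

# Geometric stream: a DNN over features derived from facial landmark locations
# (the 136-D input is an assumption, e.g. 68 landmarks x 2 coordinates).
geo_in = layers.Input(shape=(136,), name="landmark_features")
geo = layers.Dense(128, activation=swish)(geo_in)
geo = layers.Dense(64, activation=swish)(geo)

left_in, left = eye_cnn("left_eye")
right_in, right = eye_cnn("right_eye")
both_in, both = eye_cnn("both_eyes", input_shape=(36, 120, 3))

# Fuse the four channels by concatenation, then train dense layers
# to regress the final gaze direction (here an assumed 2-D yaw/pitch vector).
fused = layers.Concatenate()([geo, left, right, both])
fused = layers.Dense(128, activation=swish)(fused)
gaze = layers.Dense(2, name="gaze")(fused)

model = Model(inputs=[geo_in, left_in, right_in, both_in], outputs=gaze)
model.compile(optimizer="adam", loss="mse")
model.summary()

Swapping activation=swish for activation="relu" in the layers above reproduces the ReLU baseline that the abstract says is compared against swish.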

