Please use this identifier to cite or link to this item: http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/1962
Full metadata record
DC Field | Value | Language
dc.contributor.author | Gupta, P. | -
dc.contributor.author | Dhall, A. | -
dc.contributor.author | Chugh, K. | -
dc.contributor.author | Subramanian, R. | -
dc.date.accessioned | 2021-07-02T00:09:32Z | -
dc.date.available | 2021-07-02T00:09:32Z | -
dc.date.issued | 2021-07-02 | -
dc.identifier.uri | http://localhost:8080/xmlui/handle/123456789/1962 | -
dc.description.abstract | We present FakeET, an eye-tracking database to understand human visual perception of deepfake videos. Given that the principal purpose of deepfakes is to deceive human observers, FakeET is designed to understand and evaluate the ease with which viewers can detect synthetic video artifacts. FakeET contains viewing patterns compiled from 40 users via the Tobii desktop eye-tracker for 811 videos from the Google Deepfake dataset, with a minimum of two viewings per video. Additionally, EEG responses acquired via the Emotiv sensor are also available. The compiled data confirms (a) distinct eye-movement characteristics for real vs. fake videos; (b) the utility of the eye-track saliency maps for spatial forgery localization and detection; and (c) Error Related Negativity (ERN) triggers in the EEG responses, and the ability of the raw EEG signal to distinguish between real and fake videos. | en_US
dc.language.iso | en_US | en_US
dc.subject | deepfake | en_US
dc.subject | visual perception | en_US
dc.subject | eye-tracking | en_US
dc.subject | EEG | en_US
dc.title | The eyes know it: FakeET - An eye-tracking database to understand deepfake perception | en_US
dc.type | Article | en_US
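The abstract notes that eye-track saliency maps are useful for spatial forgery localization. As a minimal sketch of that idea (not the authors' pipeline), gaze fixations can be accumulated into a per-frame heat map and smoothed with a Gaussian kernel; the `(x, y)` fixation format and the `sigma` value below are assumptions, since FakeET's file layout is not specified here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(fixations, frame_h, frame_w, sigma=30.0):
    """Accumulate gaze fixations into a normalized saliency map.

    fixations: iterable of (x, y) pixel coordinates -- a hypothetical
    format for illustration, not FakeET's actual schema.
    sigma: Gaussian smoothing width in pixels (assumed value).
    """
    heat = np.zeros((frame_h, frame_w), dtype=np.float64)
    for x, y in fixations:
        xi, yi = int(round(x)), int(round(y))
        # Ignore fixations that fall outside the frame.
        if 0 <= yi < frame_h and 0 <= xi < frame_w:
            heat[yi, xi] += 1.0
    # Smooth the point fixations into a continuous saliency surface.
    heat = gaussian_filter(heat, sigma=sigma)
    if heat.max() > 0:
        heat /= heat.max()  # scale to [0, 1]
    return heat

# Example: two nearby fixations on a 320x240 frame.
m = saliency_map([(100, 50), (102, 52)], frame_h=240, frame_w=320)
```

Thresholding such a map, or comparing maps for real vs. fake viewings, is one plausible route to the spatial localization the abstract describes.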
Appears in Collections: Year-2020

Files in This Item:
File | Description | Size | Format
Fulltext.pdf | - | 1.45 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.