Please use this identifier to cite or link to this item:
http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/1398
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Mehta, V. | - |
dc.contributor.author | Katta, S.S. | - |
dc.contributor.author | Yadav, D.P. | - |
dc.contributor.author | Dhall, A. | - |
dc.date.accessioned | 2019-11-26T13:33:13Z | - |
dc.date.available | 2019-11-26T13:33:13Z | - |
dc.date.issued | 2019-11-26 | - |
dc.identifier.uri | http://localhost:8080/xmlui/handle/123456789/1398 | - |
dc.description.abstract | Traffic accidents cause over a million deaths every year, of which a large fraction is attributed to drunk driving. An automated intoxicated driver detection system in vehicles would be useful in reducing accidents and the related financial costs. Existing solutions require special equipment such as electrocardiograms, infrared cameras, or breathalyzers. In this work, we propose a new dataset called DIF (Dataset of perceived Intoxicated Faces), which contains audiovisual data of intoxicated and sober people obtained from online sources. To the best of our knowledge, this is the first work on automatic bimodal, non-invasive intoxication detection. Convolutional Neural Networks (CNNs) and Deep Neural Networks (DNNs) are trained to compute the video and audio baselines, respectively. A 3D CNN is used to exploit the spatio-temporal changes in the video. A simple variation of the traditional 3D convolution block is proposed, based on inducing non-linearity between the spatial and temporal channels. Extensive experiments are performed to validate the approach and baselines. | en_US |
dc.language.iso | en_US | en_US |
dc.title | DIF: Dataset of Perceived Intoxicated Faces for Drunk Person Identification | en_US |
dc.type | Article | en_US |
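
The abstract mentions a variation of the traditional 3D convolution block that induces non-linearity between the spatial and temporal channels. Below is a minimal PyTorch sketch of one such factorized block; the module name, channel sizes, normalization layers, and ReLU placement are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class FactorizedSpatioTemporalBlock(nn.Module):
    """Hypothetical sketch: a 3D convolution factorized into a spatial (1 x k x k)
    convolution followed by a temporal (t x 1 x 1) convolution, with a
    non-linearity inserted between the two stages. The block used in the
    paper may differ in its details."""

    def __init__(self, in_channels, out_channels, spatial_kernel=3, temporal_kernel=3):
        super().__init__()
        # Spatial stage: convolves over height and width only.
        self.spatial = nn.Conv3d(
            in_channels, out_channels,
            kernel_size=(1, spatial_kernel, spatial_kernel),
            padding=(0, spatial_kernel // 2, spatial_kernel // 2),
        )
        self.bn1 = nn.BatchNorm3d(out_channels)
        # Temporal stage: convolves across frames only.
        self.temporal = nn.Conv3d(
            out_channels, out_channels,
            kernel_size=(temporal_kernel, 1, 1),
            padding=(temporal_kernel // 2, 0, 0),
        )
        self.bn2 = nn.BatchNorm3d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # x: (batch, channels, frames, height, width)
        x = self.relu(self.bn1(self.spatial(x)))   # non-linearity between spatial and temporal stages
        x = self.relu(self.bn2(self.temporal(x)))
        return x


if __name__ == "__main__":
    # Example: a batch of 2 clips, each 16 RGB frames at 112 x 112 resolution.
    clip = torch.randn(2, 3, 16, 112, 112)
    block = FactorizedSpatioTemporalBlock(3, 64)
    print(block(clip).shape)  # torch.Size([2, 64, 16, 112, 112])
```

Compared with a full 3D convolution, this kind of factorization adds an extra non-linear activation between the spatial and temporal operations, which is the general idea the abstract describes; whether the paper's block matches this exact layout is an assumption here.
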
Appears in Collections: Year-2019
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Full Text.pdf | | 1.2 MB | Adobe PDF | View/Open |