Please use this identifier to cite or link to this item: http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/2905
Full metadata record
DC Field                  Value                                               Language
dc.contributor.author     Dhall, A.                                           -
dc.contributor.author     Joshi, J.                                           -
dc.contributor.author     Goecke, R.                                          -
dc.contributor.author     Hoey, J.                                            -
dc.contributor.author     Ghosh, S.                                           -
dc.contributor.author     Gedeon, T.                                          -
dc.date.accessioned       2021-10-06T17:25:30Z                                -
dc.date.available         2021-10-06T17:25:30Z                                -
dc.date.issued            2021-10-06                                          -
dc.identifier.uri         http://localhost:8080/xmlui/handle/123456789/2905   -
dc.description.abstract   Research in automatic affect recognition has come a long way. This paper describes the fifth Emotion Recognition in the Wild (EmotiW) challenge, EmotiW 2017. EmotiW aims at providing a common benchmarking platform for researchers working on different aspects of affective computing. This year there are two sub-challenges: (a) audio-video emotion recognition and (b) group-level emotion recognition, based on the Acted Facial Expressions in the Wild database and the Group Affect database, respectively. The particular focus of the challenge is to evaluate methods in ‘in the wild’ settings, where ‘in the wild’ describes the varied environments represented in the images and videos, reflecting real-world rather than lab-like scenarios. The baselines, data, and protocols of the two sub-challenges, along with the challenge participation, are discussed in detail in this paper.   en_US
dc.language.iso           en_US                                               en_US
dc.subject                Audio-video data corpus                             en_US
dc.subject                Emotion recognition                                 en_US
dc.subject                Group-level emotion recognition                     en_US
dc.subject                Facial expression challenge                         en_US
dc.subject                Affect analysis in the wild                         en_US
dc.title                  From individual to group-level emotion recognition: EmotiW 5.0   en_US
dc.type                   Article                                             en_US
Appears in Collections: Year-2017

Files in This Item:
File             Description    Size       Format
Full Text.pdf                   3.42 MB    Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.