Please use this identifier to cite or link to this item: http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/2421
Full metadata record
DC Field: Value
dc.contributor.author: Sharma, G.
dc.contributor.author: Ghosh, S.
dc.contributor.author: Dhall, A.
dc.date.accessioned: 2021-08-19T18:41:25Z
dc.date.available: 2021-08-19T18:41:25Z
dc.date.issued: 2021-08-20
dc.identifier.uri: http://localhost:8080/xmlui/handle/123456789/2421
dc.description.abstract: This paper proposes a database for group-level emotion recognition in videos. The motivation comes from the large amount of affective content that users share online, which offers an opportunity to use this perceived affect for various tasks. Most work in this area has been restricted to controlled environments. In this paper, we explore group-level emotion and cohesion in a real-world setting. Moving from a controlled environment to real-world scenarios involves several challenges, such as face-tracking limitations, illumination variations, occlusion, and the type of gathering. To address these challenges, we propose the ‘Video level Group AFfect (VGAF)’ database, containing 1,004 videos downloaded from the web. The collected videos vary widely in terms of gender, ethnicity, type of social event, number of people, pose, etc. We have labelled the database for the group-level emotion and cohesion tasks and propose a baseline based on the Inception V3 network.
dc.language.iso: en_US
dc.subject: Group Level Emotion
dc.subject: Group Cohesion
dc.subject: Multimodal affect
dc.subject: Context analysis
dc.title: Automatic group level affect and cohesion prediction in videos
dc.type: Article
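The abstract describes a video-level baseline built on the Inception V3 network. As a minimal illustrative sketch (not the authors' code), the pipeline can be thought of as: extract a 2048-dimensional Inception V3 feature per frame, average-pool the features over time into one video-level descriptor, and apply simple heads for the two tasks. Here Inception V3 is stood in for by random placeholder features, and the three-class emotion label set and the linear heads are assumptions for illustration only.

```python
# Hedged sketch of a video-level baseline: per-frame features (stand-in for
# Inception V3 activations) are average-pooled over time, then fed to linear
# heads for emotion classification and cohesion regression.
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 2048      # size of Inception V3's final pooling layer
N_EMOTIONS = 3       # assumed label set for illustration

def pool_frames(frame_feats: np.ndarray) -> np.ndarray:
    """Average-pool per-frame features into a single video descriptor."""
    return frame_feats.mean(axis=0)

def predict(video_feat: np.ndarray, w_emo: np.ndarray, w_coh: np.ndarray):
    """Linear heads: emotion class (argmax of logits) and a cohesion score."""
    emotion = int(np.argmax(video_feat @ w_emo))
    cohesion = float(video_feat @ w_coh)
    return emotion, cohesion

# Placeholder "Inception V3" features for a 30-frame clip (random stand-in).
frames = rng.standard_normal((30, FEAT_DIM))
w_emo = rng.standard_normal((FEAT_DIM, N_EMOTIONS))
w_coh = rng.standard_normal(FEAT_DIM)

video_feat = pool_frames(frames)
emotion, cohesion = predict(video_feat, w_emo, w_coh)
```

In a real implementation the placeholder features would be replaced by activations from a pretrained Inception V3, and the heads would be trained on the VGAF labels.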
Appears in Collections: Year-2019

Files in This Item:
File: Full Text.pdf | Size: 2.34 MB | Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.