Please use this identifier to cite or link to this item: http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/2415
Full metadata record
DC Field | Value | Language
dc.contributor.author | Ghosh, B. | -
dc.contributor.author | Dhall, A. | -
dc.contributor.author | Singla, E. | -
dc.date.accessioned | 2021-08-18T23:11:39Z | -
dc.date.available | 2021-08-18T23:11:39Z | -
dc.date.issued | 2021-08-19 | -
dc.identifier.uri | http://localhost:8080/xmlui/handle/123456789/2415 | -
dc.description.abstract | In this paper, we present an end-to-end system for enhancing the effectiveness of non-verbal gestures in human-robot interaction. We identify gestures used prominently in performances by TED talk speakers and map them to their corresponding speech context and to speech modulated according to the attention of the listener. Gestures are localised with a convolutional neural network-based approach. Dominant gestures of TED speakers are used for learning the gesture-to-speech mapping. We evaluated the engagement of the robot with people by conducting a social survey. The effectiveness of the performance was monitored by the robot, which self-improvised its speech pattern on the basis of the attention level of the audience, calculated using visual feedback from the camera. The effectiveness of the interaction, as well as the decisions made during improvisation, was further evaluated based on head-pose detection and an interaction survey. | en_US
dc.language.iso | en_US | en_US
dc.title | Automatic speech-gesture mapping and engagement evaluation in human robot interaction | en_US
dc.type | Article | en_US
Appears in Collections: Year-2019

Files in This Item:
File | Description | Size | Format
Full Text.pdf | - | 4.72 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.