INSTITUTIONAL DIGITAL REPOSITORY

EEG-ConvTransformer for single-trial EEG-based visual stimulus classification


dc.contributor.author Bagchi, S.
dc.contributor.author Bathula, D.R.
dc.date.accessioned 2022-07-16T19:41:35Z
dc.date.available 2022-07-16T19:41:35Z
dc.date.issued 2022-07-17
dc.identifier.uri http://localhost:8080/xmlui/handle/123456789/3653
dc.description.abstract Different categories of visual stimuli evoke distinct activation patterns in the human brain. These patterns can be captured with EEG for use in applications such as Brain-Computer Interfaces (BCI). However, accurate classification of these patterns from single-trial data is challenging due to the low signal-to-noise ratio of EEG. Recently, deep learning-based transformer models with multi-head self-attention have shown great potential for analyzing a variety of data. This work introduces an EEG-ConvTransformer network based on both multi-head self-attention and temporal convolution. The novel architecture incorporates self-attention modules to capture inter-region interaction patterns and convolutional filters to learn temporal patterns within a single module. Experimental results demonstrate that EEG-ConvTransformer achieves improved classification accuracy over state-of-the-art techniques across five different visual stimulus classification tasks. Finally, quantitative analysis of inter-head diversity shows low similarity in representational space, emphasizing the implicit diversity of multi-head attention. en_US
dc.language.iso en_US en_US
dc.subject Deep learning en_US
dc.subject EEG en_US
dc.subject Head representations en_US
dc.subject Inter-head diversity en_US
dc.subject Inter-region similarity en_US
dc.subject Multi-head attention en_US
dc.subject Temporal convolution en_US
dc.subject Transformer en_US
dc.subject Visual stimulus classification en_US
dc.title EEG-ConvTransformer for single-trial EEG-based visual stimulus classification en_US
dc.type Article en_US
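
The abstract above describes a module combining multi-head self-attention (for inter-region interaction patterns) with temporal convolution (for temporal patterns). The following is a minimal, hypothetical PyTorch sketch of such a block; the token layout, layer sizes, and residual/normalization scheme are illustrative assumptions, not the authors' exact EEG-ConvTransformer architecture.

import torch
import torch.nn as nn


class ConvTransformerBlock(nn.Module):
    """Sketch of a self-attention + temporal-convolution module (assumed design)."""

    def __init__(self, embed_dim=64, num_heads=4, kernel_size=7):
        super().__init__()
        # Multi-head self-attention over region tokens models
        # inter-region interaction patterns.
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(embed_dim)
        # Depthwise 1-D convolution along the token/time axis learns
        # local temporal patterns within the same module.
        self.temporal_conv = nn.Conv1d(
            embed_dim, embed_dim, kernel_size,
            padding=kernel_size // 2, groups=embed_dim,
        )
        self.norm2 = nn.LayerNorm(embed_dim)

    def forward(self, x):
        # x: (batch, tokens, embed_dim), one token per region-time patch
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + attn_out)
        # Conv1d expects (batch, channels, length), hence the transposes.
        conv_out = self.temporal_conv(x.transpose(1, 2)).transpose(1, 2)
        return self.norm2(x + conv_out)


if __name__ == "__main__":
    # Hypothetical input: 8 single-trial EEG samples, 32 tokens, 64-dim embeddings.
    block = ConvTransformerBlock()
    out = block(torch.randn(8, 32, 64))
    print(out.shape)  # torch.Size([8, 32, 64])

In a full classifier, several such blocks would typically be stacked and followed by pooling and a linear layer over the stimulus classes; those details are left out here since the record does not specify them.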

