eprintid: 14951
rev_number: 8
eprint_status: archive
userid: 2
dir: disk0/00/01/49/51
datestamp: 2024-10-31 23:30:12
lastmod: 2024-10-31 23:30:13
status_changed: 2024-10-31 23:30:12
type: article
metadata_visibility: show
creators_name: Rehman, Madiha
creators_name: Anwer, Humaira
creators_name: Garay, Helena
creators_name: Alemany Iturriaga, Josep
creators_name: Díez, Isabel De la Torre
creators_name: Siddiqui, Hafeez ur Rehman
creators_name: Ullah, Saleem
creators_id:
creators_id:
creators_id: helena.garay@uneatlantico.es
creators_id: josep.alemany@uneatlantico.es
creators_id:
creators_id:
creators_id:
title: Decoding Brain Signals from Rapid-Event EEG for Visual Analysis Using Deep Learning
ispublished: pub
subjects: uneat_eng
divisions: uneatlantico_produccion_cientifica
divisions: uninipr_produccion_cientifica
divisions: unic_produccion_cientifica
divisions: uniromana_produccion_cientifica
full_text_status: public
keywords: BCI; EEG; visual classification; rapid-event design; block design
abstract: The perception and recognition of objects around us enable interaction with the environment. Harnessing the brain’s signals to achieve this objective has consistently proved difficult. Researchers are exploring whether the poor accuracy in this field results from the design of the temporal stimulation (block versus rapid event) or from the inherent complexity of electroencephalogram (EEG) signals. Decoding subjects’ perceptual responses has become increasingly difficult due to high noise levels and the complex nature of brain activity. EEG signals have high temporal resolution and are non-stationary, i.e., their mean and variance vary over time. This study aims to develop a deep learning model for decoding subjects’ responses to rapid-event visual stimuli and highlights the major factors that contribute to low accuracy in EEG visual classification. The proposed multi-class, multi-channel model integrates feature fusion to handle complex, non-stationary signals. The model is applied to the largest publicly available EEG dataset for visual classification, consisting of 40 object classes with 1000 images per class. Contemporary state-of-the-art studies investigating a comparably large number of object classes have achieved a maximum accuracy of 17.6%. In contrast, our approach, which integrates Multi-Class, Multi-Channel Feature Fusion (MCCFF), achieves a classification accuracy of 33.17% across 40 classes. These results demonstrate the promise of EEG signals for advancing visual classification and point to future applications in visual machine models.
date: 2024-10
publication: Sensors
volume: 24
number: 21
pagerange: 6965
id_number: doi:10.3390/s24216965
refereed: TRUE
issn: 1424-8220
official_url: http://doi.org/10.3390/s24216965
access: open
language: en
citation: Rehman, Madiha; Anwer, Humaira; Garay, Helena; Alemany Iturriaga, Josep; Díez, Isabel De la Torre; Siddiqui, Hafeez ur Rehman and Ullah, Saleem (2024) Decoding Brain Signals from Rapid-Event EEG for Visual Analysis Using Deep Learning. Sensors, 24 (21). p. 6965. ISSN 1424-8220
document_url: http://repositorio.unic.co.ao/id/eprint/14951/1/sensors-24-06965-v2.pdf
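
Note: the abstract names the Multi-Class, Multi-Channel Feature Fusion (MCCFF) model but this record does not describe its architecture. For orientation only, below is a minimal PyTorch sketch of the general idea the abstract states: per-channel feature extraction fused (here by simple concatenation) ahead of a 40-way classifier. The channel count, epoch length, layer shapes, per-epoch normalization, and the names ChannelEncoder/MCCFFSketch are illustrative assumptions, not the paper's implementation.

# Hypothetical sketch of a multi-channel feature-fusion EEG classifier.
# All layer sizes and the fusion-by-concatenation choice are assumptions;
# only the 40-class setting comes from the abstract.
import torch
import torch.nn as nn

N_CHANNELS = 128   # assumed EEG channel count (dataset-dependent)
N_SAMPLES = 440    # assumed time samples per stimulus epoch (dataset-dependent)
N_CLASSES = 40     # 40 object classes, per the abstract


class ChannelEncoder(nn.Module):
    """Extracts a temporal feature vector from a single EEG channel."""

    def __init__(self, feat_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(16, feat_dim, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time -> (batch, feat_dim, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # (batch, feat_dim)


class MCCFFSketch(nn.Module):
    """Multi-channel feature fusion: encode each channel, concatenate, classify."""

    def __init__(self, n_channels: int = N_CHANNELS, feat_dim: int = 32):
        super().__init__()
        self.encoders = nn.ModuleList(
            ChannelEncoder(feat_dim) for _ in range(n_channels)
        )
        self.classifier = nn.Linear(n_channels * feat_dim, N_CLASSES)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, n_samples); z-score each channel per epoch
        # as a simple guard against the non-stationarity noted in the abstract.
        x = (x - x.mean(dim=-1, keepdim=True)) / (x.std(dim=-1, keepdim=True) + 1e-6)
        feats = [enc(x[:, c : c + 1, :]) for c, enc in enumerate(self.encoders)]
        return self.classifier(torch.cat(feats, dim=-1))  # (batch, N_CLASSES)


if __name__ == "__main__":
    model = MCCFFSketch()
    dummy = torch.randn(4, N_CHANNELS, N_SAMPLES)  # fake batch of EEG epochs
    print(model(dummy).shape)  # torch.Size([4, 40])

Concatenation is the simplest possible fusion operator; the published model may fuse channel features differently, and the reported 33.17% accuracy cannot be reproduced from this sketch.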