http://opendata.unex.es/recurso/ciencia-tecnologia/investigacion/publicaciones/Publicacion/2021-102

Literals

  • ou:urlOrcid
  • vivo:identifier
    • 2021-102
  • ou:eid
    • 2-s2.0-85101703822
  • dcterms:contributor
    • Bird J.J., Faria D.R., Manso L.J., Ayrosa P.P.S., Ekart A.
  • bibo:doi
    • 10.1088/1741-2552/abda0c
  • fabio:hasPublicationYear
    • 2021
  • bibo:eissn
    • 1741-2552
  • dcterms:creator
    • Bird J.J.
  • ou:bibtex
    • @article{ec02c9023eb642f491a92fff4988cd0d,
        title = 'A study on CNN image classification of EEG Signals represented in 2D and 3D',
        abstract = 'Objective. The novelty of this study consists of the exploration of multiple new approaches of data pre-processing of brainwave signals, wherein statistical features are extracted and then formatted as visual images based on the order in which dimensionality reduction algorithms select them. This data is then treated as visual input for 2D and 3D convolutional neural networks (CNNs) which then further extract 'features of features'. Approach. Statistical features derived from three electroencephalography (EEG) datasets are presented in visual space and processed in 2D and 3D space as pixels and voxels respectively. Three datasets are benchmarked, mental attention states and emotional valences from the four TP9, AF7, AF8 and TP10 10-20 electrodes and an eye state data from 64 electrodes. Seven hundred twenty-nine features are selected through three methods of selection in order to form 27 × 27 images and 9 × 9 × 9 cubes from the same datasets. CNNs engineered for the 2D and 3D preprocessing representations learn to convolve useful graphical features from the data. Main results. A 70/30 split method shows that the strongest methods for classification accuracy of feature selection are One Rule for attention state and Relative Entropy for emotional state both in 2D. In the eye state dataset 3D space is best, selected by Symmetrical Uncertainty. Finally, 10-fold cross validation is used to train best topologies. Final best 10-fold results are 97.03% for attention state (2D CNN), 98.4% for Emotional State (3D CNN), and 97.96% for Eye State (3D CNN). Significance. The findings of the framework presented by this work show that CNNs can successfully convolve useful features from a set of pre-computed statistical temporal features from raw EEG waves. The high performance of K-fold validated algorithms argue that the features learnt by the CNNs hold useful knowledge for classification in addition to the pre-computed features.',
        keywords = 'EEG classification, applied intelligence, data preprocessing, human-machine interaction',
        author = 'Bird, {Jordan J} and Faria, {Diego R} and Manso, {Luis J} and Ayrosa, {Pedro Paulo Da Silva} and Aniko Ekart',
        note = '{\textcopyright}2021 IOP Publishing Ltd. After the Embargo Period, the full text of the Accepted Manuscript may be made available on the non-commercial repository for anyone with an internet connection to read and download. After the Embargo Period a CC BY-NC-ND 3.0 licence applies to the Accepted Manuscript, in which case it may then only be posted under that CC BY-NC-ND licence provided that all the terms of the licence are adhered to, and any copyright notice and any cover sheet applied by IOP is not deleted or modified.',
        year = '2021',
        month = apr,
        doi = '10.1088/1741-2552/abda0c',
        language = 'English',
        volume = '18',
        journal = 'Journal of Neural Engineering',
        issn = '1741-2560',
        publisher = 'IOP Publishing Ltd.',
        number = '2',
      }
  • bibo:issn
    • 1741-2560
  • dcterms:publisher
    • Journal of Neural Engineering
  • ou:tipoPublicacion
    • Article
  • dcterms:title
    • A study on CNN image classification of EEG signals represented in 2D and 3D
  • vcard:url
  • ou:urlScopus
  • ou:vecesCitado
    • 25
  • bibo:volume
    • 18
