A decision-level fusion framework for autism diagnosis based on the analysis of EEG signals recorded during the presentation of emotional facial expressions is proposed. EEG signals of autistic and normal children were recorded while the children viewed images of facial expressions conveying sadness, happiness, and calmness. The brain signals were then mapped into a feature space using a novel hybrid model built from the potentials recorded during the examination task. The goal of the mapping was to separate the autistic samples from the normal ones with the highest possible precision. The resulting map yields feature vectors that capture spatial, temporal, and spectral information, as well as the degree of coherence between distinct brain regions. The mapping process was optimized with a genetic algorithm that assigns weights to the feature vectors. The feature vectors corresponding to the three emotional facial expressions were then classified by support vector machines. Finally, through decision-level fusion with a majority-voting rule, the proposed structure is able to effectively distinguish autistic individuals from normal ones.
http://ift.tt/2s0gzSt
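The final fusion step described above can be illustrated with a minimal sketch: each expression-specific SVM is assumed to emit a binary decision ("autistic" or "normal"), and the three decisions are combined by majority vote. The function and label names here are illustrative, not from the paper.

```python
from collections import Counter

def majority_vote(labels):
    """Fuse per-classifier decisions by returning the most frequent label."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical decisions from the three expression-specific SVMs
decisions = {"sadness": "autistic", "happiness": "normal", "calmness": "autistic"}
fused = majority_vote(list(decisions.values()))
print(fused)  # -> autistic
```

With three classifiers and two classes, a strict majority always exists, so no tie-breaking rule is needed.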