
A zero-shot learning approach to the development of brain-computer interfaces for image retrieval


Authors: Ben McCartney aff001;  Jesus Martinez-del-Rincon aff001;  Barry Devereux aff001;  Brian Murphy aff001
Affiliations: Queen’s University Belfast, United Kingdom aff001;  BrainWaveBank Ltd. Belfast, United Kingdom aff002
Published in: PLoS ONE 14(9), 2019
Category: Research Article
DOI: https://doi.org/10.1371/journal.pone.0214342

Abstract

Brain decoding—the process of inferring a person’s momentary cognitive state from their brain activity—has enormous potential in the field of human-computer interaction. In this study we propose a zero-shot EEG-to-image brain decoding approach which makes use of state-of-the-art EEG preprocessing and feature selection methods, and which maps EEG activity to biologically inspired computer vision and linguistic models. We apply this approach to solve the problem of identifying viewed images from recorded brain activity in a reliable and scalable way. We demonstrate competitive decoding accuracies across two EEG datasets, using a zero-shot learning framework more applicable to real-world image retrieval than traditional classification techniques.
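The zero-shot framework described in the abstract, mapping recorded EEG activity into a shared visual/semantic feature space and then ranking candidate images by similarity to the predicted vector, can be sketched roughly as follows. This is a minimal illustration on synthetic data, assuming a simple linear (ridge) mapping and cosine-similarity retrieval; the dimensions, regularisation strength, and feature spaces are placeholders, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions only (not the paper's feature sizes):
# 100 training stimuli, 20 selected EEG features per stimulus, and
# 10-dimensional target vectors from a vision model or word embedding.
n_train, n_eeg, n_sem = 100, 20, 10
W_true = rng.normal(size=(n_eeg, n_sem))     # unknown "true" EEG-to-feature map
X_train = rng.normal(size=(n_train, n_eeg))  # EEG features per training stimulus
Y_train = X_train @ W_true + 0.05 * rng.normal(size=(n_train, n_sem))

# Learn a linear EEG-to-feature-space mapping with ridge regression,
# using the closed form W = (X'X + lam*I)^-1 X'Y.
lam = 0.1
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_eeg),
                    X_train.T @ Y_train)

def zero_shot_retrieve(x_eeg, candidates):
    """Rank candidate images (rows of precomputed feature vectors, none of
    which appeared in training) by cosine similarity to the feature vector
    predicted from one EEG response; returns indices, best match first."""
    pred = x_eeg @ W
    pred = pred / np.linalg.norm(pred)
    cand = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    return np.argsort(cand @ pred)[::-1]

# Zero-shot evaluation: a new EEG response and five unseen candidate images,
# where candidate 0 is the feature vector of the image actually viewed.
x_new = rng.normal(size=n_eeg)
viewed = x_new @ W_true
candidates = np.vstack([viewed, rng.normal(size=(4, n_sem))])
ranking = zero_shot_retrieve(x_new, candidates)
```

Because the candidate images enter only through their precomputed feature vectors, the retrieval set can contain images never seen during training, which is what makes this kind of approach zero-shot and scalable compared with per-class classification.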

Keywords:

Biology and life sciences – Engineering and technology – Research and analysis methods – Neuroscience – Cognitive science – Cognitive psychology – Learning – Learning and memory – Psychology – Social sciences – Computer and information sciences – Medicine and health sciences – Physiology – Diagnostic medicine – Clinical medicine – Imaging techniques – Brain mapping – Functional magnetic resonance imaging – Neuroimaging – Diagnostic radiology – Magnetic resonance imaging – Radiology and imaging – Sensory perception – Electrophysiology – Neurophysiology – Bioassays and physiological analysis – Electrophysiological techniques – Brain electrophysiology – Electroencephalography – Clinical neurophysiology – Vision – Linguistics – Semantics – Software engineering – Preprocessing – Computer vision – Computer imaging



