A Brain-inspired Computational Model for Human-like Concept Learning (2401.06471v1)
Abstract: Concept learning is a fundamental aspect of human cognition and plays a critical role in mental processes such as categorization, reasoning, memory, and decision-making. Researchers across disciplines have long been interested in how individuals acquire concepts. To elucidate the mechanisms underlying human concept learning, this study draws on findings from computational neuroscience and cognitive psychology. These findings indicate that the brain's representation of concepts relies on two essential components: multisensory representation and text-derived representation, which are coordinated by a semantic control system to yield acquired concepts. Inspired by this mechanism, the study develops a human-like computational model for concept learning based on spiking neural networks. By addressing the challenges posed by the diverse sources and imbalanced dimensionality of the two forms of concept representations, the model attains human-like concept representations. Tests on similar concepts show that the model, which mimics how humans learn concepts, yields representations that closely align with human cognition.
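As a rough illustration of the fusion problem the abstract describes, the sketch below combines a low-dimensional multisensory vector with a higher-dimensional text-derived embedding by projecting both into a shared space and encoding the result as Poisson-style spike trains. All dimensions, the random projections, and the simple spike-level combination are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's method): fuse a low-dimensional multisensory
# representation with a higher-dimensional text-derived one by projecting both
# into a shared space and rate-coding the result as spike trains.
import numpy as np

rng = np.random.default_rng(0)

D_SENSE, D_TEXT, D_SHARED = 11, 300, 64   # assumed sizes (deliberately imbalanced)
T_STEPS, MAX_RATE = 100, 0.5              # time steps and peak firing probability

# Hypothetical inputs for one concept, e.g. "apple".
sense_vec = rng.random(D_SENSE)           # e.g. per-modality ratings in [0, 1]
text_vec = rng.standard_normal(D_TEXT)    # e.g. a distributional word embedding

# Random projections stand in for learned mappings into a shared space.
W_sense = rng.standard_normal((D_SHARED, D_SENSE)) / np.sqrt(D_SENSE)
W_text = rng.standard_normal((D_SHARED, D_TEXT)) / np.sqrt(D_TEXT)

def to_rates(x):
    """Squash a real-valued vector into per-neuron firing probabilities."""
    x = (x - x.min()) / (x.max() - x.min() + 1e-8)
    return MAX_RATE * x

def poisson_spikes(rates, t_steps):
    """Bernoulli approximation of Poisson rate coding: one row per time step."""
    return (rng.random((t_steps, rates.size)) < rates).astype(np.uint8)

# Encode each stream, then combine spike trains; the elementwise OR is only a
# placeholder for whatever coordination the semantic control system performs.
sense_spikes = poisson_spikes(to_rates(W_sense @ sense_vec), T_STEPS)
text_spikes = poisson_spikes(to_rates(W_text @ text_vec), T_STEPS)
concept_spikes = np.maximum(sense_spikes, text_spikes)

# Read out a fused concept vector as mean firing rates over the time window.
concept_repr = concept_spikes.mean(axis=0)
print(concept_repr.shape)  # (64,)
```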
Authors: Yuwei Wang, Yi Zeng