Neural networks for learning personality traits from natural language (2302.13782v1)

Published 23 Feb 2023 in cs.CL, cs.AI, and cs.LG

Abstract: Personality is considered one of the most influential research topics in psychology, as it predicts many consequential outcomes such as mental and physical health and explains human behaviour. With the widespread use of social networks as a means of communication, it is becoming increasingly important to develop models that can automatically and accurately read the essence of individuals based solely on their writing. In particular, the convergence of social and computer sciences has led researchers to develop automatic approaches for extracting and studying "hidden" information in textual data on the internet. The nature of this thesis project is highly experimental, and the motivation behind this work is to present detailed analyses on the topic, as currently there are no significant investigations of this kind. The objective is to identify an adequate semantic space that allows for defining the personality of the object to which a certain text refers. The starting point is a dictionary of adjectives that psychological literature defines as markers of the five major personality traits, or Big Five. In this work, we started with the implementation of fully-connected neural networks as a basis for understanding how simple deep learning models can provide information on hidden personality characteristics. Finally, we use a class of distributional algorithms invented in 2013 by Tomas Mikolov, which consists of using a convolutional neural network that learns the contexts of words in an unsupervised way. In this way, we construct an embedding that contains the semantic information on the text, obtaining a kind of "geometry of meaning" in which concepts are translated into linear relationships. With this last experiment, we hypothesize that an individual writing style is largely coupled with their personality traits.
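As a rough illustration of the pipeline the abstract describes, the sketch below combines a word2vec-style distributional embedding (via gensim) with a small fully-connected network (scikit-learn's MLPRegressor) that regresses Big Five trait scores from text. The toy corpus, trait labels, and hyperparameters are hypothetical placeholders, not the thesis's actual data or architecture.

```python
# Minimal sketch: word2vec-style document embeddings feeding a small
# fully-connected network that regresses Big Five trait scores.
# All data and settings below are illustrative placeholders.
import numpy as np
from gensim.models import Word2Vec
from sklearn.neural_network import MLPRegressor

# Toy corpus: each entry is a tokenized text sample from one author.
docs = [
    "i love meeting new people at lively parties".split(),
    "i prefer quiet evenings alone with a good book".split(),
    "deadlines stress me out and i worry constantly".split(),
]

# Hypothetical Big Five scores per document:
# [openness, conscientiousness, extraversion, agreeableness, neuroticism]
traits = np.array([
    [0.7, 0.5, 0.9, 0.6, 0.3],
    [0.8, 0.6, 0.2, 0.5, 0.4],
    [0.4, 0.7, 0.3, 0.5, 0.9],
])

# Learn a distributional semantic space (skip-gram word2vec).
w2v = Word2Vec(sentences=docs, vector_size=50, window=3,
               min_count=1, sg=1, epochs=200)

def doc_vector(tokens):
    # Represent a document as the mean of its word vectors.
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0)

X = np.vstack([doc_vector(d) for d in docs])

# Fully-connected network mapping the semantic space to the five traits.
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
mlp.fit(X, traits)

print(mlp.predict(X[:1]))  # predicted trait scores for the first document
```

Averaging word vectors is only one way to pool a document into the embedding space; the same fully-connected regression head could sit on top of any sentence- or document-level representation.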

