Enhancing Suicide Risk Detection on Social Media through Semi-Supervised Deep Label Smoothing (2405.05795v1)

Published 9 May 2024 in cs.LG

Abstract: Suicide is a prominent issue in society. Unfortunately, many people at risk for suicide do not receive the support required. Barriers to people receiving support include social stigma and lack of access to mental health care. With the popularity of social media, people have turned to online forums, such as Reddit, to express their feelings and seek support. This provides the opportunity to support people with the aid of artificial intelligence. Social media posts can be classified, using text classification, to help connect people with professional help. However, these systems fail to account for the inherent uncertainty in classifying mental health conditions. Unlike other areas of healthcare, mental health conditions have no objective measures of disease, often relying on expert opinion. Thus, when formulating deep learning problems involving mental health, using hard, binary labels does not accurately represent the true nature of the data. In these settings, where human experts may disagree, fuzzy or soft labels may be more appropriate. The current work introduces a novel label smoothing method which we use to capture any uncertainty within the data. We test our approach on a five-label multi-class classification problem. We show that our semi-supervised deep label smoothing method improves classification accuracy above the existing state of the art. Where existing research reports an accuracy of 43% on the Reddit C-SSRS dataset, our empirical experiments evaluating the novel label smoothing method improve upon this benchmark to 52%. These improvements in model performance have the potential to better support those experiencing mental distress. Future work should explore the use of probabilistic methods in natural language processing and the quantification of both epistemic and aleatoric uncertainty in noisy datasets.
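The abstract contrasts hard one-hot labels with soft labels for a five-class problem. As background, the general idea can be sketched with classic uniform label smoothing; this is an illustration of the underlying concept only, not the paper's semi-supervised deep label smoothing method, whose details the abstract does not give. The function name and the epsilon value are illustrative choices.

```python
import numpy as np

def smooth_labels(hard_labels, num_classes=5, epsilon=0.1):
    """Convert hard integer class labels to smoothed probability vectors.

    Each one-hot target is scaled by (1 - epsilon), and the epsilon mass
    is spread uniformly across all classes, so every row still sums to 1.
    """
    one_hot = np.eye(num_classes)[hard_labels]
    return (1.0 - epsilon) * one_hot + epsilon / num_classes

# With epsilon=0.1 and 5 classes, the true class receives
# 0.9 + 0.1/5 = 0.92, and each other class receives 0.02.
targets = smooth_labels(np.array([0, 3]))
```

Soft targets like these replace the one-hot vectors in a cross-entropy loss, encoding the label uncertainty the abstract argues is inherent in expert-annotated mental health data.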
