
Abstract

Large language model (LLM) chatbots are susceptible to biases and hallucinations, yet current evaluations of mental wellness technologies lack comprehensive case studies that assess their practical applications. Here, we address this gap by introducing the MHealth-EVAL framework, a new role-play-based interactive evaluation method designed specifically to evaluate the appropriateness, trustworthiness, and safety of mental wellness chatbots. We also introduce Psyfy, a new chatbot that leverages LLMs to facilitate transdiagnostic Cognitive Behavioral Therapy (CBT). We demonstrate the MHealth-EVAL framework's utility through a comparative study of two versions of Psyfy against standard baseline chatbots. Our results showed that Psyfy chatbots outperformed the baseline chatbots in delivering appropriate responses, engaging users, and avoiding untrustworthy responses. However, both Psyfy and the baseline chatbots exhibited limitations, such as providing predominantly US-centric resources. While Psyfy chatbots identified most unsafe situations and avoided giving unsafe responses, they sometimes struggled to recognize subtle harmful intentions when prompted in role-play scenarios. Our study demonstrates a practical application of the MHealth-EVAL framework and showcases Psyfy's utility in harnessing LLMs to enhance user engagement and provide flexible, appropriate responses aligned with an evidence-based CBT approach.
