Project Type(s):
Clinical Research
Hundreds of millions of people already use Large Language Models (LLMs), including for mental health purposes, and this use has led to inadvertent harms. Critically, people with mental health conditions may be especially vulnerable to such harms.
In this project, we will develop the first computational framework to systematically quantify and benchmark the risks that LLMs pose to people with mental health conditions. Our approach will simulate interactions between hundreds of users and LLMs to evaluate safety across a variety of mental health conditions, demographics, and AI failure modes.
Project Period:
January 1, 2025 — December 31, 2025
Yes - We have funding to contribute
Funding Type(s):
Philanthropy
Garvey Institute for Brain Health Solutions
Geographic Area(s):
National
Practice Type(s):
Online/remote/apps/social media
Patient Population(s):
Adolescents, Adults
Targeted Condition(s):
General Mental Well-Being