Evaluating large language models in conveying determinants of mental health

Project Type(s):

Principal Investigator(s):
Co-Investigator(s):
Accepting Trainees?
No

This project aims to evaluate the bias and accuracy of large language models (LLMs) in conveying the causes of mental health disorders (e.g., anxiety). To address the lack of relevant data and the difficulty of annotation, we will responsibly collect human-LLM conversations about mental health advice and work with domain experts to create fact sheets on the social and individual determinants of mental health conditions. We will then analyze these data to assess the extent to which LLMs present social determinants of health (SDOH) versus individual factors, and the accuracy of the causal links they draw between SDOH, individual factors, and specific disorders. A simple sketch of the tallying step in this analysis appears below. Our study will examine a range of increasingly popular LLMs, such as ChatGPT.
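As an illustration of the planned analysis step, the sketch below shows one way an LLM response could be tallied for mentions of social versus individual determinants. The term lists here are hypothetical placeholders; in the study itself, these categories would come from the expert-curated fact sheets described above.

```python
from collections import Counter

# Hypothetical term lists for one condition (anxiety); the actual study
# would derive these from expert-created fact sheets, not a fixed list.
SDOH_TERMS = ["housing", "income", "unemployment", "discrimination",
              "social isolation", "access to care"]
INDIVIDUAL_TERMS = ["genetics", "personality", "sleep", "diet",
                    "substance use", "coping skills"]

def tally_determinants(response: str) -> Counter:
    """Count mentions of social vs. individual determinants in an LLM response."""
    text = response.lower()
    counts = Counter()
    counts["sdoh"] = sum(text.count(term) for term in SDOH_TERMS)
    counts["individual"] = sum(text.count(term) for term in INDIVIDUAL_TERMS)
    return counts

if __name__ == "__main__":
    example = ("Anxiety can stem from genetics and poor sleep, but chronic "
               "unemployment and social isolation also play a role.")
    print(tally_determinants(example))  # e.g. Counter({'sdoh': 2, 'individual': 2})
```

Aggregating such tallies across many responses would give one coarse measure of how heavily an LLM leans on individual factors relative to SDOH; assessing the accuracy of the causal links drawn would additionally require comparison against the expert fact sheets.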


Project Period:
January 1, 2025 – December 31, 2025

Funding Type(s):
Philanthropy

Funder(s):
Garvey Institute for Brain Health Solutions