
Advertising Law Updates

1 minute read

Mental Health Startup Criticized for AI Use

Last week, Koko, a mental health startup, received significant public criticism for allegedly using an artificial intelligence chatbot to conduct mental health counseling without obtaining informed consent from participants.

The controversy started when Koko’s co-founder, Rob Morris, tweeted that OpenAI's GPT-3 had been used to provide mental health support to about 4,000 people. In the tweet thread, Morris said that the AI-generated messages were rated higher than those written exclusively by humans but that, once users learned the messages were written by AI, the mental health support “didn’t work” because “simulated empathy feels weird.” AI ethicists and users reacted negatively to the thread, both because of the apparent lack of consent from Koko users and because of the use of a language model in such a sensitive context.

Morris stated his belief that the experiment was exempt from informed consent requirements, but AI experts countered that the sensitive nature of mental health support demands accountability and ethical review.

This is yet another example of the grey area surrounding legal compliance and the ethical use of AI. Companies that intend to use AI should carefully evaluate their legal obligations and, even where consent is not strictly required by law, consider what a reasonable consumer would expect.

Our team will continue to follow developments in AI and post updates on this blog.

Tags

chatgpt, gpt3, ai, artificial intelligence, tech, technology, privacy