
Mental Health in The Disinformation Age


The recent Netflix documentary The Social Dilemma looks at some of the devastating impacts that technology and artificial intelligence (AI) are having on many of us.

Through an AI lens, it examines the causes of the recent polarisation of society; how and why our information is controlled, and by whom; and the effects of social media on mental health – particularly among young people.

At the start of the decade, smart devices trickled down to teens and tweens, social media exploded, and AI began entering its golden age. Countless young people developed body-image issues in the new world of Snapchat- and Insta-dysmorphia, and youth self-harm and suicide rates skyrocketed across the Western world.

There’s an old axiom in the tech industry: if you don’t pay for the product, you are the product. There’s a reason Facebook, Google, Twitter, Instagram, et cetera are free: our attention is being sold.

Jaron Lanier, considered a founder in the field of virtual reality, took it a step further in the documentary: “It’s the gradual, slight, imperceptible change in your own behaviour and perception that is the product. And that is the product. It’s the only possible product … That’s the only thing there is for them to make money from. Changing what you do, how you think, what you are”. 

In behavioural science and behavioural economics, this is known as nudging. Advertisers, influencers, and those with vested interests or strong opinions are trying to nudge our behaviour towards truths they believe or outcomes they want. 

Untested online mental health tools

While technology has facilitated the above problems, many companies have sought to create solutions with technology. 

Since the launch of the App Store and Google Play, mental health applications have proliferated – over 10,000 of them, according to the American Psychiatric Association (APA) – but they often lack substance. The APA stated that “the vast majority of commercially-available apps are not appropriate for clinical care”.

One study, published in Nature, reviewed over 1,400 mental health apps. After the bottom 95 per cent were assessed and eliminated, only two of the remaining apps provided evidence to support the claim that they could actually improve mental health.

If app developers have not studied the research, applied first principles, followed the scientific method, and co-designed the app alongside people with lived experience of mental illness, it is likely that they have no idea whether they are helping or harming. In fact, the APA found ample evidence of the latter.

Using AI ethically for better mental health

With mental illness affecting one in five people globally in any given year, you could argue it is the most widespread pandemic affecting humanity right now.

In Australia, the Productivity Commission’s 2019 report on mental health recognised that technology can play an important role in the early detection of mental health problems, while also providing online therapies to those who are mildly affected or not yet affected.

According to the Australian Mental Health Commission, 40 per cent of people with depression, anxiety, and other mental health disorders said they “did not seek medical help because of the cost”.

If developed correctly, AI-enabled mental health apps have the potential to dramatically improve the wellbeing of people who either choose not to seek medical help or cannot access those services – because of limitations such as cost and geographic isolation. This could alleviate the pressure from an overburdened mental health industry.

However, all of this is only possible if the tool’s efficacy is proven through rigorous scientific analysis – preferably by independent third-party researchers.

What can we do to fix this?

Governments and mental health peak bodies need to go a step further by co-designing assessment criteria and implementing a register of trusted, clinically validated mental health applications that a) cause no harm, and b) can actually help. That would give the public clear guidance on what’s helpful and what’s not, while also incentivising companies to develop their products correctly.

If you use mental health apps, or have been thinking about giving them a try, always research the company to see if they provide credible evidence to support their claims. 

And if you haven’t seen The Social Dilemma yet, take 90 minutes to gain an insider’s understanding of what this invisible technology is doing to society. 


Dave Chetcuti
Dave Chetcuti is a co-founder of Svelte Ventures, a technologist at Frank Wellbeing App, and a cognitive science specialist.