Canada Is Using AI to Study ‘Suicide-Related Behaviour’ on Social Media

By Melanie Ehrenkranz

This month, the Canadian government is launching a pilot programme to research and predict suicide rates in the country using artificial intelligence. The pilot will mine Canadians’ social media posts “in order to identify patterns associated with users who discuss suicide-related behaviour,” according to a recently published contract document.

The Public Health Agency of Canada plans to work with an AI company called Advanced Symbolics to trawl through more than 160,000 Canadian social media accounts, according to CBC, but the contracted company says it will only use anonymised, public data.

The Advanced Symbolics website states that it does not use “private communications” and instead relies on “publicly available information shared on social platforms by consenting individuals.” Advanced Symbolics also told CBC that it doesn’t investigate isolated instances.

Specific details on how the firm’s technology will work in this pilot are thin. According to the contract, Advanced Symbolics will use its research to inform the Public Health Agency of Canada of suicide-related discussions by age and gender, as well as “changes in patterns and available risk and protective factors.” The contract suggests that Advanced Symbolics can somehow detect “ideation (i.e., thoughts), behaviours (i.e., suicide attempts, self-harm, suicide), and communications (i.e., suicidal threats, plans).”

“It’d be a bit freaky if we built something that monitors what everyone is saying and then the government contacts you and said, ‘Hi, our computer AI has said we think you’re likely to kill yourself’,” Kenton White, chief scientist with Advanced Symbolics, told CBC. We have reached out to Advanced Symbolics to learn more about how the technology works and how it will use the data collected throughout this pilot.

The pilot project is slated to start this month and wrap up by June 2018. The Canadian government expects to spend nearly CA$25,000 (£14,708) on the pilot, and could extend the contract by up to five years, completing the research in June 2023 for about CA$400,000 (£235,329) in total.

Approximately 11 people die by suicide in Canada every day, according to the Canadian Association for Suicide Prevention.

The Canadian government is certainly not the first to use artificial intelligence as a means to identify and prevent suicide. Facebook said in November of last year that it would expand its own AI-based suicide prevention programme, “using pattern recognition to detect posts or live videos where someone might be expressing thoughts of suicide, and to help respond to reports faster.”

The key difference between Facebook’s programme and Advanced Symbolics’ pilot project is that the latter isn’t trying to pinpoint individual cases on a platform, but rather to identify suicide trends across regions. The firm can reportedly alert the government to an expected increase in suicides up to three months in advance. It’s important to note that algorithms are not free from bias, and they also have a history of screwing up. But if the government is able to get ahead of an expected suicide spike, it could, for instance, increase the presence of mental health professionals in a certain region or among a specific demographic.

Outside of Canada, other government entities have entertained the idea of deploying algorithms to deal with social issues. The White House issued a call for data scientists and technologists in September 2015 to aid in suicide prevention, and also hosted mental health hackathons across the US in December of that year. And this year, London’s Metropolitan Police announced that it was working with “Silicon Valley providers” to use machine learning to flag child abuse images on electronic devices. But, as we’ve seen time and again, artificial intelligence is far from flawless, and the police force noted that the system still confused photos of naked bodies with photos of deserts.

