Australian experts have spoken out about a recent US study that claimed to show artificial intelligence can identify people with suicidal thoughts - by analysing their brain scans.
It sounds promising - but it's worth pointing out that only 79 people were studied. So are the results enough to show this is a path worth pursuing?
The research, published in Nature Human Behaviour, measured brain activity in subjects presented with a number of different words - such as death, cruelty, trouble, carefree, good and praise. A machine-learning algorithm was then trained to distinguish the neural responses of the two groups involved: those with suicidal thoughts and those without.
And it showed promise - the algorithm correctly identified 15 of 17 patients as belonging to the suicide group, and 16 of 17 healthy individuals as belonging to the control group. But does this mean it could be used as a diagnostic tool?
Professor Max Coltheart is the Emeritus Professor of Cognitive Science at the ARC Centre of Excellence in Cognition and its Disorders, and Department of Cognitive Science at Macquarie University
The title of the paper says brain imaging data 'identifies suicidal youth'. Read the fine print, though, and you will find this is not true.
This study had 79 people, 38 who reported that they thought about suicide and 41 who said they did not. Can brain imaging reliably tell us which subjects were which? The simple answer is no.
Of these 79 people, more than half (57 per cent) gave brain imaging data that were unusable for any attempt at classifying the subject as at-risk of suicide or not. That included 21 (55 per cent) of the people at risk of suicide. So, even if the results of this study generalised to all people, 55 per cent of people genuinely at risk could not be identified by the methods reported here.
Importantly, a check that studies like this standardly use was omitted. Even when you have found a way of classifying people into two groups by analysing brain imaging data, you cannot claim that you have a genuine classification method unless you show that the artificial intelligence algorithm can successfully classify a new set of people on whom it has not been trained. This is called cross-validation. Because this wasn't done, the authors cannot even claim that this method will reliably detect risk of suicide in the 43 per cent of people who yield usable brain imaging data.
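The idea behind cross-validation can be sketched in a few lines: accuracy must be measured on people the model never saw during training. The synthetic data and the simple nearest-centroid classifier below are illustrative assumptions only - they have nothing to do with the study's actual fMRI measurements or methods.

```python
import random

random.seed(0)

# Synthetic stand-in data: two groups whose (made-up) "neural response"
# features differ on average. This is NOT the study's data.
group_a = [[random.gauss(0.0, 1.0) for _ in range(5)] for _ in range(20)]
group_b = [[random.gauss(1.5, 1.0) for _ in range(5)] for _ in range(20)]
data = [(x, 0) for x in group_a] + [(x, 1) for x in group_b]
random.shuffle(data)

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def dist2(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

def nearest_centroid_accuracy(train, test):
    # Fit on the training split only, then score on unseen test rows.
    c0 = centroid([x for x, y in train if y == 0])
    c1 = centroid([x for x, y in train if y == 1])
    correct = sum(
        (1 if dist2(x, c1) < dist2(x, c0) else 0) == y for x, y in test
    )
    return correct / len(test)

# k-fold cross-validation: every point is evaluated by a model
# that never saw it during training.
k = 5
folds = [data[i::k] for i in range(k)]
scores = []
for i in range(k):
    test = folds[i]
    train = [row for j, f in enumerate(folds) if j != i for row in f]
    scores.append(nearest_centroid_accuracy(train, test))

print(round(sum(scores) / k, 2))
```

The held-out score, not the training score, is the honest estimate of how the classifier would perform on new people - which is Coltheart's point.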
Professor Graham Martin is an Emeritus Professor in the Royal Brisbane Clinical Unit, Faculty of Medicine at The University of Queensland
Having been an avid adolescent reader of Isaac Asimov and Robert Heinlein robot stories, I was excited to read 'Machine learning of neural representations of suicide and emotion concepts identifies suicidal youth'.
The premise is that machine learning may ultimately be better at discriminating suicidal youth from non-suicidal youth - and attempters from non-attempters - based on fMRI measurement of the brain areas that light up in response to emotionally loaded key words. The authors hint this may help clinicians struggling to predict which suicidal people may ultimately complete suicide (supposedly necessary for allocation of scant clinical resources). But this mires us in the logical fallacy that past suicidality predicts future suicidality.
In the unlikely event that every clinician will have future access to an MRI scanner and machine-learning algorithms, the real excitement in the paper is confirmation that several cheap, readily available questionnaires (the ASIQ, PHQ-9, ASR, Spielberger Anxiety (State) and the CTQ) significantly discriminated between the groups.
Suicidal people and suicide attempters deserve the clinical opportunity to work through past traumas, find solutions to current problems, and plan a positive future. Perhaps we should focus scant mental health funding on more trained available clinicians.
Associate Professor Sarah Whittle is from the Melbourne School of Psychological Sciences at The University of Melbourne
Just and colleagues report in new research that brain imaging techniques can be used to distinguish suicidal from non-suicidal young adults. The findings contribute to a growing body of research suggesting that "biological markers" can be as useful as, if not more useful than, subjective measures (for example, a patient's own report of their feelings) in psychiatric decision making.
The research, however, is a long way from having an impact on the actual treatment of suicidal individuals. For one, the study had a small number of participants, and most were male. Therefore, we don't know how reliable the results might be, or if they apply to females. Also, the suicidal young adults were more depressed and anxious than the non-suicidal adults. So, we don't know if the researchers have found biological markers of suicidality, or of psychiatric problems more generally.
If future research can show that the results are reliable, and are specific to suicidality, then it's possible that the brain-based biological markers could be used by healthcare professionals to identify and treat people at risk of suicide. However, given that brain scans are costly, these tools are likely to be used only for the most severely mentally ill patients.
Professor Matthew Large is Conjoint Professor in the School of Psychiatry at the University of New South Wales
Suicide risk assessment works notoriously badly and it might be very useful to have some sort of test for future suicide. However, this study should be seen in the light of two major limitations.
First, it is entirely unsurprising that the very many data points produced by functional magnetic resonance imaging can be used to retrospectively classify a very small sample of patients. Any excitement about this should await replication in a larger, untested group of people.
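The statistical point here - that a model with as many noise features as participants can "classify" a tiny sample perfectly in retrospect - can be shown with entirely random data. Everything below is synthetic and illustrative; it is not the study's analysis.

```python
import random

random.seed(1)

n = 12                      # tiny sample, like a small patient group
p = 12                      # as many pure-noise "features" as participants

# Labels have no relationship whatsoever to the features.
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [random.choice([-1.0, 1.0]) for _ in range(n)]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a square system."""
    m = [row[:] + [bv] for row, bv in zip(A, b)]
    size = len(m)
    for col in range(size):
        pivot = max(range(col, size), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(size):
            if r != col and m[col][col] != 0:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * c for a, c in zip(m[r], m[col])]
    return [m[i][size] / m[i][i] for i in range(size)]

# A square random system is (almost surely) exactly solvable, so a linear
# model fits every label - despite the features being noise.
w = solve(X, y)
preds = [1.0 if sum(wi * xi for wi, xi in zip(w, row)) > 0 else -1.0
         for row in X]
in_sample_acc = sum(p_ == y_ for p_, y_ in zip(preds, y)) / n
print(in_sample_acc)  # 1.0 on the training sample
```

Perfect in-sample classification of noise is exactly why, as Large notes, retrospective accuracy on a small sample should excite nobody until it is replicated on people the model has never seen.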
Second, even if suicidal thoughts could be reliably detected by a machine, suicidal thoughts themselves are only weakly associated with suicide attempts and are of next to no value in predicting who will and will not die by suicide.