Facebook says it cares so much about the meat products that view its advertising that it now has an AI to screen users for mental health issues, apparently able to pull warnings of suicidal thoughts out of the ether.
Calling it an AI might be a bit of a stretch, though, as surely all it has to do is set up a filter that pulls out posts using the term "kill myself" and forwards them on to the most empathetic of its employees. Facebook says suspect posts are indeed forwarded to the network's human review team, who have a little read and, if necessary, might get in touch with the poster. Which would be quite a shock.
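For the curious, the "just filter for the phrase" approach the joke imagines really is about three lines of code. A minimal sketch, with made-up phrases and a hypothetical `flag_for_review` function; this is the strawman, not whatever Facebook actually runs:

```python
# Toy version of the naive keyword filter. The phrase list and the
# function name are illustrative assumptions, not Facebook's system.
WORRYING_PHRASES = ["kill myself", "end it all", "no reason to go on"]

def flag_for_review(post: str) -> bool:
    """Return True if the post contains any worrying phrase."""
    text = post.lower()
    return any(phrase in text for phrase in WORRYING_PHRASES)

posts = [
    "Great day at the beach!",
    "honestly might just end it all",
]
flagged = [p for p in posts if flag_for_review(p)]
```

Anything caught by the filter would then land in front of a human reviewer, which is presumably where the actual empathy comes in.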
It works by looking for patterns similar to those found in previously flagged messages, while also giving weight to replies like "Are you OK?" to the poster's message. And when using live chat tools, a popup offering help might appear too, if you're triggering its algos by being a little too morose in your morning chats. The system is being tested in the US first, seeing as they need it more than we do right now. [Facebook via BBC]
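The described setup, pattern-matching the post itself and then nudging the score up when friends reply with worried check-ins, can be sketched in a few lines. Everything here is a guess at the shape of the idea (the phrase lists, weights, and `risk_score` function are all invented for illustration), not the real classifier:

```python
# Hypothetical sketch: score a post by pattern matches, then add weight
# for concerned replies. All names and numbers here are assumptions.
CONCERNING_PATTERNS = ["kill myself", "can't go on", "goodbye everyone"]
WORRIED_REPLIES = ["are you ok", "is everything alright", "please call me"]

def risk_score(post: str, replies: list[str]) -> float:
    text = post.lower()
    # Each concerning pattern in the post itself counts fully.
    score = sum(1.0 for p in CONCERNING_PATTERNS if p in text)
    # Worried replies from friends add extra weight, as the article describes.
    score += sum(
        0.5 for r in replies for w in WORRIED_REPLIES if w in r.lower()
    )
    return score

# A score above some threshold would route the post to human reviewers.
```

In the real system the "patterns" would come from a model trained on previously flagged posts rather than a hand-written list, but the reply-weighting idea is the same.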