In March, a gunman walked into two mosques in Christchurch, New Zealand, opened fire, and killed dozens of worshippers. According to a police official, the suspected gunman was arrested 36 minutes after police were called to the scene. Now, a tech company believes its smart security cameras can prevent attacks like the tragedy in Christchurch, and says it plans to install its AI-powered systems in mosques around the world.
Athena Security, the tech company behind the security system, and Al-Ameri International Trading announced the Keep Mosques Safe initiative last week. Al-Ameri International Trading, along with several Islamic non-profit groups, will fund the effort. The Al-Noor Mosque, one of the two mosques targeted in the Christchurch attack, will be among the first places of worship to have the active-shooter detection system installed.
“The mass shooting at the Al-Noor Mosque was an unspeakable tragedy that no community should have to suffer,” Athena Security co-founder and CEO Lisa Falzone said in a press statement. “The Keep Mosques Safe initiative is an important step in giving mosques the tools to better protect themselves from extremist individuals that wish to do them harm, so we can help prevent horrific events like this in the future.”
The system can reportedly detect an active shooter before they open fire and then flag the security threat to law enforcement and first responders. “In today’s increasingly uncertain security climate, security cameras are becoming increasingly common everywhere from homes to businesses, from schools to stadiums,” the Athena Security website states. “One of the biggest headwinds to cameras is the need for skilled labour to monitor the feeds. A guard stepping away to get coffee at the wrong moment can render a security system useless.”
Athena claims on its website that current security systems are flawed because they aren’t preventative, they are susceptible to false alarms, and they require constant human labour. Its AI-powered tech allegedly “detects a weapon, automatically alerts, and notifies the shooter that they have been spotted and authorities are on the way.” The website doesn’t go into much detail on exactly how the tech works or what its accuracy rate is; it simply states that it “analyses multiple data points in real time” to identify potential criminal activity.
An Athena spokesperson told Gizmodo in a statement that its AI-powered detection system can identify weapons and “aggressive behaviours” and that it is trained on “highly realistic videos” the team creates with law enforcement. The spokesperson also claimed that the security system can detect these factors “with 99% accuracy in three seconds.” Full statement below:
“Athena Security’s system can be deployed within a system that best suits our customers’ needs, whether they are individual places of worship like mosques or with businesses or other customers. We can integrate our system with existing cameras and establish an alert system that best fits their needs. If a customer has on-site security, we can set up a system that alerts security in the event of a threat and allows them to evaluate and confirm the threat before alerting law enforcement. We can also integrate with other systems like elevators and door locks, depending on our customer’s needs.
“Our system is focused on identifying weapons and aggressive behaviours, and is trained using highly realistic videos we develop, produce and shoot with the help of law enforcement to ensure the highest level of quality in training our artificial intelligence platform. Athena can detect weapons or aggressive behaviours with 99% accuracy in three seconds.”
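The escalation workflow the spokesperson describes, on-site detection, human confirmation where a guard is present, and escalation to law enforcement otherwise, can be sketched roughly as follows. This is purely illustrative: every class, threshold, and function name here is hypothetical and does not reflect Athena’s actual system or API.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the escalation workflow Athena describes.
# None of these names correspond to the company's real software.

@dataclass
class Detection:
    label: str          # e.g. "weapon" or "aggressive behaviour"
    confidence: float   # model confidence, 0.0 to 1.0

@dataclass
class AlertPipeline:
    confidence_threshold: float = 0.99  # assumed cutoff to limit false alarms
    has_onsite_security: bool = True
    log: List[str] = field(default_factory=list)

    def handle(self, detection: Detection) -> str:
        # Discard low-confidence detections to reduce false alarms.
        if detection.confidence < self.confidence_threshold:
            self.log.append(f"ignored: {detection.label}")
            return "ignored"
        if self.has_onsite_security:
            # On-site guards review and confirm the threat
            # before law enforcement is contacted.
            self.log.append(f"alerted on-site security: {detection.label}")
            return "onsite_review"
        # No on-site security: escalate directly to law enforcement.
        self.log.append(f"notified law enforcement: {detection.label}")
        return "law_enforcement"

pipeline = AlertPipeline(has_onsite_security=True)
print(pipeline.handle(Detection("weapon", 0.995)))  # onsite_review
print(pipeline.handle(Detection("weapon", 0.40)))   # ignored
```

The point of the confirmation step is the one Athena itself raises: a fully automatic call to police would make the system only as trustworthy as its false-alarm rate, which the company does not document.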
There’s nothing inherently wrong with people wanting to feel safer in the wake of hateful and devastating attacks on their places of worship. The question is whether this AI-powered gun detection system is both ethical and effective. Rolling out powerful surveillance tools in reaction to atrocious acts of violence is hardly new, and to date, most such systems have raised serious concerns around privacy and bias.
Aside from these issues that still largely plague the AI space, it’s also unclear whether this type of system will even work. Surveillance systems powered by AI have yet to prove themselves reliably accurate. Perhaps Athena’s proprietary tech is a game changer, but without evidence of success in comparably vulnerable situations—terrorist attacks at religious institutions—these “solutions” should be viewed with scepticism.
It’s also worth noting that AI surveillance technology is being weaponised in more oppressive parts of the world as a way to identify and track ethnic minorities. What’s evident is that increasingly extreme and tragic events like the Christchurch attack are driving people in freer regions to voluntarily surveil themselves in even their most intimate spaces. It illustrates the complex relationship between surveillance and fear, and the lengths we are willing to go to feel just a little bit safer when that fear outweighs the possible consequences of surveillance’s failures.
Featured image: Getty