Top researchers in the field of artificial intelligence (AI) have called for a boycott of the South Korean university KAIST after it opened a lab with the defence firm Hanwha Systems. Even if you’re not familiar with KAIST, you might know some of the school’s robots. The university won the top prize at the last DARPA Robotics Challenge in 2015 with its highly advanced DRC-HUBO robot.
Fifty researchers from 30 different countries released a letter on Wednesday calling for a boycott of KAIST, arguing that the partnership with the weapons company Hanwha raises ethical concerns and could “permit war to be fought faster and at a scale greater than ever before. They will have the potential to be weapons of terror.”
“This is a very respected university partnering with a very ethically dubious partner that continues to violate international norms,” said Toby Walsh, a professor at the University of New South Wales in Australia who helped organise the boycott. What’s so ethically dubious about Hanwha? The defence company still makes cluster bombs, which have been banned by 108 countries.
The team from KAIST won DARPA’s top prize ($2 million) in 2015 after the university’s robot completed an obstacle course with a perfect score in just 44 minutes and 28 seconds—lightning fast for an untethered robot. Each robot in the competition had to drive a car, exit the vehicle (this was arguably the hardest part for most robots at the competition), walk around, open doors, drill holes, and climb stairs, among other tasks.
But the university insists that it’s aware of the ethical challenges posed by AI and that it’s not going to produce anything that would be considered a “killer robot” at the new Research Centre for the Convergence of National Defence and Artificial Intelligence.
“I would like to reaffirm that KAIST does not have any intention to engage in development of lethal autonomous weapons systems and killer robots,” KAIST’s president, Sung-Chul Shin, said in a statement.
“I reaffirm once again that KAIST will not conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control,” KAIST’s president continued.
What does “meaningful human control” actually mean? That’s not exactly clear, given that the university is developing things like uncrewed undersea vehicles with Hanwha Systems. The university also deleted an announcement from February about the partnership that boasted of the “AI-based command and decision systems” and the “AI-based smart object tracking and recognition technology” that the partners would be developing.
Most people today probably remember the robots that fell down at the DARPA Robotics Challenge. Videos of the tumbles were incredibly popular and objectively hilarious. Who doesn’t love watching robots fall down? But when it comes to the future of robotics and the ethical challenges we face, KAIST’s DRC-HUBO is one to look out for. Especially since it might be coming to a battlefield near you one day.
Walsh, who organised the boycott, told Reuters that he’s pleased with the statement put out by KAIST pledging “not to develop autonomous weapons and to ensure meaningful human control.”
But again, we have to ask what things like “meaningful human control” actually mean. And researchers are going to be asking that for many years to come.
“We should not hand over the decision of who lives or dies to a machine. This crosses a clear moral line,” Walsh said. “We should not let robots decide who lives and who dies.” [The Guardian, Reuters and Centre of Impact of AI and Robotics]