Google (or perhaps we should start saying Alphabet by now) boss Eric Schmidt has written an opinion column for Time magazine sharing his thoughts on AI, including some wider, sci-fi-like ideas about what the overall ambition of computer intelligence should be.
He comes across as a bit Asimov, or perhaps Vulcan, in his overarching thoughts, outlining a three-point plan he'd like to see those involved in AI adopt. Schmidt's laws of artificial intelligence are:
- AI should benefit the many, not the few.
- AI research and development should be open, responsible and socially engaged.
- Those who design AI should establish best practices to avoid undesirable outcomes.
That's it. There may also be some secret code about not killing humans, but that'll just be in the Ts&Cs people whizz through without reading. Nothing to worry about. All routine.
The third point is obviously the critical one when it comes to calming mass hysteria over rogue AIs, with Schmidt suggesting there should be some sort of human verification process, preferably one involving a massive OFF switch, just in case. He elaborated on Law 3: "Have we thought through the way any system might yield unintended side effects -- and do we have a plan to correct for this? There should be verification systems that evaluate whether an AI system is doing what it was built to do."