The British Standards Institution has decided it's time the world of robots had some official laws, so it's presented a few guidelines in the dull form of official guidance paper BS 8611:2016 Robots and robotic devices, described as a "guide to the ethical design and application of robots and robotic systems."
If you're working on a robot with the potential for harm, the rules are here. They address the "potential ethical harm" arising from today's robotic trend, highlighting the worrying possibility of people becoming emotionally dependent on robots, and of machine-learning systems learning to be racist and stupid by trawling the internet for ideas on how to behave.
One quote is rather important and echoes the famous laws of old as formulated by Isaac Asimov. The BSI's paper says: "Robots should not be designed solely or primarily to kill or harm humans," so it's nice to know that Argos won't be able to stock any killers.
It also addresses the difficult question of who's responsible should a robot go rogue, stating: "...humans, not robots, are the responsible agents; it should be possible to find out who is responsible for any robot and its behaviour."