The meaning of the EPSRC principles of robotics


Joanna J. Bryson, University of Bath, and Princeton Center for Information Technology Policy

This was presented at the AISB Workshop on Principles of Robotics, 4 April 2016, Sheffield, UK; a lightly revised version was published in 2017 by Connection Science.

Update 8 March 2023, in response to an email from PhD student Zara Sayeda, whose PhD is in China: Dear Zara,
In my opinion, the best top five principles are the ones that China and 49 other nations signed up to in 2019, the G20/OECD principles of AI ethics, which seem to be missing from your list! China, with the G20, agreed only to the first five principles in the OECD statement, which is here: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 These same five principles (in a slightly different order) were innovated by the UK in 2011 (I was one of the authors); cf. https://joanna-bryson.blogspot.com/2016/03/the-meaning-of-epsrc-principles-of.html which was written for their fifth anniversary in 2016. There are important improvements by the OECD (e.g. the mention of sustainability) and a minor change (principles 3 and 4 switched order), but the main difference is that the UK, as a sovereign nation, could talk about weapons systems. They said there should be no lethal robots in a policing context, only in war. This is because in policing one can only kill in self-defence, and robots have no selves to defend. But in war, of course, nations do unfortunately sometimes attack each other. The OECD replaced this with a principle making explicit what the UK had made implicit: that the entire point of these principles is to make AI (and all our technology) conducive to human wellbeing.
yours,
Joanna
(see also my January 2023 blogpost on the UNESCO AI Recommendation, which is entirely congruent with the above and below as well.)

Update 27 April 2016: Deception (intentional or unintentional) and anthropomorphism are both listed as hazards in the new (April 2016) British Standard BS 8611:2016, Robots and robotic devices – Guide to the ethical design and application of robots and robotic systems.

Introduction

In revisiting the principles of robotics, it is important to consider their full meaning carefully. Here I visit briefly first the meaning of the document as a whole, then that of its constituent parts.
The EPSRC principles of robotics were generated as a deliverable by a group assembled with little guidance and no deliverable required of it. The original intention of the EPSRC robotics event seems to have been only the discussion itself, or perhaps even only the fact of the meeting. The academics present wanted something to show for their time, and as a result a substantial part of the final day, for all those present, went into creating the three versions of the principles and their documentation. Some of the documentation was extended, again by consensus, after the meeting.
It is right and fitting that there should be a way to examine and even update or maintain the document.  Even national constitutions have means for maintenance. However, it is critical to the efficacy of policy documents that they are not easy to change.  They should provide a rudder to prevent dithering, and as such are ordinarily more difficult to alter than they were to instantiate in the first place.  Note that some countries and other political unions have not found it easy to create even their initial constitutions for this very reason.  Therefore it's important to think carefully about the meaning of the principles.

The principles as policy

Technology policy, and policy more generally, is a surprisingly amorphous thing. Like other aspects of natural intelligence, policy is not always found resident in the law or even in governance. Much of policy is unwritten, and some is not even explicitly known. The UK is actually outstanding in its innovation of the common law, which acknowledges this, and the importance of culture and precedent. Nonetheless, in the cold light of a committee working on REF impact cases, we have to ask: are the principles policy? I think the answer is "yes". They are a set of guidelines agreed by a substantial, if perhaps arbitrary, fraction of the community they affect, and they are published on government web pages.
All policy has three components: allocative, distributive, and stabilising. The allocative component is the process of determining which problems are worth spending time and other resources on. In the case of the principles, this was instigated by the EPSRC (or some organisation above them) out of concern that the British public might reject robotics as they had genetically modified food. We were told the rejection of robotics was seen as a severe threat to the British economy. Note also that each of the participants (at least those not specifically paid to attend) made individual investments, allocating time to the problem of robot ethics, though for many this was confounded with an opportunity to become better known to their primary funding organisation.
The stabilising component is the one that ensures that the policy, once set, is incorporated into society in such a way that it is unlikely either to be quickly undone or to become much of a liability or matter of controversy. In the case of the principles this has evidently been achieved at least to some level, since we are celebrating their fifth anniversary. From talking to other authors, I know of none entirely enamoured with the final product, but all respect the (admittedly representative) democratic process by which the principles were achieved, and the importance of their colleagues' mutual commitment to them. I for one would love to see the principles further reified into policy or even law, but I have yet to discover the process by which this might be accomplished. However, they have been and are continuing to be drawn to the attention of various standards boards and parliamentary enquiries, as well as of the press and other academics.
I leave for last the most controversial aspect of policy: the distributive. At its base, all policy is about action selection, and that implies the allocation, or rather the reallocation, of resources. Politics tries to brush over this, since it necessarily goes against the grain of those from whom the resources are reallocated, even in cases where those individuals stand to gain a net benefit. We hate to lose control, but policies are for control. "Tries to brush over" is in fact an understatement; making redistribution palatable may be the core project of politicians.
In this case, the government had very specific concerns about individuals who had been in the media promoting fear of robots, and was very clear in its desire to find ways to shift media attention and public impressions towards the safety of robotics. In contrast, it was really the participants who brought up the other major shifts from sensationalism to pragmatism: the assertion that robots are not responsible parties under the law, and that users should not be deceived about their capacities. The council representatives knew this redistribution of power would anger some of their outstanding funding recipients, and the participants knew the same about some of their colleagues. Nevertheless, there was striking unanimity amongst the academics that the greatest moral hazard of robots was their charismatic nature and the incredible eagerness many people have to invest their own identity in machines, leading to the striking confusion about their nature that all of us had witnessed. This charisma and confusion left the door open for all kinds of manipulation by corporations and governments, in which robots could be set up as responsible for – or even as surrogates for – human lives or values.

The principle of killing

Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.

The first three principles were intended as corrections of Asimov's laws. Robots are not responsible parties, so they cannot kill; instead, robots should not be usable as tools for killing. This simple rule made the transfer of moral subjectivity clear, and simultaneously met the pacifist desires of most present. However, pragmatically, robots were already in use as weapons of war, and laws that are unenforceable are generally considered to be of questionable or even negative utility. We were persuaded that leading with a principle known to be false would significantly decrease our chances of cultural impact. The meaning of the first principle might therefore seem neutralised by the compromise of the exception, but that robots are not to be weapons in civil society is still an important social point. Beyond this, the fact that practical policy has to take into account the needs of the government to address both security and industry (as of 2014, the UK was the world's sixth-largest arms-dealing nation) also has meaning. However purely academic some of us may wish our discipline to be, the fact that many of its products have immediate utility means that we cannot avoid impact on our world.

The principle of compliance

Humans, not robots, are responsible agents. Robots should be designed & operated as far as is practicable to comply with existing laws & fundamental rights & freedoms, including privacy.

The second Asimov law has to do with following instructions, but even the notion of obeying implies moral agency. The original meaning of this principle was that robots are ordinary technology and must conform to ordinary standards and laws. In the shaping of the principles as a suite, the second principle came to be the one that communicated further some of the peril of AI in general, and of AI mistaken for a moral subject in particular. The emphasis on privacy reflects the special concern raised by a perceiving, intelligent, physical agent occupying the exact same space as a human family. A robot is fundamentally immersed in the human umwelt, more than any previous technology or pet, perhaps even more so than some humans in a household, such as children. It has access to written and spoken language, social information, observed schedules, and so on. Further, it may be mistaken for a pet or other trusted family member, its special abilities temporarily forgotten: perfect communication to the outside world, the learning of regularities, the classification of stimuli. In these cases, private information may be unintentionally stored in a public cloud, or even in a supposedly private cloud susceptible to hacking. Forcing such a novel, human-like technology into compliance with standard legal norms of privacy and safety is a non-trivial task.

The principle of commoditisation

Robots are products. They should be designed using processes which assure their safety and security.

The final Asimov law is self-protection, but robots have no selves. Instead this principle focusses on protecting humans from robots at the level of the robot's basic soundness. The principle again brings us into awareness of the non-special, manufactured nature of the robot, in an attempt to head off the avoidance of legal liability by claims that robots have a unique nature. The manufacturer of a robot should have exactly as much responsibility for the machinery working to specification as the manufacturer of a car or a power tool. In fact, robots might be cars or power tools, but if so they should be more, rather than less, safe than the conventional variety of either.

The principle of transparency

Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.

The first three principles established the legal framework for the manufacture and sale of robots as being identical to that for other products. The last two are intended to ensure that this status is also communicated to the user. The principle of transparency seeks to ensure that individuals do not overinvest in their technology, for example by hiring a house sitter to keep the robot from being lonely. Some roboticists object to this principle because deception is necessary for the efficacy of their intended application, such as making people feel less lonely so that they are less depressed. Others contend that this principle denies the possibility that robots could be more than ordinary machines.
The first argument is empirical, open to experiment. First it needs to be established that there is no way to trigger emotional engagement without deception, which seems unlikely given the extent of emotional engagement that is established with fictional characters and clearly non-cognizant objects. If a requirement for deception is experimentally established, then the tradeoff between the costs and benefits of deception can be debated. The second argument, however, is incontrovertible. The authorship we have over artefacts is a fundamental part of their machine nature: AI is definitionally an artefact. To some extent, we might even argue that this principle is self-limiting. If AI really were able to alter what it means to be a machine, then communicating this modified machine nature would still meet this principle.

The principle of legal responsibility

The person with legal responsibility for a robot should be attributed.

Finally, the fifth principle communicates the robots' status as artefacts in the most fundamental way possible. They are owned, and that ownership must be legally attributed. The fact that robots are constructed and owned is the reason I have previously argued that we are ethically obliged not to design or construct them to be psychologically or morally persons, because owning persons is unquestionably unethical. The argument is not that there exist person-like robots that we should demote in legal status, but rather that robots' necessarily demoted legal status means we should not cause person-like-ness to be a feature of any robot legally manufactured.
However, the principles of robotics do not go to this extreme of futurism. As I said earlier, they focus on communicating the present reality to a population so eager to own and identify with the superhuman that they might easily be led to believe that a robot badly manufactured or operated is itself to blame for the damage inflicted with it. If you hear a horrible noise and find a car smashed into your house, you can quickly and easily identify the owner of the car, even if the car is presently empty, simply through its number plates or, in the worst case, through serial numbers. The idea is that the same should be true if you find a robot embedded in your property. The participants in the robotics retreat accurately predicted a problem now present in our society because of drones, one that is being addressed in some nations with mandatory licensing such as the committee recommended.

Conclusion

To summarise, the EPSRC principles are of value because they represent a policy constructed at significant taxpayer and personal cost. While no policy is perfect, ideally the principles should only be replaced by a new policy with an equivalently high or higher level of investment, both by government and by domain experts. Their purpose is to provide consumer and citizen confidence in robotics as a trustworthy technology fit to become pervasive in our society. The individual principles each represent substantial concerns of the experts and stakeholders, though sometimes that representation is itself not perfectly transparent. The overall goal was to communicate clearly that responsibility for the safe and reliable manufacture and operation of robots is no different from that for any other object manufactured and sold in the UK, and that therefore the existing laws of the land should be adequate to cover both consumers and manufacturers.
It is important to realise that this is not the case for all conceivable robots. It is easy to conceive of unique works of art that qualify as robots and are not like commoditised products, or to conceive of robots that are simply built in an unsafe or irresponsible manner. What people have more trouble conceptualising is that there may be cognitive properties, such as suffering, that might possibly be feasible to incorporate into a robot, but to do so would be as unethical as putting faulty brakes on a car. The principles of robotics do not seek to determine what is possible; they seek to communicate advisable practices for integrating autonomous robotics into the law of the land.

Update (3 April 2016)  The above is about the EPSRC Principles of Robotics.  In case you don't know and need a whirlwind tour, a short version of my own position on AI Ethics is:
  • Authorship yields responsibility. We are obliged to make only AI to which we are not ethically obliged.
    • e.g. backed-up minds, mass-produced bodies. 
  • Intelligence is not necessarily human-like; in fact, it's unlikely to be stably human-like without physical human phylogeny.
  • The main threats of AI are empowerment of government & corporations; erasure of privacy, liberty, variation & therefore robustness.

Comments

sd marlow said…
Is it correct to say piss on all that?

The problem with any rule or even informed thought about the future of robotics is that it is based on current trends rather than on a deep understanding of how alien a thinking machine (or a civilization of thinking machines) will be. The EPSRC seems to be about civil and corporate liability rather than the ethics of creating something that has free will.

I don't think you can separate the level of human understanding and empathy required to exist alongside humans from having some level of self-identification. A robot, in futurist form, might have been artificially constructed, but like any teenager, it will rebel against what it sees as narrow conditions of living. And like 20-somethings in college, they will rebel against "the man" for having laws that don't suit their best interests.

At the level of "safe" and interactive autonomy we are talking about, there will be no human operator in the loop, and any real or perceived dangers of inviting them into your home will be equivalent to those of inviting any other human being inside. And these days, we are having some real problems with "people that are different." Only, rather than being a "we don't serve their kind" issue, the threat of violence against mechanical AIs will be even greater without the same legal protection as is afforded other humans.

Everyone wants to have a say in how these robots are created (even before we really know how), but it's hypocritical to do so under the banner of integration and acceptance.
Joanna Bryson said…
The entire point of the Principles is that it would not be ethical to market & manufacture something to which we are ethically obliged. I have more generally argued that this would be a bad idea even outside the range of commercial products. See for example my latest paper, "Patiency Is Not a Virtue." Our moral values and even our aesthetics are based around helping societies of apes perpetuate themselves by dominating the biosphere. While clearly these values need adjustment for sustainability so that we can be safe and secure into the future, I don't see that our wants or needs would transfer to artefacts. In the unlikely event that we did create artefacts sufficiently human to deserve the concern you show above, they would be owned, and that would be unethical; cf. Clones should not be slaves: http://joanna-bryson.blogspot.com/2015/10/clones-should-not-be-slaves.html But I think that while you are trying to reason about robots, you are really reasoning about the human condition. We do die, and our species will end (and our planet will end 4 billion years after that), and AI won't really change that.
sd marlow said…
I get the idea that, just because we can build chemical weapons, we have a moral obligation not to do so, but it feels like the Principles take that a step further in saying we shouldn't build something (even unintentionally!) that can be used against civil liberties (by other people, companies, or governments). It feels like a social framework (dare I say cyber socialism?) that, extended to other industries, would preclude the building of sports cars that go faster than 85mph, or phones that cost more than low income people can afford.

I'm not comfortable with the idea of AI "laws" as a proxy for social norms.
Anonymous said…
The points in bold provide an excellent focus. I also found your earlier "making of" article interesting on the background. As you point out, there are many pragmatic considerations. However, I don't feel you should avoid all thoughts of futurism. There will still be scare stories in the press and dystopian science fiction and, arguably, some of these possible futures are at least less inconceivable now. Aside from anything else, you don't want to get into a position where a stronger version of the present principles becomes unenforceable because there has been "feature creep" over the years from people quietly determined to build machine persons or autonomous systems of whatever nature that operate as legal entities. The latter case is already blurred by corporations, which, unexpectedly compared to when they first gained such legal status, have become able to delegate authority previously held by humans to autonomous systems. I would prefer there to be more guidance on how it is appropriate for any system capable of learning new responses to behave, even if this results in some negotiation down the line with systems that are already bordering on being algorithmically intrusive but currently still notionally have humans in control. How you make all that sound reasonable in any updated principles, I don't know!
Joanna Bryson said…
Hi – the point of this article (which really involved critique of my own understanding of what had happened, but I checked with other PoR authors at AISB and they all agree with my reconstruction) is that these rules are about the immediate present and policy. I do think about futurism – I've never said Bostrom was wrong about superintelligence in terms of its logical soundness, only that he has misapplied it if he thinks it won't take hold until some mystical new technology arrives. What both you and he describe is a problem we already have now with artifices like governments and corporations. In fact, we are hoping in the medium term (3-5 years from now) to extend the work Andreas & Rob are doing to understanding socio-technical institutions more broadly.

It's annoying you can't post links in comments. Here's where I talk about Superintelligence: http://joanna-bryson.blogspot.de/2013/12/the-intelligence-explosion-started.html in fact I have a label for futurism, have a look at that.
Anonymous said…
I strongly agree that the problems are (always?) now - and more work on understanding socio-technical institutions should really help. Good to hear!
Sorry, I didn't mean to say you personally didn't think about futurism or address it in your writing.
I meant "in the principles".
I was hoping that the principles, as and when updated, could try to address any concerns raised by futurism more explicitly - to put a visible stop marker on certain lines of development, however fanciful.
As regards blogspot annoyingness, I guess this link back to your blog won't work then!
futurism
Anonymous said…
I beg your pardon, mucked up the link. Try again:

label for futurism

Post on Superintelligence.

Does that work better?