In conversation with Ryan Abbott

Posted on 11 February 2022

Ryan Abbott, author and Professor of Law and Health Sciences at the University of Surrey, recently discussed his new book "The Reasonable Robot: Artificial Intelligence and the Law" with Sian Harding, Associate in Mishcon de Reya's Innovation department.

Ryan is also Adjunct Assistant Professor of Medicine at the David Geffen School of Medicine at UCLA.

He explains why he believes the law should not discriminate between AI and human behaviour, and proposes a new legal principle that he argues would ultimately improve human wellbeing.

The Mishcon Academy Digital Sessions. 
Sian Harding

Okay, well, welcome everyone and thank you for joining this Mishcon Academy Session, the series of online events, videos and podcasts looking at the biggest issues faced by businesses and individuals today. I am Sian Harding and I will be hosting today's event. I am very excited to introduce my guest today, Professor Ryan Abbott. Ryan is a Professor of Law and Health Sciences at the University of Surrey School of Law and Adjunct Assistant Professor of Medicine at the David Geffen School of Medicine at UCLA. He is also a CEDR Accredited Mediator and a Fellow of the Chartered Institute of Arbitrators; he is a licensed physician, patent attorney and acupuncturist in the US, and a Solicitor Advocate in England and Wales. Thank you so much, Ryan, for joining me today.

Ryan Abbott

Thanks so much for having me, very excited to be here.

Sian Harding

Thanks. So we are going to be talking about AI and the law, and how we should approach the regulation of AI, which is the topic of your book, 'The Reasonable Robot'. So Ryan, you are a lawyer, you are a physician, and you've written a book about AI and the law. Where does your interest in AI come from? What brought you to write the book?

Ryan Abbott

Well, my interest in AI academically came about in about 2013 or 2014, when I was working as a patent attorney for a biotech company and teaching inventorship law at law school. There were vendors who were advertising that you could have them do things like run machines through large antibody libraries and pick out an antibody that bound to whatever antigen you were studying, and it got me thinking: that's interesting, because if we have a person do that sort of thing it makes them a patent inventor, so what if a machine does that? Academics would say, 'well, you really wouldn't need a patent for that because a machine doesn't care about a patent', and I thought, that's clearly right, but on the other hand the pharma companies that are building the AI and having the AI do this work need patents, so maybe that's not the right answer. And that set me off looking at ways in which machines do human sorts of things but the law treats those behaviours very differently.

Sian Harding

Before we get into the substance of your book, why don't we first establish what we are talking about when we talk about AI, just so that everyone is on an equal footing?

Ryan Abbott

It's kind of sociologically fascinating, if you think about it, that in the 60 years since we started using the term AI, people still aren't talking about the same thing. Some people define AI based on how it is made, for example machine learning versus conventional software. Personally, I don't think that's the right way to go about it. I think about it very functionally: an AI is basically a machine that behaves the way a person does, or in a way to which you would ascribe cognitive abilities. For me that's important because what we are really looking at with law and regulation is what rules will best promote social wellbeing, and that really comes down to what we are having machines do, not how they are structured.

Sian Harding

So the principle you propose in your book for how we should approach those issues is a principle of AI legal neutrality. Could you talk to us a bit about what this is and what its aims are?

Ryan Abbott

You can have a machine generate an invention or a person generate an invention, and the law will treat those two activities very differently in terms of, for example, the subsistence of intellectual property rights. In a few years we will hopefully have self-driving Ubers, and you will be able to summon a self-driving or a human-driven Uber on your phone, but if one or the other of those causes exactly the same sort of accident, doing exactly the same sort of thing, the law treats those behaviours very differently. If Mishcon is able to replace you with a robot, the tax consequences of that are very different, and the University of Surrey is constantly trying to replace me with a robot. So the book notices that the law treats behaviours by people and machines very differently, finds that this tends to have negative consequences, and proposes that even though we don't treat AI and people the same as legal actors, we treat their behaviour functionally equivalently under the law.

Sian Harding

Why, in your view, is it important that AI can be acknowledged as an inventor, and why is this not already a thing?

Ryan Abbott

The two reasons it is important are, one, the subsistence question, which is the commercially important question: can I get a patent for this sort of thing? You don't want Pfizer investing 100 million dollars in building an AI to find new Covid antibodies only to be told, 'well, sorry, no patents for you'. And secondly, there is the moral issue of who should be listed on the patent. If I own an AI that invents a whole bunch of stuff, should I get to say, 'oh, that was me', even if I haven't done anything involving inventive skill? And we'd answer, no.

Sian Harding

You've been involved in a multi-jurisdictional attempt to have an AI machine recognised as an inventor in patent applications; I believe it is known as 'DABUS', the Device for the Autonomous Bootstrapping of Unified Sentience. Can you tell us about these test cases? What is DABUS and what are you trying to do?

Ryan Abbott

So we filed two test cases for two AI-generated inventions; one is a fancy beverage container and one is a flashing light that could attract attention in an emergency. We filed these in the UK and in Europe because we got expedited examinations, and they were found to be substantively patentable. Then we corrected the applications and said, well, actually there was no human inventor, an AI invented this and the person who owned the AI should be mentioned, and those applications were rejected in the UK and Europe, and we filed them in fifteen other jurisdictions. In July of last year, South Africa granted us a patent; that was the first patent for an AI-generated invention. South Africa doesn't have a substantive examination system, but they do formalities examination, and everywhere we have been rejected it was on a formalities basis. So we got that patent, and three days later Justice Beach in the Federal Court of Australia ruled that that was the appropriate way to handle this: that there is no reason, as a matter of law, that an AI couldn't be listed as an inventor, and at least in our case the AI's owner had the clearest claim of entitlement to it. That decision is under appeal; next week there will be a hearing before five members of the Federal Court of Australia. We've also been rejected in the US, the UK, Europe and Germany, and all of those are under appeal. The benchmark for getting a patent is based on what a hypothetical skilled person would find obvious, which is like the reasonable person in tort law, and that person is explicitly not based on what an inventor would find obvious, because inventors would find almost everything obvious. But the skilled person has over time sometimes become a skilled team of people in industries where team-based approaches to invention are the norm. I think it is probably already the case that the skilled person today is often the skilled person using an AI, and that makes more things obvious, because using AI can help people do things they couldn't do otherwise, like recognise patterns in large data sets or have access to superhuman amounts of prior art in a way that's actually useful to them.

Sian Harding

You also mention in your book the risk that developments in AI may result in a large number of patents in the hands of a small number of companies, which can potentially lead to competition issues and market abuses. Is there anything we can do, or should do, to mitigate that situation now?

Ryan Abbott

So that's an interesting aspect of this, and of AI activity more generally. I mean, it's hard to know; for example, DABUS is owned by a small business in the mid-west of the United States, but probably large businesses are going to have significant advantages when it comes to making and building these AIs, and if they are as productive as they are hoped to be, it may result in a lot more consolidation of intellectual property in their hands. I would first note that that is, for better or worse, the way we've organised our society and our IP system: most patents are owned by large businesses, and by a fairly small number of large businesses, and that's how we generate innovation, more than the romantic version of an inventor who is tinkering in her garage late at night. Say Facebook, well, I shouldn't use Facebook, but we'll use Facebook, has an AI where you present it a tissue sample from a patient, it goes through a trillion-antibody library and 3D prints an antibody to treat their cancer, and thus Facebook invents the cure for cancer and puts every other cancer researcher and treatment out of business. I still think that's a pretty good outcome, because we've cured cancer, and if people are price gouging or not making access available, there are things one can do under competition law, or things like compulsory licences, and maybe that will be the time to finally invoke some of those a little more aggressively.

Sian Harding

You mentioned taxes briefly there, so let's dive into that a little. In a world where AI is starting to do the revenue-generating jobs that humans have historically done, you suggest in your book that the tax system needs to be adjusted to make room for this and to ensure that companies aren't financially incentivised to use AI instead of humans where it isn't otherwise beneficial. Can you talk us through that and tell us what a better tax system looks like in your view?

Ryan Abbott

Sure. So if we go back to the example of Surrey replacing me with a robot: there are a lot of reasons Surrey might or might not want to do that, but one that hasn't been widely appreciated is that if they can automate me, they don't have to make national insurance contributions for the work done by a robot. So without necessarily having intended to, the tax system encourages employers to automate even if it isn't otherwise more efficient, if it's somewhat of a close call. The other underappreciated effect is that almost all government tax revenue, at least in the United States, comes from labour income in one way, shape or form. Less than 10% of revenue comes from corporate taxes; it's almost all from income taxes and from payroll taxes in the United States, and so if I get replaced by a robot the government loses a significant amount of tax revenue.

Sian Harding

You look quite extensively in your book at tort law and tort liability and how we might approach that in the case of AI. In tort law, liability is fundamentally based on there being a duty of care. Can we impose a duty of care on an AI robot? How might we do this?

Ryan Abbott

You could create a system where they have legal obligations not to cause harm, with insurance policies or some form of security and so forth, but again I think it works much better, in terms of ultimate liability, to treat them as products. They are ultimately almost all going to be commercial products, and two of the areas where I think we are going to see a lot of this are self-driving cars, which are almost with us, or moderately with us already, just not fully self-driving, and AI being used to replace physicians in some sense, doing things like diagnosing scans and pathology slides. But right now, let's say you have a physician who misses a cancerous mole and an AI app that misses a cancerous mole. They are both doing exactly the same thing, but the human physician will be held to a negligence standard, and we'll ask what a reasonable doctor would have missed, whereas with the AI, although it differs a bit by jurisdiction, in the US we'd ask whether there was a defect with the product or with its marketing, and if there was, there is liability regardless of the presence of reasonable care. Strict liability is largely held to be a stronger standard of liability for the AI, but I think it doesn't make a lot of sense to have two agents doing exactly the same sort of thing with one held to a higher standard of care, because that will discourage people from using the AI, and we want people to use it if it's safer than a physician. So essentially, if we had a uniform standard of care and asked what a reasonable physician would have missed, then if the AI outperformed the human physician, the manufacturer of the AI would not be liable for a mistake it made. That would have the benefit of promoting use of technologies that are safer than the current standard.

Sian Harding
 
And how can clients approach that? Is there a way we can regulate to make sure AI is as transparent and trustworthy as possible, or is it just going to be that, in each specific use case, the person who has the AI or is using it needs to be really careful and make sure they understand what the AI does and why it does it?

Ryan Abbott

Well, a bit of both. I mean, firstly I would say there are a number of regulations that already come to bear here in requiring AI transparency, even though there aren't AI-specific regulations yet in the UK, and I suspect we will see them, as we will see them in Europe in some way, shape or form, when these AI regulations presumably finally enter into force, with some changes. But I would say for Mishcon lawyers that now is the time to be working with clients on these sorts of things, even if there aren't AI-specific regulations, because there are already compliance risks under existing regulations, but also, very importantly, risks related to public opinion, to investors, and to bad regulatory rules coming down in response to some sort of adverse action. As to how one does that, I do think it has to be very well informed and very context specific. If you are having an AI diagnose cancerous moles, maybe a machine-learning version gets 99% accuracy and a conventional AI gets 65% accuracy, and you end up deciding, I am going to sacrifice transparency for accuracy, but I've done so in a very calculated way, because that has the right outcome. If you have an AI that's hiring people by reviewing CVs but can't tell you why it has selected someone, maybe that is somewhere transparency is one of the most important things, in part because there aren't necessarily objectively right answers to whether candidate A or B should get hired, but there are some objectively wrong ones, like, 'well, I've decided I don't like this category of person so I am not going to select their job applications'. And how one does that technically is by having a multi-disciplinary group of people advising on this. There is a lot one can do if clients are aware of the risks in this space and have someone who can help guide them to solutions.

Sian Harding

It's a fascinating area of law. There are so many aspects of AI to think about, so many uses that need to be thought about, and different approaches to regulation to be taken. I think that's probably all we have time for; I am sorry to the people who are still asking questions, but thank you so much, Ryan, for joining us. It was fascinating, I really enjoyed your book, and thank you very much.

Ryan Abbott

Thanks so much and happy to chat with anyone any time if you’d like to reach out.

Sian Harding

Thanks very much, Ryan.

Ryan Abbott

Cheers.

The Mishcon Academy Digital Sessions.  To access advice for businesses that is regularly updated, please visit mishcon.com.

The Mishcon Academy offers outstanding legal, leadership and skills development for legal professionals, business leaders and individuals. Our learning experts design industry-leading experiences that create long-lasting change, delivered through live events, courses and bespoke learning.
