The latest medical data sharing controversy to attract the interest of regulators and the press, the arrangement between Royal Free London (RFL) and DeepMind, involved the transfer of approximately 1.6 million identifiable patient records, without explicit patient consent, for clinical testing of DeepMind's Streams application, which is designed to detect acute kidney injury.
We reported on the data protection issues arising from this arrangement in Digital Health Legal in June 2017, and published a follow-up, also in Digital Health Legal, in July 2017, on the ICO's report following its investigation, which required RFL to give an Undertaking in respect of various breaches of the Data Protection Act 1998. In summary:
- Principle 1: requires that personal data be processed fairly and lawfully, which includes telling individuals how their data will be processed and for what purposes. RFL had not been appropriately transparent with patients about the use of their personal data.
- Principle 3: requires that personal data be adequate, relevant and not excessive in relation to the purpose(s) for which it is processed. The ICO found that providing 1.6 million partial patient records was neither necessary nor proportionate.
- Principle 6: requires that personal data be processed in accordance with the rights of data subjects. Because patients were not given sufficient information about the proposed processing, they were unable to exercise those rights, and RFL was in breach.
- Principle 7: requires data controllers to take appropriate technical and organisational measures against unauthorised or unlawful processing, including putting in place a written contract with any third-party processor. The arrangements between RFL and DeepMind did not go far enough, and the Commissioner was particularly concerned that no Privacy Impact Assessment was carried out before the project began.
Given the seriousness of these breaches, it is perhaps surprising that the Commissioner chose to require an Undertaking rather than impose a monetary penalty, which could have approached the £500,000 maximum then permitted.
Shortly after the ICO's RFL/DeepMind decision, the Government issued its Response (entitled "Your Data: Better Security, Better Choice, Better Care") to the 2016 reviews of data security standards by the National Data Guardian (NDG) and the Care Quality Commission (CQC). The Government has confirmed that it plans to implement the recommendations from those reviews, including the 10 data security standards recommended by the NDG. Further, a new national opt-out will be implemented from March 2018 (existing type 1 opt-outs will be honoured until 2020), with the aim of supporting people to make informed choices about how their information is used and protected in the health and social care system. The concept of an opt-out, rather than an opt-in, is controversial, and contributed to the suspension of the proposed care.data service.
We also recently attended the Intelligent Patient Data Conference, hosted by The Digital Catapult Centre, which provided a fascinating insight into the use of artificial intelligence (AI) and machine learning within the healthcare industry. Unsurprisingly, attendees noted data protection as a key concern. Data is key to any AI device, with developers needing to collect and analyse data from various sources in order to improve their AI algorithms. It quickly became apparent from the day's discussions that healthcare and artificial intelligence stakeholders share two aims: putting patients' welfare at the heart of any AI product, and ensuring transparency about how patient data is used.
The event also covered additional points for creators of AI healthcare devices to consider, such as: the importance of gaining and maintaining patient and doctor trust in an AI device; adhering to NHS, clinical commissioning and security standards; the risks of deploying AI devices within the healthcare industry; and how the quality of the underlying data and clinical trials is fundamental to the success of any AI device.
The conference followed the European Parliament's resolution, published earlier this year, with recommendations to the European Commission on civil law rules on robotics. The House of Commons Science and Technology Committee also published its own report on robotics and AI last year, so it is becoming clear that legislators are starting to tackle the complex area of AI. With approximately 1.7 million robots already in operation worldwide, AI is increasingly part of our lives, making it more likely that countries will soon begin regulating these devices. Any technology provider dealing with AI devices will therefore need to keep a close eye on legal developments in this area.