Jessica Hirst

Legal and ethical concerns in health-related artificial intelligence


The world is changing. In this interconnected world, technology is shaping ever more aspects of our lives, and healthcare is no different. The growing use of artificial intelligence in digital healthcare brings several benefits to the sector, many of which will change the way we treat patients in the future.

The Department of Health and Social Care launched a public consultation, 'Advancing our Health: Prevention in the 2020s', setting out its aspiration that the 2020s will be the decade in which a more preventative model of care takes shape.[1] NHS leaders, supporting this ambition, argue that increasing the use of artificial intelligence in this way will enable a shift from reactive medicine, initiated after a patient has become ill, to a more preventative model of care.[2] Artificial intelligence has the potential to change the way we deliver healthcare in the NHS and across the world.

In 2022, Health Education England published the first roadmap on the use of artificial intelligence in the NHS, identifying diagnostic technology, such as the imaging, pathology and endoscopy often used by radiologists, as the most common application of artificial intelligence in healthcare. This is not surprising given the rich datasets and the enormous volume of imagery radiologists use.[3]


Artificial intelligence has a growing influence on our everyday lives, but there are concerns that newer technologies may become a new source of inaccuracy and that data breaches may arise from their use. Various legal and ethical concerns need to be considered before implementing artificial intelligence, especially in healthcare, to ensure the patient is never the victim of error.


Artificial intelligence and disease diagnosis

Artificial intelligence in diagnostic technology allows us to considerably increase the chances of an accurate diagnosis. Diagnostic technology is built using machine learning classifiers and data held in electronic health records accumulated over decades; pairing the two allows artificial intelligence to scan records in search of previous cases with similar symptoms.[4] This, in turn, allows physicians to compare current cases directly with previous symptoms and conditions, increasing the likelihood of an accurate diagnosis.
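
By way of illustration only, the sketch below shows in highly simplified form how similar past cases might be retrieved from coded records. The symptom coding, the example cases and diagnoses, and the choice of library are all hypothetical assumptions for demonstration, not a description of any system used in practice.

```python
# Illustrative sketch only: retrieving past cases with similar symptom profiles.
# The records, symptom coding and diagnoses below are entirely hypothetical.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Each row is a past patient encoded as a binary symptom vector
# (columns: fever, cough, chest pain, breathlessness, fatigue).
past_cases = np.array([
    [1, 1, 0, 1, 0],   # case 0: previously diagnosed with pneumonia
    [0, 0, 1, 1, 0],   # case 1: previously diagnosed with angina
    [1, 1, 0, 0, 1],   # case 2: previously diagnosed with influenza
], dtype=bool)
diagnoses = ["pneumonia", "angina", "influenza"]

# Index the historical records for similarity search.
index = NearestNeighbors(n_neighbors=2, metric="jaccard").fit(past_cases)

# A new patient presenting with fever, cough and breathlessness.
new_patient = np.array([[1, 1, 0, 1, 0]], dtype=bool)
_, matches = index.kneighbors(new_patient)
print([diagnoses[i] for i in matches[0]])  # the most similar prior cases
```

A real system would, of course, work with far richer record structures and far larger case libraries, but the underlying idea of matching a new presentation against coded historical cases is the same.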

An area that demonstrates some of the most compelling advances in artificial intelligence is stroke management. In 2018, it was found that treatment for certain strokes remained viable up to 24 hours after onset, rather than within the previously accepted 6-hour window.[5] The impact of this finding was that the stroke market grew from $3bn to $10bn overnight.[6] A study carried out by Titano et al demonstrated the effectiveness of convolutional neural networks (CNNs) through a controlled trial, which showed that a deep learning-based system could detect acute neurological events in cranial imaging 150 times faster than a radiologist could.[7] The human mind can only process a limited amount of information when making a decision, whereas artificial intelligence algorithms are not restricted by such limits. The need for early detection, accurate diagnosis and timely treatment has driven the increased use of artificial intelligence technologies in stroke care. Such systems can be integrated into existing workflows, providing actionable information and allowing healthcare providers to use imaging information intelligently, which ultimately reduces the cost of care while improving diagnostic confidence and patient outcomes.
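
For readers unfamiliar with CNNs, the sketch below shows the general shape of a very small convolutional classifier for a single grayscale scan slice. It is purely illustrative: it is not the model evaluated by Titano et al, and the input size, layer sizes and two-class output are arbitrary assumptions.

```python
# Minimal, illustrative convolutional classifier for one grayscale scan slice.
# This is NOT the system from the cited study; sizes are arbitrary assumptions.
import torch
import torch.nn as nn

class TinyScanCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Two convolution + pooling stages extract simple image features.
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A linear layer maps the features to two scores:
        # "critical finding" vs "no critical finding".
        self.classifier = nn.Linear(16 * 16 * 16, 2)

    def forward(self, x):                 # x: (batch, 1, 64, 64) grayscale slices
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyScanCNN()
scores = model(torch.randn(4, 1, 64, 64))  # random stand-in for four scan slices
print(scores.shape)                         # torch.Size([4, 2])
```

The speed advantage reported in such studies comes from running networks like this, at much larger scale, on batches of images in parallel rather than reviewing each scan by eye.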


Automation


Intelligent automation in healthcare will be instrumental not only in the growth and development of the NHS, but in the overall success of caring for its patients. The pandemic accelerated the use of automation within the NHS, evidenced by a surge in patients using NHS login accounts to access digital healthcare and services. The app grew from 2.2 million users to 28 million users in a year and quickly became the most downloaded free app in England.[8] NHS Trusts across England have started to see the real benefit of automated systems. Health Call collaborated with seven NHS Trusts in the North East and North Cumbria to create a new booking app to help with appointment reminders and vaccination consent.[9] Apps of this nature were instrumental in supporting the vaccine rollout and in managing the overwhelming volume of administrative requirements the NHS was dealing with.

Despite the acceleration in app use since the pandemic, healthcare has barely scratched the surface of what can be achieved with the right use of intelligent automation. Intelligent automation could make a huge impact on healthcare systems: current systems are built around inflexible processes that are prone to human error, and they need to be replaced with long-term solutions underpinned by automated technologies. Robotic Process Automation (RPA) is used to automate simple, repetitive administrative tasks using low-code, a software development approach that requires little to no coding to build applications.[10] It is becoming a predominant automation option for businesses across the world.[11] Implementing automation throughout healthcare systems for back-office administration and operations across departments would increase efficiency and help alleviate patient backlogs.
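
As a much-simplified illustration of the kind of repetitive back-office task such automation targets, the sketch below drafts appointment reminders from a bookings spreadsheet. The file name and column names are hypothetical; real RPA tools typically assemble this kind of workflow through low-code interfaces rather than hand-written scripts.

```python
# Toy illustration of automating a repetitive administrative task:
# drafting appointment reminders from a bookings spreadsheet.
# The file name and column names are hypothetical.
import csv

def draft_reminders(bookings_path: str) -> list[str]:
    reminders = []
    with open(bookings_path, newline="") as f:
        for row in csv.DictReader(f):
            reminders.append(
                f"Dear {row['patient_name']}, your appointment at "
                f"{row['clinic']} is on {row['date']} at {row['time']}."
            )
    return reminders

# Example usage (assumes a 'bookings.csv' file with the columns above):
# for message in draft_reminders("bookings.csv"):
#     print(message)
```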

AI and Telemedicine

Another use of automation technology is telemedicine: the delivery of health-related services via electronic information and telecommunication technologies to patients who can be treated at home.[12] Patients who do not need to be admitted to hospital can receive care from home through wearable devices, sensors, smartphone apps, video appointments and text messaging tools.

Telemedicine is not confined to the United Kingdom. In the UK it helps relieve pressure on the NHS, but in other countries it has offered some patients their first opportunity to receive quality healthcare.[13] For example, increased digital penetration in rural parts of India has supported the introduction of telemedicine, giving patients access to in-depth disease information and consultations with doctors from their own homes and making healthcare more accessible.[14]


Legal Implications



Whilst there are many benefits to integrating artificial intelligence into healthcare, there are legal and ethical issues that hold back its implementation. Fundamental issues such as privacy, ethics and bias must be addressed carefully to ensure the best results for patients.



Privacy

Many of the benefits of artificial intelligence flow from mass data collection and rich datasets. This raises privacy concerns, particularly around health data and how it is used. Commercial implementations of artificial intelligence in healthcare can be managed so as to protect privacy, but doing so introduces competing goals.


In the United States, privacy is increasingly becoming a problem. A 2018 survey of four thousand American adults found that only 11% were willing to share health data with tech companies, compared with 72% who said they would share such information only with physicians.[15] Only 31% were 'somewhat confident' in tech companies' data security. This has not prevented hospitals in some states from sharing patient data that is not fully anonymised with companies including Microsoft and IBM.[16] Corporations may have insufficient incentive to maintain privacy protections if the legal penalties are not high enough to outweigh such behaviour. This can erode public trust and heighten public scrutiny, or even litigation, against commercial implementations of artificial intelligence in healthcare.[17]

Another compelling concern with mass data use is the external risk of privacy breaches from highly sophisticated algorithmic systems. Healthcare data breaches have risen around the world, including in the United States and Europe. Artificial intelligence is a contributing factor to a growing inability to protect health information: numerous recent studies have shown how emerging computational strategies can be used to identify individuals in health data repositories managed by public and private institutions, even when the information has been anonymised and scrubbed of all identifiers.
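
A much-simplified sketch of one such strategy, a linkage attack, is shown below. All records are fabricated and the chosen attributes are merely illustrative quasi-identifiers; real re-identification studies use far more sophisticated matching, but the underlying idea is the same.

```python
# Illustrative linkage attack: re-identifying "anonymised" health records by
# joining them with a public dataset on shared quasi-identifiers.
# All records here are fabricated for demonstration.
import pandas as pd

# Health records released without names, but with postcode, birth year and sex.
health = pd.DataFrame({
    "postcode": ["NE1 4LP", "SW1A 1AA"],
    "birth_year": [1954, 1987],
    "sex": ["F", "M"],
    "diagnosis": ["stroke", "diabetes"],
})

# A public register-style dataset with the same attributes plus names.
public = pd.DataFrame({
    "name": ["A. Patient", "B. Someone"],
    "postcode": ["NE1 4LP", "SW1A 1AA"],
    "birth_year": [1954, 1987],
    "sex": ["F", "M"],
})

# Joining on the shared quasi-identifiers links names back to diagnoses,
# even though the health dataset contained no names at all.
reidentified = health.merge(public, on=["postcode", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```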

Ethical implications

Similar to many other forms of medical intervention, artificial intelligence can cause harm; its use therefore requires careful ethical consideration by healthcare professionals, who need to make themselves aware of soft law and standards of practice, such as the NHS Code of Conduct for data-driven healthcare technology, to ensure safe use and that those standards are followed.[18]

There is a considerable amount of mythology surrounding artificial intelligence technology, such as claims that 'the algorithm is always right'.[19] Algorithms are socio-technical constructs, created and embedded by human developers, and are therefore still prone to error. If a technology is deployed in a context different from the one it was designed for, it is likely to generate incorrect outputs.[20] It is therefore imperative that healthcare professionals receive the correct training and feel able to question the suggestions made by artificial intelligence systems in clinical practice. The NHS has already started working to ensure that healthcare professionals understand the full impact of artificial intelligence; it recently released a report, 'Understanding healthcare workers' confidence in AI', exploring the factors that influence healthcare workers' confidence in AI-driven technologies and how that confidence can be developed through education and training.[21]

Bias

Artificial intelligence is most effective when it can draw on large, rich datasets from which to extract information. A concern in healthcare data collection is that demographics that are historically underrepresented may be absent from this data, which in turn can cause unwanted bias. It is well documented that artificial intelligence systems can exhibit biases that stem from their programming and data sources.[22] Machine learning software could, for example, be trained on data that underrepresents a particular gender or ethnic group.
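
The toy sketch below illustrates the mechanism with entirely synthetic data: when one group dominates the training set and the under-represented group's pattern differs, the model fits the majority and performs poorly on the minority. The groups, features and outcomes are invented purely for demonstration.

```python
# Toy illustration of how under-representation in training data can bias a model.
# The data are synthetic: the feature-outcome relationship differs between groups,
# and group B makes up only a small fraction of the training set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, flip):
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > 0).astype(int)
    return x, (1 - y) if flip else y   # group B's outcome pattern is reversed

# Training set dominated by group A; group B is under-represented.
xa, ya = make_group(950, flip=False)
xb, yb = make_group(50, flip=True)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate each group separately on fresh samples.
for label, flip in [("group A", False), ("group B", True)]:
    x_test, y_test = make_group(500, flip)
    print(label, "accuracy:", model.score(x_test, y_test))
# The model fits the majority group's pattern, so accuracy is high for
# group A and very poor for the under-represented group B.
```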

A complete understanding of bias is imperative and must take account of both systemic and human biases. Systemic biases result from institutions operating in ways that disadvantage certain social groups.[23] Human biases relate to how people use data to fill in missing information in certain situations, such as a person's neighbourhood of residence influencing how likely authorities are to consider them a crime suspect. When human and systemic biases combine, they can form a pernicious mixture, especially when explicit guidance is lacking on how to address the risks associated with using artificial intelligence systems.[24]

If the data being collected and used is skewed in favour of a particular demographic, this could have severe consequences for the output, leading to inaccurate diagnoses for minority demographics who may require different treatment from the overrepresented demographic.

Thoughts

The future of healthcare technology is expanding in many directions and we are starting to see progress. However, this progress is slow compared with other industries that are accelerating through technology development and the use of artificial intelligence, from automated insurance claims handling to proactive delivery and service notifications.

Artificial intelligence can help the healthcare industry in numerous ways, and the use of this technology will bring patients and healthcare professionals closer together, propelling the NHS towards a more stable future. Nonetheless, artificial intelligence in healthcare still faces serious privacy challenges, ethical considerations and unwanted bias. There are still significant concerns around accessing and controlling personal medical information, as it is among the most private and legally protected forms of data. Addressing these concerns will require a more innovative approach, including new and improved forms of data protection and anonymisation. In addition, there should be a regulatory component to ensure that the private custodians of data use safe methods of protecting patient privacy.[25]




Endnotes

[1] Professor Dame Sue Hill, 'Genomics and the prevention revolution' (NHS England, 2 September 2019) accessed 17 November 2022.
[2] Ibid.
[3] NHS Health Education England, 'Health Education England publishes roadmap into use of AI in the NHS' (10 February 2022) accessed 17 November 2022.
[4] The Lawyer Portal, 'Artificial Intelligence In Healthcare Explained' (30 May 2022) accessed 18 November 2022.
[5] Blackford, 'Cutting through the hype – Part 2' (30 December 2021) accessed 1 December 2022.
[6] Ibid.
[7] Lingling Ding and others, 'Incorporating Artificial Intelligence into Stroke Care and Research' [2020] 51(12) Stroke accessed 29 November 2022.
[8] NHS Digital, 'Around half of people in England now have access to digital healthcare' (25 October 2021) accessed 29 November 2022.
[9] Health Call, 'Health Call launches staff Covid-19 vaccine booking app' (16 March 2022) accessed 29 November 2022.
[10] Maria DiCesare, 'Robotic Process Automation (RPA) & Low-code Process Automation: Use Cases, Benefits & More' (Mendix, 21 April 2021) accessed 3 January 2023.
[11] McKinsey Global Institute, 'How will automation affect economies around the world?' (14 February 2018) accessed 3 January 2023.
[12] The Lawyer Portal (n 4) 3.
[13] Chioma Obinna, 'AXA Mansard reaffirms commitment to quality healthcare in Nigeria' (Vanguard, 3 January 2023) accessed 3 January 2023.
[14] Linda Luxon, 'Infrastructure – the key to healthcare improvement' [2015] 2(1) Future Healthcare Journal accessed 29 November 2022.
[15] Blake Murdoch, 'Privacy and artificial intelligence: challenges for protecting health information in a new era' [2021] 22(122) BMC Medical Ethics accessed 29 November 2022.
[16] Ibid.
[17] Ibid 122.
[18] Jessica Morley, 'AI in the NHS: what do health professionals need to know?' (Genomics Education Programme, 25 August 2020) accessed 29 November 2022.
[19] Ibid.
[20] Ibid 3.
[21] Ibid 1.
[22] The Lawyer Portal (n 4) 6.
[23] NIST, 'There's More to AI Bias Than Biased Data, NIST Report Highlights' (16 March 2022) accessed 1 December 2022.
[24] Ibid.
[25] Murdoch (n 15).


