In this space I explore monthly topics, from concepts to technologies, related to the necessary steps to build Digital Healthcare Systems. For the month of January 2021, I look into the liability issues of artificial intelligence (AI) use in hospitals; next month we will explore why a ‘Societal Health’ perspective is key to maturation and resilience building in healthcare.


AI technology is considered to have the potential to revolutionise health and care; however, one must ensure that proper protection of all relevant legally protected interests is in place. I have argued for the need to look at hospitals, professionals and healthcare organisations in general as KIWIs (Knowledgeable, Intelligent, Wise and Interoperable). In a recent podcast I explain this concept and how it helps us look at the future of hospitals and health. One of its dimensions is the use of intelligence: natural and organisational as well as artificial.



As private and public administrations set themselves to use AI tools, it is worth recalling an ancient and often-quoted maxim: the possession of great power necessarily implies great responsibility. While high expectations exist for the societal benefit of AI technology, AI-based actions and interventions in human life are not without their risks and, hence, their potential for harm.


When risk materialises into damage and harm befalls a subject (be it human or non-human), there may be a right to claim restoration of that damage and/or compensation. The regimes under which such compensation can be sought are multiple, but in the first instance it is the hospital or other healthcare organisation that will have to face the legal challenge, defending its healthcare staff, IT staff, administrative staff, or even board decisions, any of which can be implicated in an AI case.


There can be material and moral damage. The legal grounds for suing are multiple; regarding AI, an information technology characterised by a high level of data usage by a product (software), there are two main sources of law worth referencing:


            1. Data usage-related law – mostly concerns the rights of citizens over the use of their personal data, heavily influenced by the General Data Protection Regulation (GDPR). These products use and exploit data which, in the case of health, are not just personal data but health data, a specially protected category of personal data under the GDPR. Risks to privacy and harm to other rights protected by the GDPR, namely in relation to the use of automated decisions and profiling, are always a major issue but become even more salient in health.


            2. Product liability law – mostly related to the protection of individuals against defective products.


As we frame the law around AI usage in health, this analysis intersects with healthcare-related legal dimensions. In particular:


            3. Medicinal products/medical devices law – concerned with market entry authorisations, the safety and also the effectiveness of medicines and medical devices (a category in which AI-based products can be included), as well as their adequacy (to certain conditions and criteria) and post-market surveillance.


Finally, as most healthcare providers are public in nature or operate in the public interest (for example, public-private partnerships, PPPs), this analysis requires intersection with additional sources of law:


            1. Laws regulating extra-contractual civil liability regimes and those governing the liability of the state or public administration. This is because most healthcare institutions are public, and some national-level health interventions (under the public health domain) are carried out exclusively by public administration.

            2. Law regulating contractual responsibility, including aspects of public procurement. Most AI-based technologies cannot be built by public administration itself but are almost always bought via public procurement, and thus the contractual and pre-contractual relationships between public administration and AI companies fall under the scope of heavily regulated Public Procurement Laws.

            3. Medical/Nursing responsibility law. Most AI systems in health today serve in advisory or support functions to professionals; they are semi-autonomous actors vis-à-vis patients’ health and potential harm, producing their effects through what health professionals do with them or under their suggested actions. Medical/nursing responsibility regimes are therefore key, as they are mostly concerned with regulating the practice of medicine/nursing and the circumstances under which patients can seek redress for harm/damage inflicted by malpractice. Of all the regimes discussed, these are potentially the least regulated and not at all harmonised in the EU. This means that, to some extent, organisations and their management need to secure themselves legally against the consequences of the risks resulting from medical/nursing malpractice due to defective AI suggestions/decisions, or due to defective use of these AI-based tools.


My experience and research have shown that AI technologies can offer tremendous benefits for health and care, but they are accompanied by significant risks, and especially by new types of risks. Complex ecosystems of interconnected actors, both public and private, are necessary for the co-creation, deployment and surveillance of these technologies in healthcare.


Existing liability regimes can accommodate the use of AI in (public sector) health. However, an analysis of the case of AI use in hospital triage reveals many ambiguous aspects in the application of these protection regimes, especially fault-based liability. Hospitals therefore benefit from an a priori technical-legal analysis of their AI ‘projects’.


New laws, better laws and more guidance can create spaces of trust and confidence that will mean more investment in AI in health. They can lead to harvesting its potential while avoiding, or compensating for, harm, pushing towards much-needed Digital Healthcare Systems.



This article is a contribution to thinking about AI use in health and in public administration, and about the risks and potential harms such use poses to citizens. It is also addressed to healthcare managers and their organisations, who are advised not to be stopped from exploring this NEW WORLD, but to broaden their legal cautionary measures beyond mere ‘GDPR compliance’ to the other liability regimes under which hospitals and other organisations can be sued for AI misuse or mal-use.

