First of all, what is an Oracular Artificial Intelligence (OAI) system? Like any oracle, it is a decision aid capable of providing its users with highly accurate responses (ie, no more than one error in 20) while offering little or no explanation for its advice. Deep learning systems that analyse masses in mammograms are an example of OAI.

 

The pros of an OAI system seem clear to many and are often mentioned to justify the great interest that the development of such systems is receiving in many medical specialities. They are said to improve diagnostic accuracy and treatment appropriateness, if not effectiveness; increase service throughput; improve information retrieval and the retention of evidence and past cases; minimise turnover effects; and (perhaps) reduce costs, to mention just a few (thus purposely neglecting any positive ripple effect on the clinical workforce, such as motivation, abnegation and a sense of belonging).

 

However, these advantages notwithstanding, we should not underestimate the potential cons either. We say potential because, to our knowledge, no real OAI system is currently used in real-world settings on real-world data (that is, data that are inaccurate or incomplete and on which any cleansing task would be unfeasible in the long run) to treat any patient who might walk into a hospital, on a daily basis and for long enough for any stable effect to become observable and the object of serious inquiry.

 

Even if we lack settings in which to see how things could go wrong (or well) after the acquisition of a full-fledged OAI system, we can nevertheless adopt a socio-technical approach to interpret the potential of AI in this delicate domain. This approach reminds us that real equipment (ie, any technical system) is always socially embedded: it is put to use in work practices that have developed over time in a particular cultural and social environment. The hybridisation of these established practices with new practices of appropriation results in a new socio-technical system presenting clear characteristics of complexity, and the main characteristic of any such complex system is unpredictability. Recognising this is a necessary but not sufficient precondition for being duly prepared for the worst - while hoping for the best, of course.

 

In a recent paper we began to shed light on the possible shortcomings that an OAI system could exhibit in real medical settings, focusing on the impact that this technology could have on the social component of the new system, that is, the caregivers, and especially the MDs.



For reasons of brevity, here we recall just automation bias. This bias occurs whenever somebody's thinking or action is relevantly affected by automation, and it acquires a macroscopic effect when a vicious circle sets in. First, users put too much trust in their automated aid, out of overconfidence or compliance - a belief that the technology will never be wrong, fail or break down. This leads users to rely on the system more than actually needed - a tendency known as overreliance - because it is faster, easier or more accurate. In the long run this can lead to overdependence, that is, a situation in which using the technology has deskilled the users with respect to some competence and know-how, which they can obviously no longer exhibit if or when the system becomes unavailable or is wrong. It is reasonable to believe that the risks of overdependence could be repressed by a further increase in overconfidence, so that automation bias gets reinforced.

In medicine, automation bias can lead to several, still under-researched consequences. One example is a lower sensitivity to what cannot easily be represented in terms of codes, numbers and words (the demise of context in favour of text); another is the above-mentioned deskilling, which can also involve a weakening of MDs' willingness to face uncertainty and take some risks for the good of the patient (think of defensive medicine in light of the AI alibi). Thinking of these and other unintended consequences is not a Cassandra's game. Rather, it is a way to regard the development of OAI in medicine as necessary while not failing to comprehend the grave implications of this development, nor the obligation to assess its overall, practical value in real clinical settings.

 

Zoom On

 

What is your top management tip?

Do not underestimate the effort, costs and time that are necessary to properly train the users of any OAI system, and to have these users become committed to making the best use of the system and to contributing to its continuous improvement. In short, consider the most important human factor, which is the human actors themselves.

 

What would you single out as a career highlight?

Having a viewpoint published in JAMA. It is not just the academic achievement per se; rather, it is the satisfaction, as a computer engineer, of having succeeded in creating a bridge between computer scientists and the medical community, in having medical doctors understand the implications of taking AI seriously rather than heedlessly, and in contributing to making some concepts more popular. It was also rewarding to see the article become one of the most discussed and shared ones on social media.

 

If you had not chosen this career path you would have become a…?

An epidemiologist. But since I am afraid of getting sick (just by thinking of disease incidence!), I became a computer engineer, to design computers that could help MDs cure diseases.

 

What are your personal interests outside of work?

Listening to good music and reading good essays - and doing good research inspired by good essays while listening to good music. The latter is just my job, so I can admit that my work encompasses my personal interests, and I know I am a lucky person in that respect.

 

Your favourite quote?

"Perfect is the enemy of good" Voltaire
