While adoption of artificial intelligence (AI) in healthcare may be on the rise, a new report indicates that organisations are unprepared to deal with the societal and liability issues arising from actions or decisions taken by AI systems on their behalf.

In a recent survey conducted by consulting firm Accenture, 80 percent of health executives agree that within the next two years, AI will work alongside humans in their organisation as a co-worker, collaborator and trusted advisor.

However, 81 percent of health executives say their organisations are not prepared to face the societal and liability issues that will require them to explain their AI-based actions and decisions, according to Accenture’s Digital Health Technology Vision 2018 report.

With AI's increasing role in healthcare decision-making, organisations need to ensure that their systems act accurately, responsibly and transparently, Accenture contends. To that end, the data used to inform AI solutions must be free of embedded bias.

“If the users don’t understand what was behind the AI (decision), we think that’s going to be a real limitation on its adoption,” says Kaveh Safavi, MD, head of Accenture’s global health practice. “Think about a healthcare use case where there’s a recommendation about using a service and you don’t know whether or not the person making that recommendation is economically motivated. That’s really about responsibility and transparency.”

Of note, the Accenture survey found that the vast majority of health executives (86 percent) have not yet invested in capabilities to verify the data that feeds into their systems, opening the door to inaccurate, manipulated and biased data — and therefore results.

“The artificial intelligence is only as good as the training data,” Safavi explains. “If that data is limited or biased because of the way it was obtained, both of those scenarios could result in inaccurate or incorrect training that potentially could lead to people choosing not to trust AI technology.”

Safavi believes that the issues of explainability, transparency and veracity of data are critical, especially as AI increasingly touches the end-to-end care experience. 

Accenture’s survey also found that 73 percent of health executives are planning to develop internal ethical standards related to the use of AI to ensure their systems are designed to act responsibly.

Image Credit: Pixabay
