HealthManagement, Volume 19 - Issue 2, 2019


Machine ethics: A case for human-centric Artificial Intelligence

With the development of AI comes the question of ethics, especially in the human-centric healthcare setting.

Artificial Intelligence (AI) is one of the most transformative forces of our time and presents a great opportunity to increase prosperity and growth. Over the last decade, major advances have been realised due to the availability of vast amounts of digital data, powerful computing architectures, and advances in AI techniques such as machine learning and deep learning. Major AI-enabled developments in autonomous vehicles, healthcare, home/service robots, education and cybersecurity are improving the quality of our lives every day. Furthermore, AI is key to addressing many challenges facing the world, such as global health and well-being, climate change, and reliable legal and democratic systems. While it has the capability to generate tremendous benefits for individuals and society, AI also gives rise to certain risks that must be properly managed. Given that, on the whole, AI’s benefits outweigh its risks, we must ensure that we follow the road that maximises the benefits of AI while minimising its risks. To stay on the right track, a human-centric approach to AI is needed, one that keeps in mind that the development and use of AI should not be seen as an end in itself, but as a means to increase human well-being.
 
Human-centric AI, or machine ethics, is a new field of research at the interface of computer science and philosophy that aims to develop moral machines: machines capable of making moral decisions.

Existing paradigms


Let us take the example of autonomous driving. Even fully-automated vehicles face moral choices. In unavoidably dangerous situations, the protection of human life should take precedence over harm to property and animals. Particularly difficult are the moral dilemmas that may be encountered in this area of application, such as the need to decide, if unavoidable, whether to sacrifice a small number of lives to save a larger number.

In a self-driving car, the algorithm decides in an emergency whether the vehicle drives, for example, into a group of pedestrians, into a mother with a child, or against a wall. There are heated philosophical and legal debates about this “algorithm of death,” for it is certain that the autonomous car is coming, ahead of autonomous weapons and mechanical pets. The development is politically intended, as it is rightly assumed that autonomously-driving cars will drastically reduce not only travel costs and energy consumption, but also the number of accidents. The computer processes much more information much faster than a human being, never gets tired, never drives drunk and does not text behind the steering wheel.

Since, at the same time, it is certain that there will continue to be accidents in which fatalities are unavoidable and can, at most, be selected, the question arises as to which decision ethics the corresponding algorithms should be equipped with. Philosophy thus becomes an important element in the production chain of an automobile.

As big as the ethical dilemma of the death algorithm is, giving no direction at all would be no solution. Even more immoral than coolly and quickly weighing the life of a child against that of a senior citizen would be to block a technology that prevents tens of thousands of deaths per year. Statistics provide the killer argument in favour of the new technologies, no matter what ethical dilemmas they bring with them. Even the use of military drones is justified on the grounds that they cause less collateral damage.

If the algorithms of autonomous cars were really programmed according to survey results, the ethical dilemma would be dealt with purely quantitatively. If one considers that these algorithms are ultimately nothing more than complex arithmetic operations, one suspects that a bizarre feedback loop emerges across systems: the ethical problems that result from the success of mathematics are themselves answered mathematically.
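To make the point concrete, here is a minimal, purely illustrative sketch of what "dealing with the dilemma quantitatively" could look like: a decision rule whose weights are assumed to come from survey results. The categories, weights and manoeuvre names are hypothetical and not taken from any real vehicle system.

# Hypothetical sketch: ethics reduced to arithmetic over assumed survey weights.
SURVEY_WEIGHTS = {
    "child": 1.0,      # assumed survey-derived "protection values"
    "adult": 0.8,
    "senior": 0.6,
    "animal": 0.1,
    "property": 0.05,
}

def expected_harm(affected):
    """Sum the weights of everything a given manoeuvre would harm."""
    return sum(SURVEY_WEIGHTS[kind] * count for kind, count in affected.items())

def choose_manoeuvre(options):
    """Pick the manoeuvre with the lowest weighted harm."""
    return min(options, key=lambda name: expected_harm(options[name]))

# Example: under these assumed weights the rule favours swerving towards the senior.
options = {"swerve": {"senior": 1}, "straight": {"child": 1}}
print(choose_manoeuvre(options))  # -> "swerve"

The sketch is not a recommendation; it simply shows how quickly moral trade-offs collapse into a handful of numbers once they are handed to an algorithm.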

An international standard of values


Of course, it would also be strange to programme the algorithms differently so that, depending on the ethical self-understanding and majority decision, certain countries prescribe self-sacrifice, others act in a strictly consistent way, and others give priority to rescue operations, no matter what the risks. One could easily recode the algorithms via GPS to the locally-applicable ethical norms in order to enforce the different values internationally. At the same time, however, the question arises whether and how it would be possible to give technology an ethical standard that is globally binding. Can a transcultural understanding be reached, beyond different values?
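A hypothetical sketch of the "recode via GPS" idea follows: the vehicle selects an ethics profile matching the jurisdiction it is currently driving in. The region names, profile fields and default values are illustrative assumptions, not real regulations.

# Hypothetical sketch: locally-applicable ethical norms selected by region.
ETHICS_PROFILES = {
    "region_a": {"self_sacrifice_allowed": True,  "prioritise_rescue": False},
    "region_b": {"self_sacrifice_allowed": False, "prioritise_rescue": True},
}
DEFAULT_PROFILE = {"self_sacrifice_allowed": False, "prioritise_rescue": False}

def active_profile(gps_region: str) -> dict:
    """Return the ethical parameters applicable at the vehicle's current location."""
    return ETHICS_PROFILES.get(gps_region, DEFAULT_PROFILE)

print(active_profile("region_b"))  # the locally-applicable norms

Technically, such switching is trivial; the hard question the article raises is whether it is acceptable at all, or whether a globally binding standard is needed instead.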

One healthcare application for moral machines is the care of the elderly. Due to demographic change, the proportion of people in need of care will increase sharply in the coming decades. Artificial systems are repeatedly proposed as a means to counteract the looming nursing crisis. However, systems used in this context face moral choices: how often and how insistently should a care system remind people to eat, drink and take their medication? When should a care system inform relatives or call the medical service if someone does not move for a while? Should the system monitor the user around the clock, and what should be done with the data collected?
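As a minimal sketch, the policy questions above could be parameterised roughly as follows. The thresholds are illustrative assumptions only, not clinical recommendations, and a real system would have to justify each of them ethically.

# Hypothetical sketch: care-system policy expressed as configurable thresholds.
from datetime import timedelta

REMINDER_INTERVAL    = timedelta(hours=2)   # how often to remind about fluids/medication
RELATIVE_ALERT_AFTER = timedelta(hours=6)   # inactivity before informing relatives
MEDICAL_ALERT_AFTER  = timedelta(hours=12)  # inactivity before calling the medical service

def escalation(time_without_movement: timedelta) -> str:
    """Map observed inactivity to the escalation level chosen by the policy above."""
    if time_without_movement >= MEDICAL_ALERT_AFTER:
        return "call_medical_service"
    if time_without_movement >= RELATIVE_ALERT_AFTER:
        return "inform_relatives"
    return "no_action"

print(escalation(timedelta(hours=7)))  # -> "inform_relatives"

Every one of these constants encodes a moral judgement about autonomy, privacy and safety, which is precisely why such choices should not be left to developers alone.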

The way ahead


Can artificial systems act morally? The development of increasingly intelligent and autonomous technologies inevitably leads them into morally-problematic situations. Therefore, it is necessary to develop machines that have a degree of autonomous moral decision-making. It remains unclear on what ethical basis artificial systems should make decisions. This also depends on the field of application and should be the subject of a social discourse, especially in those areas of application that require generally-binding rules. It is timely, then, that plenty of research groups and initiatives, both in academia and in the healthcare industry, are starting to think about the relevance of ethics and safety in AI.

Key Points

  • AI is already demonstrating its benefits to society
  • The ethical risks that AI presents are worth overcoming
  • Machine ethics aims to develop moral machines
  • Philosophy is becoming a key element in the design of AI algorithms
  • A code of global ethical values may be necessary
  • We need to develop machines that incorporate a degree of autonomous moral decision-making
 
