As AI systems become increasingly sophisticated and take over many tasks from humans, the idea is surfacing that they might make doctors obsolete. However, when we consider the wisdom required in healthcare and the socio-cultural role of the doctor, we could equally conclude that AI will make the doctor more important than ever. Moreover, by reformulating the role of the doctor, a whole new normative approach to health and care can be articulated.
AI systems are increasingly being tested in clinical contexts, and some of their results in medical diagnostics are impressive. In a broader sense, deep learning could increasingly guide our behavior and make our decisions in the future, something we have previously described as "technological decisionism". Think of Netflix or Spotify automatically playing the next episode or song, but also of call centers where virtual assistants answer our questions, autonomous vehicles that do the steering for us, or algorithms making investment decisions. The fear exists that when AI systems are implemented in healthcare, they will render doctors obsolete and fully automate medical decision-making. But before jumping to this conclusion, it is worthwhile to consider the type of rationality involved in healthcare.

In the medical context, a decision consists of three components: the subject making the decision (i.e. the doctor in consultation with the patient), the object that is acted upon (i.e. the patient that is treated), and the deliberate process of decision-making: (i) diagnosing the symptoms, (ii) analyzing the patient's current state and conditions, (iii) identifying possible treatments, and (iv) choosing an actual treatment. AI systems currently focus on one part of this process: diagnosis. However, it is also becoming clear that other intellectual moments are required in medical decision-making.

In his Ethica Nicomachea (Book VI), Aristotle defined five intellectual virtues that together constitute "perfected intelligence": the characteristics that make a decision morally virtuous. The first is techne: productive knowledge, related to craftsmanship. The second is phronesis: practical wisdom, knowing how to live a virtuous life guided by reason (i.e. by a general idea of the Good Life).
The other three are theoretical: nous (intuitive insight into first principles and self-evident truths) and episteme (scientific knowledge), which are combined in sophia (theoretical wisdom about the nature and purpose of reality). For Aristotle, phronesis – practical wisdom – takes the form of a practical syllogism: applying the general laws of an abstract idea of the Good Life (major premise) to concrete situations (minor premise), which yields the maxims for acting well, or the wisdom to do the right thing in various situations.

The role of AI in healthcare resembles the ideal of episteme: detecting fundamental patterns and correlations in reality. However, phronesis is also very important to doctors, for two reasons. First, as AI systems always carry a certain bias (e.g. because they were trained on a specific dataset, or because of the general ambiguity of the concepts used in decision-making), it is the doctor who should interpret the conclusions presented by the AI system. This "hermeneutic task" of the doctor is also a necessary condition for the patient to give informed consent. Second, it can be argued that medical decisions should explicitly comprise a moral dimension: medical treatments should be chosen in light of a general idea of the Good Life, as the consequences of these decisions reach far beyond the medical domain alone. The doctor should therefore be able to take the wishes and context of the patient into account (the minor premise), which often involves moral and religious considerations, and weigh them against a general idea of the Good Life (the major premise) in order to arrive at a treatment that suits the patient. This requires practical wisdom: reflecting on the goals of healthcare from the broader perspective of the Good Life in order to make treatments morally acceptable.
As such, it is clear that medical intelligence is a multidimensional concept, involving much more than episteme: practical insight, emotional intelligence, and moral sense.

This touches upon moral and even spiritual dimensions that transcend medical practice, and it is here that we find the actual "caring" part of healthcare. Phenomenologically, sickness and disease are experienced as a "negative force" that happens to people, overtaking their bodily functions and taking away their freedom (e.g. a broken ankle that prevents me from playing football, an allergy that rules out certain foods, up to the absolute negativity of death, which negates human life itself). Doctors therefore have a "healing role" in the sense that they should help people reconcile themselves with their disease, so that they can live with their illness and regain autonomy and freedom in their own lives. Again, these tasks explicitly relate to ethical, spiritual and philosophical questions, bringing us to the domain of sophia: wisdom about the purpose, role and nature of human beings in this world. This is what makes medical decisions so radically different from many of the decisions we make, and thus from AI systems suggesting a new song or episode: an ethical and a spiritual dimension are explicit additions to the epistemological act of decision-making.

These are tasks, or moments of intelligence, that AI systems cannot take over from real doctors. But what, then, might the future role of the doctor look like, given that AI systems will clearly take over some of the cognitive (epistemic) functions of doctors? Inspiration can be drawn from chess. When Garry Kasparov, then the world's best chess player, lost to IBM's Deep Blue in 1997, he claimed that it was an unfair match because the AI system had access to a huge database of billions of chess moves.
He suggested that human players should have access to similar AI, creating a new type of chess player: a "centaur", a team of human plus AI. In the years since, such centaurs have ranked among the strongest chess players, at times beating standalone AI chess systems. Similarly, we can imagine the rise of medical centaurs: human doctors supported by AI systems, with the final responsibility, both epistemological and ethical, lying with the doctors.