One of the most influential contemporary philosophers is Daniel Dennett. Drawing on Darwin’s theory of evolution, he has developed new notions of free will, responsibility, and consciousness. These perspectives can help us navigate the debate on responsibility in AI, especially since some AI designs are inspired by the workings of the human brain. The questions and concerns we have regarding AI and responsibility show great similarities with those regarding humans, and Dennett offers us tools to think them through.
Daniel Dennett’s model of free will and responsibility is rooted in an unconscious biological process and in laws of physics governed by the mechanical rules of cause and effect. He claims that the system underlying everything that exists and happens combines countless causes with necessary effects, which together cause numerous other necessary effects, and so on. This system is so extensive that, at best, we will only ever comprehend little bits and pieces of its ingenuity. We, and everything we do, are necessary effects of this system too. Free will in the traditional sense is therefore an illusion, since our actions are ultimately always the necessary effect of a combination of the countless events that happened before. Dennett compares this illusion with stage magic, which is in fact a combination of expertly performed tricks. In our case, he argues, it is our brain that tricks us into believing all sorts of things, among them the existence of free will.

For Dennett, this does not mean we cannot have responsibility. Although free will in the more traditional sense is indeed impossible, free will in the sense of superior evolutionary biology is possible. Over the course of billions of years, ever greater competences have evolved, and human cognitive competences exceed those of other lifeforms. This not only makes it possible to respond to situations in a superior manner; it also gives us the opportunity to experience a sense of reason or meaning in our (re)actions. Whether these reasons are ultimately accurate is irrelevant here. Their function is to guide us, not to give insight into the true workings of our brain.
Our highly developed cognitive competences and sense of meaning have resulted in culture, science, art, and so on. If we accept Dennett’s emphasis on the extensive and dazzling path of cause and effect that brought us to this point of sophistication, an agent can be responsible when two conditions are fulfilled: it has developed highly sophisticated cognitive abilities, and it holds a set of beliefs that generates a sense of meaning (e.g. reasons why we act and live). The belief in a higher power, for example, has served an important purpose in our evolution, according to Dennett. Fragile as we humans are, it is vital for us to work together. With the emergence of human intelligence, however, we became capable of “cheating” in the game of survival by lying to one another. To illustrate: I can say that I went out hunting without success, while in fact I have secretly been enjoying a lazy day. This could jeopardize the survival chances of my tribe. My belief in a higher power, however, causes me to feel that my misbehavior will be noticed and punished one way or another, which steers me toward more responsible behavior.

How does this idea translate to AI and responsibility? Can AI fulfil the two conditions of responsibility in Dennett’s philosophy? Deep learning has proven able to evolve its impressive calculation abilities quickly, which could result in a set of highly developed “cognitive” competences. Impressive competences are already emerging: in many specialized domains, such systems outperform human experts (e.g. in detecting cancer). The second condition, implementing a set of beliefs that adds a sense of meaning, also seems possible. This was already imagined in science fiction when AI was still in its infancy. In Steven Spielberg’s movie A.I., for example, a robot child is programmed to love the woman who turns on the switch of eternal mother-love, which determines all of his actions from then on.
Experiments with implementing human-like convictions have already been conducted, for example with self-driving cars, to make them better drivers.

This would mean that AI, at least in principle, is a good candidate for accountability from Dennett’s perspective. However, Dennett’s philosophy also implies that the way our responsibility came about is impossible to duplicate, because it originates in a unique process of cause and effect. Human responsibility emerged from billions of years of evolution and from animated bodies. The way our brains came about therefore differs profoundly from the way AI programs are made. Even if AI could exercise responsibility, its particular variant might never turn out as we want it to. Dennett argues that the biggest danger of AI is that we are so impressed with its calculating powers that we forget that the workings of our own brain are infinitely more ingenious. So although the workings of the brain and the workings of AI show great similarities, it is our extensive evolutionary history that sets us apart. Dennett concludes that in the foreseeable future, AI may offer intelligent tools which can be highly optimized, but we shouldn’t hold our breath waiting for them to become our new colleagues.