Can algorithms be neutral?

April 8, 2019

Human bias is often ascribed to our subjective point of view on reality, shaped by personal history, circumstances, emotions, desires, political agendas, and so on. This is problematic when we wish to make fair decisions in which a neutral or objective point of view is considered highly important. AI and algorithmic decision-making seem to offer a promising solution to this problem, because they lack human subjectivity. Unfortunately, bias has resurfaced in AI and algorithmic decision-making. Can this be attributed to programmers instilling subjective values or to the input of biased data, or is there more to it?

Our observations

  • Earlier this year, MIT Technology Review published the article This is how AI bias really happens—and why it’s so hard to fix, on the various ways algorithms can become biased. These are mainly ascribed to three causes: 1) Framing the problem: the aim for which an algorithm is designed can create a biased view of the data it collects, since the data is gathered in order to achieve that aim. 2) Collecting the data: deep learning programs are either fed data that is unrepresentative of reality, which creates a false definition of certain phenomena, or data that reflects existing prejudices. 3) Preparing the data: instilling in an algorithm which characteristics (e.g. age, income, or number of paid-off loans) are important to consider and which should be ignored can result in a one-sided view of reality that may, for example, exclude certain groups.
  • In the recent study Discrimination, artificial intelligence, and algorithmic decision-making, the Council of Europe (CoE) recommended additional regulation for AI decision-making that escapes current non-discrimination laws. However, the study also acknowledges that it is still unclear what kind of additional laws would suffice, and that more research and debate, including in computer science, are required to determine how to proceed in this matter.
  • Considering the long list of recent publications, such as Hello World: How to be Human in the Age of the Machine by Hannah Fry, Algoritmisering, wen er maar aan! by Jim Stolze, Sensemaking: The Power of the Humanities in the Age of the Algorithm by Christian Madsbjerg and World Without Mind: The Existential Threat of Big Tech by Franklin Foer, concerns about algorithms appear to be omnipresent. The main concern is that algorithms are being used by powerful big tech companies to gather our data and build personal profiles that can be used to influence our behavior. Another major concern is the (unjustified) trust we put in the outcomes of big data analytics.
  • The danger of discrimination and inequality caused by the use of algorithms has already become apparent in several cases, such as Amazon’s AI recruiting tool that appeared to discriminate against women, and its biased facial-recognition software. In order to keep using algorithms while avoiding discrimination and bias, various parties are searching for solutions. The city of Amsterdam, for example, recently hired KPMG to help eliminate bias in the algorithms the city uses (e.g. to automatically handle complaints concerning public spaces).
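The second cause listed above can be made concrete with a small sketch. The following Python example uses invented numbers and a deliberately simplistic "model": a classifier that perfectly fits historically prejudiced hiring records will reproduce that prejudice in its recommendations, even though the skill of candidates is distributed identically across both groups.

```python
import random

random.seed(42)

# Invented illustration of the "collecting the data" problem: historical
# hiring records in which group B was held to a stricter bar than group A
# for the same skill level.
def make_history(n=1000):
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        skill = random.random()  # skill is distributed identically per group
        bar = 0.5 if group == "A" else 0.7  # historical prejudice
        records.append((group, skill, skill > bar))
    return records

# A "model" that perfectly fits the historical decisions. Its training
# error is zero, yet it is biased by construction: it inherits the bar.
def fitted_model(group, skill):
    return skill > (0.5 if group == "A" else 0.7)

def hire_rate(model, records, group):
    preds = [model(g, s) for g, s, _ in records if g == group]
    return sum(preds) / len(preds)

history = make_history()
# Despite identical skill distributions, recommended hire rates differ:
# roughly 0.5 for group A versus roughly 0.3 for group B.
print(hire_rate(fitted_model, history, "A"))
print(hire_rate(fitted_model, history, "B"))
```

The point of the sketch is that nothing "went wrong" technically: the model is a flawless fit of its training data, which is exactly why it is biased.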

Connecting the dots

The tendency of people to trust recommendations made by algorithms over human advice (so-called automation bias) is a common phenomenon, according to the CoE. It can be explained in several ways. First, recommendations generated by an algorithm have an air of rationality, mainly due to the algorithm's superior calculation power and the absence of human subjectivity. Second, automation bias can be caused by a lack of skills, context or time to evaluate whether the computed recommendation has followed a valid path of reasoning. Finally, human decision-makers may try to minimize their responsibility by following the advice provided by AI. One of the biggest challenges facing AI and algorithmic decision-making, however, is discrimination or bias in their operation.
In many of the studies, books, articles and reports on bias and discrimination in algorithms, the blame is assigned to the developers of an algorithm, who (whether consciously or not) either program it in a biased manner or feed it data that misrepresents reality, is one-sided or is simply biased. Framing the problem in this manner gives the impression that, although these problems are very hard to solve, they can be solved at some point nonetheless. This rests on the hidden premise that there is an objective truth about reality to which we have access and which can be objectively represented in language. For example, sentences such as "The distance between the sun and the earth is 149,600,000 km", "Since January 20th, 2017, Donald Trump has been president of the U.S.A." or "Water freezes below 0 degrees and boils at 100 degrees Celsius" are considered to express objective truths about reality. However, in order to gather and express such truths, we first need to interpret reality. To express the distance between the sun and the earth, for example, we have to agree on measurement principles and on the points between which we will measure, after which we interpret the distance we arrive at.

Studies of language have already shown that an objective representation of reality in language is problematic, to say the least. A prominent voice in this context is Wittgenstein, who argued that the world cannot simply be represented in a series of (language) expressions, but only in a series of interpretations and communal understandings, in which meaning is in constant flux and always dependent on the participants' conception of a definition. In other words, there is no such thing as a fixed definition, and therefore reality cannot be represented by language in a neutral or objective manner. This idea was recently reinforced by an experiment in which two quantum scientists made contradictory observations of the same phenomenon.
The possibility that reality cannot be observed or expressed in a neutral manner is problematic for the use of algorithms, since they are developed to help us grasp the objective truth about reality. This problem is briefly touched upon in, for example, the CoE study and the MIT article on bias in algorithms. Both mention at some point that language itself always carries a certain degree of ambiguity, and sometimes even plain contradiction, when it comes to the definition of a concept. This would imply that bias and discrimination in algorithms cannot be solved entirely, since algorithms too work with definitions. This of course doesn't mean that algorithms can't be improved, or can't be of great value in advancing all sorts of processes in which data analytics are required. We should, however, accept that the recommendations of algorithms will be biased by definition, as are those of humans, for that matter. In order to estimate the value of the recommendations made by algorithms, we need to improve not only the algorithms themselves, but also our own ability to evaluate their contribution to our understanding of the world.
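That definitions are never neutral shows up concretely in algorithmic fairness itself. In the following sketch (all numbers invented for illustration), two perfectly reasonable definitions of "fair" disagree about the very same set of hiring decisions, so the verdict an audit produces depends on which definition was chosen beforehand.

```python
# Invented numbers: hiring decisions for two groups, ten people each.
# Each record is (group, qualified, hired).
records = (
    [("A", True,  True)] * 5 +   # group A: five qualified people hired
    [("A", True,  False)] * 3 +  # three qualified people rejected
    [("A", False, False)] * 2 +  # two unqualified people rejected
    [("B", True,  True)] * 2 +   # group B: two qualified people hired
    [("B", False, True)] * 3 +   # three unqualified people hired
    [("B", False, False)] * 5    # five unqualified people rejected
)

def selection_rate(group):
    rows = [r for r in records if r[0] == group]
    return sum(r[2] for r in rows) / len(rows)

def false_positive_rate(group):
    unqualified = [r for r in records if r[0] == group and not r[1]]
    return sum(r[2] for r in unqualified) / len(unqualified)

# Definition 1 of fairness (equal selection rates): both groups are
# hired at rate 0.5, so by this definition the decisions are fair.
print(selection_rate("A"), selection_rate("B"))
# Definition 2 (equal false positive rates): unqualified candidates in
# group B are hired far more often, so by this definition they are not.
print(false_positive_rate("A"), false_positive_rate("B"))
```

Neither definition is "the" correct one; choosing between them is itself an interpretive act, which is precisely the point the paragraph above makes.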

Implications

  • The reason that algorithms are not simply dismissed in all cases in which it is important to avoid bias and discrimination is that they offer possibilities and potential solutions that transcend human capabilities. The technology can drive economic growth and generate better (scientific) understanding of all sorts of important aspects of our lives. However, the idea of algorithms making decisions on important matters without human intervention might become a no-go in the future.
  • Automation bias might be hard to change, because in many cases, we are used to trusting the outcomes computers arrive at. When we ask Alexa what the weather will be, when we use a calculator, when we want to know the time: in so many cases, we trust the information provided by computers without thinking about the process that precedes it.
  • If we agree that definitions of concepts are in constant change and heavily dependent on (human) interpretation, the demand for human programmers to reduce bias in AI will remain a constant factor in the development and use of algorithms.
  • In many aspects of our lives, we have accepted that total neutrality is not possible (e.g. a human judge, no matter how experienced and educated, will always carry some subjectivity into his ruling). However, if we indeed come to consider AI a biased tool, it is questionable whether we will ever accept its recommendations in sensitive matters in which our personal or common fate is at stake.


About the author(s)

At FreedomLab, Jessica's research primarily centered on the impact of technology on education and the nature of virtual reality and artificial intelligence. She is an alumna of the Vrije Universiteit Amsterdam, where she completed two degrees in philosophy and an additional research program. Integral to her personal and professional development, Jessica delves deep into literature concerning the philosophical relationships between humans and nature, and the importance of critical thinking and human autonomy vis-à-vis the impending wave of technological revolutions.
