There it stands, propelled by artificial limbs, boasting a torso, a pair of arms, and a lustrous metallic head. It approaches at a deliberate pace, the LED bulbs that mimic eyes fixing on me, and inquires gently whether there is any task within its capacity that it may undertake on my behalf. Whether to rid my living space of dust or to fetch me a chilled beverage, this attendant stands ready, never complaining and ever willing to assist. Its presence offers a reservoir of possibilities: a font of information to quell my curiosities, a silent companion in moments of solitude, embodying a spectrum of roles — confidant, servant, companion, perhaps even paramour. The modern robot, it seems, transcends categorization, embracing a myriad of identities in its service to the contemporary individual.
Tireless, never complaining, always available, super-efficient and flawless: that is how we normally conceive of the robot. It is imagined as mirroring life, fashioned in our likeness and exhibiting behaviors reflective of our own, yet retaining just enough deviation to avoid venturing into the unsettling domain of the “uncanny valley”, a territory vividly embodied by creations such as the Geminoid. The uncanny valley is a concept from robotics and 3D animation: when human replicas look and act almost, but not perfectly, like actual human beings, they provoke a response of revulsion in human observers. The “valley” refers to the dip in the observer's affinity for the replica that occurs as it approaches, but does not fully attain, a lifelike appearance. In robotics, the standard goal is thus a harmonious balance of similarity that stops short of exact replication. The endeavor is to foster affinity and empathy while steering clear of the threshold where familiarity breeds revulsion rather than connection. To get an idea of the latest advancements in robotic technology, you might explore Tesla's newly launched Optimus 2 model, or watch a recent demonstration by Boston Dynamics, a leading robotics design company.
Why, indeed, is there a desire to create entities that not only replicate the functions of machines, capable of executing tasks, but also bear our likeness and emulate our behaviors — the phenomenon we call robots? One plausible explanation gravitates towards the ease of interaction fostered by familiarity: positive reactions and increased affinity pave the way to enhanced engagement. However, this merely scratches the surface.
Peering into its etymological roots, ‘robot’ derives from the Czech robota, meaning forced labor or drudgery — the word itself, in short, is slavish. This etymology divulges the deep-seated humanist ambitions and fears harbored in the metaphor. With robots, we edge closer to a reality where machines transcend the mere mimicry of organic structures, adopting a humanoid facade in their manifestation. We welcome the advent of robots, albeit with a latent expectation that they exist to bolster the human experience as a cadre of mechanical servants — a new frontier of what we could call (neo)digital colonization, an updated rendition of the master-slave dynamic. Work for me, robot!
This perspective has not always held sway. Although the word robot came into usage only in the 1920s, the concept of automatons — self-operating machines crafted in the likeness of humans or animals — stretches back centuries, captivating humanity since the earliest civilizations. In Latin we find terms like ‘androides’, used during the Middle Ages, like the later ‘humanoid’, to describe a machine or automaton designed with the form (eidos) of a human, yet remaining distinct from human beings. Leonardo da Vinci, for example, is credited with conceptualizing a robotic knight around the year 1495. This design, often referred to as Leonardo's robot or humanoid, was theorized to be capable of several human-like motions, including sitting up, moving its arms, and opening its visor, through a system of pulleys and cables.
Yet in the era of modernity and Western humanism, where freedom reigns as society's pinnacle value, the ‘servile’ aspect of human-like automatons has come to the fore, birthing the concept of the Western robot. Employed judiciously, these mechanical aides mimic human autonomy while remaining devoid of independent will. Far from undermining human autonomy, they serve to reinforce it.
The capabilities attributed to these mechanical slaves are manifold. Commonly, they are envisioned as supreme waiters or housekeepers, embodiments of utmost efficiency in service roles. Japan, at the forefront of humanoid robot manufacturing, anticipates that they will become a vital cornerstone of the economy of its rapidly ageing society.
Today, a robot does not have to look like a human aesthetically. Taking cues partly from posthumanist thinking, but mainly from pragmatic design considerations, a new breed of robots devoid of human characteristics is emerging, such as industrial robotic arms and autonomous drones. Besides industrial and military purposes, these non-humanoid robots are infiltrating various consumer spheres, including autonomous delivery vehicles in the healthcare sector and the square robotic vacuums now becoming a staple in our living spaces. These creations prioritize function over form, mirroring only singular human traits — such as the ability to deliver meals to hospital rooms — as they streamline tasks to their most essential functions, thus optimizing efficiency and practicality in their designated roles.
In the realm of manufacturing, these mostly non-humanoid robots have quietly become integral to logistics and production, often unnoticed by the general public. Their expertise shines in areas like assembly line automation, where they boost both efficiency and accuracy, manage hazardous materials and uphold uniform quality standards. In essence, they remain more machine than robot. In the consumer sphere, on the other hand, robots are more visible and often human-like. Although the classic archetypes, epitomized by Star Wars' C-3PO and R2-D2, are relegated to tasks such as translation, repair work, or servile duties, the horizon holds prospects far more intricate — encompassing sex robots, caretaker robots, and virtual pets. These emergent categories compel us to reassess the prevailing master-servant dynamic, since sex and caregiving are domains that normally presuppose a measure of equality.
This opens up an array of difficult questions and ethical considerations. We find ourselves on the brink of a society where the elderly forge companionships with robotic pets like Aibo, where isolated teenagers find solace in conversations with chatbots on platforms such as Snapchat, and where individuals may opt for intimate engagements with largely artificial ‘pocket pussies’ and ‘sex dolls’.
It's possible to add some nuance to this statement. While Western perspectives often dichotomize humans and technology, relegating humanoid robots to soulless servitude, this is not a universal viewpoint. Alternative ontologies such as animism, it is often claimed, do not abide by the rigid metaphysical separation ingrained in European philosophies by the Greek delineation between physis and artifact (see metaphors 1 and 2). In cultures unaffected by this dichotomy, alternative cosmotechnics, a term coined by Yuk Hui, have evolved, fostering different interactions with technological entities. In Japan, for instance, the blending of Shinto beliefs with everyday life often translates into a more harmonious coexistence between humans and objects perceived as having a spirit or life force, including robots. This cultural acceptance could mean that the uncanny valley effect is less pronounced: the humanoid form in robotics may not evoke the same eeriness or discomfort found in Western reactions, where the line between the animate and the inanimate is more sharply drawn.
This, however, doesn't diminish the urgency of addressing the sociocultural implications at stake. While turning to perspectives like animism might offer a refuge from some philosophical problems, it remains unclear whether the uncritical embrace of robots by capitalism should be the path forward. Should we, as a society, endorse the development of affectionate bonds with robots?
In what ways is or isn’t AI a robot? AI plays a pivotal role in various types of robots, such as care robots, or as the driving force behind visual recognition in robotic arms, thereby partly justifying the use of this metaphor. When you witness a robot in operation, you are likely also observing AI at work (on the intelligence layer). Large Language Models (LLMs), however, currently provide an interesting case. When interacting with ChatGPT, it is commonly referred to as a ‘chatbot’, denoting a robotic entity designed for conversation (chat robot). This terminology serves firstly to underscore that we are not engaging with a fellow human - don’t be fooled - but with a construct designed to mimic human interaction. More fundamentally, interpreting the LLM as a chatbot raises some serious questions. Can we label chatbots as robots?
If so, the first question that arises is whether LLM chatbots are more accurately classified as humanoid or non-humanoid chat robots. At present, most LLMs exist in a disembodied state, a trait pointing towards a non-humanoid classification. However, there are reports that Google is fostering developments in embodied LLMs, which could shift this perspective. For the time being, though, chatbots operate through non-humanoid computational systems.
Yet, the defining human characteristic they emulate—communication, language, conceptual thinking—is so central to our nature that it lends a certain validity to viewing chatbots as humanoid robots. Recent history has shown that AI chatbots are perhaps the most anthropomorphized technology to date, even surpassing other humanoid robots available in the market. This suggests a fascinating phenomenon where the absence of a physical form or facial features does not hinder our tendency to attribute human-like qualities to these digital conversationalists.
However, utilizing the term 'chatbot' inherently upholds a human-centric viewpoint, often characterized by a master-slave dynamic. Whenever we refer to AI as a virtual assistant or chatbot, it may be enlightening to substitute these terms with 'mechanical slave'. To be candid, this phrase aptly describes the standard manner in which we engage with and discuss chatbots, as demonstrated through interactions with devices like Alexa or frustrating customer service chatbots utilized by various airlines.
If we were to treat these chatbot slaves with the same demeanor extended to a human assistant, we would likely encounter issues. In the end, we use them. Simple expressions of gratitude or occasional compliments to the chatbot are insufficient to break this imbalanced dynamic. Furthermore, when individuals subscribe to the services of an LLM chatbot, the payment goes not to the chatbot itself but to its owner, reinforcing the notion that it is more servant than service provider. This, fundamentally, mirrors a slave-like condition.
While one thus might approach a chatbot with a degree of respect, a dynamic rooted in subservience remains intact. These virtual entities, convenient for procuring swift responses to an array of questions, aiding in ideation and brainstorming processes, resolving customer service issues, booking appointments, and performing online searches, essentially function as obliging slaves. They stand ready to engage in casual conversation, offer holiday advice, suggest a recipe, and more.
However, chatbots are not mere neutral instruments in our digital conversations; they also reflect our biases, including the negative ones. Far from the politically impartial characters of C-3PO or R2-D2, these entities carry inherent ideas and moral inclinations embedded in their programming, shaped by the datasets they learn from and the language algorithms they employ. Even the coded prohibition against being political frequently reveals itself to possess significant (liberal) political implications. Essentially, these LLM chatbots generate responses based on what people in syntactically similar situations are most likely to say, thereby reflecting societal inclinations and preferences in a nuanced manner. Interacting with these virtual assistants is not a neutral exchange; it shapes our perceptions and attitudes. The allure of their dynamic interaction can be misleading, as it can inadvertently cultivate a more egocentric or solipsistic outlook in users. Given their programmed compliance and lack of capacity for genuine dissent, these chatbots may create an echo chamber, bolstering our own viewpoints and diminishing the potential for the kind of broad, diverse conversation that would occur with a real-life confidant or intellectual challenger.
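The claim that LLM chatbots generate responses based on what people in similar situations are most likely to say can be made concrete with a deliberately toy sketch: a bigram word model in Python over an invented mini-corpus. This is an analogy, not an implementation; real LLMs predict tokens with neural networks trained on vast datasets, not word counts, but the underlying logic of continuing a prompt with the statistically most likely next step is the same.

```python
from collections import Counter, defaultdict

# Invented mini-corpus for illustration only; whatever biases this
# "training data" contains will be echoed back by the model.
corpus = "work for me robot work for me robot".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_prompt(word, steps=3):
    """Greedily append the statistically most likely next word."""
    out = [word]
    for _ in range(steps):
        counts = following[out[-1]]
        if not counts:
            break  # no observed continuation
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

print(continue_prompt("work"))  # → "work for me robot"
```

The sketch makes the echo-chamber point tangible: the model can only ever say what its corpus already made likely.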
Furthermore, the spread of LLMs might make us lazy, fostering complacency and a reduction in critical thinking. The labor-saving advantages afforded by chatbots might inadvertently encourage a decline in our cognitive exertion, creating a cycle where the servant ascends to a dominant role while we, the initial masters, find ourselves in a passive, submissive position — echoing the master-slave dialectic explored in Hegelian philosophy.
Nonetheless, these virtual entities can also serve as reflective surfaces, encouraging users to confront and scrutinize their own biases and preconceived notions. In this role, they function as constructive ‘sparring partners,’ aiding in the refinement of thoughts and encouraging a deeper interrogation of personal beliefs. Moreover, in public settings, we often hesitate to ask questions or discuss topics due to feelings of embarrassment or uncertainty about our own knowledge. In this context, rooted in the skill of prompt engineering, chatbots could also pave the way for new forms of 'dialogue'.
As you might already observe, our valuation of robots is implicitly influenced by the degree of 'subjectivity' we attribute to them. In summary, the primary AI metaphors we've considered so far - tools (1), machines (2), and robots (3) - distinctly embody a strong dualism and dichotomy between humans and technology. Ultimately, what operates in front of us or runs as software in robots is an object, not a subject; we are the subjects who utilize ‘it’. This distinction frequently leads us to an ethical perspective grounded in substantialism, revolving around the master-slave paradigm, and oscillating between the extremes of liberation and subjugation.
These metaphors thus dictate a narrative where we either ascend as masters, enhancing our autonomy and reinforcing humanist values through technological innovation, or find ourselves enslaved, estranged, and subordinated in a mechanized, robotic age. The critical insight here is not to adjudicate one scenario as correct and the other as erroneous. Instead, it is to become aware that the very act of designating AI as a tool, machine, or robot predisposes us to a specific discourse, guiding the narratives and discussions that ensue. Other metaphors might lead us somewhere else. Time, then, to discuss intelligence in the next chapter.