Tooling and feeling

On our relationship to the tools that shape us, how machine intelligence understands, and the value in the human capacity for feeling.
9 min read

Bruno Latour famously said that ‘technology is society made durable’. Humans are intertwined with technology; it does not sit alongside us. It shapes humans as we shape it, facilitates the distribution of agency and information, and extends perception. Improved healthcare, more efficient transportation, increased access to information… innovations that stack on innovations to realise new worlds.

But this idea omits the many ways technology has made society precarious. Stability and longevity appear to be threatened now more than ever, even as humans continue to make breakthrough technological advancements in engineering, energy production, conservation, and healthcare.

Enter into the conversation artificial intelligence—a technology or set of technologies that have been around for the best part of eight decades, yet only recently have entered everyday discourse. The spectacle encourages us to look at the neural networks and the shocking fidelity of their outputs that will likely lead to widespread automation of jobs and huge societal ramifications. It is as if artificial intelligence is something inevitable and organically emerging. But this sleight of hand distracts from the constancy of human nature. Societies waver and the metaphors with which we frame our worlds evolve but—as is so frequently noted—the generational repetition of trends reflects the cyclical way in which social structures reproduce. Whilst cultures are repeatedly reshaped by technology, the underlying human drives and desires change very little and very slowly, like the patterns that emerge amidst the ceaseless flow of water in a stream. As Bryan Norton explains, “The more ubiquitous that digital technologies become in our lives, the easier it is to forget that these tools are social products that have been constructed by our fellow humans”.

Is artificial intelligence just another tool? If we trace the history of (human) tooling back to the earliest known examples, we find the sharpened stones discovered in Kenya estimated to be over three million years old. These stones were likely used for slicing and chopping. They may also have been used as weapons. Whilst tools extend the human capacity for creativity and compassion, they also extend our capacity for cruelty.

Tools have always been a form of prosthesis. Technologies amplify power, making humans more than the sum of their innate physical and cognitive abilities. But they do not themselves feel, and they do not think. As Meghan O’Gieblyn suggests, “Today, as AI continues to blow past us in benchmark after benchmark of higher cognition, we quell our anxiety by insisting that what distinguishes true consciousness is emotions, perception, the ability to experience and feel”.

The anxiety many observers feel as they witness the seemingly exponential improvement of machine learning and generative artificial intelligence stems less from the prospect of the material destruction of the human species than from the erasure of humans as the dominant species on the planet. To the secular mind, being cognitively superseded by a greater intelligence is akin to extinction. Humans are sovereigns who fear dethronement.

We define ourselves in opposition to non-human animals by the richness of our cultures and the anthropic capacity for abstract thinking and creativity. Centuries of enlightenment thinking have reinforced the idea of the human mind as superior to all other minds. But as Jennifer Ackerman explains in The Genius of Birds, intelligence is a slippery concept: “While our neurons were sorting themselves into cortical layers to generate complex behavior, birds were devising another neural architecture altogether, different from a mammal’s but—in some ways, at least—equally sophisticated”. There is no one-size-fits-all approach to intelligence; humans are in no way straightforwardly superior to non-human animals; and we have, I hope, nothing to fear from our avian overlords.

But neural networks are built on the logic of simulating the human brain in order to undertake human tasks such as calculation, writing, and data analysis at speeds, scales, and levels of accuracy that greatly outstrip human abilities. The impact on jobs aside, this is not what throws our collective psyche into an existential tailspin. As Karl Sharro remarks, “Humans doing the hard jobs on minimum wage while the robots write poetry and paint is not the future I wanted”. When the computer makes its own music, to what extent is the human capacity for music-making undermined? Every year millions flock to museums, concerts, poetry readings, and raves—through art and the stories artists tell we feel a sense of connection and communion with the rest of our species.

But machines have no story to tell. They have no experience with which others can relate. They do not feel. Anything artificial intelligence ‘dreams up’ is nothing more than a simulacrum, the chance mashings-up of the human outpourings gathered in the data sets upon which the machines learn. Generative AI can dream but it has no imagination. Meaning is flattened, collapses. And regardless of how transformer models improve, the notion that swathes of future cultural artefacts could be churned out by such unthinking, unfeeling tools—at the expense of genuine human creativity—is a deadly dystopian prophecy.

But what does it mean to ‘feel’?

In your bones

Intuition—what Melanie Mitchell identifies as intuitive biology, intuitive physics, and intuitive psychology—is that which constitutes “the foundation for human cognitive development, underpinning all aspects of learning and thinking”. From these bodies of intuitive experience, we construct our constellations of interrelated concepts that scale, fold into each other, and form anew.

These immaterial forms, to lean into Plato’s terminology, merge and mutate in the act of creativity. They produce new forms rendered material through art and music and philosophy. Architecture. Technology. Plato believed that the abstract plane of thought and ideas was superior to the world of the senses and materiality, which merely expressed representations of those forms. Descartes, similarly drawing a divide between the mind and body, considered instead the ephemera of thought to be representations of reality. Somewhere between these two views, we find a paradigm wherein the world outside our bodies both shapes and is shaped by our subjectivities. Amidst the edifice of symbols that determines subjectivity lurks intuition. Common sense. From experience, instinct.

If artificial intelligence can be said to ‘possess’ knowledge, what about intuition? Melanie Mitchell explains that the mental models we construct of the world and that represent—in the Cartesian sense—how the world works, are based on our knowledge of “physical and biological facts, cause and effect, and human behaviour”. Pierre Bourdieu’s concept of habitus extends this idea: “The habitus, as the word implies, is that which one has acquired, but which has become durably incorporated in the body in the form of permanent dispositions. So the term constantly reminds us that it refers to something historical, linked to individual history, and that it belongs to a genetic mode of thought”. These models we construct out of intuition and habitus allow us to mentally simulate situations. And out of these simulations, understanding. From understanding, empathy.

We should be wary of anthropomorphising the machine. To suggest that artificial intelligence can understand is to misunderstand what it means to understand. Other species possess ways of knowing beyond our ken—take for example avian cognition: “What kind of intelligence allows a bird to anticipate the arrival of a distant storm? Or wind its way to a place it has never been before, though it may be thousands of miles away?” (Ackerman). That incredibly powerful transformer models like GPT-4 exhibit phenomena like the ‘reversal curse’ and hallucinations is one thing, but that the engineers of these models don’t quite understand how model parameters collaborate to generate these confused outputs demonstrates how the ghost in the machine eludes its creators.

This is unsurprising not only because of the incomprehensible scale of convolutional and transformer neural networks but also because we don’t fully understand how our own brains intuit the world in the first place. The phenomenological and kinaesthetic experiences that shape our sensations and sentiments—intuition, in other words—are unique to each mind and body as it exists in the world. The spiritual side of me wonders why we strive to understand this. Cognition is one thing; consciousness is another entirely. But in machine learning, Meghan O’Gieblyn suggests, “the hope persists that if we put the pieces together the right way—through either ingenuity or sheer accident—consciousness will simply emerge as a side effect of complexity. At some point nature will step in and finish the job”.

Many, myself included, hope for one reason or another that it will not. And even if order emerges from chaos, what autopoietic consciousness will we encounter? Many of the fears expressed about the emergence of intelligence that surpasses humans are about whether or not it will be benevolent—will humans be forgiven for their sins or will Frankenstein’s monster turn on its creators? But what if such an intelligence were utterly incomprehensible—if the concept of benevolence (or human concepts more generally) were outside the ken of its ‘mind’?

In his essay on the philosopher Bernard Stiegler, Bryan Norton reminds the reader that our course is not predetermined—although technology shapes our sense of who we are as individuals and a species, when we are dazzled by the spectacle of the tools themselves, rather than how they are developed and harnessed in the world, we lose perspective on reality: “We become trapped, merely describing the technological world on its own terms and making it even harder to untangle the effects of digital technologies and our everyday experiences”.

We should instead ask not what artificial intelligence will do to us, but what it can do for us.

The good, the mundane, and the ugly

John McCarthy, who coined the term ‘artificial intelligence’ back in the fifties, said that “as soon as it works, no one calls it AI anymore”. Like so many of the tools we interact with daily and take for granted, the majority of ways in which AI will be implemented will cut through noise, reduce unnecessary repetitive effort, speed up discoveries, and hopefully enable new forms of individual and group empowerment.

We are already seeing incredible advances in healthcare and science. And in terms of creativity, AI-powered tools offer the option to democratise and turbocharge ideation and production. Machine learning extends and amplifies human potential in ways and at scales we’ve never before witnessed. Although there are serious concerns around IP protection and the monopolisation of these technologies that need to be addressed, rather than see AI replace human creativity, we may see it flourish.

But we are perhaps sharpening our tools far faster than we can develop our ability to safely wield them. The cuts will be many, and they will be deep. As AI researcher Pedro Domingos quips, “People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world”. There’s no clearer example of this than the impact of smartphones and social media on the cognitive development of teenagers and mental health in general. Content-selection algorithms, as Stuart Russell argues, are not particularly intelligent, but are nonetheless in a position to influence billions of people due to the scale at which they are deployed in the service of human greed.

Technology in the twenty-first century has too often served to consolidate power in the hands of the few, and to accelerate the spread of propaganda, deepfakes, and fabricated academic research. Institutional media outlets people used to turn to for reliable journalism risk collapse. Without swift and well-considered regulation and education, AI will further fuel the erosion of trust in authorities and expertise, with consequences for civil rights, democracy, and people’s ability to trust their own realities. The social contract dissolves.

There are grassroots responses to this. Movements like Feminist AI and Slow AI demonstrate decolonial approaches that champion different epistemologies and ways of worlding, attempting to dismantle the tools of the oppressor before the oppressor has had a full chance to wield them. Regulation is in the works, though it will be too late and likely not enough. The pace of change is unnerving, but beware the utopians and doomers alike—machine learning will not solve all the world’s problems, but neither does it necessarily pave the way to dystopia.

We cannot simply hope for another society or mourn lost futures. I don’t disagree that “we must plan for the possibility that machines will far exceed the human capacity for decision making in the real world” (Russell), but for the foreseeable future, it is humans making the decisions that shape our world.

And only a bad workman blames his tools.

Further reading

  • Ackerman, Jennifer. The Genius of Birds. Corsair, 2017.
  • Bourdieu, Pierre, and Richard Nice. Sociology in Question. Sage, 1993.
  • Mitchell, Melanie. Artificial Intelligence: A Guide for Thinking Humans. Picador USA, 2020.
  • Norton, Bryan. “Our tools shape our selves.” Aeon, https://aeon.co/essays/bernard-stieglers-philosophy-on-how-technology-shapes-our-world. Accessed 14 Apr. 2024.
  • O’Gieblyn, Meghan. God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning. Anchor Books, 2022.
  • Russell, Stuart J. Human Compatible: Artificial Intelligence and the Problem of Control. Penguin Books, 2020.
