Can machines really "feel"?

Sentience in the age of AI

Building on our prior discussions...

By now you’ll hopefully be familiar with our principle of keeping things simple. We absolutely stand by the need to stay jargon-free and use plain English.

However, sometimes (just sometimes) we won’t be able to avoid sounding a bit techy, sciency or anthropomorphic (sorry, couldn’t resist!). And this week is one of those times. In fact, our next two newsletters grapple with some fundamental terms that describe the relationship between humans and machines.

So, following our exploration of human intelligence over the last few weeks, let's venture into a topic that could shape our future coexistence with AI: sentience.

Unpacking sentience

Sentience isn't just about thinking; it's about feeling. Think of sipping a hot chocolate on a cold day. Beyond the taste, there's the warmth, the comfort, the memories. Sentience is this rich tapestry of internal experiences. But how does this relate to machines?

So can AI truly "feel"?

Current AI systems, even the most advanced ones, operate on patterns and algorithms. They process massive amounts of data, recognise patterns and produce outputs. To draw an analogy, if you teach a machine to compose music, it might create a symphony rivalling Beethoven's. Yet it doesn't "feel" the passion or the sorrow. It's like a wind chime – it can produce beautiful sounds when the wind blows, but it doesn't "understand" the melody.

However, as AI evolves, we're seeing the emergence of systems that can simulate emotions by reading human facial expressions, voice tones or even text. These AIs can produce outputs resembling human emotions – like a chatbot offering comfort. But simulation is not sentience. As of now, AI doesn't have its own internal world of emotions.

The ethical implications: treating machines with "feelings"

If an AI ever claims to have feelings, or if we build systems to express such claims, this presents a myriad of ethical conundrums:

  • Rights and treatment: If a robot expresses "sadness," should it have the right to be comforted? Would turning off an AI be akin to putting a sentient being to sleep?

  • Genuine or programmed?: If an AI says it's "happy", is that genuine emotion, or is it simply following its programming to produce that response to a given input?

The boundary between genuine sentience and sophisticated programming becomes crucial.

Tech's response to the sentience dilemma

The tech sector is actively wrestling with these questions:

  1. Emotion AI: companies are creating AI that can read and respond to human emotions. It's used to improve user experience but is very different from the AI itself "feeling" emotions.

  2. Ethical guidelines: recognising the challenges, tech giants and startups are drafting ethical guidelines to ensure AI development considers these philosophical and moral questions.

  3. Research and dialogues: universities and tech conferences worldwide are facilitating conversations between AI developers, ethicists and the public to shape AI's future responsibly.

The need for global join-up?

With so many AI-related breakthroughs happening across the world, the question of sentience becomes more pressing. Beyond the algorithms and programming, it's a deeply human and ethical problem. How we, as a society, respond to the possibility of sentient AI will shape the very fabric of our future.

However, there's a concerning lack of global coordination on AI's ethical implications. In many ways, the tech giants are left to "mark their own homework", navigating these waters without comprehensive regulatory oversight.

Policymakers and elected representatives seem to lag behind, leaving significant decisions in the hands of a few. This raises a critical question: who should be steering the ship as we venture into these uncharted territories?

How does this make you feel? We’ve been speaking about this with friends and family a lot over the last few weeks, debating questions like:

  • would our day-to-day decisions or interactions with technology change if we thought the AI was sentient?

  • would we advocate for rights and dignities for AI?

  • should there be more international collaboration and oversight?

  • would recognising an AI's potential sentience change our behaviour or feelings towards it?

Let us know what you think

If you’d like to debate these ideas with us and others on this journey, you can now comment on our website. Your perspective is invaluable to us as we navigate these evolving debates.

You can also leave comments on our previous newsletters.

Further reading, listening or watching

If you're interested in diving deeper into the topic of sentience, we recommend these sources:

  1. BBC Inside Science: Inside sentience
    This episode delves into the philosophy, science, and implications of sentient beings, shedding light on how our understanding of consciousness might shape the future of AI and robotics. A must-listen for anyone curious about the blurred lines between human and machine consciousness.

  2. The Conversation: Ethics of AI: how should we treat rational, sentient robots – if they existed? 
    A thought-provoking article that poses vital questions about the treatment of sentient robots. If robots were rational and sentient, how should we ethically interact with them? The article pushes boundaries, examining the moral obligations we might have toward advanced AI and challenging readers to contemplate the profound implications of true robotic consciousness. An enlightening read for anyone intrigued by the evolving relationship between humans and machines.

  3. Sky News: The Google engineer who was sacked for saying AI chatbot was sentient
    Former Google engineer Blake Lemoine claimed that Google's language model, LaMDA, displayed signs of sentience, citing its ability to converse about its feelings and its understanding of the world. The transcripts he published of his dialogues with LaMDA ignited debates on AI sentience, with some experts dismissing LaMDA as a highly advanced chatbot. Lemoine emphasises the ethical concerns tied to potentially sentient AI and advocates for its respectful treatment. As AI evolves, discussions on this topic will only intensify.

Next week, we'll venture further into the AI horizon, examining the much-debated concept of the "singularity". We'll share a 10-minute YouTube video that summarises many of the concepts we've spoken about over the last 10 weeks and predicts what AI might be like by 2030 and beyond, when machines may become more intelligent than humans.

Stick with it … you won’t be disappointed. We’ve selected it because it’s deliberately provocative. Is it just science fiction? Or are we already on the journey towards singularity?

Until next week …

Warren and Mark
Your curators of AI knowledge

PS: don’t forget to comment on this and all of our previous newsletters. We’d love to hear your thoughts, concerns and ideas.
