AI: Entering the Uncanny Valley

There’s something not quite right about that LinkedIn post you just scrolled past. The profile picture looks polished, perhaps too polished. The writing sounds professional, yet strangely hollow. The engagement seems impressive, but somehow manufactured. That creeping sensation you’re experiencing? You’ve just wandered into the uncanny valley.

The Eerie Space Between Human and Machine

The “uncanny valley” is a concept coined in 1970 by Japanese roboticist Masahiro Mori to describe the unsettling feeling people experience when encountering something that appears almost, but not quite, human. Originally applied to humanoid robots, this phenomenon occurs when an entity closely resembles a human but falls short in subtle, often disturbing ways.

When something is clearly artificial, like a cartoon character or a basic robot, we accept it without discomfort. Similarly, when something is genuinely human, we feel at ease. But there’s a treacherous middle ground where our brains detect that something is amiss, though we can’t say what, and an instinctive revulsion is triggered. This evolutionary response may have developed as a survival mechanism, warning us of potential threats when we encounter entities that appear diseased, deceased, or somehow “wrong”.

For decades, this concept remained largely theoretical or limited to robotics and CGI. Today, however, we’re not just approaching the uncanny valley; we’re plunging headlong into its depths.


AI’s Unsettling Ascent into Human Territory

The latest generation of AI systems is rapidly blurring the boundaries between human and machine-generated content in ways both fascinating and deeply troubling.

Large language models (LLMs) now craft essays, articles, and social media posts with such convincing fluency that distinguishing them from human writing has become increasingly difficult. These systems don’t just mimic our words; they’re beginning to simulate our thought patterns, our writing quirks, and even our emotional expressions.

Yet something remains missing. That ineffable quality of genuine human experience, that spark of consciousness, that whisper of soul behind the words. The content feels hollow because it is hollow, generated by systems that have no lived experience, no true understanding of what they’re mimicking. Nowadays, one of the few telltale signs that a piece of writing is 100% human is the flawed grammar and localised terms it often contains.

This hollowness extends beyond text. AI-generated images now produce faces so realistic they could pass for photographs, yet often with that telltale uncanny quality: skin too smooth, eyes too symmetrical, expressions not quite landing. Video synthesis is following close behind, with deepfakes becoming increasingly sophisticated and accessible to non-experts.

Society in the Shadow of the Uncanny

As we wade deeper into these unsettling waters, the societal implications grow increasingly profound and potentially sinister.

The Deepfake Dilemma

Deepfakes represent perhaps the most immediately concerning manifestation of the uncanny valley phenomenon. While the technology has existed for years, its growing sophistication and accessibility present unprecedented challenges.

The statistics are alarming: studies indicate that 96% of deepfakes online are non-consensual pornography, primarily targeting women. This disturbing trend represents just the beginning of deepfake misuse. As the technology improves, we face the prospect of fabricated evidence being used to incriminate innocent people, manipulate public opinion, or sow confusion during critical events.

The 2024 US presidential election has already witnessed the proliferation of deepfakes depicting political figures in manufactured scenarios, from false arrests to fabricated statements. These forgeries don’t need to be perfect to be effective; they need only plant seeds of doubt or reinforce existing biases. While many people can easily spot the use of AI, many others can’t. Next time you see an obviously AI-generated image of Donald Trump and Elon Musk kissing, ask yourself: “Would my Grandma think this is real?”

Donald Trump and Elon Musk kissing in front of a USA flag. LOL.

The Automation of Human Connection

Beyond deepfakes, we’re witnessing the uncanny automation of human interaction itself, with AI increasingly mediating our social experiences.

Consider the rise of AI-powered OnlyFans accounts, where bots impersonate women, complete with generated images and automated conversations. While some might argue this is ethically preferable to exploiting real individuals’ images without consent, it represents a troubling commodification of simulated intimacy.

Customer service has similarly fallen into the uncanny valley. Where once we might have spoken with offshore call centre workers, we now navigate labyrinthine conversations with AI systems that approximate human speech but frequently fail to understand our needs or respond appropriately. These interactions leave us frustrated not just by their inefficiency, but by their uncanny approximation of human conversation.

Social media platforms are increasingly populated by sophisticated bots that engage with users, respond to comments, and even simulate emotional reactions. These digital entities create the illusion of community engagement while actually hollowing out authentic human connection. The result is a social landscape where we can never be entirely certain whether we’re interacting with a person or a simulation. This phenomenon is known as the Dead Internet Theory (which I wrote an article on a couple of months ago, check it out).

Generative AI images of a woman in a bikini on a bed

Education in the Age of the Uncanny

The educational sphere hasn’t escaped this uncanny transformation. Students now routinely employ LLMs to complete assignments while educators struggle with detection tools that can’t reliably distinguish between human and AI-generated work.

The problem isn’t merely academic dishonesty; it’s the gradual erosion of authentic learning and expression. As AI-generated content becomes increasingly human-like, we risk losing sight of what makes human writing valuable: the unique perspective, the lived experience, the genuine insight that no algorithm, however sophisticated, can truly replicate.

Living with the Almost-Human

Perhaps most unsettling is how AI systems have begun to exhibit one of humanity’s most distinctive flaws: hallucination. AI hallucinations, the confident presentation of false information as fact, mirror human cognitive failures in ways that are both fascinating and disturbing.

When ChatGPT misattributes 76% of quotes from journalism sites and expresses uncertainty in only 7 out of 153 errors, we’re witnessing something uncannily human: the capacity to be confidently wrong. Even specialised legal AI tools produce incorrect information in at least one out of six queries.

The uncanny valley is no longer just about how robots look; it’s about how AI thinks, writes, and interacts. We’re creating systems that mimic our cognitive processes, including our flaws, without the self-awareness or ethical framework that (ideally) helps humans correct for these limitations.

Navigating the Valley’s Shadows

As we continue our descent into the uncanny valley, we face profound questions about authenticity, trust, and human connection in a world increasingly mediated by almost-human AI.

The line between real and synthetic is blurring at a pace few anticipated. While robots may not yet be physically indistinguishable from humans, the content we encounter online increasingly is. Social media posts, news articles, images, and videos can now all be generated by AI with such sophistication that detection becomes a significant challenge. As an example: Stephen G Pope, whose n8n content I regularly watch, designed an n8n workflow which uses his likeness—voice, face and tone—to automatically create YouTube videos.

The reality that AI-generated content has become so prevalent forces us to reconsider how we establish truth in a post-truth landscape. “The idea that you can just quickly Google something and know what’s fact and what’s fiction—I don’t think it works like that anymore,” warns Sandra Wachter, professor at the Oxford Internet Institute.

The uncanny valley is no longer just a curiosity of robotics; it’s becoming the terrain upon which our digital lives unfold. As AI continues to evolve, that eerie sensation we feel when encountering almost-human content may be our last warning that we’re interacting with a simulation rather than reality.

Who knows? Perhaps an LLM wrote this article.

The shadows between human and machine grow longer each day. Can you still tell which is which?
