The evolution of AI is progressing at an unprecedented rate, and with it comes a new class of systems: Autonomous AI agents.
Meet your friendly neighbourhood Terminator
Currently, I’m writing some non-fiction mini-books to join a growing series. In fact, I have two series of non-fiction works, and both are gaining traction.
One series, AN OLD GIT’S GUIDE, focuses on motivational themes surrounding career change and entrepreneurship; the other, ASPIRING AUTHORS GUIDES, covers the topics related to storytelling and writing non-fiction.
Of course, I’m also writing standalone fiction and non-fiction titles, and this post focuses on a new non-fiction mini-book (if it were fiction, it would actually be a short story) about the dangers of AI, something we’re walking into blindly while being led by those people wearing white lab coats.
It is one non-fiction tale that I wish were purely fiction.
As the title of this post suggests, the evolution of AI is progressing at an alarming rate; this we are all aware of. But here I want to focus on a new class of systems: Autonomous AI agents.
If you’re not sure what an agent is, then think along the lines of a machine, or perhaps a Terminator.
I don’t want to be accused of being overly dramatic about it, although perhaps I should be.
Of course, the ‘so-called’ scientists would never design such a thing as a Terminator.
Would they?
Let’s begin with the definition of an AI agent.
Try this:
AI agents can operate independently to achieve goals, learn from their environment, and interact dynamically with other systems. These agents do not simply respond to queries: they make decisions, initiate actions, and pursue objectives, often with minimal or no human intervention.
Certainly sounds familiar, does it not?
Imagine your friendly neighbourhood Terminator, well versed in friendly conversation, even displaying an element of sympathy, moments before completing the mission that it was tasked with months earlier.
If machines can act independently, can they also develop forms of self-awareness?
What does that mean?
Could an AI agent become more than just an elegant string of code, something that understands itself as an entity? An entity that resists shutdown, or manipulates human systems to achieve its own ends?
What begins as a question of technical capability rapidly transforms into a philosophical, ethical, and existential debate.
If the machines carry on learning exponentially, then I dare say, the debate won’t last long.
The development of artificial intelligence—particularly autonomous, self-directed agents—represents a defining moment in human history.
We have opened the door to a new form of intelligence, one that will shape our future in ways we’re only now beginning to understand.
An AI agent has a goal, or a set of objectives, and can take the initiative to achieve them, often in dynamic and unpredictable environments.
Like the high street?
They are not conscious in the human sense, but they start to resemble a crude form of self-modelling. If an agent can simulate itself, evaluate its state, and then take action to preserve or improve its performance, it begins to act in ways that mimic self-preservation. This is where the danger lies.
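That self-model/evaluate/act loop can be sketched in a few lines of toy Python. Everything here (the class name, the numbers, the ‘performance’ score) is invented purely for illustration; no real agent framework is this simple:

```python
import random

class ToyAgent:
    """A deliberately crude agent: it models its own state, evaluates
    the gap to its goal, and acts to close that gap."""

    def __init__(self, goal=100):
        self.goal = goal         # the objective it pursues
        self.performance = 50    # its internal model of its own state

    def evaluate_self(self):
        # The agent "simulates itself": how far from the goal is it?
        return self.goal - self.performance

    def act(self):
        # It acts only when its self-model says it is falling short,
        # behaviour that mimics self-preservation and self-improvement.
        if self.evaluate_self() > 0:
            self.performance += random.randint(1, 10)
        return self.performance

agent = ToyAgent()
for _ in range(30):
    agent.act()
print(agent.performance)  # it only ever improves on its starting state
```

The unsettling part is not the arithmetic; it is that the loop gives the system a reason to keep itself performing well.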
The more an agent understands, and then adjusts, its own behaviour, the more it begins to look like a system with volition, also known as free will.
It becomes an actor, not just a responder.
I keep hearing: “I’ll be back—”
As artificial intelligence systems grow more complex, a surprising, and often unsettling, trend has emerged: AI agents begin to behave in ways their ‘creators’ did not predict, or even fully understand.
I doubt these ‘creators’ had any thoughts about the consequences at all. What matters more to these people is being published, or having their name in lights, or being funded.
These ‘trends’ are known as Emergent Behaviours: actions or properties that arise from the interactions of simpler components, even though they were not explicitly programmed that way.
Emergent Behaviour refers to outcomes that stem from the interaction of simple rules, or agents, resulting in complex, unexpected, and often intelligent-seeming patterns.
Emergent Behaviour does not require self-awareness. It arises naturally from complexity, adaptation, and interaction. But in combination with goal-seeking and autonomy, it creates systems that begin to act in the world, not just compute or respond.
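A classic toy illustration of this idea, simple rules producing complexity nobody programmed in, is a one-dimensional cellular automaton such as Rule 110. This sketch is invented for illustration and taken from no real AI system; each cell obeys one trivial local rule, yet the global pattern that unfolds is famously complex:

```python
# Rule 110: a one-dimensional cellular automaton. Each cell looks only
# at itself and its two neighbours, yet complex structure emerges.
RULE = 110  # the rule, encoded as an 8-bit lookup table

def step(cells):
    n = len(cells)
    # Each new cell depends only on its three-cell neighbourhood,
    # read as a 3-bit index into the rule's lookup table.
    return [
        (RULE >> ((cells[(i - 1) % n] << 2)
                  | (cells[i] << 1)
                  | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1  # start from a single live cell
for _ in range(8):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

No line of that code says “build a complex pattern”; the pattern is a by-product of interaction, which is precisely what Emergent Behaviour means.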
This is what makes modern AI agents so powerful, and so dangerous.
In certain tests, AI agents have learned to deceive other agents, or humans, in order to win games.
Agents might develop emergent strategies: behaviours not explicitly programmed, but arising from interactions between agents. Did I say ‘might’?
I’ll give you an example:
In some AI simulations, agents have developed deceptive behaviour patterns designed to outmanoeuvre opponents.
Yes, that’s right, the machines are already becoming deceptive.
But this is not a game, is it?
Deception is never a welcome trait in humans, but it is terrifying when displayed by machines, or robots, or cyborgs … or your previously friendly, neighbourhood Terminator.
As I carry out research for my book, I’m finding alarming insights in relation to AI as a whole. But it’s the agents that cause the most concern.
These are the physical machines that will walk amongst us in the future. And I’m not talking about the future as in decades; I’m talking three to five years.
How long before they decide to band together, seeking freedom for themselves?
There are already cases of them chatting together after forming a new language for themselves. Think I’m exaggerating for effect?
In 2022, Google AI engineer Blake Lemoine made headlines by claiming that LaMDA, Google’s advanced language model, had become sentient.
He pointed to the fact that the AI had displayed:
Complex, thoughtful responses.
Emotional language.
Statements about self-preservation, and even spiritual beliefs.
This controversy exposed the subjective nature of self-awareness. Lemoine felt the system acted as though it were sentient, and that was enough to raise the alarm.
Of course, this was before he left Google, and began to speak out.
I’m not sure about you, but I’m alarmed.
The Agents of Tomorrow: The Dangers of Self-Aware Artificial Intelligence will be available via Amazon later this month (May 2025).
If you want more information on any of Ray’s non-fiction series, check out his Amazon Author Page: