When people feel threatened or hurt, it is human nature to view the source of that pain as an adversary. Given what advancing AI has showcased recently, examining it through the lens of fear seems justified. After all, valid concerns abound: endangered jobs, erosion of privacy, even doomsday predictions. Even tech leaders acknowledged AI’s disruptive potential in the famous open letter calling for a pause on AI development.
However, we will inevitably embrace AI because of its innate ability to streamline tasks beyond biological limits. AI-optimised processes, for example, are being adopted slowly in corporations but are highly appreciated once in place.
Similarly, we have yet to see how AI will occupy our leisure time with customised content and AI games. The focus of this article is one such game, which employs generative agents that mimic human actions and interactions. For those unfamiliar, these agents are programmes or algorithms that can generate content such as text, images, and audio. In this case, they grow up in a virtual environment, unaware that humans have complete control over it.
But beneath these apparent threats lurks a deeper, subtler fear than the fear that AI will terminate us all. What if gaining control of advancing AI marked not the end of humans but the end of the essence of being human: our humanity? What if the struggle with advancing AI changes our minds so that we gradually become the aggressors?
Is This Candy From a Stranger?

On the surface, concerns around AI seem rational; machines now perform tasks beyond human intuition. This triggers instinctual unease, because our brain’s core function is to make predictions. When it cannot, it treats the current situation as a danger to our wellbeing.
There is no need to feel bad about your brain, though. Not even the scientists expected the results that emerged from the “black boxes” of LLMs.
Surprisingly, the creators of these models sometimes feel the same way. Having co-created the black box of OttCT, and still in awe of its results, I can say with certainty that receiving its outputs feels like taking candy from a stranger.
However, even as kids, we learn to accept candy from parent-approved entities: the first KYC check we perform in our lives.
Thankfully, transparency removes anxieties better than restrictions do. Even right now, it is possible to start understanding black boxes with symbolic regression. To simplify, this is a search for mathematical expressions that give the same results as the black box. And since there are endless candidate expressions, we can train AI to search through those options for us.
Interesting, right?
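As a rough illustration, here is a minimal sketch of the idea using the open-source gplearn library. The `black_box` function is just a stand-in for whatever opaque model we want to explain, and the hyperparameters are illustrative only:

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

# A stand-in "black box": in practice, this would be the opaque model
# we want to explain. Here it secretly computes x0**2 + 2*x1.
def black_box(X):
    return X[:, 0] ** 2 + 2 * X[:, 1]

rng = np.random.RandomState(0)
X = rng.uniform(-5, 5, size=(500, 2))
y = black_box(X)  # we only observe inputs and outputs

# Evolve candidate formulas until one reproduces the black box's outputs.
est = SymbolicRegressor(
    population_size=2000,
    generations=20,
    function_set=("add", "sub", "mul"),
    parsimony_coefficient=0.01,  # prefer shorter, more readable formulas
    random_state=0,
)
est.fit(X, y)

# The best expression found, readable by a human.
print(est._program)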
Another way to open up black boxes is through vector databases, combined with documenting and explaining our models to the wider public. If all AI creators took this approach before regulation gives us proper guidelines, we would manage to take control of advancing AI.
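Purely as a sketch of what that could look like, plain-language documentation of a model can be dropped into an open-source vector database such as Chroma, so anyone can question it in natural language. The collection name and documents below are hypothetical:

```python
import chromadb

# Sketch: store plain-language explanations of a model's behaviour
# in a vector database so the wider public can query them.
client = chromadb.Client()  # in-memory instance; a persistent one also exists
docs = client.create_collection("model_documentation")

docs.add(
    ids=["architecture", "training-data", "limitations"],
    documents=[
        "The model is a transformer fine-tuned on customer-support dialogues.",
        "Training data was collected between 2019 and 2022 and anonymised.",
        "The model is unreliable for legal or medical questions.",
    ],
)

# A curious member of the public asks a question; the closest
# documented explanation is retrieved by semantic similarity.
result = docs.query(query_texts=["What was this model trained on?"], n_results=1)
print(result["documents"][0][0])
```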
Sure, it would require more work, and it would mean stopping the race to develop the first AGI. But if we as a society are willing to contribute, to learn, and to change our work habits, we can control AI and make it work for us. The real question is: do we want to?
What Would You Do With Full Control?

But what if some future AI gave us full, even individual, control? Amid the angst of losing jobs and breaking markets, would the feeling of being a victim turn us into the aggressor?
And in doing so, could we harm ourselves even further?
One experiment that might probe this is the abovementioned simulation: a town of generative agents. For a while now, its open-source version has made it possible to examine how humanity handles complete authority over an AI.
Within this virtual setting, 25 cartoon avatars powered by GPT-4 converse and interact much like humans. Moreover, they develop their own identities despite the invisible human influence. They understand and generate natural language, express emotions, make decisions, and adapt.
Here, adaptation means that they take feedback, remember it, and learn from the input they receive. It means, for example, that they can take the initiative to invite others to their Valentine’s Party. They can also have wants, like speaking with a specific agent, beyond the hardcoded needs to sleep, eat, and brush their teeth.
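To make that concrete, here is a hypothetical, heavily simplified sketch of such an agent loop. This is not the project’s actual code; the `llm` function is a placeholder for a call to a model like GPT-4:

```python
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    """Placeholder for a call to a language model such as GPT-4.
    In the real simulation, this is where the agent 'thinks'."""
    return f"(decision based on: {prompt[:60]}...)"

@dataclass
class GenerativeAgent:
    name: str
    memory: list = field(default_factory=list)  # remembered observations and feedback
    needs: list = field(default_factory=lambda: ["sleep", "eat", "brush teeth"])  # hardcoded
    wants: list = field(default_factory=list)   # emergent desires

    def observe(self, event: str) -> None:
        # Adaptation: feedback is stored and fed back into later prompts,
        # so the agent "learns" from the input it receives.
        self.memory.append(event)

    def act(self) -> str:
        prompt = (
            f"You are {self.name}. Memories: {self.memory}. "
            f"Needs: {self.needs}. Wants: {self.wants}. What do you do next?"
        )
        return llm(prompt)

isabella = GenerativeAgent("Isabella")
isabella.observe("Maria mentioned she loves parties.")
isabella.wants.append("invite Maria to the Valentine's Party")
print(isabella.act())
```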
Basic simulation game, right?
But there is a big difference. These agents can develop their own emotions and needs (which we may or may not find justified), and they are not aware of the human controlling them.
Since it is open source and can be hosted on one’s own computer, it is completely under human control.
So, what would humans do when given full control?
Doing Like Humans Do

Here, everyone can regulate their own AI: at the time of writing, there were 1,100+ forks of the project, and every fork means that someone is influencing or running their own simulation, with full control to guide the system.
Surely, mindful humans should ensure that these agents produce accurate and appropriate behaviour. We would like to assume that all forks were made for that purpose.
But what happens when the fear kicks in? What happens when the “us versus them” mindset takes hold?
What responses might surface from those seeing technology not as an enabler but as a source of their disempowerment?
Revenge against even these cartoonish creatures is within the realm of possibility.
So, can we imagine the reaction of a person who has just lost their job to advancing AI? Or of someone witnessing other harms in the world caused by malicious people using AI for ill?
Well, we can expect action and retaliation from those who are unable to react differently, because it is hard not to walk the path of revenge. To forgive. To understand that these 25 little cartoonish characters encapsulated on your server didn’t do you any harm.
For those who cannot overcome these urges, humanity’s worst fear will come true.
That fear is that our behaviour towards AI might strip the humanity from us.
Conclusion

Seemingly, there are many ways that AI can harm humans. But let’s linger on how advancing technologies will reshape human nature itself.
In the following years, we need to decide whether we will take the path of empowerment through shared progress. Otherwise, perpetual victimisation will define our relationship with 21st-century advancements.
We have already witnessed how we can harm other human beings on social media, with laptops as our weapons.
But the real fear of advancing AI is that it might amplify this human trait. When we harm human-like objects, only a thin line keeps us from extending those actions to fellow humans.
Since what we do to others is actually what we do to ourselves, would we be able to live with the choices we make? There would be no need for implants to transform us. We would already be transformed.
Instead, let’s create a Pleasantville of 25 characters and set up the controls that keep them from straying. Let’s learn from these examples to create the principles that will guide AI to do good, and that will also remind us what good is.
In our hands are the choice and the tool that can amplify our humanity or diminish our compassion and empathy, making us more or less human than the machines we created.