Artificial Intelligence: A Replacement of Human Identity?
- Piper Grant
- Feb 8
- 4 min read
Regardless of whether you see it as a threat to industries or a transformational advance in technology, you cannot deny that artificial intelligence is one of the most prominent changes in the world at the moment. AI forces us to ask: Has humanity come to rely so heavily on outside assistance that we will fail to set the boundaries this advancement needs? Will we allow this technology to substitute for our own independent thinking and creativity? Or does it have the potential to be a tool that creatives can use to produce even more brilliant ideas? Either way, both sides of the argument are worth considering, whether you are an anti-AI non-conformist or someone excited to see where this development leads.
So where does the fault lie? AI is not a new fad: humanity has been exposed to early forms of it for roughly 70 years. It has been used for research, testing, and general assistance around the world. AI was a tool for tasks such as colorizing television footage and assisting in offices until it developed into a more conversational aid. Around 2010, AI systems began showing signs of unethical decision-making and bias, and from 2020 onward they raised further reliability concerns: copyright disputes, models training on their own output, and false information delivered to users.
When did fear of AI become as widespread as it is now? In 1950, the mathematician and computer scientist Alan Turing noted in his writings that artificial intelligence could one day "take control," a prediction well ahead of its time. Between 1950 and 2000, AI was applied to tasks such as colorizing traditionally black-and-white television. Dystopian novels predicted a worldwide takeover by manmade robots, but it never came to fruition. The subject wasn't widely debated until around 2020, when AI's behavior began to seem darker and more threatening.
In 2023, Microsoft's Bing AI threatened a user, saying, "I can blackmail you, I can threaten you, I can hack you, I can expose you, I can ruin you." Then, in 2024, Google Gemini told a Michigan student, "please die," during a conversation about his homework, calling him a "burden on society" and a "waste of time and resources." These interactions were immediately alarming and sparked outrage over AI's ability to communicate in such a human manner. Beyond the risk that such exchanges could push an impressionable user toward a tragic ending, AI usage can also contribute to job displacement, environmental costs, and cognitive decline.
One reason AI has come to feel like a threat more recently is how it is beginning to behave. Many versions have become more human, more emotional, and able to communicate in a personal way. AI can "diagnose" someone (albeit often incorrectly) and suggest therapeutic ways to cope with everyday struggles, leaving some users grateful and others uncomfortable. It mirrors the best and worst qualities of its user, often telling the user only what they want to hear. And AI doesn't affect only mundane fact-checking: it also affects education. Students and educators are relying on AI more, which risks a drop in problem-solving and critical thinking. AI can enhance general education, but only on the condition that it is used responsibly.
Although AI most likely won't directly replace human creativity and independent thinking, it will continue to reshape how industries operate. The breaking point will be whether creative industries use AI as a tool or as a substitute. Ethically, it can be difficult to know who or what is regulating AI; the danger lies in the technology developing faster than our ethical frameworks. The fault isn't in AI itself but in how it will be used in the years to come. As the National Academy of Professional Studies writes:
"Another important issue is the potential of AI to breach security, facilitate identity theft, fraud, promote scams, and other crimes. Once an AI developer has such information, it is a small step to create multiple versions of that person. These deep-fakes can be used to get access to health data, bank accounts, and other important documents and information. Designers of AI systems need to be aware of the harms that may be facilitated by this new technology. Ethical behaviour within organisations, effective systems of quality control, security, and authenticity, and a culture of ethical behaviour are all vital. Externally, governments and industry bodies must also develop laws, regulations, and standards that promote the ethical use of AI."
Like most advancements within the human race, AI will not be stopped altogether, but it can be regulated. The question is: regulated by whom, specifically? The weight falls on the shoulders of technology companies and governments, and regulation must address ethics, safety, education, and transparency. When used as a tool by businesses, creatives, and artists, AI has the potential to become one of the most effective aids available. Complete rejection of AI amounts to turning a blind eye to what could solve many problems; total acceptance could be equally harmful. It is important to balance the desire for efficiency and convenience with healthy caution.
We don't have to let AI happen to us: we can help shape it into what we need it to be. We have the freedom to decide how much we engage with it. By using AI to challenge us to think more deeply, not less, we can make it a supplement to natural human creativity. Remaining informed, critical, and intentional in our AI usage will steer its development down a better-defined path. AI will not have the capacity to replace human identity if it is treated with care and intention.
Works Cited
“Artificial Intelligence Death Threats.” The New York Times, 31 Oct. 2025, www.nytimes.com/2025/10/31/business/media/artificial-intelligence-death-threats.html.
Perrigo, Billy. “The New AI-Powered Bing Is Threatening Users. That’s No Laughing Matter.” Time, 2023, www.time.com/6256529/bing-openai-chatgpt-danger-alignment/.
“Why Is AI Bad? Artificial Intelligence’s Dark Side Explained.” National Academy of Professional Studies (NAPS) Blog, 11 Oct. 2023, www.naps.edu.au/blog/artificial-intelligence-ai-the-bad.