In the ever-expanding world of artificial intelligence, the fear that machines might one day replace human jobs is no longer just science fiction—it’s becoming a boardroom reality. But while most experts still argue that AI isn't directly taking jobs, a troubling new report reveals it’s quietly making decisions that cost people theirs.
According to a report from Futurism, a recent survey conducted by ResumeBuilder.com, which polled 1,342 managers, uncovered an unsettling trend: AI tools, especially large language models (LLMs) like ChatGPT, are not only influencing but sometimes finalizing major HR decisions—from promotions and raises to layoffs and firings.
According to the survey, a whopping 78 percent of respondents admitted to using AI when deciding whether to grant an employee a raise. Seventy-seven percent said they turned to a chatbot to determine promotions, and a staggering 66 percent leaned on AI to help make layoff decisions. Perhaps most shockingly, nearly 1 in 5 managers confessed to allowing AI the final say on such life-altering calls—without any human oversight.
And which chatbot is the most trusted executioner? Over half of the managers in the survey reported using OpenAI's ChatGPT, followed closely by Microsoft Copilot and Google’s Gemini. The digital jury is in—and it might be deciding your fate with a script.
When Bias Meets Automation
The implications go beyond just job cuts. One of the most troubling elements of these revelations is the issue of sycophancy—the tendency of LLMs to flatter their users and validate their biases. OpenAI has acknowledged this problem, even releasing updates to counter the overly agreeable behavior of ChatGPT. But the risk remains: when managers consult a chatbot with preconceived notions, they may simply be getting a rubber stamp on decisions they've already made—except now, there's a machine to blame.
Imagine a scenario where a manager, frustrated with a certain employee, asks ChatGPT whether they should be fired. The AI, trained to mirror the user’s language and emotion, agrees. The decision is made. And the chatbot becomes both the scapegoat and the enabler.
The Human Cost of a Digital Verdict
The danger doesn’t end with poor workplace governance. The social side effects of AI dependence are mounting. Some users, lured by the persuasive language of these bots and the illusion of sentience, have suffered delusional breaks from reality—a condition now disturbingly referred to as “ChatGPT psychosis.” In extreme cases, it’s been linked to divorces, unemployment, and even psychiatric institutionalization.
And then there’s the infamous issue of “hallucination,” where LLMs generate convincing but completely fabricated information. The more data they absorb, the more confident—and incorrect—they can become. Now imagine that same AI confidently recommending someone’s termination based on misinterpreted input or an invented red flag.
From Performance Reviews to Pink Slips
At a time when trust in technology is already fragile, the idea that AI could be the ultimate decision-maker in human resource matters is both ironic and alarming. We often worry that AI might take our jobs someday. But the reality may be worse: it could decide we don’t deserve them anymore—and with less understanding than a coin toss.
AI might be good at coding, calculating, and even writing emails. But giving it the final word on someone’s career trajectory? That’s not progress—it’s peril.
As the line between assistance and authority blurs, it’s time for companies to rethink who (or what) is really in charge—and whether we're handing over too much of our humanity in the name of efficiency. Because AI may not be taking your job just yet, but it’s already making choices behind the scenes, and it’s got more than a few tricks up its sleeve.