AI isn’t turning robots into humans, it’s turning humans into robots

In all the conversations, debates and shouting matches about AI that continue to dominate the internet, there is much talk about the insidious danger of anthropomorphising AI. There is something chilling about the deliberate stumbles, inflections and hesitations that are put into AI communications to try to convince people that they are talking to a sentient being. Explanations of AI deliberately use language such as ‘the model understands’ to make us believe that AI is developing a human level of learning, as that is more appealing than saying ‘the algorithm’s predictions are expanding’ (and neatly glosses over the increasing error rates and hallucinations). However, amongst all the noise, I’m paying less attention to how AI is seemingly becoming more human, and more attention to how we are using AI to become robot-like.
Generative AI is now being used to communicate between humans in some of the most intimate and nuanced situations. Take dating apps. Your opening gambit when speaking to someone online is a way to represent yourself as you are, and to uncover more about the person you have connected with. However, with people using AI to craft their bios, tweak their photos and construct their messages, this isn’t so much human connection as two chatbots flirting with each other. Whilst this may appear harmless, the world of online dating is already fraught with challenges, and there are ethical concerns over the impact of such disengaged behaviour.
Is it right that you could be communicating with an algorithm doing its absolute best to manipulate you into liking the human behind it? If someone types your interests into ChatGPT with the prompt ‘write something I can use to flirt with this person’, are you getting any insight into their character, other than the fact that they are incapable of initiating an authentic conversation? If you knew this, of course, it would have red flags all over it; however, the whole point of AI at the moment is that you would have no idea.
The problem isn’t only a romantic one. We are turning to AI to avoid any difficult or challenging emotion. If we have a tricky client to respond to, a sensitive email to write or a weighty report that we can’t be bothered to wade through, we are using AI to ‘solve’ the issue. Whilst this may seem efficient, all we are doing is rapidly deskilling ourselves and setting ourselves up for future failure. It is human nature to want to avoid difficult emotions, and many of us would do anything to circumvent conflict and swerve distress. However, we cannot AI ourselves out of our own lives. We need to be able to have difficult conversations; we gain confidence when we stand up for ourselves or overcome challenges. Our brain’s neuroplasticity and reward centres are engaged when we do hard things, so we need to do them ourselves, not delegate them to technology.
More than a feeling
There is also the issue of emotional disruption to consider. I have watched the rise of griefbots in horror, especially those aimed at children. Grief is a complex process that brings with it many challenges and difficulties. Of course, we would all wish to avoid saying goodbye to someone we love, and for many of us, losing a partner or family member is our worst nightmare. However, as the saying goes, there are only two certainties in life, and one of them is death. We have to be able to process a death, and griefbots, designed to mimic the person who has died and allow us to continue interacting with ‘them’, are dangerous and disruptive. Whilst the thought of keeping that person in our lives is overwhelmingly tempting, we have to remember that this is not what griefbots are doing. They are algorithms that use probability to mimic the language and output of an individual who is no longer alive.
You are not keeping a loved one close; you are keeping a programme of code and algorithms. The potential for exploiting the vulnerable is immense. How long before that griefbot charges a hefty subscription, or the relative you lost encourages you to buy an expensive product because the company behind the bot has just signed a lucrative collaboration? If we keep someone ‘alive’ via digitally enabled death avoidance, when is it ever OK to let them go? Will we need to grieve twice, once for the human and once for the bot?
We seem ever-determined to remove ourselves from the human experience, delegating painful emotions to AI. But we need to keep hold of our emotions and our human experience in order to retain control and grow our skillset. In a wildly unregulated and unsafe environment, AI simply cannot be trusted to take over, nor should we want it to. Our incredible spectrum of emotions is a uniquely human experience and we need to feel it all.
Part of good mental health management is learning how to cope with challenging experiences and discovering that we can get through tough times. These experiences allow us to build coping strategies and face the future with confidence and resilience. People often cite a mental health crisis, but using AI is not the answer. If we continue to hand over the ‘tough stuff’ to technology, then we are not utilising AI, we are relying upon it, whilst diminishing our own capabilities. The way out of our crisis is to face our emotions, with evidence-based support should we need it, so that we evolve through our experiences. We need adaptation and growth, not algorithms.
There’s no doubt AI is impressive, but it is not a patch on the human brain. Our brains are capable of incredible nuance, subtlety, learning and connection. We are hard-wired to communicate and bond with others. So perhaps we should stop trusting billionaires with our emotions and instead trust the more than 86 billion neurons making over 100 trillion connections in our brains. Let’s embrace a truly human experience.
Dr Stephanie Fitzgerald is an experienced Clinical Psychologist and Health and Wellbeing Consultant. Stephanie is passionate about workplace wellbeing and strongly believes everyone can and should be happy at work. Stephanie supports companies across all sectors to keep their employees happy, healthy, safe and engaged. Follow her on Instagram @workplace_wellbeing