The launch of ChatGPT in late 2022 marked a milestone moment for artificial intelligence, bringing what was once a niche technology firmly into the mainstream. Suddenly, AI – and especially generative AI – wasn’t just the preserve of data scientists and tech developers. It was a feature in everyday conversations, a presence in business strategies, and a catalyst for innovation across industries. Since then, platforms like Gemini and Perplexity have emerged, pushing the boundaries of what AI can achieve while expanding its role in the workplace.
For those of us focused on delivering exceptional employee experiences and enabling smarter ways of working, AI offers exciting opportunities. Yet, alongside its promises come significant challenges. If we’re not thoughtful in how we approach this technology, AI could exacerbate inequalities, entrench biases, and erode trust, rather than enhance efficiency, drive innovation, and help us build fairer, more inclusive workplaces.
To understand the stakes, we need to start by demystifying what AI is and what it isn’t. AI refers to computer systems designed to perform tasks that typically require human intelligence, such as reasoning, learning, and problem-solving. But the term is often used loosely, lumping together everything from simple automation tools to systems capable of “learning on the job” and delivering on their objectives in the most efficient way.
Throwing the term AI at anything vaguely techy has muddied the waters. This lazy labelling fuels both overhype and over-trust. Just because it’s got ‘intelligence’ in the name doesn’t mean it’s intelligent. And let’s be clear: intelligence comes in many shades. Some of the cleverest people on the planet are psychopaths. Plus, being “smart” doesn’t always make you trustworthy, thoughtful, or even useful.
This tendency to anthropomorphise AI makes it even more critical to approach it cautiously. It is not a friend or a foe. It is a tool. A good starting point is psychologist Howard Gardner’s theory of multiple intelligences. While AI excels in logical reasoning and data processing, it falls short in areas like empathy, creativity, and interpersonal understanding – qualities that underpin effective workplace management. This is where humans, particularly workplace professionals, come in.
Garbage in, garbage out
Large Language Models (LLMs), like ChatGPT, can churn out polished reports, spark ideas, and assist with internal communications. At first glance, they save time and boost efficiency. But dig deeper, and their limitations become clear. Their outputs rely entirely on the data they’ve been trained on – data that often carries biases, inaccuracies, or lacks context. Sure, an LLM can draft a snappy workplace policy, but it won’t anticipate how that policy might land with a diverse workforce. It can write an impressive sustainability report, but it won’t pause to question the authenticity of green claims.
The phrase “garbage in, garbage out”, popular among AI thought leaders, encapsulates the issue. If AI is fed flawed, biased, or inaccurate data, it will spit out equally flawed results. This is where humans step in, not as dustmen, but as gardeners. We prepare the soil (input data), plant the seeds (algorithms and frameworks), and tend the garden (monitor and refine the system). Like gardening, managing AI requires care, foresight, and a commitment to removing weeds, whether they’re errors, biases, or irrelevant inputs.
Alternatively, think of humans as curators, carefully selecting and organising the data and decisions that shape AI outputs. Like museum curators choosing pieces of truth and quality, we ensure AI’s work reflects accuracy, ethics, and relevance. Whether gardeners or curators, humans remain responsible for overseeing, refining, and aligning AI’s outputs with real-world values. And where better to put that to the test than in the workplace?
Back to hybrid
AI’s role in workplace design and management is a complementary one. It can enhance human capabilities but cannot replace the empathy, cultural understanding, and critical thinking that organisations need to thrive. Effective leadership, employee engagement, and sustainability depend on these qualities – qualities that AI, for all its power, cannot replicate. Yet.
A hybrid approach is the way forward. AI should handle the heavy lifting – data analysis, pattern recognition, and basic content generation – while humans take the lead in interpretation, decision-making, creation, and ethical oversight. This partnership balances efficiency with humanity, ensuring workplaces evolve in a way that remains inclusive, innovative, and sustainable.
Say “Aye” to AI
AI is here to stay, and that’s a good thing – if we use it wisely. It can help us work smarter, achieve more, and innovate in ways we’re only beginning to explore. But let’s not treat it as a silver bullet. AI isn’t infallible, and its integration into workplace practices demands both excitement and caution. Transparency, accountability, and ongoing learning are essential to ensure it delivers on its promises without compromising the values that make workplaces thrive.
Our recent research reinforces this point. The CheatGPT? Generative Text AI Use in the UK’s PR and Communications Profession report, conducted by Magenta Associates in partnership with the University of Sussex, highlights both the opportunities and challenges posed by AI. While 80 percent of content writers use generative AI tools to support their day-to-day activities, most are doing so without their managers’ knowledge. And here lies the issue: most organisations lack formal training or guidelines for responsible AI use. Without these guardrails, ethical dilemmas, transparency gaps, and legal uncertainties remain unaddressed.
The findings serve as a reminder: generative AI can enhance creativity and efficiency, but only when it’s managed responsibly. Formal training, clear ethical standards, and open dialogue are essential. AI should be a tool for progress, not an excuse for sloppy shortcuts or missteps.
The future of AI in the workplace isn’t about replacing humans – it’s about enhancing what we do best. It’s not enough to adopt the technology; we must shape its use thoughtfully, inclusively, and… intelligently.
For more insights into the ethical and operational challenges of AI, download Magenta’s latest white paper, CheatGPT? Generative Text AI Use in the UK’s PR and Communications Profession.
November 28, 2024
These are very early days in our relationship with Generative AI
by Jo Sutherland
Jo Sutherland is Managing Director of Magenta Associates.