As someone who works at the intersection of communications and responsible AI, I spend a lot of time thinking about how emerging technologies are explained, sold, feared, embraced and misunderstood. Nowhere is that more palpable than in conversations about AI and the future of work, where certainty is sometimes projected before it’s earned. Over the past few months alone, taking part in debates at both the Westminster Employment Forum and the University of Cambridge, I’ve been struck by just how wide the spectrum of opinion still is. Depending on who has the floor, AI is framed either as a magical productivity fix or an existential threat to jobs. The reality probably lies somewhere in the middle.
At the Westminster Forum, one tech leader boasted that AI had slashed his team’s research time by 40 percent. A union rep countered that 95 percent of firms see “zero returns” on such investments (he even cited a company that had to rehire staff after an automation project flopped). The same rep argued that AI isn’t so much replacing jobs as downgrading them, an excuse to cut costs and push employees into precarious shift patterns.
Framed this way, the ‘future of AI and work’ discourse sometimes makes it feel as though you’re required to pick a side: for or against, optimist or sceptic, technophile or Luddite. I say the danger is assuming something this complex can be reduced to a binary choice.
Slow down
Academic voices contribute to a more nuanced perspective. At the Westminster Forum, Professor Joanna Bryson argued that fears of mass AI-driven unemployment are overblown. Historically, automation has tended to create new jobs and markets in its early phases. The real issue, she said, is who benefits. If AI’s productivity gains aren’t shared, the technology could widen inequality and spark social unrest.
Trinh Tu, managing director of public affairs at research firm Ipsos Mori, highlighted that UK employees are far more “nervous than excited” about AI’s workplace impact. About 60 percent of the public want the government to slow down AI development to safeguard jobs, among their other concerns. Tu suggested this anxiety comes from people feeling unprepared – companies teach employees how to use AI (well, some do), but not when to doubt or question it. We need to cultivate “skills for understanding” AI, not just skills for using it, she stressed.
Laura Hawksworth, head of policy & impact at The Careers & Enterprise Company (CEC), the national body focused on careers education and improving young people’s career readiness and employability, considered the future workforce. She highlighted the severity of the UK’s digital skills gap, estimated to cost the economy £63 billion a year, alongside employers’ concerns that many new entrants lack advanced technical capabilities. Hawksworth called for much earlier, real-world exposure to technology in schools so young people enter work AI-literate. The same emphasis, I would argue, is needed for those already well established in their careers.
Heave-ho
At my Cambridge residential this month, a review of a House of Commons Library report on AI and the labour market seemed to both enhance and undermine what I’ve heard so far. Far from offering reassurance (sorry!), the report paints a picture of a labour market that has been slowing since 2022, with unemployment rising to 5.1 percent and youth unemployment reaching 16 percent. It frames this downturn as a broader trend rather than a direct consequence of AI adoption. In effect, AI is positioned as a future problem layered onto a labour market that is already under pressure. Not so much the storm itself but another cloud in a dark sky.
In that vein, the report categorises roles according to their potential exposure (risk) to AI. Clerical and secretarial occupations are identified as among the most exposed. By contrast, professions requiring high levels of creative judgement, professional accountability and complex decision-making are classified as facing much lower exposure. Architecture, for example, is listed as a profession with minimal AI risk.
Minimal AI risk? Adrian Malleson, head of research at the Royal Institute of British Architects, offered a different perspective. He noted that AI use within architecture has already accelerated, speeding up aspects of design work while simultaneously raising serious concerns about the erosion of entry-level roles that traditionally serve as training grounds for the profession. While Malleson was clear that no AI can replicate an architect’s creative judgement (or take on legal responsibility, for that matter!), he was equally clear that the profession is already being reshaped by AI.
I’ll admit it made my eyes twitch when the report concluded that AI is not in itself currently having an adverse effect on employment, because that is not how it feels and certainly not what I’m hearing from the organisations I work with. Perhaps AI functions less as the cause than the justification. It sounds better to say “we’re innovating” than “we’re not making enough money”. Add to that a steady stream of headlines about job losses and “AI taking over” and no wonder we’re a ticking anxiety bomb. Official data, media narratives and lived experience are all pulling in different directions.
Every cloud
Back at the Forum’s close, a chairperson implored us not to “lose sight of the opportunity” AI presents. That optimism – the idea that AI will free us from drudge work and unleash creativity – has merit. But veer too far into techno-utopianism and it becomes easy to overlook the very real, present-day pitfalls. Those are already emerging, from biased hiring algorithms to workforce monitoring. Not to mention the growing volume of bland LinkedIn posts.
Given so many conflicting viewpoints, the most sensible response seems to be a mix of curiosity and caution. As one professor put it, there is “too much certainty in all the wrong places”. Perhaps the more useful starting point is admitting what we don’t yet know and being a little more sceptical about what we think we do. We still don’t really know what makes people tick, so working out what will replace them or not is a step too far.
The good news is that some things are still within our control. We can take practical steps to ensure AI helps us do our jobs better. That starts with improving AI literacy. We don’t all need to be data scientists, but we should have a basic understanding of how these systems work, where they can add value, and where they can fail. It also means upskilling and experimenting. If a tool might make your work easier, it’s worth trying it out, provided your organisation’s privacy and data policies allow it, particularly if it buys back time for the parts of the job where the human bit matters.
The future of work isn’t preordained. It will be shaped by how we choose to adopt and govern this technology. And that makes me optimistic. AI may be a disruptive force, but if we stay informed, adaptable and mentally switched on, we have a chance to ensure this technology works for, not against, us.
February 23, 2026
AI will either save work or destroy it. Apparently.
by Jo Sutherland • AI, Comment
Jo Sutherland is Managing Director of Magenta Associates.