May 22, 2024
Some questions about AI, a world drowning in content and the human centipede of creativity
by Mark Eltringham • AI, Comment, Technology
One unintended but welcome result of the new fixation with AI is that many of the people who became experts on the workplace in 2020 are now experts on AI. You’ll find them on social media, and they’ll have written a book about it by May to sit on the shelf alongside the one about hybrid working and The Great Resignation. So, if you want some certainty about where generative AI is taking us, go and talk to one of them, because the people who know the subject best seem to have little or no idea, or only raise even more questions.
Even one of the people behind the most talked-about AI of all, ChatGPT, which last year became the most rapidly adopted technology in history, was still working things out as it went to market. In a Time Magazine interview, OpenAI Chief Technology Officer Mira Murati admitted she had been taken aback by the surge of interest in the app, and conceded the firm wasn’t even sure whether it should release it, because it has a habit of making up convincing-sounding facts and its ethical consequences have yet to be worked out.
“This is a unique moment in time where we do have agency in how it shapes society,” she said. “And it goes both ways: the technology shapes us and we shape it. There are a lot of hard problems to figure out. How do you get the model to do the thing that you want it to do, and how you make sure it’s aligned with human intention and ultimately in service of humanity? There are also a ton of questions around societal impact, and there are a lot of ethical and philosophical questions that we need to consider. And it’s important that we bring in different voices, like philosophers, social scientists, artists, and people from the humanities.”
These doubts have been there for a long time. When asked in 2019 about its business model, OpenAI CEO Sam Altman had this to say:

[Video: Connie Loizos, Silicon Valley Editor of TechCrunch, asks Sam Altman, CEO of OpenAI, about the company’s business model in 2019]
Maybe he was being cute in some way, but there was something very Deep Thought – the answer-dispensing computer of The Hitchhiker’s Guide to the Galaxy – about this response. He can’t provide an answer, but the machine might.
Despite this level of doubt from the people who know the tech best, we already have people providing answers about where we are going with this stuff, when we clearly don’t even know what the questions are yet. And we’re talking about it all while we still haven’t got a grip on social media and the Internet.
What now?
The challenges are already apparent. This piece in Wired unpicks some of them, notably how we are likely to be lulled into believing we are interacting with an intelligence rather than a probability machine trying to please us. By offering up what it thinks we want to hear based on what it can find, it is likely to offer us various forms of misinformation, bias and unpleasantness.
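To make the “probability machine” point concrete, here is a minimal sketch in Python of how a language model chooses each next word. The toy vocabulary and probabilities are invented for illustration – real systems like ChatGPT work over vast learned distributions – but the mechanism of sampling the likeliest-sounding continuation, with no check on truth, is the same in spirit.

```python
import random

# A hypothetical toy model: given the words so far, it only knows how
# *likely* each next word is -- it knows nothing about whether it is true.
next_word_probs = {
    ("the", "capital", "of", "atlantis", "is"): {
        "poseidonia": 0.4,   # plausible-sounding, entirely made up
        "unknown": 0.35,
        "atlantis": 0.25,
    },
}

def generate_next(context):
    """Sample the next word from the model's conditional distribution."""
    probs = next_word_probs[tuple(context)]
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

context = ["the", "capital", "of", "atlantis", "is"]
print(" ".join(context), generate_next(context))
# Most runs confidently print a fabricated "fact": the model optimises
# for plausibility given its training data, not for accuracy.
```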
Some AIs are already running into trouble for plagiarism. There is a more general problem, which I raised in a recent article: because the technology creates from what already exists, its initial impact will be to proliferate content while flattening it out. This is something Mary Harrington describes as Human Centipede culture in this article, which argues that we have already taken this path ourselves, without a technology to massively accelerate it.
AI is already having a retrograde and perverse impact on some aspects of our working lives, according to Karen Levy of Cornell. In this article, she argues that it often incentivises the wrong activities and routinely shifts the burdens of work from employer to employee.
“Across many industries and workplaces, workers’ productivity is increasingly tracked, quantified and scored. For example, a recent investigative report from The New York Times described the rise of monitoring regimes that surveil all kinds of employees, from warehouse workers to finance executives to hospice chaplains. Regardless of the quite different kinds of work, the common underlying premise is that productivity monitoring counts things that are easy to count: the number of emails sent, the number of patient visits logged, the number of minutes that someone’s eyes are looking at a particular window on their computer. Sensor technologies and tracking software give managers a granular, real-time view into these worker behaviours. But productivity monitoring is rarely able to measure forms of work that are harder to capture as data – such as a deep conversation about a client’s problem, or brainstorming on a whiteboard, or discussing ideas with colleagues.
“Firms often embrace these technologies in the name of minimising worker shirking and maximising profit. But in practice, these systems can perversely disincentivise workers from the real meat of their jobs – and also results in them being tasked with the additional labour of making themselves legible to tracking systems. This often takes the form of busy work: jiggling a mouse so it’s registered by monitoring software, or doing a bunch of quick but empty tasks such as sending multiple emails rather than deeper but less quantifiable engagement. One likely result of AI monitoring is that it encourages people to engage in those sometimes frivolous tasks that can be quantified. And workers tasked with making their work legible to productivity tracking bear the psychological burdens of this supervision, raising stress levels and impeding creativity. In short, there’s often a mismatch between what can be readily measured and what amounts to meaningful work – and the costs of this mismatch are borne by workers.”
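As a toy illustration of the mismatch Levy describes, here is a hypothetical scoring function of the kind such monitoring implies – counting only what is easy to count – applied to two very different working days. The weights and numbers are invented; the point is that no choice of weights can see work that never becomes a countable event.

```python
# Hypothetical productivity score built only from easily counted events.
def countable_score(emails_sent, minutes_window_active, tasks_logged):
    return emails_sent * 2 + minutes_window_active * 0.1 + tasks_logged * 5

# A day of shallow but highly legible busywork...
busywork = countable_score(emails_sent=40, minutes_window_active=420, tasks_logged=12)

# ...versus a day spent mostly in deep, unlogged client conversations.
deep_work = countable_score(emails_sent=3, minutes_window_active=90, tasks_logged=1)

print(busywork, deep_work)  # 182.0 vs 20.0: the metric rewards the wrong day
```

The deep conversation about a client’s problem scores zero here, which is exactly the mismatch between what can be measured and what amounts to meaningful work.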
It may not even increase productivity, according to this piece by Eli Dourado, which sets out why the technology may have a huge impact on our lives while having no impact on the economy. He unpicks four key sectors in which you would expect AI and automation to have an effect – housing, transportation, health and energy – and argues that in each case the effects will be minimal.
Even in the one area where it will massively increase output – the amount of content online – people are already used to editing down an unimaginable amount of information to what they need, what confirms their biases or, sometimes, the misinformation they yearn for. Supply already outstrips demand, and demand for content won’t increase however much of it is created. Most of what will be produced will be created and consumed by AI.
“I expect we’ll soon have AI-authored newsletters, virtual celebrities, algorithmically generated movies, and more. We will be swimming in content,” he writes. “There are those who think that more content is a bad thing. We will waste more time. We will be more distracted. But even putting those issues aside, we may be reaching diminishing marginal returns to media production. When I lived in Portugal as a child in the late 1980s, we had no Internet and two TV channels. I don’t know how much more content I have access to today, but it is perhaps a million times more (Ten million? More? I’m not even sure of the order of magnitude.)
“That increase in content is life changing, but if the amount of content increased by another factor of a million because of AI, it’s not clear my life would change at all. Already, my marginal decision is about what content not to consume, what tweeter to unfollow, and more generally how to better curate my content stream.”
This piece was originally published in February 2023.
Mark is the publisher of Workplace Insight, IN magazine and Works magazine, and the European Director of Work&Place journal. He has worked in the office design and management sector for over thirty years as a journalist, marketing professional, editor and consultant.
The image for this article was created by DALL-E.