July 3, 2024
Why AI is not getting the green light for sustainability
by Stephanie Fitzgerald • AI, Comment, Environment, SF, Technology
We tend to think of online and digital solutions such as AI as more sustainable and eco-friendly than their offline equivalents. We have email signatures reminding us to think before hitting print and are encouraged to send e-cards with seasonal greetings to save the trees. Our general preference for all things online means that we rarely question, or even consider, the environmental impact of our computing. It’s also how the truly enormous environmental impact of AI has, thus far, stayed off our radar.
It is one of those concepts that isn’t obvious until it is pointed out, and then it is so obvious as to be embarrassing: AI is having an enormous impact on our environment. Given that AI systems require a huge amount of electricity to run, an equally huge carbon footprint is inevitable. Researchers at the University of Massachusetts Amherst examined several natural language processing models (the technology behind tools such as ChatGPT) and found that training a single large language model can generate a carbon footprint of around 600,000 pounds of CO2 emissions. That’s the equivalent of roughly 125 round-trip flights between New York and Beijing.
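For readers who like to see the sums, here is a rough back-of-envelope check of that comparison, written as a small Python snippet. The per-passenger flight figure is my own illustrative assumption, not part of the cited research.

```python
# Rough sanity check of the carbon comparison above (illustrative only).
LBS_PER_KG = 2.20462

training_co2_lbs = 600_000                         # figure quoted above
training_co2_tonnes = training_co2_lbs / LBS_PER_KG / 1000

# Assumed per-passenger, round-trip economy emissions for New York-Beijing;
# roughly 2 tonnes of CO2 is a common ballpark, treated here as an assumption.
round_trip_tonnes_per_passenger = 2.2

equivalent_flights = training_co2_tonnes / round_trip_tonnes_per_passenger
print(f"{training_co2_tonnes:.0f} tonnes of CO2 ≈ {equivalent_flights:.0f} round trips")
# ≈ 272 tonnes ≈ 124 round trips, in line with the 125 quoted above
```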
But carbon is not the only footprint that AI is stomping on the planet; we also need to talk about water. All the energy that AI’s vast data centres and servers consume generates extreme heat, and that heat has to be cooled. Cooling requires water. A lot of water. Researchers estimated that training a model such as GPT-3 used around 700,000 litres of freshwater. That’s more than you and several households on your street will collectively use in a year. And asking ChatGPT to help you with a simple work task, a short exchange of prompts and responses, could cost around 500ml of freshwater.
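The water figures can be sanity-checked in the same way. The household usage below is an assumed average (roughly 350 litres per day), not a number from the cited study.

```python
# Rough sanity check of the water comparison above (illustrative only).
training_water_litres = 700_000            # figure quoted above for training GPT-3

household_litres_per_day = 350             # assumed average household usage
household_litres_per_year = household_litres_per_day * 365

household_years = training_water_litres / household_litres_per_year
print(f"{training_water_litres:,} litres ≈ {household_years:.1f} household-years of water")
# ≈ 5.5 household-years: you plus several neighbours for a year

# At roughly 500ml per short exchange, that training water would also cover
# about 1.4 million everyday ChatGPT requests.
```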
Of course, the energy consumption does not stop there. Large language models (LLMs) only remain useful and relevant if they are constantly updated, meaning each model requires ongoing training, and the accompanying resources, in order to function effectively and meaningfully. As the models grow, so does the energy needed to sustain them, and this is where the environmental impact hits hardest.
One of the many benefits of AI that we are sold is its ability to combat the ongoing climate crisis, for example by identifying extreme weather conditions and mapping the likely locations of wildfires so that warnings and resources can be directed far sooner. But what if the systems designed to help us are significantly warming the very planet they are trying to save? The desire to be ahead in the AI game sees multiple iterations of these models being developed and utilised every day, so we are talking about the impacts of hundreds, if not thousands, of models being powered by our planet.
I am not a naysayer when it comes to the world of AI. I see many fantastic opportunities to support, augment and enhance our lives. But I do see an ethical dilemma when it comes to using LLMs for, forgive me, nonsense. Whilst we may giggle at the Pope in Balenciaga or remain stunned by the incredibly realistic Morgan Freeman deepfake, knowing the cost to the planet makes them significantly less funny. This raises the question: if we are going to risk a negative environmental impact on the planet, shouldn’t we be imposing much tighter regulations on how and why AI can be used?
Many companies I work with express an appetite for AI and a desire to be ahead of the game, utilising the competitive advantage that AI can offer them. These same companies have environmental and sustainability departments, workstreams and targets. Does it make sense to hold these two completely conflicting ambitions side by side without first considering how one may influence the other?
I’m not saying we shouldn’t use AI, although it is estimated that around 80% of tasks currently using AI do not actually require it; there are other, often more established and efficient, systems already in place to do the job. However, no one wants to be left behind. No one wants to be the loser in the AI race, and this desire to win an as-yet-unnamed prize may cause us to turn our backs on policies and ambitions that have taken decades to create and embed. We cannot lose sight of the bigger picture and the greater good.
Before implementing AI within our businesses, we need to establish clear boundaries on what it will be used for and by whom. We need to challenge the use of AI, ask why we need it, and be able to justify the cost to our employees, our shareholders and our grandchildren. Whilst AI can generate multiple images of alternative planets, it can’t actually bring them to life. We have to be more responsible in our engagement with AI, and we need to do it now.
Dr Stephanie Fitzgerald is an experienced Clinical Psychologist and Health and Wellbeing Consultant. Stephanie is passionate about workplace wellbeing and strongly believes everyone can and should be happy at work. Stephanie supports companies across all sectors to keep their employees happy, healthy, safe and engaged. Follow her on Instagram @workplace_wellbeing