
From deepfakes to hallucinations and false sources, discover how AI is making it easier to propagate climate change denial and spread disinformation
Do offshore wind turbines cause the deaths of whales? The answer is no, but thanks to the unsettling and ever-growing sophistication of artificial intelligence (AI), many unsuspecting members of the public were misled last year by AI-generated images, created by the conservative think tank the Texas Public Policy Foundation, suggesting otherwise.
This is just one example of AI being weaponised to provide seemingly real information on a whole host of climate topics. Whether the misinformation serves a particular company or group, or attempts to defame a celebrity or public figure, the result is the same: more incorrect and potentially damaging information about the planet in circulation.
And as AI’s use continues to grow – with many of these tools now free for anyone to use – the speed at which misinformation can be disseminated is only increasing.
So, how exactly is AI being harnessed as a means to spread climate change denial and misinformation? And what can be done to stop it?
Pulling up AI’s sketchy citations
According to the World Economic Forum’s latest global risks report, misinformation and disinformation are the biggest short-term risks the planet faces over the next two years. For clarity, the two terms have slightly different meanings: misinformation is the unwitting spread of incorrect information, whereas disinformation is the deliberate spread of false information.

And misinformation is something AI is increasingly prone to spew. In the context of the technology, these errors are known as ‘hallucinations’ – incorrect or misleading outputs, often caused by insufficient training data, flawed assumptions made by the model or biases in the data it was trained on.
Hallucinations can appear convincing to the everyday user – with AI software known to fabricate links to sources or web pages that never existed.
Even Google, whose generative AI chatbot Gemini was formerly known as Bard, says that models can generate outputs that are ‘seemingly plausible’ but factually incorrect.
In the context of climate change denial, this is especially worrisome. Fabricated ‘facts’ or sources with no credibility can be generated and then used by climate deniers to spread false information.
More concerning still, this information can be spread more convincingly and persuasively than ever before – produced quickly and cheaply, and targeting social media and search engines alike.
Even if a user wishing to spread malicious information about the climate isn’t the most adept writer, the sophistication of the latest AI models means that, by inputting several examples of articles or writing styles, they can produce relatively persuasive text in the same tone – social media posts, or lengthier blog articles – in just a few seconds.
Again, this is troublesome for climate change denial: according to one study, AI models can be fed reams of content that gained high traction or went viral in climate denier forums or amongst conspiracy theorists, then pump out similarly written text to further fuel the false narrative.
This requires little effort from those seeking to push such information, breaking down the obstacles historically required to mass-produce content.
The scale at which generative AI can produce content – varied, personalised and often convincingly human – also bolsters the online presence of climate denier communities. If a handful of users can produce reams of content under various pseudonyms, what is in reality a small cluster of climate change deniers can appear much larger and more influential than it really is.
AI pseudo-scientists
For those seeking to spread climate misinformation, there is yet another avenue through which AI can be exploited for ulterior motives. As well as infiltrating social media platforms, AI-generated content has been found to have entered the body of scientific literature about our planet.
In a study conducted by Harvard University, researchers discovered that GPT-fabricated studies appeared in searches on Google Scholar, an academic search engine used as a source for research. This GPT-fabricated content – which focused primarily on the topics of the environment, health and computing – emulated the style of scientific papers, sitting alongside genuine, peer-reviewed scientific papers in the search engine’s database.

According to the study, the titles of these papers feature general keywords and ‘buzzwords’, including the phrase ‘climate policy’.
What’s more concerning is that these papers are difficult – if not impossible – to remove from the databases they sit in, having already spread in multiple copies across archives, repositories and social media.
Ultimately, falsified studies on such important topics threaten to undermine the integrity of the scientific record, which has long been viewed as a trustworthy space in which facts can be separated from falsehoods.
The emergence of deepfakes
As well as text, other forms of AI-generated media – doctored videos, imagery and audio – are quickly becoming more sophisticated, further fuelling the spread of climate misinformation and climate change denial. In particular, ‘deepfakes’ – videos or pictures of celebrities or public figures altered so that they appear to be doing or saying something that never happened – have risen in popularity.
Senior research scientist at the Allen Institute for AI, Jesse Dodge, has expressed concerns that these deepfake videos and pictures will be used to ‘accelerate’ climate misinformation.

And Dodge’s concerns are well founded: only last year, climate activist Greta Thunberg found herself at the centre of a deepfake storm.
A doctored clip of her allegedly endorsing ‘vegan grenades’ and ‘sustainable tanks and weaponry’ to promote a book titled Vegan Wars during a BBC interview made waves across social media – but in reality, her appearance on the news channel was to speak on climate anxiety, being vegan and her book The Climate Book.
Many commenters online appeared to believe the video was authentic, again reinforcing just how quickly deepfakes can alter public perceptions of both individuals and the topics at hand.
So how is it possible to tell facts from fakes? Deepfake videos can be difficult to spot, but there are steps you can take to check whether a video is legitimate. Deepfakes often fail to reproduce the natural, real-life physics of human features: inconsistencies in facial movements such as blinking, as well as in a video’s lighting and shadows, can help discern real from fake. Lip movements can also be a giveaway – checking whether the audio aligns with them can help verify whether a video is genuine.

Another startling example of how quickly fake imagery can spread misinformation about our planet occurred back in March 2023, when the Texas Public Policy Foundation – a conservative, pro-fossil fuel think tank – used AI to create a fake image of a dead whale and wind turbines.
Featured in the foundation’s newsletter, the image propelled the disinformation that whale deaths and beachings were caused by the renewable energy source, despite no scientific evidence that wind farms have any significant impact on whales.
Such false rhetoric was spread to further the foundation’s aim of halting plans to build new offshore wind projects on the US East Coast and in the Gulf of Mexico.
A false scientific paper linking whale deaths to wind turbines – allegedly produced by the University of Tasmania – was also shared on social media via a post in a Facebook group titled No Offshore Wind Farm for the Illawarra. The editor of the journal in which the paper falsely claimed to have been published described the spread of such disinformation as a ‘culture war’ aimed at diminishing the wind sector.
So what now?
While AI’s potential for misuse in spreading misinformation and climate change denial should be well understood, the technology can also be a force for good for the planet.
From models trained to measure changes in icebergs 10,000 times faster than a human – helping scientists understand how much meltwater icebergs release as a result of global warming – to those that create detailed maps of ocean litter in remote locations, AI’s beneficial uses for our world are far-reaching.

But as with any new advancement or technology, it’s important to understand exactly how it functions, and be aware of its limitations and potential repercussions.
‘We should stop looking at AI through the “benefit-only” analysis and recognise that in order to secure robust democracies and equitable climate policy, we must rein in big tech and regulate AI,’ said Oliver Hayes, head of policy and campaigns at Global Action Plan.
Taking a more nuanced view of AI – one that acknowledges its positive impact while also recognising its current and future misuse, especially in spreading climate change denial and misinformation – might help ensure the technology is integrated into society in a more holistic way.
Strong AI regulation – along with users developing a keener eye for spotting false content – will help ensure that our knowledge of the planet remains cemented in scientific fact, not fiction.