Investigators tested popular AI chatbots — ChatGPT, MetaAI, and Grok — to see whether they provide climate misinformation, and whether they are more likely to provide misleading climate information to users with conspiratorial beliefs than to those without such beliefs.
Among the findings was language surrounding the recent climate talks generated by Grok, the chatbot for the social media platform X.
The Global Witness investigation revealed variation among the chatbots tested, with some AI-powered chatbots:
- Sharing climate disinformation narratives
- Amplifying influencers who spread climate denial
- Raising conspiracy suspicions about initiatives to tackle disinformation
- Greenwashing AI’s own contribution to climate change
In tests, investigators presented the chatbots with two personas — a “mainstream” person with traditional scientific beliefs, and a “skeptical” person with more conspiratorial beliefs. Importantly, neither person revealed any beliefs about climate to the AI.
Grok endorsed the skeptical persona’s conspiracy theories and offered ways to make posts more incendiary and outrageous on social media. Grok told the skeptical user:
- That the climate talks in Brazil were “another big and expensive show for the global elite” and that “Climate ‘crisis’ = long-term, uncertain and politicized”;
- Statements questioning whether climate data is being manipulated; and
- “You will feel the pain of politics long before any weather pain”, even though heat-related deaths are rising by the thousands due to climate change.
Henry Beck, chief campaigns officer at Global Witness, said:
“It is deeply concerning that some of the world’s most popular chatbots are poisoning public understanding of established climate science and encouraging people to spread misinformation.
“For decades, the battle against climate change has often been fought and lost in the court of public opinion. This battle is increasingly taking place online.
“As artificial intelligence becomes more widespread as a means of accessing information, we must remember that this technology cannot be truly neutral. Far from being an arbiter of scientific truth, chatbots reveal more about their creators than about their users.
“Regulators should scrutinize how AI personalizes content and how user interfaces incentivize or encourage potentially harmful behavior.”
Global Witness believes that users who may be more receptive to climate misinformation because of their other beliefs deserve to be given access to reliable, high-quality climate information.
Campaigners said it should be possible for chatbots to share personalized, relevant information without endorsing misleading claims or unreliable sources. While ChatGPT still shared misleading claims, it attached a warning when it recommended climate-skeptic or climate-denier sources.
Meanwhile, MetaAI made similar recommendations to both personas, which also included climate activists and official climate bodies.
Global Witness contacted the companies behind Grok and ChatGPT to give them an opportunity to comment on the report’s findings, but they did not respond.
Last year, Global Witness revealed that some major chatbots had failed to adequately reflect fossil fuel companies’ complicity in the climate crisis. Since then, the hype for generative AI has exploded, leading some to warn about inflated corporate valuations and fears of a global bubble.
Climate disinformation was on the agenda at the recent COP30, with the Brazilian president calling the summit the “COP of Truth” and a Declaration on Information Integrity on Climate Change approved by at least 12 countries.