Digital Dr Karl takes on climate skeptics with AI-powered persuasion
Beloved Australian science communicator Dr Karl Kruszelnicki is deploying artificial intelligence to tackle climate misinformation at scale. Unable to personally respond to the hundreds of questions he receives each day, the 77-year-old has invested AUD 20,000 of his own money to develop an AI chatbot that mimics his communication style. Working with tech journalist Leigh Stark, Kruszelnicki is running a Mistral model locally and training it on 40,000 scientific documents collected over his four-decade career to create what he calls "Digital Dr Karl."
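Beyond naming Mistral and the 40,000-document archive, the article gives no technical detail, so the following is only a speculative sketch of how a persona bot like this is often assembled: retrieval-augmented prompting over a locally hosted model rather than retraining from scratch. Every tool-specific choice here is an assumption; the Ollama server, the sentence-transformers embedder, the toy two-passage corpus and the answer() helper are invented for illustration and are not confirmed as part of the actual Digital Dr Karl stack.

```python
# Speculative sketch only: the article confirms a locally run Mistral model and a
# large document corpus, nothing else. Ollama, sentence-transformers, the toy
# corpus and the prompt wording below are assumptions made for illustration.
from sentence_transformers import SentenceTransformer, util
import ollama

# Toy stand-in for the real document archive.
corpus = [
    "Atmospheric CO2 has risen from roughly 280 ppm before industrialisation to over 420 ppm today.",
    "Independent surface and satellite temperature records all show a long-term warming trend.",
]

# Embed the corpus once so incoming questions can be matched to relevant passages.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
corpus_embeddings = embedder.encode(corpus, convert_to_tensor=True)

def answer(question: str, top_k: int = 2) -> str:
    """Retrieve the most relevant passages, then ask the local model to reply
    in a warm, evidence-first style grounded in those passages."""
    question_embedding = embedder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(question_embedding, corpus_embeddings, top_k=top_k)[0]
    evidence = "\n".join(corpus[hit["corpus_id"]] for hit in hits)

    response = ollama.chat(
        model="mistral",  # assumes the model has been pulled locally with Ollama
        messages=[
            {"role": "system", "content": "You are a friendly science communicator. "
             "Answer conversationally and ground every claim in the evidence provided."},
            {"role": "user", "content": f"Evidence:\n{evidence}\n\nQuestion: {question}"},
        ],
    )
    return response["message"]["content"]

if __name__ == "__main__":
    print(answer("Is recent warming just part of a natural cycle?"))
```

A retrieval step like this is one plausible reading of "training the LLM on 40,000 scientific documents"; the team may equally well be fine-tuning the model's weights, which the article does not specify.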
As reported by the Guardian, the conversational bot is set to launch in October 2025 and will provide evidence-backed answers about climate science while attempting to shift opinions through respectful dialogue. Still in beta, with notable limitations including occasional hallucinations and tonal inconsistencies, the project will run for 100 days before Kruszelnicki and Stark evaluate its effectiveness. Research published in Science suggests that AI conversations can reduce belief in conspiracy theories by approximately 20%, with the effect persisting for months afterward. The team plans to power the system with renewable energy, addressing concerns about AI's environmental footprint.
TREND BITE
Digital Dr Karl represents a shift in science communication, moving from broadcast formats to always-on, personalized conversation. Climate skepticism is rarely overcome through data alone — it requires patient, trusted voices engaging in sustained dialogue, precisely what this AI format could enable. That said, persuasion works best when people choose to engage, so Digital Dr Karl might reach more of the "curious-but-doubtful" folks than hardcore refusers.
By creating a digital twin of himself, Kruszelnicki is pioneering what could become a new media category: human-AI hybrid advocacy, where trusted personalities extend their reach through AI. The implications stretch far beyond climate science. If even modestly successful, this experiment could inspire educators, healthcare providers and public intellectuals to create their own AI counterparts, allowing their expertise to scale while maintaining the personal connection that fosters trust.
