The Rise of AI Fact-Checkers: A Double-Edged Sword
Fact-checking has played a pivotal role in curbing what journalist and fact-checker Mayowa Tijani describes as “historical disinformation” — deliberate falsehoods often propagated by politicians to manipulate public opinion. Tijani, who reports for TheCable, observes that Nigeria’s political landscape has shifted: politicians are now more cautious, aware that their statements will be scrutinized and potentially debunked. The fear of being exposed has helped reduce the intentional spread of false narratives.
Yet, disinformation — calculated, strategic lies — is no longer the only, or even the primary, concern. In today’s hyper-connected world, where billions of pieces of content circulate daily, fact-checkers must now also grapple with the far more pervasive challenge of misinformation: falsehoods shared not with malicious intent, but out of ignorance, confusion, or carelessness.
This information overload makes the role of human fact-checkers more crucial than ever. But it also raises a pressing question: what happens when artificial intelligence steps into the role of fact-checker?
One prominent example is Grok, the chatbot developed by xAI and built into X (formerly Twitter). Increasingly, Nigerians — like users around the world — are turning to Grok not just for entertainment or general queries, but to verify claims and check facts in real time. With misinformation spreading faster than ever, many find AI a convenient tool for quick verification.
However, this convenience comes with a caveat: accuracy is not guaranteed.
According to TechCrunch, users in countries like India have started relying on Grok as a de facto fact-checking tool — a troubling trend, given the AI’s tendency to “hallucinate,” or fabricate details. In one notable example, Grok falsely accused NBA star Klay Thompson of vandalizing homes with bricks, apparently misreading posts that said he had been “throwing bricks” — basketball slang for missing shots. Before the correction emerged, the fabricated story had already spread widely, exemplifying how easily AI errors can fuel misinformation rather than stop it.
In Nigeria, Grok has gained traction both as a fact-checking assistant and as a source of comic relief. But experts, including Tijani, caution against overreliance on such tools. While AI models like Grok can be fast and efficient, they are prone to errors — sometimes subtle, sometimes egregious — owing to limitations in their training data and the unpredictability of how they generate responses.
“AI doesn’t understand context the way a human does,” Tijani explains. “It may present information confidently, but that doesn’t mean it’s accurate. That’s dangerous in a society where people are eager for quick answers.”
As AI continues to evolve, its role in information verification will likely expand. But for now, fact-checkers remain essential, not just for countering misinformation, but for ensuring AI tools themselves are held accountable. Because when machines begin to shape our understanding of truth, the stakes are too high to leave unchecked.