In what can only be described as a spectacular crash at the intersection of bureaucratic ambition and technological reality, South Africa has withdrawn its draft National Artificial Intelligence (AI) Policy. The reason? The document, which was designed to govern the ethical and responsible use of AI across the nation, was partially written by an AI that fabricated its own sources.
Communications and Digital Technologies Minister Solly Malatsi pulled the plug on the framework this week, citing compromised integrity after a local news exposé revealed that the policy was built on “completely fictitious” academic journals.
For a government trying to demonstrate that it has a firm grip on the fast-moving digital economy, it is a profoundly embarrassing, and deeply ironic, misstep.
The “Fictitious” Framework
The trouble began over the weekend when an investigation by News24 highlighted a glaring issue in the reference list of the newly minted policy framework. Several cited academic journals and research papers simply did not exist.
The reality was far more pedestrian than an orchestrated campaign of academic fraud: the drafters had almost certainly used a Large Language Model (LLM) to help write the policy and failed to notice when the chatbot confidently hallucinated fake citations.
In a statement that was both candid and contrite, Malatsi didn’t attempt to spin the failure as a minor administrative glitch.
“This failure is not a mere technical issue but has compromised the integrity and credibility of the draft policy,” Malatsi said, announcing the immediate withdrawal of the document. “The most plausible explanation is that AI-generated citations were included without proper verification. This should not have happened.”
The irony of the situation was not lost on the minister, who acknowledged that the policy was meant to be the very instrument that established guidelines for the responsible use of AI. “In fact, this unacceptable lapse proves why vigilant human oversight over the use of artificial intelligence is critical,” he added. “It’s a lesson we take with humility.”
Malatsi has promised “consequence management” for the drafting and quality assurance teams — bureaucratic shorthand for ensuring someone takes the fall for skipping the most basic of fact-checks.
A False Start for Tech Regulation
The withdrawal brings a sudden halt to what was supposed to be a fast-tracked regulatory win. Earlier this month, the South African Cabinet approved the draft policy for public comment, eyeing an implementation date in the 2027/28 financial year.
Published on April 10, the document opened a 60-day public consultation window. The goal was straightforward: strengthen the government’s capacity to adopt AI responsibly, spur local innovation, and democratize access to AI skills. The Department of Communications and Digital Technologies (DCDT) pitched the framework as an engine for improved public service delivery and expanded digital economic participation.
Before its abrupt cancellation, the policy was structured around six core pillars:
- Capacity and talent development: Building a workforce fluent in future tech.
- AI for inclusive growth and job creation: Ensuring automation adds to the economy rather than hollowing out the labor market.
- Responsible governance: a pillar the DCDT will now have to practice internally before preaching globally.
- Ethical and inclusive AI: Guardrails against bias and exclusion.
- Cultural preservation and international integration: Protecting local heritage while playing on the global stage.
- Human-centred deployment: Keeping human oversight in the loop — a principle spectacularly vindicated by the policy’s own demise.
The Broader Lesson
The collapse of the draft policy is a major warning for regulators worldwide. As governments rush to write the rules for artificial intelligence, they are increasingly relying on the very tools they are trying to understand.
South Africa’s stumble serves as a remarkably transparent reminder: you cannot effectively regulate automated systems if your own quality assurance process can be outsmarted by a chatbot making up academic journals.
The DCDT will now have to return to the drawing board, presumably with a heavier reliance on human fact-checkers and a much more skeptical eye toward its generative assistants.
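For a sense of how low the bar for that fact-checking actually is, here is a minimal sketch of an automated citation check against the public Crossref API. The function name and the crude title-matching heuristic are illustrative assumptions, not anything the DCDT or News24 is reported to have used; a miss flags a citation for human review rather than proving fabrication.

```python
import requests

CROSSREF_API = "https://api.crossref.org/works"

def looks_like_real_citation(title: str) -> bool:
    """Return True if Crossref indexes a work whose title contains
    the cited title. A False result flags the citation for a human
    fact-checker; it does not prove fabrication on its own."""
    resp = requests.get(
        CROSSREF_API,
        params={"query.bibliographic": title, "rows": 5},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    wanted = title.lower()
    # Crude heuristic: does any indexed title contain the cited one?
    return any(
        wanted in " ".join(item.get("title", [])).lower()
        for item in items
    )
```

Crossref does not index every journal, so a check like this is a filter rather than a verdict: anything the lookup misses still goes to a human, which is exactly the "vigilant human oversight" the minister was talking about.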

