How AI Undermined South Africa’s AI Policy

South Africa has withdrawn its first draft artificial intelligence policy following the discovery of fabricated quotes in a document that appears to have been partly created by AI.

The recall, which came after the revelation of the policy framework’s falsified references, is more than a bureaucratic embarrassment; it is the kind of gaffe that makes you do a double take.

You have to ask yourself: wait, a policy meant to regulate AI was itself undermined by AI? That is embarrassing, to be sure, but it is also instructive, because it is a cautionary tale.

South Africa’s communications and technology minister Solly Malatsi told an audience last week that he suspected AI-generated references had been included in the draft policy document without proper verification and review.

“The integrity of the policy framework has been compromised,” Malatsi said in a statement on the issue. You do not need AI to recognise that some things are a bad idea, such as using it without human supervision. It is like a seat belt: you only notice it was missing once the crash has happened.

The policy framework was ambitious. Earlier this month, South Africa proposed a series of new institutions aimed at promoting AI development, including a National AI Commission, an AI Ethics Board, and an AI Regulatory Authority, alongside tax incentives, grants, and potential local development funding.

In other words, Pretoria wanted to be at the forefront of AI adoption in Africa, something that would require the government not only to get its ducks in a row, but also to avoid the appearance of moving quickly without proper verification.

The alarm was raised after News24 revealed that some quotes in the draft were apparently fabricated. This matters because fake references do more than make citations difficult to find or verify.

They lend false claims a veneer of academic credibility, provide cover for unethical practice, and mislead the public into believing that policy rests on evidence when in fact it is smoke and mirrors.

In a policy document about ethics, bias, data sovereignty and digital rights, that is not a small flaw; it is a stain that will linger in public memory.

The big point is not that South Africa should stop trying to lead in artificial intelligence. Far from it. South Africa has already begun building the necessary institutional capacity and infrastructure, with a National AI Policy Framework opened for public comment in 2024 to address AI’s economic opportunities and governance challenges. That should not be forgotten.

For all the problems surrounding the withdrawn draft, the need to govern AI remains. AI is reshaping finance, education, the public sector and the media; hoping that regulation can simply wait would be an illusion disguised as patience.

This episode also holds lessons for every government agency, law firm, university and newsroom considering generative AI: you are the last line of defence for anything you publish. That sounds obvious, I know, but it is exactly where things break down.

If the outline looks good, the references look academic and the language sounds authoritative, everyone tends to assume it must have been checked. And that is when it comes back to bite you.

Credibility erodes easily, and once a policy framework is suspected of resting on fiction, the debate is no longer only about what the policy says but about who verified its sources.

What is really at stake here? Trust, not political embarrassment, although there is plenty of political embarrassment too.

Nevertheless, Malatsi’s decision to withdraw the draft policy was the right one, even if doing so caused embarrassment and political pain. A national artificial intelligence strategy should be built on solid sources, not on erroneous quotations that went unquestioned until a journalist checked.

South Africa has an opportunity to turn this embarrassing situation to its advantage by ensuring that draft policy proposals are independently vetted and that records of policy reviews are published.

Human review should also be made mandatory in the final stages of drafting, so that the text is verified before it goes out for public discussion.

South Africa also needs clear guidelines on how and when AI may be used in drafting policy. That may not make headlines, but it matters for policy governance, especially for the governance of AI itself.
