[imagesource:canva]
It’s got to the point where we hear the term Artificial Intelligence (AI) so frequently that it barely shocks us anymore.
A bug-eyed, digitally-enhanced pageant crowning Miss AI with a cash prize? Tick! An AI camera that paintballs unwanted guests? Old news.
Before we become too complicit in the global quest for technological advancement, it’s time to look closer to home as we South Africans prepare for a monumental voting weekend at the end of this month. It’s easy to have a relieved giggle that load-shedding has been put on pause while political parties try to get into voters’ good books, but there are still some electioneering techniques that are lingering in the proverbial dark. Most importantly – AI.
We’ve very quickly moved from relatively innocent AI-generated recipes to OpenAI and other AI companies introducing platforms like Sora, which allow the user to generate ‘synthetic’ video content from only a few lines of text. While it’s not a sign to finally invest in a bunker to escape the ‘rise of the machines’, it’s certainly a reminder to familiarise oneself with the legal frameworks that work to protect us.
Journalism stalwarts at Daily Maverick partnered up with Alt Africa to give us Saffas more legal ammunition when it comes to detecting AI-generated content, and suggested that the top three principles to follow for political parties were as follows:
- Any AI policy that is used should be linked to pre-existing policies that protect journalistic integrity
- Parties should foster transparency and release as much information on their use of AI as possible
- Humans should still be supervising the use of AI
While points one and three seem like no-brainers, it’s a little more challenging to enforce point number two, especially in SA, where ‘transparency’ and ‘politician’ appearing in the same sentence can feel like a contradiction in terms. The new DA advert in which our national flag goes up in a blaze, for example, was widely assumed to be AI-generated.
However, a senior executive from one of SA’s biggest advertising companies commented anonymously (and with no punches pulled) that the DA’s recent advert could barely be described as AI.
“This is just stock-standard animation tools used by someone who should be ashamed of themselves… Honestly if it was made using AI it would be considerably better.”
Ouch, fair enough.
If you missed the embarrassing flag ad, local radio legend Lester Kiewit explains the deets:
@capetalk The DA has taken some flak for its latest TV advert depicting a smouldering South African flag, in reference to the ANC-led government. Is this disrespectful towards the flag? Lester Kiewit shares his view… #democraticalliance #flag #southafrica ♬ original sound – CapeTalk
So while political parties are meant to let the public know whether they’re dipping into AI for their ads, we can’t trust them to fulfil the transparency section of the brief. Therefore, it’s imperative to arm yourself with the most crucial detection skills:
- Consistency: Humans make errors, while AI tries not to. If what you’re viewing looks too polished, it could be AI.
- Missing sources: Humans usually cite their sources (as the law requires), while AI often flies sourceless. If you can’t seem to find the original source of info, it could be AI.
- Unnatural emotions: Humans are emotional and unpredictable – we often give this away through our facial expressions. If the footage you’re watching is serving up ‘uncanny valley’, it’s possible it could be AI.
If you see anything dodgy along these lines, you can always report AI misuse via Real411.
We’ve got enough to stress about in SA without worrying about digital spuriousness – keep those peepers peeled for nonsense.
[source:dailymaverick]