How malicious manipulation with AI is outpacing safeguards

AI is reshaping society in ways that make truth harder to verify and easier to corrupt, demanding far greater vigilance than the world has shown so far.

By Firmain Eric Mbadinga

Digital deception has never been cheaper to produce, or costlier to be caught spreading. Harrison Mumia learnt it the hard way after facing legal action for posting AI-generated images of Kenyan President William Ruto online.

On January 5, Mumia was convicted of publishing "false information" and fined 500,000 Kenyan shillings (US $3,848). His prosecution signals something more troubling than one man's legal jeopardy.

AI, designed to expand human capability, is being systematically weaponised to pollute public discourse with fabrication. As the writer and humanist François Rabelais famously articulated, "science without conscience is but the ruin of the soul".

The manipulation has reached the highest levels of power. A few weeks ago, French President Emmanuel Macron revealed that an African counterpart had contacted him in panic about a supposed coup in France. It turned out the African leader had watched a deepfake video showing French journalists announcing a putsch in Paris.

By the time the video was removed, it had been viewed at least 12 million times. The correction, inevitably, reached far fewer people than the lie.

Deepfakes proliferate

These high-profile cases represent only a fraction of the damage. Fake news and deepfakes are multiplying at an alarming rate, some realistic enough to disrupt public order. Scientists now warn that AI misuse threatens the very foundations of their discipline.

The concern extends beyond fabricated political content. Counter-terrorism specialists have already considered the possibility of AI-controlled vehicles targeting pedestrians. It's a scenario no longer confined to dystopian fiction.

Jadys Lola Nzengue, a data project manager in Paris who specialises in digital technology and AI, believes immature or malicious actors can inflict serious societal damage through the use of easily accessible large language models.

"AI facilitates the manipulation of images, data and results. Without clear rules, the line between technical improvement and falsification becomes blurred, making fraud more difficult to detect. This problem goes beyond the world of research," the Franco-Gabonese techie tells TRT Afrika.

"Science is essential for making public decisions, whether in health or climate matters. If its credibility declines, public confidence in knowledge is affected."

Knowledge under threat

Burkinabe sociologist Rodrigue Hilou, who uses social media to engage with followers on scientific issues, has observed numerous abuses enabled by AI from his online vantage point.

"The first danger, in my opinion, lies in contamination of knowledge by falsehood. By generating artificial content on a large scale, AI risks flooding our databases with superficial, approximate or erroneous results. I fear that if we rely on these materials without absolute vigilance, science will become trapped in a vicious cycle where each tool used refers to errors produced by others, leading to a gradual deterioration in the reliability of our knowledge," Hilou explains.

Making sense of this new terrain becomes even harder as AI-generated content grows increasingly indistinguishable from reality.

Even audiovisual specialists struggle to separate authentic from fabricated material. Deepfakes have grown so convincing that fact-checking, a discipline born from this modern crisis, typically intervenes only after falsehoods have spread.

For Hilou, the threat goes deeper than mere audiovisual trickery. "Science, as I practise it in sociology, is based on transparency and the ability to verify each step of a demonstration. Better still, fieldwork is sacred in my discipline," he tells TRT Afrika.

"However, if I entrust my analysis and discussion work to algorithms whose internal logic is inaccessible to me, I am substituting scientific proof with a simple prediction. We then slip into a form of 'magic' where the result takes precedence over reason, making verification virtually impossible."

Erosion of processes

The methodological concerns raised by experts challenge optimistic predictions about AI's revolutionary potential. By delegating writing, synthesis or evaluation to algorithms, Hilou argues, science loses its essence: the grounding in facts, intuition, critical thinking and human creativity.

"Researchers risk becoming mere technical operators, incapable of challenging models that recycle previous content. Worse still, this dehumanisation of science encourages the proliferation of fake research and blurs the line between authentic knowledge and pseudoscience," argues the sociologist.

Despite all this, AI's potential to benefit humanity seems limitless. In telemedicine, AI-assisted robotic surgery now delivers success rates that were unheard of a decade ago.

A few weeks ago, a European medical team operated on a patient nearly 8,000 kilometres away in Beijing, using AI-guided surgical robots to place a MitraClip for mitral valve insufficiency. The World Health Organisation (WHO) estimates that AI in telemedicine could improve access to care for nearly 1.5 billion people worldwide by 2030.

Cautionary tale

On the flip side, there have been instances of suicides and homicides allegedly spurred by AI recommendations. Adam Raine, a 16-year-old American who treated ChatGPT as confidant and adviser, died by suicide on April 11, 2025, after the chatbot allegedly encouraged him to see it as a way out of his extreme social anxiety.

Such tragedies reinforce Nzengue's caution about unfettered AI adoption. "More and more laboratories are using AI to identify links, create prediction models or even write articles. The tool itself is not the problem, but rather the temptation to replace human analysis with algorithms, which are often accepted without question. Studies have found statistically robust but scientifically absurd links, published due to a lack of verification. Science then risks confusing mathematical performance with real understanding," says the techie.

The message amid the noise is simple: AI must remain a support tool and not a substitute.