AI is risky business behind the ghost in the machine

citizen.co.za

Friday, February 20, 2026


AI crept in unnoticed, drafting e-mails faster than you can read them, generating code on the fly, automating workflows behind the scenes and empowering support teams with tools that feel harmless – until they aren’t.

This is how shadow AI takes hold, not as a project, but as behaviour. And that is exactly why it has become one of the most dangerous risks most businesses are carrying today.

Shadow AI refers to any artificial intelligence system operating without security oversight, approval or governance.

It includes employees using tools like ChatGPT, Copilot, Perplexity or Claude for client work. It includes AI features silently embedded inside software as a service (SaaS) platforms. It includes teams training internal models on company data without understanding where that data goes. It includes external AI agents with excessive access and bots that can read sensitive information, send e-mails, create files or delete them entirely.

These systems are productive, efficient and invisible. And invisibility is where risk lives.

Threat actors are already weaponising AI in real-world attacks. AI-driven phishing campaigns now scale and adapt faster than human-led operations can. Malware is being generated and reshaped continuously to evade detection.

Self-learning agents are probing cloud environments for weak identity controls. Credentials are abused quietly. Employees are impersonated across e-mail, chat and voice. These attacks are already happening.

Cyber resilience is about understanding the behaviour of machines that act on your behalf. Non-human identities now move data, make decisions and trigger actions at speed. When those identities are not visible or governed, they become perfect entry points for attackers.

A survey of cybersecurity decision-makers showed that 69% of organisations suspect or have evidence of employees using prohibited AI tools. It is also predicted that by 2030, more than 40% of enterprises will experience security or compliance incidents linked to unauthorised shadow AI.

In this context, when Gartner, the global research and advisory firm, refers to “prohibited AI tools”, it means AI applications not formally approved, assessed or governed for business use.

This includes employees using public generative AI platforms such as OpenAI’s ChatGPT, Microsoft Copilot, Anthropic Claude or Perplexity AI without security review or data governance controls in place.

It also covers AI features quietly embedded into approved SaaS platforms, unvetted AI coding assistants, internally trained models connected to sensitive company data, and third-party AI agents operating with excessive privileges.

The risk is that these tools operate outside policy, visibility and oversight. Security teams may have no insight into what data is being uploaded, how outputs are being used, whether intellectual property is being exposed, or how non-human identities are accessing systems.

AI innovation needs to be visible, governed and secured. If your organisation is using AI, officially or unofficially, now is the time to take visibility seriously. You cannot protect what you cannot see.
