Warning over WhatsApp voice notes in South Africa

Technology experts are raising red flags as cybercriminals increasingly turn to generative artificial intelligence (AI) to impersonate individuals and exploit business vulnerabilities.

One method that is gaining traction involves using AI to clone voices, specifically through WhatsApp voice notes, which can be manipulated to create highly convincing scams targeting businesses and individuals alike.

This trend, already observed globally, has begun to impact South Africa, where companies and individuals now face a new level of cybersecurity threat.

Generative AI, initially designed to enhance productivity, automate tasks, and provide rapid analytical support, has unfortunately become a double-edged sword.

While it offers unparalleled efficiency for businesses, its misuse has opened the door to innovative and unsettling types of cyber fraud.

Stephen Osler, Co-Founder and Business Development Director at Nclose, explained last year that the use of AI to clone voices has opened up a new realm of risk for companies and individuals alike.

He highlighted cases such as a 2019 incident in the UK where AI-generated voice cloning enabled criminals to impersonate the CEO of an energy company, leading to a fraudulent transfer of $243,000 (about R4.3 million).

In another case, in Hong Kong in 2021, criminals used AI voice cloning to steal $35 million (approximately R631 million).

This technology, once reserved for sophisticated schemes targeting corporations, is now being directed at everyday users.

Voice-cloning scams are proving alarmingly effective in personal and professional settings.

Common scenarios include fake kidnapping claims, requests for urgent financial help from family members, and emergency messages, each crafted with chilling authenticity.

One area of concern, according to Osler, is the vulnerability that WhatsApp voice notes pose, particularly for executives and high-level employees.

With just a brief sample of someone’s voice, often obtained from social media posts, recorded calls, or even prior voice messages, cybercriminals can create realistic, AI-generated audio clips that sound convincingly like the intended victim.

These cloned voices can be used to direct employees to carry out instructions that seem legitimate, bypassing usual security protocols.

Osler illustrated how easily a threat actor could exploit this in a workplace scenario.

For example, an IT administrator might receive a WhatsApp voice note from someone they believe to be their manager, instructing them to reset a password or provide access to critical systems.

In reality, the voice note is from a cybercriminal, but the familiarity of the voice causes the administrator to comply, unknowingly granting the attacker access to privileged information.

Once inside, the criminal could leverage this access to deploy ransomware or exfiltrate sensitive data, potentially bringing operations to a standstill.

The risk is not limited to executives.

In fact, junior employees, who may lack the experience to spot red flags in communications, are equally at risk.

Cybercriminals know that once they infiltrate a corporate network—whether by using a compromised executive’s credentials or by exploiting an unsuspecting junior staff member—they gain access to an array of sensitive information.

With this foothold, attackers can steal valuable data or hold a company hostage via ransomware, demanding payment to unlock critical systems.

According to William Petherbridge, Manager of Systems Engineering at Fortinet, the rapid shift to digital platforms, fueled by remote work and cloud-based technologies, has turned employees into prime targets.

In today’s digital workplace, employee credentials provide convenient access to shared systems, and that very accessibility poses a significant risk.

If these credentials are compromised, the repercussions can ripple through the entire corporate infrastructure, exposing confidential data and even threatening an organisation’s viability.

Phishing attacks remain one of the most common tactics used to harvest employee credentials.

As Petherbridge explains, phishing emails often mimic the language and tone of high-ranking executives, prompting employees to quickly comply without questioning the legitimacy of the request.

The risk, he adds, can be mitigated by fostering an organisational culture of vigilance and encouraging employees to question any unusual communication, even if it appears to be from a trusted source.

Employees are urged to report suspicious requests through their company’s established channels for phishing and other cyber threats.

The alarming rise of AI-driven scams should act as a wake-up call for organisations to strengthen their security measures and educate employees on identifying potential threats.

It is essential for companies to implement multi-factor authentication, regularly update cybersecurity protocols, and conduct ongoing training sessions for employees at all levels.

Awareness and proactive measures are crucial in staying a step ahead of cybercriminals in this era of increasingly sophisticated attacks.

As technology continues to evolve, so too must the defences businesses and individuals build to protect themselves from the ever-advancing tactics of cybercriminals.