Protecting your clients in the age of Generative AI and Deepfakes
Crime, corruption and fraud continue to dominate local news feeds as 2024 winds to a close. From the chilling quarter-on-quarter crime update shared by the Minister of Police to the release of the 2024 Insurance Crime Bureau (ICB) Annual Report, the numbers paint a grim picture.
Sickening contact crime statistics
In a country where 957 women and 315 children were murdered over the past three months, South Africa’s citizens could be forgiven for caring less about cybercrime and fraud, but the growing threat these scourges pose cannot be ignored.
The threat was detailed during a recent International Fraud Week webinar hosted by the Institute of Commercial Forensic Practitioners (ICFP), law firm CMS South Africa and the ICB.
A select group of presenters set out to explore the intersection of artificial intelligence (AI) and commercial fraud, with a specific focus on the insurance industry. Mawande Ntontela, a senior associate at CMS South Africa, introduced attendees to generative AI, deepfakes and deep learning. He explained how AI-backed scams were affecting the insurance industry and offered basic mitigation measures that firms and individuals could implement to counter these threats.
Today’s AI and generative AI frenzy is unfolding on the back of improvements in computing power and data storage, coupled with unprecedented access to information. “Experts estimate that over the last 10 to 15 years, the amount of computing capacity being used to train AI systems has increased by a factor of 350 million,” Ntontela said. The result is twofold. First, there has been a massive leap in the development of AI systems and algorithms. Second, significant progress has been made in AI systems’ capacity to mimic iterative human learning and problem-solving.
New approaches to core insurer functions
The fast-paced adoption of emerging technologies has changed how insurers approach core functions like claims assessment, policy administration and underwriting. “Many insurers are already using digital platforms for claims assessment and validation procedures, creating perfect conditions for the implementation of AI as a solution within the insurance industry,” Ntontela said. However, he warned that these conditions are equally appealing to criminals who want to exploit them for illegal activities.
Taking the virtual stage, Garth de Klerk, CEO of the ICB, delved into what many call the dark side of AI. “Criminals are abusing AI to push higher volumes of ‘attacks’ at you in order to get you to do something compromising,” he said, before mentioning some of the popular social engineering techniques in use today. In an extreme example, a criminal syndicate used a combination of email compromise and AI-backed deepfake videos to dupe an employee of a Hong Kong-based firm into making millions of dollars in unauthorised payments.
Being caught by deepfake videos
Local financial and risk advisers may have stumbled across similar deepfake videos in recent interactions with the now infamous Banxso investment platform. According to De Klerk, this outfit solicited funds from consumers using a triple lure: promising high (too-good-to-be-true) rates of return; concealing the fact that its FSCA licence had been withdrawn; and marketing its outlandish promises using deepfake videos featuring Johann Rupert and Elon Musk. “People are being caught by deepfake videos,” De Klerk said. This happens despite the myriad warnings the public receives to exercise extreme caution when confronted with ‘get-rich-quick’ offers.
Ntontela referenced the most recent Aon State of the Market report to frame the threat that widespread adoption of AI and generative AI poses to the broader insurance sector. The report notes: “As companies adapt and explore AI and other emerging technologies, [they will need] an entirely new risk management system and measures to address new challenges.” The principal concerns centre on the negative impact that AI-empowered threat actors will have in areas like cybersecurity, data privacy and governance.
“Threat actors are using new technologies to perpetrate crimes in completely novel and unconventional ways,” Ntontela said. In insurance, this might involve using deepfake techniques to digitally manipulate images and documentation to mislead insurers during claims assessment and validation processes, triggering fraudulent payments. And some criminal syndicates go as far as using two AI models in conjunction. One model employs deep learning techniques to produce accurate, augmented and manipulated content. The other identifies and further manipulates that augmented content to prevent it from being flagged.
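To make the mechanics concrete, the loop below is a minimal Python sketch of the two-model arrangement Ntontela describes: one model fabricates the content, a second scores how detectable it is, and the fake is reworked until it slips under the detection threshold. The generate_fake and detect_fake functions are toy stand-ins for illustration only, not real deepfake or forensic models.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_fake(image: np.ndarray) -> np.ndarray:
    """Toy stand-in for a deep learning model that manipulates an
    image (for example, altering a damage photo or a document scan)."""
    return np.clip(image + rng.normal(0, 0.1, image.shape), 0, 1)

def detect_fake(image: np.ndarray) -> float:
    """Toy stand-in for a forensic detector: returns an 'artefact
    score' where higher means more likely to be flagged as fake."""
    return float(np.mean(np.abs(np.gradient(image)[0])))

def refine_until_undetected(image, threshold=0.05, max_rounds=20):
    """Model A fabricates; model B scores; the fake is reworked each
    round to soften the artefacts the detector keys on."""
    fake = generate_fake(image)
    for rounds in range(1, max_rounds + 1):
        score = detect_fake(fake)
        if score < threshold:
            return fake, rounds, score      # now passes as 'authentic'
        fake = np.clip(fake - 0.5 * score * np.sign(fake - image), 0, 1)
    return fake, max_rounds, detect_fake(fake)

claim_photo = np.tile(np.linspace(0, 1, 64), (64, 1))   # placeholder image
_, rounds, score = refine_until_undetected(claim_photo)
print(f"artefact score {score:.3f} after {rounds} refinement round(s)")
```

The point of the sketch is the feedback loop itself: the detector’s output is fed straight back into the manipulation step, which is what makes the refined output so hard for a claims assessor to spot.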
Crime, fraud taken to the next level
“When these two systems operate in conjunction with one another, you essentially have a deepfake tool that uses an iterative process of refinement to dupe claims assessors into thinking an image produced as a deepfake is authentic,” Ntontela explained. These new techniques are so sophisticated they make advanced Photoshop work look like something your child produced using crayons. Beware, dear reader: crime and fraud have gone to the next level.
AI and other technologies are also being deployed in the fight against crime and fraud. “There are brilliant software applications that can take huge volumes of data and do something in 10 seconds that would take a human operator days to complete,” De Klerk said. Today’s fraud monitoring applications use anomaly detection, machine learning, predictive modelling and natural language processing to flag potential frauds and initiate a more intensive human investigation. AI and automation give humans an edge through the fast, consistent processing of massive data sets, but these systems still rely on human experience and training for their effectiveness.
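For a flavour of how anomaly detection flags claims for human follow-up, the sketch below runs scikit-learn’s IsolationForest over a handful of made-up claim features. The feature set, figures and contamination rate are hypothetical; a production system would combine far richer signals.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per claim: [claim_amount (ZAR),
# days_since_policy_inception, prior_claims_count].
normal_claims = np.column_stack([
    rng.normal(20_000, 8_000, 500),   # typical claim amounts
    rng.normal(400, 150, 500),        # mature policies
    rng.poisson(1, 500),              # occasional prior claims
])
suspect_claims = np.array([
    [180_000, 12, 0],   # very large claim days after inception
    [95_000, 30, 6],    # frequent claimant on a brand-new policy
])
claims = np.vstack([normal_claims, suspect_claims])

# Unsupervised model: isolates points that look unlike the bulk.
model = IsolationForest(contamination=0.01, random_state=0).fit(claims)
flags = model.predict(claims)         # -1 = anomaly, 1 = looks normal

for idx in np.where(flags == -1)[0]:
    print(f"claim {idx}: flagged for human investigation -> {claims[idx].round(0)}")
```

The flagged claims then go to a trained investigator, which is precisely the human-plus-machine division of labour De Klerk describes.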
Concerns over AI replacing humans were once again dismissed outright. “You need human intelligence to translate AI into practical actions,” De Klerk said. This sentiment echoed that of the previous speaker, who debunked the notion that AI would completely replace iterative human learning and problem-solving. “You can teach AI how to imitate and play a Queen hit, but when you ask the same AI model to conceive or create a hit song unassisted, it falls flat on its face,” Ntontela said. AI learns through imitation and observation but cannot yet produce original ideas or concepts.
Collaboration around risk mitigation
Collaboration is necessary to develop risk mitigation measures against deepfake scams. “It will take engagement, dialogue and contributions from many experts across various industry sectors to develop and implement mitigation measures [that will] address the risks associated with AI systems,” Ntontela warned. For insurers, an important first consideration is to fight fire with fire by empowering humans with AI systems that can detect deepfakes and digitally augmented content, whether audio, photo or video. Another key mitigation might be to create additional validation procedures as part of a more comprehensive risk rating system for claims.
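A risk-rating layer of the kind mentioned above could be as simple as a weighted checklist that routes suspect claims to extra validation. The sketch below is illustrative only: the weights, cut-offs and inputs (including the assumed deepfake-detector score) are hypothetical, not a calibrated model.

```python
from dataclasses import dataclass

@dataclass
class ClaimEvidence:
    deepfake_score: float    # 0-1 output of an assumed image-forensics model
    metadata_intact: bool    # file/EXIF data consistent with the claimed event
    claim_amount: float      # claim value in ZAR

def risk_rate(e: ClaimEvidence) -> str:
    """Combine signals into a score and route the claim accordingly.
    Weights and thresholds here are illustrative, not calibrated."""
    score = 0.6 * e.deepfake_score
    score += 0.2 if not e.metadata_intact else 0.0
    score += 0.2 if e.claim_amount > 100_000 else 0.0
    if score >= 0.5:
        return "escalate: request original media plus human review"
    if score >= 0.25:
        return "additional validation: secondary document checks"
    return "standard processing"

print(risk_rate(ClaimEvidence(0.8, False, 150_000)))   # -> escalate
```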
The presenter noted that regardless of your mitigation efforts, “if you are not providing frequent fraud detection training to your claim handlers and assessors, you are tying your hands behind your back and hoping for a good outcome; [if your staff] do not know how to maximise your systems to execute their tasks effectively, [you will struggle to address] AI and deepfake-related risks.” This call to action will no doubt resonate with the assessors and claims staff on the frontline of preventing this type of fraud.
Tips to protect your client
Today’s newsletter concludes with a couple of excellent tips to help you (and your clients) stay safe in a digital world. First, if something sounds too good to be true, it usually is. Second, if you are receiving something for free, there is a good chance you are the product. The latter point refers to the ‘free’ applications and social media accounts many of us use, often sharing reams of personal information and behavioural data with service providers.
“Your information and intelligence are being utilised to build consumer and market behaviour profiles,” concluded De Klerk. His final impassioned plea was for attendees to take extreme measures to protect their digital identities (and those of their loved ones) across all digital applications and platforms. “Whether you are in a physical environment or digital environment, your risk is the same. Please protect yourself and be careful out there,” he said.
Writer: Gareth Stokes
Writer’s thoughts: AI and deepfake technologies present both opportunities and threats to you and your clients. Is your practice ready for the coming wave of AI-backed fraud, and what steps are you taking to mitigate the risk? Please interact with us on X at @fanews_online or email your thoughts to editor@fanews.co.za.