Security Researchers Uncover Massive Biometric Security Circumvention Using AI
When a major Indonesian financial institution reported a deepfake fraud incident affecting its mobile app, Group-IB’s threat intelligence specialists set out to determine exactly what had happened. Although this large organization has multiple layers of security, as any regulated industry would require, including defenses against rooting, jailbreaking, and exploitation of its mobile app, it fell victim to a deepfake attack. Despite dedicated mobile application security protections such as anti-emulation, anti-virtual-environment, and anti-hooking mechanisms, the institution still fell victim. I want to repeat this because, like many organizations within and outside the financial sector, the institution had enabled digital identity verification incorporating facial recognition and liveness detection as a secondary verification layer. This report serves as a warning: it shows how easy it is becoming for threat actors to bypass what, until very recently, were considered cutting-edge security protections.
This is how AI bypasses biometric security in financial institutions
Group-IB’s fraud investigation team was tapped to help investigate an unnamed but “significant” Indonesian financial institution following a series of more than 1,100 deepfake fraud attempts used to circumvent the security processes of its loan applications. With over 1,000 fraudulent accounts detected, and a total of 45 specific mobile devices identified as being used in the fraud campaign, most running Android but a handful also using the iOS app, the team was able to analyze the techniques used to circumvent the Know Your Customer and biometric verification systems in place.
“The attackers obtained the victim’s identity through various illicit channels,” said Yuan Huang, cyber fraud analyst at Group-IB, “such as malware, social media and the dark web. They manipulated the identity image by changing characteristics such as clothing and hairstyle, and used the falsified photo to bypass the institution’s biometric verification systems.” The deepfake incident raised significant concerns for Group-IB’s fraud protection team, Huang said, but the resulting research highlighted “several key aspects of deepfake fraud.”
Key findings uncovered by Group-IB research into the AI deepfake attack
Key findings from Group-IB’s investigation into the Indonesian cyberattack include:
Deepfake AI fraud has financial and societal impact
Group-IB investigators determined that deepfake AI fraud of the type used against this financial institution presented a significant financial risk. “Potential losses in Indonesia alone,” Huang said, were “estimated at $138.5 million.” Then there are the societal implications, which include threats to personal and national security as well as to the integrity of financial institutions, with all the economic impact that entails. To reach the $138.5 million figure, Group-IB estimated that around 60% of Indonesia’s population was “economically active and eligible for loan applications,” which equates to around 166.2 million people aged 16 to 70. With a detected fraud rate of 0.05% at the bank analyzed, this produced an estimate of 83,100 fraud cases nationwide, and given an average fraudulent loan amount of $5,000, Group-IB stated that “the estimated financial damages could reach $138.5 million over three months.”
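For readers who want to check the working, the report’s published figures chain together as follows. The final step, relating the total exposure to the three-month window, is my inference rather than Group-IB’s published working:

```latex
% Group-IB's published inputs, chained:
\[
166{,}200{,}000 \times 0.05\% = 83{,}100 \quad \text{estimated fraud cases nationwide}
\]
\[
83{,}100 \times \$5{,}000 = \$415.5\,\text{M} \quad \text{total exposure}
\]
% The reported \$138.5M over three months is exactly one third of that
% \$415.5M total; how the exposure is spread over time is not spelled
% out in the report.
```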
AI Deepfakes and Advanced App Cloning in Play
The report highlights that the fraudsters in this case used fake AI-generated images to bypass biometric verification systems, including the liveness detection protections. “By leveraging advanced AI models,” Huang explained, “face-swapping technologies allow attackers to replace one person’s face with another in real time using a single photo.” This not only creates the illusion of an individual’s legitimate identity on video but, Huang continued, “these technologies can effectively fool facial recognition systems due to their smooth, natural swaps and ability to convincingly mimic expressions and movements in real time.” Fraudsters also exploited virtual camera software to manipulate biometric data, using pre-recorded videos to imitate a real-time facial recognition feed. The use of app cloning further enabled fraudsters to simulate multiple devices, highlighting the vulnerabilities of traditional fraud detection systems.
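It is worth considering what a countermeasure to the app-cloning element might look like. App cloning only pays off if each clone looks like a fresh device, so one server-side check is to cluster enrollment sessions by hardware fingerprint and flag reuse. The sketch below is purely illustrative, not anything from Group-IB’s report or the institution’s actual stack; the field names, data shapes, and threshold are my assumptions:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class EnrollmentSession:
    account_id: str          # hypothetical field names; not from the report
    device_fingerprint: str  # e.g. a hash of hardware-derived identifiers
    liveness_score: float    # 0.0-1.0 from the vendor's liveness check

def flag_cloned_devices(sessions, max_accounts_per_device=3):
    """Flag device fingerprints that enroll suspiciously many accounts.

    App cloning lets one physical handset pose as many 'devices' at the
    application layer, but hardware-derived fingerprints tend to repeat.
    """
    by_device = defaultdict(set)
    for s in sessions:
        by_device[s.device_fingerprint].add(s.account_id)
    return {fp: accounts for fp, accounts in by_device.items()
            if len(accounts) > max_accounts_per_device}

# Usage: 45 devices behind 1,000+ accounts would stand out immediately.
sessions = [
    EnrollmentSession("acct-001", "fp-A", 0.91),
    EnrollmentSession("acct-002", "fp-A", 0.88),
    EnrollmentSession("acct-003", "fp-A", 0.93),
    EnrollmentSession("acct-004", "fp-A", 0.90),
    EnrollmentSession("acct-005", "fp-B", 0.95),
]
print(flag_cloned_devices(sessions))  # {'fp-A': {four account ids}}
```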
AI Deepfakes Have Introduced Unprecedented Security Challenges for Financial Institutions
There is no doubt, Huang said, that the emergence of AI deepfake technologies has introduced unprecedented challenges for financial institutions, “disrupting traditional security measures and exposing vulnerabilities in identity verification processes.”
The Group-IB investigation has certainly highlighted the multifaceted nature of the deepfake problem, covering everything from emulator exploitation to app cloning, all used to help these advanced AI attacks evade detection. “These tactics allow fraudsters to impersonate legitimate users, manipulate biometric systems, and exploit gaps in existing anti-fraud measures,” Huang warned, concluding that “financial institutions must go beyond a single verification method, improving account verification processes and adopting a multi-layered approach that integrates advanced anti-fraud solutions.”
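To make that multi-layered recommendation concrete, here is one way independent verification signals might be combined before a loan application proceeds. Again, a minimal illustrative sketch: the signal names and thresholds are assumptions of mine, not controls described by Group-IB or the institution:

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    # All fields and thresholds below are illustrative assumptions.
    liveness_score: float      # vendor liveness check, 0.0-1.0
    face_match_score: float    # selfie vs. ID document, 0.0-1.0
    device_integrity_ok: bool  # root/emulator/hooking checks passed
    camera_is_physical: bool   # no virtual camera driver detected
    accounts_on_device: int    # from the clone check sketched earlier

def decide(v: VerificationSignals) -> str:
    """Combine independent layers: any single hard failure blocks,
    and combinations of weak signals escalate to manual review."""
    if not v.device_integrity_ok or not v.camera_is_physical:
        return "reject"        # hard environmental failures
    if v.liveness_score < 0.5 or v.face_match_score < 0.5:
        return "reject"        # biometric layer clearly failed
    weak = [v.liveness_score < 0.8,
            v.face_match_score < 0.8,
            v.accounts_on_device > 1]
    return "manual_review" if any(weak) else "approve"

print(decide(VerificationSignals(0.95, 0.90, True, True, 1)))   # approve
print(decide(VerificationSignals(0.95, 0.90, True, False, 1)))  # reject
```

The design point, which is what Huang’s warning comes down to, is that no single layer is trusted on its own: a perfect deepfake that beats the biometric check can still be caught by the environmental and device-reuse layers.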