Now one of the fastest-growing forms of adversarial AI, deepfake-related losses are expected to soar from $12.3 billion in 2023 to $40 billion by 2027, growing at an astonishing 32% compound annual growth rate, according to Deloitte. Deepfakes are expected to become increasingly common in the coming years, with banking and financial services as their primary target.
Deepfakes typify the cutting edge of adversarial AI attacks, achieving a 3,000% increase in the last year alone. Deepfake incidents are projected to rise by 50% to 60% in 2024, with 140,000 to 150,000 cases predicted globally this year.
The latest generation of AI-powered apps, tools and platforms gives attackers everything they need to create deepfake videos, voice impersonations and fraudulent documents quickly and at low cost. Pindrop's 2024 Voice Intelligence and Security Report estimates that deepfake fraud aimed at contact centers is costing an estimated $5 billion annually. Their report underscores how severe a threat deepfake technology is to banking and financial services.
Bloomberg reported last year that "there's an entire cottage industry on the dark web that sells scamming software from $20 to thousands of dollars." A recent infographic based on Sumsub's Identity Fraud Report 2023 provides a global view of the rapid growth of AI-powered fraud.
Enterprises aren't prepared for deepfakes and adversarial AI
Adversarial AI is creating new attack vectors no one anticipated and producing a more complex, nuanced threat landscape that prioritizes identity-based attacks.
It's no wonder that one in three enterprises has no strategy in place to address the risks of an adversarial AI attack, which will most likely begin with deepfakes of their key executives. Ivanti's latest research shows that 30% of enterprises have no plans for identifying and defending against adversarial AI attacks.
The Ivanti 2024 State of Cybersecurity Report found that 74% of enterprises surveyed are already seeing evidence of AI-powered threats. The overwhelming majority, 89%, believe AI-powered threats are just getting started. Of the CISOs, CIOs and IT leaders Ivanti interviewed, 60% are concerned that their enterprises are not prepared to defend against AI-powered threats and attacks. Using deepfakes as part of an orchestrated strategy that includes phishing, software exploits, ransomware and API-related vulnerabilities is becoming increasingly common. This aligns with the threats security professionals expect to become more dangerous due to generative AI.
Source: Ivanti 2024 State of Cybersecurity Report
Attackers focus their efforts on deepfakes against CEOs
VentureBeat regularly hears from enterprise cybersecurity CEOs, who prefer to stay anonymous, about how deepfakes have progressed from easily identified fakes to recent videos that look legitimate. Voice and video deepfakes appear to be a favored attack strategy aimed at industry executives, with the goal of swindling their companies out of tens of millions of dollars. Adding to the threat is how aggressively nation-states and large-scale cybercriminal organizations are doubling down on developing, hiring and growing their expertise with generative adversarial network (GAN) technologies. Of the thousands of CEO deepfake attempts that have occurred this year alone, the one targeting the CEO of the world's largest advertising firm shows how sophisticated attackers are becoming.
In a recent interview with The Wall Street Journal, CrowdStrike CEO George Kurtz explained how improvements in AI are helping cybersecurity practitioners defend systems, and also commented on how attackers are exploiting it. Kurtz spoke with WSJ reporter Dustin Volz about AI, the 2024 U.S. election and threats posed by China and Russia.
“The deepfake technology today is so good. I think that’s one of the areas that you really worry about. I mean, in 2016, we were tracking this and you could see that people were actually just talking to bots, and this was in 2016. And they’re literally arguing or promoting their cause, and they’re having an interactive conversation, and it’s like there’s no one behind it. So I think it’s pretty easy for people to buy into what’s real, or there’s some narrative that we want to support, but a lot of this can be driven and has been driven by other nation states,” Kurtz said.
The CrowdStrike Intelligence team has invested significant time in understanding the nuances of what makes a deepfake convincing and in which direction the technology is moving to achieve maximum impact on viewers.
Kurtz continued, “And what we’ve seen in the past, we’ve spent a lot of time investigating this with our CrowdStrike intelligence team, is it’s kind of like a pebble in a pond. Like you take a topic or you hear a topic, anything related to the geopolitical environment, and the pebble falls into the pond, and then all these ripples ripple out. And that’s this amplification that happens.”
CrowdStrike is known for its deep expertise in AI and machine learning (ML) and its unique single-agent model, which has proven effective in driving its platform strategy. With such deep expertise in the company, it's understandable how its teams would experiment with deepfake technologies.
“And now, in 2024, having the ability to create deepfakes, and some of our internal people have done some funny parodies of me and just to show me how scary it is, you couldn’t tell that’s not me in the video. So I think that’s one of the areas that really worries me,” Kurtz said. “There’s always concerns about infrastructure and things like that. Those areas, a lot of it is still paper voting and things like that. Some of it isn’t, but how do you create a false narrative to force people to do things that the nation state wants them to do, that’s an area that really worries me.”
Businesses must rise to the challenge
Businesses run the risk of losing the AI war if they don't stay at parity with attackers' rapid pace of weaponizing AI for deepfake attacks and all other forms of adversarial AI. Deepfakes have become so commonplace that the Department of Homeland Security has issued a guide, The Growing Threat of Deepfake Identities.