
4 Ways to Fight AI-Based Fraud

COMMENT

As cybercriminals refine their use of generative AI (GenAI), deepfakes, and other AI-powered techniques, their fraudulent content is becoming disturbingly realistic, posing an immediate security challenge for individuals and businesses alike. Voice and video cloning doesn't just happen to celebrities and politicians; it is used to defraud individuals and companies of losses amounting to millions of dollars.

AI-based cyberattacks are on the rise, and according to a study by Deep Instinct, 85% of security professionals attribute this rise to generative AI.

The AI fraud problem

Earlier this year, Hong Kong police revealed that a finance employee was tricked into transferring $25 million to criminals during a deepfake video call with multiple participants. Although this type of sophisticated deepfake fraud is still quite rare, advances in technology are making it easier to carry out, and the huge profits make it a potentially lucrative venture. Another tactic is to target specific employees with an urgent phone request from someone pretending to be their boss. Gartner now predicts that by 2026, 30% of companies will consider identity verification and authentication solutions "unreliable," largely due to AI-generated deepfakes.

A common type of attack is the fraudulent use of biometric data, an area of particular concern given how widely biometrics are used to grant access to devices, apps, and services. In one example, a convicted fraudster in Louisiana managed to use a mobile driver's license and stolen credentials to open multiple bank accounts, deposit fraudulent checks, and purchase a pickup truck. In another case, IDs created without facial-recognition biometrics on Aadhaar, India's flagship biometric ID system, allowed criminals to open fake bank accounts.

Another form of biometric fraud is also on the rise. Instead of impersonating real people's identities as in the previous examples, cybercriminals inject fake biometric evidence directly into a security system. In these injection attacks, attackers manipulate the system into granting access to fake profiles. According to Gartner, the number of injection attacks increased by a staggering 200% in 2023. A related technique is prompt injection, which involves tricking customer service chatbots into revealing sensitive information or even allowing attackers to take over the chatbot entirely. In these cases, no convincing deepfake footage is required.

There are several practical steps CISOs can take to minimize AI-based fraud.

1. Prevent caller ID spoofing

Deepfakes, like many AI-based threats, are effective because they work in combination with other proven fraud techniques such as social engineering and fraudulent calls. For example, almost all AI-based phone scams rely on caller ID spoofing, in which a fraudster's number is disguised as that of a known caller. This boosts credibility, which plays a crucial role in the success of these scams. Stopping caller ID spoofing effectively pulls the rug out from under fraudsters.

One of the most effective methods is to change the way operators identify and handle spoofed numbers. Regulators are catching up: in Finland, the regulatory authority Traficom has taken a clear lead with a technical guide to preventing caller ID spoofing, a move being closely watched by the EU and other regulators worldwide.
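The operator-side defense can be illustrated with a toy example. The sketch below mimics the idea behind STIR/SHAKEN-style call attestation, in which the originating carrier signs the caller ID it has verified and the terminating carrier checks that signature before trusting the number. This is only a minimal sketch: real deployments use certificate-based PASSporT tokens rather than the shared HMAC key assumed here, and all numbers are hypothetical.

```python
import hashlib
import hmac
import json
import time

# Hypothetical shared key; real attestation uses carrier certificates.
SECRET = b"originating-carrier-key"

def sign_call(orig_number: str, dest_number: str) -> dict:
    """Originating carrier attests to the caller ID it verified."""
    claims = {"orig": orig_number, "dest": dest_number, "iat": int(time.time())}
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "tag": tag}

def verify_call(token: dict, max_age_s: int = 60) -> bool:
    """Terminating carrier checks the attestation before showing caller ID."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["tag"]):
        return False  # caller ID altered in transit: likely spoofed
    return time.time() - token["claims"]["iat"] <= max_age_s

token = sign_call("+358401234567", "+14155550100")
print(verify_call(token))                 # genuine call -> True
token["claims"]["orig"] = "+10000000000"  # spoof attempt
print(verify_call(token))                 # signature no longer matches -> False
```

The key design point carries over to the real protocol: the displayed number is only trusted when an upstream party cryptographically vouches for it.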

2. Use AI analytics to combat AI fraud

Increasingly, security professionals are turning the same AI tactics used by fraudsters against them in order to defend against attacks. AI/ML models excel at detecting patterns and anomalies in large data sets, making them ideal for spotting the subtle signs that a cyberattack is under way, whether phishing attempts, malware infections, or unusual network traffic that may indicate a breach.
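As a minimal illustration of the anomaly-detection idea, the sketch below flags observations that deviate sharply from a learned baseline using a simple z-score test. Production ML models are far more sophisticated, and the traffic figures here are invented for illustration.

```python
from statistics import mean, stdev

# Hypothetical hourly outbound-traffic volumes (MB) observed as "normal".
baseline = [120, 135, 110, 128, 140, 125, 132, 118, 130, 122]

def is_anomalous(observation: float, history: list, threshold: float = 3.0) -> bool:
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(observation - mu) > threshold * sigma

print(is_anomalous(127, baseline))  # typical hour -> False
print(is_anomalous(900, baseline))  # possible exfiltration spike -> True
```

Real deployments replace the single metric with many correlated signals and the z-score with learned models, but the principle is the same: learn what normal looks like, then alert on statistically significant deviations.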

Predictive analytics is another important AI capability that defenders can leverage in the fight against cybercrime. Predictive AI models can anticipate potential vulnerabilities – or even future attack vectors – before they are exploited, enabling preventive security measures such as the use of game theory or honeypots to divert attention from high-value targets. Organizations need to be able to reliably detect, in real time, subtle behavioral changes occurring across all areas of their network, from users to devices to infrastructure and applications.

3. Focus on data quality

Data quality plays a critical role in pattern recognition, anomaly detection, and other machine learning-based methods for combating modern cybercrime. In AI, data quality is measured by accuracy, relevance, timeliness, and completeness. While many companies previously relied on (insecure) log files, many now rely on telemetry data, such as network traffic information from deep packet inspection (DPI) technology, because it provides the "ground truth" on which effective AI defenses can be built. In a zero-trust world, telemetry data like that provided by DPI offers the right kind of "never trust, always verify" foundation for combating the rising tide of deepfakes.
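Two of these dimensions, completeness and timeliness, are straightforward to score mechanically. The sketch below does so for individual telemetry records; the field names are illustrative assumptions, not an actual DPI schema.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical flow-record schema; field names are illustrative only.
REQUIRED = {"src_ip", "dst_ip", "protocol", "bytes", "timestamp"}

def completeness(record: dict) -> float:
    """Fraction of required fields that are present and non-empty."""
    present = sum(1 for f in REQUIRED if record.get(f) not in (None, ""))
    return present / len(REQUIRED)

def is_timely(record: dict, now: datetime, max_lag: timedelta) -> bool:
    """Stale telemetry weakens real-time detection, so bound its age."""
    return now - record["timestamp"] <= max_lag

now = datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc)
rec = {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.9", "protocol": "TLS",
       "bytes": 4096, "timestamp": now - timedelta(seconds=30)}
print(completeness(rec))                          # 1.0 -> all fields present
print(is_timely(rec, now, timedelta(minutes=5)))  # True -> fresh enough
```

Records scoring poorly on either measure can be quarantined before they pollute the training data that AI defenses depend on.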

4. Know your normal

The data volume and patterns on a particular network are, like a fingerprint, unique to that network. That's why it's critical that organizations develop a deep understanding of what the "normal" state of their network looks like, so they can detect and respond to anomalies. When companies know their networks better than anyone else, they have a huge insider advantage. However, to exploit this defensive advantage, they must address the quality of the data that feeds their AI models.
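One simple way to model "normal" is a per-device baseline that adapts slowly over time, so each entity is compared against its own fingerprint rather than a global average. The sketch below uses an exponentially weighted moving average; the device names, norms, and single-metric view are assumptions made purely for illustration.

```python
# Hypothetical learned norms: MB transferred per hour, per device.
profiles = {"db-server": 500.0, "laptop-17": 40.0}

def update_baseline(baseline: float, observation: float, alpha: float = 0.1) -> float:
    """Exponentially weighted moving average: the baseline adapts slowly."""
    return (1 - alpha) * baseline + alpha * observation

def within_normal(device: str, observed_mb: float, tolerance: float = 3.0) -> bool:
    """Compare each device against its own norm, not a fleet-wide average."""
    norm = profiles[device]
    ok = observed_mb <= tolerance * norm
    if ok:  # only fold benign observations back into the baseline
        profiles[device] = update_baseline(norm, observed_mb)
    return ok

print(within_normal("laptop-17", 45))    # near its own norm -> True
print(within_normal("laptop-17", 2000))  # ~50x its norm -> False
```

Note the design choice of excluding anomalous observations from the baseline update: otherwise an attacker could slowly "boil the frog" by drifting the norm upward over time.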

In summary, cybercriminals are rapidly exploiting AI, and GenAI in particular, to carry out increasingly realistic fraud at a scale not previously possible. As deepfakes and AI-based cyber threats escalate, companies must leverage advanced data analytics to strengthen their defenses. By adopting a zero trust model, improving data quality, and using AI-driven predictive analytics, organizations can proactively counter these sophisticated attacks and protect their assets – and reputation – in an increasingly dangerous digital landscape.