Fraud Doubles in 2023, Making It the Second-Biggest Year for Scams in the Last Two Decades!

It is estimated that by 2026, credit card losses from fraud will reach $43 billion. Financial fraud has become a multibillion-dollar criminal enterprise, and generative AI looks set to make it even more profitable.

Fraud doesn’t just affect individuals; it is a massive pressure and cost for businesses. For example, if a customer is affected by credit card fraud, the bank will usually repay the money and absorb the loss itself.

The increase in fraud will drastically raise businesses’ costs unless solutions are put in place to stop fraudsters. Fraud in 2023 rose to £2.3bn, double the year before, making it the second-biggest year for fraudsters in the last two decades. The number of high-value cases over £50m increased by 60%, with half of them exceeding £200m. This underlines the importance of businesses investing in fraud detection methods, as rising fraud feeds directly into business costs. (The Guardian)

AI-based fraud detection relies on multiple machine learning models that learn patterns of normal behaviour and flag deviations from them. These anomalies can reveal fraud such as identity theft and credit card misuse.
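As a purely illustrative sketch of this idea (not Intelligent Voice’s actual models, which are proprietary), a minimal anomaly detector can flag a transaction whose amount sits far outside an account’s usual spending pattern. The function name and example figures below are hypothetical:

```python
import statistics

def is_anomalous(history, amount, threshold=3.0):
    """Return True if `amount` deviates from the account's typical
    spend by more than `threshold` standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # No variation in history: anything different stands out.
        return amount != mean
    return abs(amount - mean) / stdev > threshold

# Hypothetical recent card transactions for one account (in pounds).
history = [12.50, 9.99, 22.00, 15.75, 8.40, 18.20, 11.30]

print(is_anomalous(history, 14.20))    # → False (typical purchase)
print(is_anomalous(history, 2500.00))  # → True (flagged for review)
```

Real systems use far richer features (merchant, location, device, timing) and learned models rather than a single statistical rule, but the principle is the same: model normal behaviour, then score how far a new event departs from it.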

At the same time, criminals are using Large Language Models (“LLMs”) to generate more human-like interactions. This makes it harder to distinguish a human from a chatbot, resulting in an increase in fraud such as phishing emails.

Intelligent Voice uses its LexiQal engine to strengthen risk management for fraud prevention, aiming to trap modern fraudsters. Using trained LLMs, LexiQal can recognise the subtle behaviours that commonly signal or initiate fraud and scams, catching them early enough to prevent damage.

Another form of fraudulent activity that is becoming prevalent is deepfake audio, generated by AI to closely impersonate an individual’s voice. The result is a humanlike replica of tone, accent, and unique characteristics, including emotional and sentiment content.

Deepfake audio scams have emerged as a rising concern for UK consumers, posing a significant threat to online banking through identity theft and credit card fraud. Robust fraud detection measures would be a major advantage for financial institutions and other sectors facing this threat.

An example of deepfake audio is the recent clips of President Biden telling people not to vote in the New Hampshire primary.

The clips were processed through Intelligent Voice’s deepfake engine. First, two clips of the real Biden were compared, yielding an 87.98% match, comfortably above the 80% threshold that indicates a “true” match. When the alleged deepfake clips were compared against the real clips, however, the score dropped to 71%, well below the positive threshold, indicating that the audio is a deepfake. Read our blog on deepfakes here.
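The decision rule described above amounts to simple thresholding of a voice-similarity score. As an illustrative sketch only (the similarity scoring itself is done by the proprietary engine; the function name is hypothetical):

```python
# Threshold from the example above: scores at or above 80% count as a "true" match.
MATCH_THRESHOLD = 80.0

def classify(similarity_pct, threshold=MATCH_THRESHOLD):
    """Label a voice-similarity score (0-100) as genuine or a likely deepfake."""
    return "genuine" if similarity_pct >= threshold else "likely deepfake"

print(classify(87.98))  # real Biden vs real Biden → "genuine"
print(classify(71.00))  # alleged deepfake vs real Biden → "likely deepfake"
```

In practice the threshold is tuned to balance false accepts against false rejects, and a score near the boundary would typically trigger a secondary review rather than an automatic decision.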

Another example of deepfake fraud is where a father was rung by his son claiming that he was in a car accident and needed money urgently. But in fact, it was a deepfake phone call and the son’s voice was cloned. To watch the video – Luckily the son was able to get in touch with his father, who is a lawyer, before the money was transferred but this shows how realistic a deepfake can be.  These sorts of attacks may lead to individuals having increasing mistrust with technology.

New research shows that at least 24% of UK citizens have been a victim of identity theft, the highest rate in Europe, with three quarters of the UK population having been exposed to fraudulent behaviour. Identity fraud is now one of the UK’s fastest-growing crimes, and businesses are seen as unprepared.

Investing in fraud detection is vital for businesses, particularly in the financial industry where confidentiality is crucial. Preventing information leaks due to fraud is essential to safeguarding a business’s reputation and relationships with clients.

Large Language Models (LLMs) offer advanced technology that continually learns and evolves, improving success rates and accuracy in detecting complex language and behaviours. Solutions like Intelligent Voice’s LexiQal engine bolster risk management in industries where there is voice interaction, combating fraud threats such as vishing and social engineering. By implementing voice identity management, businesses can enhance authentication and verification processes.

Intelligent Voice’s voice identity management and sentiment analysis tools strengthen security against emerging AI-driven fraud tactics like deepfake engines. Ultimately, maintaining fraud prevention is crucial for preserving trust between businesses and clients, while also embracing the positive opportunities AI offers, such as increased efficiency and responsiveness. As technology evolves, maintaining trust and security will remain essential in combating future threats.

I'd like to teach the world..

Intelligent Voice is fully trainable and quickly learns how you and your customers speak. Our “QuickTrain” methodology lets you add context-specific language in a matter of minutes. And best of all, it is available in 30 languages and dialects. Using our “SmartTranscript” outputs, you can let your customers “see” their audio as well as listen to it.
Book A Meeting To Learn More
