Overfitting of ASR Models and the Need for Domain-Specific Adaptation for Challenging Customer Audio 

Understanding Overfitting in AI Models 

Overfitting in AI models is a common issue that occurs when a model performs exceptionally well on training data but poorly on validation or test data. This problem is characterized by: 

  • High Training Accuracy, Low Test Accuracy: The model’s performance drops significantly when exposed to new data. 
  • Complex Models: Overfitting often arises in models that are too complex for the amount of training data available. These models have numerous parameters and can fit the training data very closely. 
  • Low Bias, High Variance: Overfitted models show low bias (they fit the training data closely) but high variance (they fail to generalise to new data), as the sketch below illustrates.
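
To make the pattern concrete, here is a minimal, self-contained sketch (toy regression data, not ASR): fitting polynomials of increasing degree to a small noisy sample shows the classic overfitting signature, with training error falling while held-out error climbs.

```python
# Toy illustration of overfitting: as model complexity (polynomial degree)
# grows, training error keeps shrinking while test error typically rises.
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, 20)
y_train = np.sin(3 * x_train) + rng.normal(0, 0.2, 20)
x_test = rng.uniform(-1, 1, 200)
y_test = np.sin(3 * x_test) + rng.normal(0, 0.2, 200)

for degree in (1, 4, 15):  # 15 is far too complex for 20 noisy points
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```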

Challenges with Modern ASR Systems 

Modern Automatic Speech Recognition (ASR) systems, built on the same transformer architecture that underpins models like ChatGPT, offer good out-of-the-box generalisation across many languages and use cases. These systems excel on clean audio and are well suited to applications such as subtitling and YouTube videos. However, they can effectively overfit to the domains represented in their training data, leading to reduced accuracy in real-world conditions. 

Need for Domain-Specific and Language-Specific ASR Models 

To address the limitations of general ASR models, it’s crucial to develop domain-specific and language-specific models, especially in environments with significant noise or specialized language use. Real-world data often presents challenges that general models aren’t equipped to handle effectively. 

Challenging Environments 

Environments like contact centers and trading floors are particularly challenging due to: 

  • Acoustic Challenges:
      ◦ Background noise, overlapping speech, far-field effects (e.g., distant microphones), and reverberation.
      ◦ The variety of audio codecs used in file handling and transmission (e.g., GSM, G.711; see the sketch after this list).
  • Linguistic Challenges:
      ◦ Grammatical shorthand and terminology unique to the customer’s domain.
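
One common way to prepare training data for these conditions is to degrade clean audio through a telephony codec. A minimal sketch, assuming ffmpeg is on the PATH (file names are illustrative), transcodes a clean recording to 8 kHz G.711 μ-law:

```python
# Simulate telephony-channel degradation (G.711 mu-law at 8 kHz) so
# training audio better matches contact-centre conditions.
import subprocess

def to_g711_mulaw(src: str, dst: str) -> None:
    """Downsample to 8 kHz mono and encode as G.711 mu-law in a WAV container."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-ar", "8000", "-ac", "1",  # 8 kHz mono, as on a phone line
         "-acodec", "pcm_mulaw",     # G.711 mu-law companding
         dst],
        check=True,
    )

to_g711_mulaw("clean_utterance.wav", "telephony_utterance.wav")
```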

Performance Variability in Multilingual ASR Models 

The performance of out-of-the-box multilingual ASR models varies significantly across languages. For instance, there’s a higher availability of well-labeled audio data for English than for languages like Cantonese. Therefore, fine-tuning is essential to enhance the performance of these models for specific languages and domains. 

Case Study: Cantonese 

We recently adapted ASR models for a Chinese bank, focusing on Cantonese. By fine-tuning the model and testing it on unseen data (1011 test files, approximately 217 minutes of audio), we reduced the baseline Character Error Rate (CER) by 30.01%. The fine-tuned model not only outperformed Azure Cognitive Services but also met the customer’s requirement for private deployment. 
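
For reference, CER is the character-level edit (Levenshtein) distance between the reference and the hypothesis, normalised by the reference length. A minimal implementation:

```python
# Character Error Rate: edit distance between reference and hypothesis
# characters, divided by the reference length.
def cer(reference: str, hypothesis: str) -> float:
    ref, hyp = list(reference), list(hypothesis)
    prev = list(range(len(hyp) + 1))  # edit distances for the empty reference prefix
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(
                prev[j] + 1,             # deletion
                curr[j - 1] + 1,         # insertion
                prev[j - 1] + (r != h),  # substitution (free if characters match)
            ))
        prev = curr
    return prev[-1] / max(len(ref), 1)

print(cer("智能語音", "智能话音"))  # 0.25: one substituted character in four
```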

Moreover, for regulatory compliance on turret calls, the customer required at least 85% keyword- and phrase-spotting accuracy. The base model achieved 85.7%, while the fine-tuned model reached 91%. In comparison, Azure achieved 81%, and the popular open-source Whisper Large v3 model only 58%. 
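
As a bare-bones illustration of the metric (not the evaluation protocol used in the engagement), keyword-spotting accuracy can be computed as the fraction of expected keywords actually found in the transcripts; real evaluations normalise text and allow fuzzy matches:

```python
# Naive keyword/phrase-spotting accuracy over (transcript, expected keywords)
# pairs: the share of expected keywords present in the hypothesis text.
from typing import Iterable

def spotting_accuracy(pairs: Iterable[tuple[str, list[str]]]) -> float:
    hits = total = 0
    for transcript, keywords in pairs:
        for kw in keywords:
            total += 1
            hits += kw.lower() in transcript.lower()
    return hits / total if total else 0.0

examples = [
    ("please confirm the trade at par value", ["trade", "par value"]),
    ("the client asked about settlement", ["settlement", "margin call"]),
]
print(f"{spotting_accuracy(examples):.0%}")  # 75%: 3 of 4 keywords found
```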

Case Study: English Loan Words 

Another Chinese banking customer needed to capture English loan words frequently used at the bank. The base model, which initially output Mandarin in Simplified characters, was adapted to produce Taiwanese-style transcription (Mandarin in Traditional characters) and to handle English loan words. Fine-tuning delivered a 60% improvement in the true-positive loan-word capture rate over the Azure benchmark. 
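
To give a flavour of how capture rate can be scored, a simple sketch (a simplification, not the production pipeline) pulls Latin-script tokens, the candidate English loan words, out of a mixed-script Chinese transcript:

```python
# Extract runs of ASCII letters (candidate English loan words) from a
# mixed-script transcript; real pipelines also handle digits, casing,
# and transliterations.
import re

def english_tokens(transcript: str) -> list[str]:
    return re.findall(r"[A-Za-z]+(?:[ '-][A-Za-z]+)*", transcript)

print(english_tokens("客戶想用 Apple Pay 同 online banking 轉賬"))
# ['Apple Pay', 'online banking']
```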

Enhancing Transformer Models with ARPA-Style LM Process 

Traditional n-gram language models, which assign probabilities to word sequences, have given way to more robust transformer models. That shift, however, removed a quick and efficient mechanism for boosting keywords or phrases via an n-gram list. At IV, we have integrated an ARPA-style LM process into the transformer decoding pipeline, allowing keywords to be boosted independently, without the need for matched audio and text data, thus improving word accuracy. 
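
An open-source analogue of the idea (not IV’s proprietary pipeline) is the pyctcdecode + KenLM stack, which fuses an ARPA-format n-gram LM and a hotword list into CTC beam-search decoding. The label set, logits, and "domain.arpa" path below are illustrative placeholders:

```python
# Shallow fusion of an ARPA n-gram LM plus hotword boosting during CTC
# beam-search decoding, using the open-source pyctcdecode library.
import numpy as np
from pyctcdecode import build_ctcdecoder

labels = ["", " ", "a", "b", "c"]  # "" is the CTC blank; toy alphabet
decoder = build_ctcdecoder(
    labels,
    kenlm_model_path="domain.arpa",  # ARPA-format n-gram LM (placeholder path)
    alpha=0.5,                       # LM weight in shallow fusion
    beta=1.0,                        # word-insertion bonus
)

# Fake (time, vocab) log-probabilities standing in for acoustic model output
rng = np.random.default_rng(0)
logits = np.log(rng.dirichlet(np.ones(len(labels)), size=50)).astype(np.float32)

transcript = decoder.decode(
    logits,
    hotwords=["margin call", "libor"],  # keywords boosted with no matched audio
    hotword_weight=10.0,
)
print(transcript)
```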

Next-Generation: Self-Learning ASR 

Combining model fine-tuning with ARPA boosting yields the best results in challenging domains. However, gathering ground-truth data is time-consuming and fraught with privacy concerns. IV has developed a patented technique that creates realistic voice clones of speakers and uses them to resynthesise audio training data from domain-specific text. This approach improves recognition for individual speakers and enhances the model’s generalisation capabilities. 
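
The general shape of text-to-speech data augmentation, purely as a generic illustration and emphatically not IV’s patented cloning technique, is to resynthesise domain-specific sentences into paired audio and transcripts. A sketch using the off-the-shelf pyttsx3 engine (assumed installed):

```python
# Generic TTS-based augmentation: turn domain-specific text into
# audio/transcript training pairs with an off-the-shelf engine.
import pyttsx3

domain_sentences = [
    "Please confirm the margin call on the Hang Seng position.",
    "The settlement date for the forward contract is next Tuesday.",
]

engine = pyttsx3.init()
for i, text in enumerate(domain_sentences):
    engine.save_to_file(text, f"synthetic_{i:03d}.wav")  # audio half of the pair
    with open(f"synthetic_{i:03d}.txt", "w") as f:
        f.write(text)                                    # matching transcript
engine.runAndWait()  # process the queued synthesis jobs
```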

Conclusion 

Bespoke ASR models tailored to customer domains significantly outperform off-the-shelf models. Fine-tuning and adapting ASR models both acoustically and linguistically is essential to meet specific customer requirements and ensure high accuracy in challenging environments. By leveraging domain-specific knowledge and advanced techniques, we can achieve superior performance and reliability in ASR systems. 
