Published February 4, 2026 | Version v1.0
Publication | Open Access

A Comparative Study of From-Scratch Squeezeformer Training for Arabic and English Speech Recognition

  • 1. Helwan University

Description

While modern speech recognition systems achieve impressive performance through large-scale pre-training, the fundamental relationship between orthographic properties and training efficiency in from-scratch models remains underexplored. This paper presents a systematic comparative study of Squeezeformer training dynamics for Arabic and English automatic speech recognition, with both models implemented from scratch without transfer learning. We demonstrate that despite Arabic's morphological complexity, it achieves superior character-level accuracy (13.72% vs. 14.06% CER) and dramatically faster convergence (18 vs. 400 epochs) compared to English when trained with identical architectures. Our analysis attributes these differences to Arabic's phonetic transparency and orthographic consistency, which compensate for its morphological richness in character-level recognition tasks. The English model, trained on 190,000 samples, achieves 34.43% WER with 8.5M parameters, while the Arabic model, using only 78,531 samples and 19.01M parameters, reaches 45.24% WER with 22× faster convergence. These findings challenge conventional assumptions about resource requirements for morphologically rich languages and provide practical insights for low-resource ASR development.
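For readers unfamiliar with the two metrics reported above: CER and WER are both normalized edit distances, computed over characters and words respectively. The sketch below is an illustrative implementation (not code from the paper), using a standard dynamic-programming Levenshtein distance:

```python
def levenshtein(ref, hyp):
    """Minimum number of insertions, deletions, and substitutions
    needed to turn the reference sequence into the hypothesis."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(reference, hypothesis):
    """Character Error Rate: character edits / reference length."""
    return levenshtein(list(reference), list(hypothesis)) / len(reference)

def wer(reference, hypothesis):
    """Word Error Rate: word edits / reference word count."""
    ref_words = reference.split()
    return levenshtein(ref_words, hypothesis.split()) / len(ref_words)
```

A CER of 13.72% therefore means that, on average, about 1 in 7 reference characters requires an edit to match the model's transcript; WER applies the same idea at the word level, which is why it is always the larger of the two numbers here.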

Files

A Comparative Study of From-Scratch Squeezeformer Training for Arabic and English Speech Recognition paper.pdf

Additional details

Software

Programming language
Python