December 4, 2024: Leaderboard is online and accepting submissions
February 12 11:59 AM AoE, 2025: Final Ranking Announced
February 12 11:59 PM AoE, 2025: Paper Submission Deadline
Multilingual SUPERB (ML-SUPERB) is an extension of the SUPERB benchmark, designed to evaluate the cross-lingual capabilities of speech representation learning. For this year's challenge, our focus is to encourage the development of state-of-the-art ASR systems for all languages and language varieties. The ML-SUPERB 2.0 Challenge has three main themes:
Flexibility: Participants are allowed to use almost any algorithm, dataset, or pre-trained model. We hope this encourages creative applications of the latest pre-trained models or modelling techniques.
Robustness across Languages: Models will also be scored on consistency and fairness, encouraging improvements in performance across all languages; see the sketch after this list for one way such robustness could be summarized.
Robustness across Language Varieties: Models will be evaluated on a hidden set containing 200+ language varieties, such as non-standard accents and dialects.
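As a rough illustration of the robustness themes above, the snippet below shows one way per-language scores could be summarized into consistency and fairness figures. This is a minimal sketch, not the challenge's official scoring script: the use of CER, the macro-average, the standard deviation, and the worst-language score are illustrative assumptions.

```python
# Minimal sketch (NOT the official ML-SUPERB 2.0 scoring script).
# Assumes per-language character error rates (CERs) are available;
# the aggregate metrics below are illustrative choices, not confirmed ones.
from statistics import mean, stdev


def summarize_per_language_scores(cer_by_language: dict[str, float]) -> dict[str, float]:
    """Aggregate per-language CERs into overall and robustness summaries."""
    cers = list(cer_by_language.values())
    return {
        "macro_avg_cer": mean(cers),                        # average performance across languages
        "cer_std": stdev(cers) if len(cers) > 1 else 0.0,   # lower = more consistent across languages
        "worst_language_cer": max(cers),                    # fairness: how far behind the hardest language is
    }


if __name__ == "__main__":
    # Hypothetical per-language CERs (as fractions, not percentages).
    example = {"eng": 0.08, "swa": 0.21, "quz": 0.35}
    print(summarize_per_language_scores(example))
```

A system that lowers both the standard deviation and the worst-language CER, rather than only the macro average, would better reflect the robustness goals described above.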
We are partnering with DynaBench to host a live leaderboard and online evaluation server. Inference and scoring will automatically be performed on the server, which is slated to open on December 4th. For more details about the challenge, please refer to the overview page.
Further details about the overall SUPERB project can be found here.
Acknowledgement
We thank and for creating and maintaining the SUPERB official website.