
Development of Automatic Speech Recognition for Kazakh Language using Transfer Learning (2003.04710v1)

Published 8 Mar 2020 in eess.AS and cs.SD

Abstract: Developing an Automatic Speech Recognition (ASR) system for the Kazakh language is very challenging due to a lack of data. Existing Kazakh speech data with corresponding transcriptions are hard to access and insufficient to achieve noteworthy results. For this reason, speech recognition for Kazakh has not been explored well. Only a few works investigate this area, using traditional methods such as Hidden Markov Models and Gaussian Mixture Models, and they suffer from poor outcomes and a shortage of data. In our work we propose a new method that takes a pre-trained Russian-language model and applies its knowledge as a starting point for our neural network; that is, we transfer the weights of the pre-trained model into our network. The main reason we chose a Russian model is that the pronunciation of the Kazakh and Russian languages is quite similar, as the two alphabets share 78 percent of their letters, and there are quite large corpora of Russian speech data. We collected a dataset of Kazakh speech with transcriptions at Suleyman Demirel University, with 50 native speakers each recording around 400 sentences; the data were drawn from well-known Kazakh books. We considered four scenarios in our experiment. First, we trained our neural networks, one with 2 LSTM layers and one with 2 BiLSTM layers, without using the pre-trained Russian model. Second, we trained the same 2-LSTM-layer and 2-BiLSTM-layer networks using the pre-trained model. As a result, by using the external Russian speech recognition model we improved our models' training cost and Label Error Rate by up to 24 percent and 32 percent, respectively. The pre-trained Russian model was trained on 100 hours of data with the same neural network architecture.
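The core idea in the abstract — initializing the Kazakh network with the weights of a pre-trained Russian model — can be sketched as a partial weight transfer between parameter dictionaries. This is a minimal illustration only, not the authors' code: the layer names, shapes, and the Kazakh-specific output layer are hypothetical stand-ins for the 2-LSTM/2-BiLSTM architectures described in the paper.

```python
import numpy as np

def transfer_weights(pretrained, target, layers):
    """Copy parameters for the named layers from a pretrained model's
    parameter dict into a target model's dict. Layers whose names or
    shapes do not match keep the target's own initialization, which is
    how a language-specific output layer would stay freshly initialized."""
    transferred = []
    for name in layers:
        src = pretrained.get(name)
        if src is not None and name in target and src.shape == target[name].shape:
            target[name] = src.copy()
            transferred.append(name)
    return transferred

# Hypothetical parameter dicts standing in for the Russian (source)
# and Kazakh (target) acoustic models; names and sizes are illustrative.
rng = np.random.default_rng(0)
russian = {f"lstm{i}.weight": rng.normal(size=(128, 128)) for i in (1, 2)}
russian.update({f"bilstm{i}.weight": rng.normal(size=(256, 128)) for i in (1, 2)})
kazakh = {name: np.zeros_like(w) for name, w in russian.items()}
kazakh["output.weight"] = np.zeros((42, 256))  # Kazakh-specific output layer, not transferred

copied = transfer_weights(russian, kazakh, list(russian))
```

After the transfer, training would continue on the Kazakh data from this warm start rather than from random initialization, which is the mechanism the abstract credits for the reported improvements in training cost and Label Error Rate.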

Citations (13)
