The IBM 2016 English Conversational Telephone Speech Recognition System (1604.08242v2)
Published 27 Apr 2016 in cs.CL
Abstract: We describe a collection of acoustic and language modeling techniques that lowered the word error rate of our English conversational telephone LVCSR system to a record 6.6% on the Switchboard subset of the Hub5 2000 evaluation test set. On the acoustic side, we use a score fusion of three strong models: recurrent nets with maxout activations, very deep convolutional nets with 3x3 kernels, and bidirectional long short-term memory nets which operate on FMLLR and i-vector features. On the language modeling side, we use an updated model "M" and hierarchical neural network LMs.
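The score fusion of the three acoustic models can be pictured as a weighted log-linear combination of per-frame state posteriors. The sketch below is a minimal illustration under that assumption: the fusion weights, tensor shapes, and the `fuse_scores` helper are illustrative, not the paper's exact recipe, and the random arrays stand in for the real outputs of the maxout RNN, very deep CNN, and bidirectional LSTM.

```python
import numpy as np

def fuse_scores(log_posteriors, weights):
    """Weighted log-linear combination of per-frame log-posteriors.

    log_posteriors: list of arrays, each of shape (num_frames, num_states),
                    one per acoustic model over the same CD-state inventory.
    weights: list of floats, one per model (e.g. tuned on held-out data).
    Returns fused scores of shape (num_frames, num_states), renormalized
    per frame so each row is again a valid log-probability distribution.
    """
    fused = sum(w * lp for w, lp in zip(weights, log_posteriors))
    # Renormalize each frame with log-sum-exp.
    fused -= np.logaddexp.reduce(fused, axis=1, keepdims=True)
    return fused

# Illustrative usage: random posteriors standing in for the three models.
num_frames, num_states = 300, 9000
rng = np.random.default_rng(0)
scores = [np.log(rng.dirichlet(np.ones(num_states), size=num_frames))
          for _ in range(3)]  # maxout RNN, very deep CNN, bidirectional LSTM
fused = fuse_scores(scores, weights=[0.4, 0.3, 0.3])
```

The fused scores would then replace the single-model acoustic scores during decoding or rescoring; in practice the weights are chosen on a development set rather than fixed by hand.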