
Abstract

Clipping, a common nonlinear distortion, often occurs due to the limited dynamic range of audio recorders. It degrades speech quality and intelligibility and adversely affects the performance of speech and speaker recognition systems. In this paper, we focus on the enhancement of clipped speech using a fully convolutional neural network known as U-Net. Motivated by the idea of image-to-image translation, we propose a declipping approach, named the U-Net declipper, in which the magnitude spectrum images of clipped signals are translated to the corresponding images of clean ones. The experimental results show that the proposed approach outperforms other declipping methods in terms of both quality and intelligibility measures, especially in severe clipping cases. Moreover, the superior performance of the U-Net declipper over well-known declipping methods is verified under additive Gaussian noise conditions.
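
To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of the general idea: hard-clip a signal, compute its magnitude spectrogram, pass the magnitude "image" through a small U-Net-style encoder-decoder, and resynthesize audio by reusing the clipped signal's phase. The TinyUNet class, layer sizes, STFT settings, and the phase-reuse step are illustrative assumptions for this sketch, not the authors' exact architecture or training setup.

```python
# Sketch of spectrogram-translation declipping (assumed pipeline, not the paper's exact model).
import math
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Small encoder-decoder with one skip connection, standing in for the paper's U-Net."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(ch, ch * 2, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(ch * 2, ch, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(ch * 2, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, x):
        e1 = self.enc1(x)                       # (B, ch, F, T)
        e2 = self.enc2(self.pool(e1))           # (B, 2ch, F/2, T/2)
        d = self.up(e2)                         # upsample back to (B, ch, F, T)
        return self.dec(torch.cat([d, e1], 1))  # skip connection, then 1-channel output

# Toy clean/clipped pair: a sine tone hard-clipped at +/-0.3 (illustrative values).
n_fft, hop = 254, 64                  # 128 frequency bins
t = torch.arange(16320) / 16000.0
clean = 0.9 * torch.sin(2 * math.pi * 220 * t)
clipped = clean.clamp(-0.3, 0.3)

# Magnitude spectrogram of the clipped signal, treated as a single-channel image.
window = torch.hann_window(n_fft)
spec = torch.stft(clipped, n_fft, hop, window=window, return_complex=True)
mag, phase = spec.abs(), spec.angle()

# Translate the clipped magnitude toward a clean magnitude (untrained here; in the
# paper the network would be trained on clipped/clean magnitude pairs).
model = TinyUNet()
est_mag = model(mag[None, None]).squeeze()

# Resynthesize audio by combining the estimated magnitude with the clipped phase.
est_spec = est_mag * torch.exp(1j * phase)
declipped = torch.istft(est_spec, n_fft, hop, window=window, length=clipped.numel())
```

In practice the network would be trained on pairs of clipped and clean magnitude spectra; how the phase is recovered at synthesis time is an assumption here, since the abstract only specifies the magnitude-to-magnitude translation.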
