Emergent Mind

Portable Camera-Based Product Label Reading For Blind People

(1405.6627)
Published May 7, 2014 in cs.HC and cs.CY

Abstract

We propose a camera-based assistive text reading framework to help blind persons read text labels and product packaging on hand-held objects in their daily lives. To isolate the object from cluttered backgrounds or other surrounding objects in the camera view, we first propose an efficient and effective motion-based method to define a region of interest (ROI) in the video by asking the user to shake the object. This method extracts the moving object region with a mixture-of-Gaussians-based background subtraction technique. In the extracted ROI, text localization and recognition are conducted to acquire text information. To automatically localize text regions within the object ROI, we propose a novel text localization algorithm that learns gradient features of stroke orientations and distributions of edge pixels in an Adaboost model. Text characters in the localized text regions are then binarized and recognized by off-the-shelf optical character recognition (OCR) software. The recognized text is converted to audio output for blind users. Performance of the proposed text localization algorithm is quantitatively evaluated on the ICDAR-2003 and ICDAR-2011 Robust Reading Datasets. Experimental results demonstrate that our algorithm achieves state-of-the-art performance. The proof-of-concept prototype is also evaluated on a dataset collected from ten blind persons to assess the effectiveness of the system. We explore user interface issues and the robustness of the algorithm in extracting and reading text from different objects with complex backgrounds.
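The ROI-extraction step described above — modeling each pixel's background statistics and flagging large deviations as the shaken object — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses a single running Gaussian per pixel (the paper uses a mixture of Gaussians), a synthetic jittering bright square in place of real video frames, and made-up parameter values (`alpha`, `k`).

```python
import numpy as np

def update_background(mean, var, frame, alpha=0.05, k=2.5):
    """One step of a simplified per-pixel Gaussian background model.

    Pixels deviating more than k standard deviations from the running
    mean are flagged as foreground (the shaken object); the model is
    updated only where the pixel still looks like background.
    """
    diff = np.abs(frame - mean)
    fg = diff > k * np.sqrt(var)
    bg = ~fg
    mean[bg] += alpha * (frame[bg] - mean[bg])
    var[bg] += alpha * (diff[bg] ** 2 - var[bg])
    return fg

def foreground_roi(fg_mask):
    """Bounding box (x0, y0, x1, y1) of foreground pixels, or None."""
    ys, xs = np.nonzero(fg_mask)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Synthetic demo: a static noisy scene, then frames with a bright
# square that jitters horizontally, standing in for the shaken object.
h, w = 60, 80
rng = np.random.default_rng(0)
mean = rng.normal(10.0, 1.0, (h, w))      # model seeded from an empty scene
var = np.full((h, w), 4.0)
for t in range(20):
    frame = rng.normal(10.0, 1.0, (h, w))  # background + sensor noise
    x = 20 + 2 * (t % 5)                   # object position jitters
    frame[25:40, x:x + 15] += 120.0        # bright hand-held object
    fg = update_background(mean, var, frame)
roi = foreground_roi(fg)                   # ROI enclosing the moving object
```

In a full pipeline, `roi` would then be cropped from the frame and passed to text localization and OCR; a real system would also clean the mask (e.g. with morphological filtering) before taking the bounding box.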
