Abstract

Vision plays a crucial role in comprehending the world around us; more than 85% of the information we receive about our surroundings is estimated to arrive through the visual system. It shapes our mobility, cognition, access to information, and interaction with the environment and with other people. Blindness cuts a person off from this knowledge of the surrounding environment, making unassisted navigation, object recognition, obstacle avoidance, and reading significant challenges. Many existing assistive systems are limited by their cost and complexity. To help the visually challenged overcome these everyday difficulties, we propose VisBuddy, a smart voice-based assistant to which the user issues voice commands to perform specific tasks. It uses image captioning to describe the user's surroundings, optical character recognition (OCR) to read text in the user's view, object detection to search for and locate objects in a room, and web scraping to deliver the latest news. VisBuddy is built by combining concepts from deep learning and the Internet of Things, making it a cost-efficient, powerful, all-in-one assistant that supports the visually challenged in their day-to-day activities.
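As a rough illustration of the command-dispatch architecture the abstract describes, the sketch below routes a transcribed voice command to the matching task module. The handler names and keyword-based routing are hypothetical placeholders, not the authors' implementation; in a real system each stub would wrap the corresponding deep learning model (captioning, OCR, detection) or news scraper.

```python
# Minimal sketch of VisBuddy-style command routing (a hypothetical structure,
# not the authors' code). Each handler is a stub standing in for the real
# image-captioning, OCR, object-detection, or news-scraping module.

def describe_scene() -> str:
    return "scene description from the image-captioning model"  # placeholder

def read_text() -> str:
    return "text extracted by the OCR engine"  # placeholder

def find_object(name: str) -> str:
    return f"object-detection result for '{name}'"  # placeholder

def fetch_news() -> str:
    return "headlines gathered by the web scraper"  # placeholder

def dispatch(command: str) -> str:
    """Route a transcribed voice command to the matching task module."""
    command = command.lower()
    if "describe" in command:
        return describe_scene()
    if "read" in command:
        return read_text()
    if "find" in command:
        # Assume the object name follows the keyword, e.g. "find my keys".
        target = command.split("find", 1)[1].strip() or "object"
        return find_object(target)
    if "news" in command:
        return fetch_news()
    return "Sorry, I did not understand that command."

if __name__ == "__main__":
    for cmd in ["Describe my surroundings", "Read this page",
                "Find my water bottle", "What's the news?"]:
        print(f"{cmd!r} -> {dispatch(cmd)}")
```

In this sketch the speech-to-text step is assumed to have already produced the `command` string; keyword routing keeps the example self-contained, whereas a deployed assistant would likely use an intent classifier instead.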
