
Abstract

Accurate information about the location and orientation of a camera in mobile devices is central to location-based services (LBS). Most such devices rely on GPS data, but this data is subject to inaccuracy due to imperfections in the quality of the satellite signal. This shortcoming has spurred research into improving localization accuracy. Since mobile devices have cameras, a major thrust of this research seeks to acquire the local scene and apply image retrieval techniques, querying a GPS-tagged image database to find the best match for the acquired scene. These techniques are, however, computationally demanding and unsuitable for real-time applications such as assistive technology for navigation by the blind and visually impaired, which motivated our work. To overcome the high complexity of those techniques, we investigated the use of inertial sensors as an aid to the image-retrieval-based approach. Armed with information from media other than images, such as data from the GPS module along with orientation sensors such as the accelerometer and gyroscope, we sought to limit the size of the image set to search for the best match. Specifically, data from the orientation sensors, along with the dilution of precision (DOP) from GPS, are used to estimate the angle of view and the position. We present an analysis of the reduction in the image set size for the search, as well as simulations demonstrating the effectiveness of a fast implementation with 98% Estimated Position Error.
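To make the pruning idea concrete, here is a minimal Python sketch of how a GPS-tagged image set might be narrowed using a DOP-scaled position uncertainty and a heading from the orientation sensors. The paper does not specify its implementation; the function and parameter names (`prune_candidates`, `base_error_m`, `fov_deg`) and the simple radius-plus-heading filter are illustrative assumptions, not the authors' method.

```python
import math

# Hypothetical database record: (lat_deg, lon_deg, heading_deg) per tagged image.

EARTH_RADIUS_M = 6_371_000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def heading_diff_deg(a, b):
    """Smallest absolute difference between two compass headings."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def prune_candidates(db, lat, lon, heading_deg, hdop,
                     base_error_m=5.0, fov_deg=60.0):
    """Keep only images that could plausibly match the query view.

    hdop scales a nominal GPS error (base_error_m, an assumed value) into a
    search radius; heading_deg (from accelerometer/gyroscope fusion) restricts
    candidates to those whose camera heading lies within the field of view.
    """
    radius_m = hdop * base_error_m
    return [
        (img_lat, img_lon, img_hdg)
        for (img_lat, img_lon, img_hdg) in db
        if haversine_m(lat, lon, img_lat, img_lon) <= radius_m
        and heading_diff_deg(heading_deg, img_hdg) <= fov_deg / 2.0
    ]
```

Under these assumptions, the expensive image-retrieval matcher then runs only on the pruned list, which is where the reduction in search-set size analyzed in the paper would come from.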
