
Abstract

This article presents a concise view of vehicular clouds that incorporates the various vehicular cloud models proposed to date. Essentially, they all extend the traditional cloud and its utility-computing functionalities across the entities in the vehicular ad hoc network (VANET). These entities include fixed road-side units (RSUs), on-board units (OBUs) embedded in vehicles, and the personal smart devices of drivers and passengers. Cumulatively, these entities yield abundant processing, storage, sensing, and communication resources. However, vehicular clouds require novel resource provisioning techniques that can address the intrinsic challenges of (i) dynamic resource demands and (ii) stringent QoS requirements. In this article, we show the benefits of reinforcement learning-based techniques for resource provisioning in the vehicular cloud. These learning techniques can perceive long-term benefits and are ideal for minimizing the overhead of resource provisioning for vehicular clouds.
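To make the reinforcement-learning idea concrete, the following is a minimal, hypothetical sketch of tabular Q-learning applied to resource provisioning. The state space (discretized demand levels), action space (number of provisioned resource units), and reward shape (a heavier penalty for under-provisioning, which violates QoS, than for over-provisioning, which wastes resources) are all illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative sketch only: tabular Q-learning for vehicular-cloud resource
# provisioning. States, actions, and rewards are assumptions for demonstration.
import random

random.seed(0)

DEMAND_LEVELS = 3    # assumed states: low / medium / high demand
ACTIONS = [1, 2, 3]  # assumed actions: resource units to provision
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

# Q-table indexed as Q[state][action_index]
Q = [[0.0 for _ in ACTIONS] for _ in range(DEMAND_LEVELS)]

def reward(demand, provisioned):
    # QoS penalty for under-provisioning outweighs overhead of over-provisioning
    shortfall = max(0, (demand + 1) - provisioned)
    surplus = max(0, provisioned - (demand + 1))
    return -5.0 * shortfall - 1.0 * surplus

def next_demand():
    # Demand evolves randomly, mimicking churn as vehicles join and leave
    return random.randrange(DEMAND_LEVELS)

state = next_demand()
for _ in range(5000):
    # Epsilon-greedy action selection balances exploration and exploitation
    if random.random() < EPS:
        a = random.randrange(len(ACTIONS))
    else:
        a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
    r = reward(state, ACTIONS[a])
    nxt = next_demand()
    # Standard Q-learning update toward the bootstrapped target
    Q[state][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[state][a])
    state = nxt

# Greedy policy after training: action index chosen for each demand level
policy = [max(range(len(ACTIONS)), key=lambda i: Q[s][i]) for s in range(DEMAND_LEVELS)]
print(policy)
```

After training, the greedy policy matches the provisioned units to the demand level, illustrating how an RL agent can learn a low-overhead provisioning rule from interaction alone, without an explicit demand model.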
