Optimization of Processing Allocation in Vehicular Edge Cloud based Architecture (2007.12108v1)

Published 23 Jul 2020 in cs.NI

Abstract: Vehicular edge computing is a new distributed processing architecture that exploits the revolution in the processing capabilities of vehicles to provide energy-efficient services and low delay for Internet of Things (IoT)-based systems. Edge computing relies on a set of distributed processing nodes (i.e. vehicles) that are located close to the end user. In this paper, we consider a vehicular edge cloud (VEC) consisting of a set of vehicle clusters that form a temporal vehicular cloud by combining their computational resources in the cluster. We tackle the problem of processing allocation in the proposed vehicular edge architecture by developing a Mixed Integer Linear Programming (MILP) optimization model that jointly minimizes power consumption, propagation delay, and queuing delay. The results show that the closer the processing node (PN) is to the access point (AP), the lower the power consumption and delay, as the distance and number of hops affect the propagation delay and queuing delay. However, the queuing delay at the AP becomes a limiting factor when it operates at a low service rate compared to the traffic arrival rate. Thus, processing tasks at the vehicular nodes (VNs) was avoided whenever the objective function included queuing delay and the AP operated at a low service rate. An increase in the AP service rate results in a lower queuing delay and better VN utilization.
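The trade-off the abstract describes, between power, propagation delay (driven by hop count), and queuing delay (driven by the service rate at the bottleneck), can be illustrated with a toy allocation rule. This is a minimal sketch, not the paper's MILP: the node parameters, per-hop delay, and M/M/1 queuing model below are illustrative assumptions.

```python
# Toy processing-allocation sketch: pick the processing node (PN) that
# minimizes a weighted sum of power, propagation delay, and queuing delay.
# All numeric parameters here are illustrative assumptions, not paper values.

def mm1_queuing_delay(arrival_rate, service_rate):
    """Mean M/M/1 sojourn time; infinite when the queue is unstable."""
    if service_rate <= arrival_rate:
        return float("inf")
    return 1.0 / (service_rate - arrival_rate)

def best_node(nodes, arrival_rate, w_power=1.0, w_prop=1.0, w_queue=1.0):
    """nodes: dict name -> (power_per_task, hops_from_source, service_rate)."""
    def cost(spec):
        power, hops, mu = spec
        prop_delay = 0.005 * hops  # assumed fixed per-hop propagation delay
        return (w_power * power
                + w_prop * prop_delay
                + w_queue * mm1_queuing_delay(arrival_rate, mu))
    return min(nodes, key=lambda name: cost(nodes[name]))

# With a low effective service rate on the vehicular path, queuing delay
# dominates and the allocation avoids the vehicular node (VN); raising the
# service rate makes the nearby, low-power VN attractive again.
slow = {"VN": (2.0, 1, 1.2), "central_cloud": (5.0, 5, 10.0)}
fast = {"VN": (2.0, 1, 10.0), "central_cloud": (5.0, 5, 10.0)}
print(best_node(slow, arrival_rate=1.0))  # central_cloud
print(best_node(fast, arrival_rate=1.0))  # VN
```

This mirrors the paper's qualitative finding: when the rate serving the vehicular path is low relative to the arrival rate, the queuing term makes VN processing unattractive, while a higher service rate restores VN utilization.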

Citations (1)
