
A Data Streaming Process Framework for Autonomous Driving By Edge (2006.05817v1)

Published 10 Jun 2020 in cs.DC

Abstract: In recent years, with the rapid development of sensing technology and the Internet of Things (IoT), sensors have come to play increasingly important roles in traffic control, medical monitoring, industrial production, and other domains. They generate high volumes of data in a streaming fashion that often must be processed in real time, so streaming computing technology is indispensable for processing sensor data with high throughput and low latency. To address these problems, the proposed framework is implemented on top of Spark Streaming: it builds a gray-model-based traffic flow monitor, a traffic-prediction-oriented prediction layer, and a fuzzy-control-based dynamic Batch Interval adjustment layer for Spark Streaming. The framework forecasts variations in the sensor data arrival rate, adjusts the streaming Batch Interval in advance, and performs real-time stream processing at the edge. It can thus monitor and predict changes in the data flow of autonomous driving vehicle sensors within the geographical coverage of an edge computing node, while minimizing end-to-end latency and satisfying application throughput requirements. Experiments show that it predicts short-term traffic with no more than 4% relative error over a whole day. By keeping the batch consuming rate close to the data generating rate, it maintains system stability even when the arrival rate changes rapidly; the Batch Interval converges to a suitable value within two minutes when the data arrival rate doubles. Compared with vanilla Spark Streaming, which suffers serious task accumulation and large delays, the framework reduces latency by 35% by squeezing the Batch Interval when the data arrival rate is low, and significantly improves system throughput with at most a 25% Batch Interval increase when the arrival rate is high.
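The abstract's gray-model traffic predictor is most likely the classic GM(1,1) model, the standard gray model for short-term series forecasting, though the abstract does not spell out the formulation. The sketch below is a textbook GM(1,1) forecaster, not the authors' implementation; the function name and interface are illustrative assumptions.

```python
import numpy as np

def gm11_forecast(x0, steps):
    """Forecast `steps` future values of a short positive series with GM(1,1).

    GM(1,1) fits the whitening equation dx1/dt + a*x1 = b to the
    accumulated (cumulative-sum) series x1, then differences the fitted
    curve back to the original scale.
    """
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                       # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])            # background (mean) sequence
    # Least-squares estimate of [a, b] from x0[k] = -a*z1[k] + b
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    # Inverse accumulation: first value kept, rest are successive differences
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])
    return x0_hat[n:]
```

For a series growing roughly geometrically (as short-term traffic counts often do locally), GM(1,1) tracks the trend closely; e.g. `gm11_forecast([100, 110, 121, 133.1, 146.41], 1)` returns a value close to the next 10%-growth point, 161.05.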
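The abstract describes the fuzzy-control layer only by its goal: keep the batch consuming rate close to the data generating rate by adjusting the Batch Interval ahead of load changes. As an illustrative stand-in (not the paper's fuzzy rule base), a minimal threshold rule on the processing-time-to-interval ratio captures the same control idea: enlarge the interval when batches are falling behind, squeeze it when there is headroom, and leave it alone in a stable band. All thresholds below are assumed values.

```python
def adjust_batch_interval(interval_s, proc_time_s, lo=0.7, hi=0.95, step=0.1):
    """Return the next Batch Interval (seconds) given the last batch's
    processing time.

    ratio > hi : batches barely finish in time -> grow interval (throughput)
    ratio < lo : ample headroom -> shrink interval (latency)
    otherwise  : system is stable -> keep interval unchanged
    """
    ratio = proc_time_s / interval_s
    if ratio > hi:
        return interval_s * (1 + step)   # avoid task accumulation
    if ratio < lo:
        return interval_s * (1 - step)   # squeeze latency at low load
    return interval_s
```

Iterating this rule each batch makes the interval converge toward the point where processing time is a fixed fraction of the interval, mirroring the abstract's claim of convergence within minutes of a load change; the paper's fuzzy controller refines the same loop with graded membership functions instead of hard thresholds.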


Authors (2)
