Enabling Incremental Query Re-Optimization

(arXiv:1409.6288)
Published Sep 22, 2014 in cs.DB

Abstract

As declarative query processing techniques expand in scope to the Web, data streams, network routers, and cloud platforms, there is an increasing need for adaptive query processing techniques that can re-plan in the presence of failures or unanticipated performance changes. A status update about the data distributions or the compute nodes may have significant repercussions for the choice of which query plan should be running. Ideally, new system architectures would be able to make cost-based decisions about reallocating work, migrating data, etc., and react quickly as real-time status information becomes available. Existing cost-based query optimizers are not incremental in nature and must be run "from scratch" upon each status or cost update; hence, they generally result in adaptive schemes that can only react slowly to updates. An open question has been whether it is possible to build a cost-based re-optimization architecture for adaptive query processing in a streaming or repeated query execution environment, e.g., by incrementally updating optimizer state given new cost information. We show that this can be done, and that it is especially beneficial for stream processing workloads. Our techniques build upon the recently proposed approach of formulating query plan enumeration as a set of recursive Datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both the static and incremental cases. We implement our solution within an existing research query processing system and show that it effectively supports cost-based initial optimization as well as frequent adaptivity.
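The core idea in the abstract is casting plan enumeration as recursive rules whose memoized results can be selectively invalidated when cost information changes, so re-optimization reuses unaffected work. The paper expresses this with recursive Datalog; the following is a minimal Python sketch of the same idea under a toy cost model. Everything here (the `scan_cost` table, the fixed selectivity, function names like `best_plan` and `update_scan_cost`) is an illustrative assumption, not the paper's implementation.

```python
from itertools import combinations

# Toy cost model (assumption for this sketch): per-relation scan costs
# and a single fixed join selectivity.
scan_cost = {"R": 100.0, "S": 500.0, "T": 50.0}
SELECTIVITY = 0.1

best = {}  # frozenset of relations -> (cost, plan tree)

def best_plan(rels):
    """Cheapest plan for a set of relations, computed recursively.

    This mirrors the recursive-rule formulation: the best plan for a set
    is derived from the best plans of its proper subsets, and results are
    memoized so shared subproblems are solved once.
    """
    rels = frozenset(rels)
    if rels in best:
        return best[rels]
    if len(rels) == 1:
        (r,) = rels
        result = (scan_cost[r], r)
    else:
        result = None
        # Enumerate all splits into two non-empty subsets (bushy plans).
        for k in range(1, len(rels)):
            for left in combinations(sorted(rels), k):
                left = frozenset(left)
                right = rels - left
                lc, lp = best_plan(left)
                rc, rp = best_plan(right)
                cost = lc + rc + SELECTIVITY * lc * rc  # toy join cost
                if result is None or cost < result[0]:
                    result = (cost, (lp, rp))
    best[rels] = result
    return result

def update_scan_cost(rel, new_cost):
    """Incremental re-optimization: on a cost update, invalidate only the
    memoized plans whose relation set contains the changed relation.
    All other subplans are reused as-is on the next call to best_plan."""
    scan_cost[rel] = new_cost
    for rels in [s for s in best if rel in s]:
        del best[rels]

if __name__ == "__main__":
    all_rels = frozenset(scan_cost)
    print("initial:", best_plan(all_rels))
    update_scan_cost("S", 20.0)  # e.g., a node reports S is now cheap to scan
    print("after update:", best_plan(all_rels))
```

The sketch's point is the update path: after `update_scan_cost("S", ...)`, only memo entries whose relation set includes S are discarded, so the optimizer re-derives just the affected subplans instead of running from scratch, which is the behavior the abstract argues matters for frequent adaptivity.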
