
Skyline Queries in O(1) time? (1709.03949v1)

Published 12 Sep 2017 in cs.DB

Abstract: The skyline of a set $P$ of points ($SKY(P)$) consists of the "best" points with respect to minimization or maximization of the attribute values. A point $p$ dominates another point $q$ if $p$ is as good as $q$ in all dimensions and is strictly better than $q$ in at least one dimension. In this work, we focus on the static $2$-d space and provide expected performance guarantees for $3$-sided Range Skyline Queries on the Grid, where $N$ is the cardinality of $P$, $B$ the size of a disk block, and $R$ the capacity of main memory. We present the MLR-tree, which offers optimal expected cost for finding planar skyline points in a $3$-sided query rectangle, $q=[a,b]\times(-\infty,d]$, in both the RAM and the I/O model on the grid $[1,M]\times [1,M]$, by a single scan over only the points contained in $SKY(P)$. In particular, it supports skyline queries in a $3$-sided range in $O(t\cdot t_{PAM}(N))$ time ($O((t/B)\cdot t_{PAM}(N))$ I/Os), where $t$ is the answer size and $t_{PAM}(N)$ is the time required for answering predecessor queries for $d$ in a PAM (Predecessor Access Method) structure, a special component of the MLR-tree that efficiently stores root-to-leaf paths or sub-paths. By choosing PAM structures with $O(1)$ expected time for predecessor queries under discrete $\mu$-random distributions of the $x$ and $y$ coordinates, the MLR-tree supports skyline queries in optimal $O(t)$ expected time ($O(t/B)$ expected number of I/Os) with high probability. The space cost becomes superlinear and can be reduced to linear for many special practical cases. If we choose a PAM structure with $O(1)$ amortized time for batched predecessor queries (under no assumption on the distributions of the $x$ and $y$ coordinates), the MLR-tree supports batched skyline queries in optimal $O(t)$ amortized time; however, the space becomes exponential. In the dynamic case, the update time complexity is affected by an $O(\log^2 N)$ factor.
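
To make the dominance relation and the $3$-sided range skyline query concrete, here is a minimal Python sketch. It is not the paper's MLR-tree or PAM structure; it is a brute-force sort-and-scan baseline, assuming minimization (smaller coordinate values are "better") and a query rectangle $q=[a,b]\times(-\infty,d]$.

```python
# Minimal sketch (not the paper's MLR-tree): illustrates dominance and a
# brute-force 3-sided range skyline query q = [a, b] x (-inf, d] on 2-d points,
# assuming smaller coordinate values are "better" (minimization).

def dominates(p, q):
    """p dominates q if p is at least as good in both dimensions
    and strictly better in at least one."""
    return p[0] <= q[0] and p[1] <= q[1] and (p[0] < q[0] or p[1] < q[1])

def range_skyline_3sided(points, a, b, d):
    """Skyline of the points falling inside [a, b] x (-inf, d].
    Runs in O(n log n) per query; the MLR-tree targets O(t) expected
    time per query, where t is the answer size."""
    candidates = [p for p in points if a <= p[0] <= b and p[1] <= d]
    candidates.sort()                    # by x, then y
    skyline, best_y = [], float("inf")
    for x, y in candidates:
        if y < best_y:                   # not dominated by any point already kept
            skyline.append((x, y))
            best_y = y
    return skyline

if __name__ == "__main__":
    P = [(1, 5), (2, 3), (3, 4), (4, 1), (6, 2)]
    print(range_skyline_3sided(P, a=1, b=5, d=4))   # -> [(2, 3), (4, 1)]
```

In the scan, a candidate is kept only if its $y$-coordinate is smaller than every $y$ seen so far at smaller $x$, which is exactly the condition of not being dominated within the query range; the MLR-tree achieves the same output while touching only the answer points.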
