Skyline Queries in O(1) time? (1709.03949v1)

Published 12 Sep 2017 in cs.DB

Abstract: The skyline of a set $P$ of points ($SKY(P)$) consists of the "best" points with respect to minimization or maximization of the attribute values. A point $p$ dominates another point $q$ if $p$ is as good as $q$ in all dimensions and strictly better than $q$ in at least one dimension. In this work, we focus on the static $2$-d space and provide expected performance guarantees for $3$-sided Range Skyline Queries on the Grid, where $N$ is the cardinality of $P$, $B$ the size of a disk block, and $R$ the capacity of main memory. We present the MLR-tree, which offers optimal expected cost for finding planar skyline points in a $3$-sided query rectangle, $q=[a,b]\times(-\infty,d]$, in both the RAM and the I/O model on the grid $[1,M]\times [1,M]$, with a single scan of only the points contained in $SKY(P)$. In particular, it supports skyline queries in a $3$-sided range in $O(t\cdot t_{PAM}(N))$ time ($O((t/B)\cdot t_{PAM}(N))$ I/Os), where $t$ is the answer size and $t_{PAM}(N)$ is the time required for answering predecessor queries for $d$ in a PAM (Predecessor Access Method) structure, a special component of the MLR-tree that efficiently stores root-to-leaf paths or sub-paths. By choosing PAM structures with $O(1)$ expected time for predecessor queries under discrete $\mu$-random distributions of the $x$ and $y$ coordinates, the MLR-tree supports skyline queries in optimal $O(t)$ expected time ($O(t/B)$ expected number of I/Os) with high probability. The space cost becomes superlinear and can be reduced to linear for many special practical cases. If we choose a PAM structure with $O(1)$ amortized time for batched predecessor queries (under no assumption on the distributions of the $x$ and $y$ coordinates), the MLR-tree supports batched skyline queries in optimal $O(t)$ amortized time; however, the space becomes exponential. In the dynamic case, the update time complexity is affected by an $O(\log^{2}N)$ factor.
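
As a concrete illustration of the definitions in the abstract (dominance, $SKY(P)$, and a $3$-sided range $[a,b]\times(-\infty,d]$), below is a minimal Python sketch that computes a 2-d skyline under minimization with a naive sort-and-sweep and answers a 3-sided range skyline query by filtering first. This is not the MLR-tree: the sketch costs $O(N\log N)$ per query rather than the paper's $O(t)$ expected bound, and the point set and query bounds are made-up examples.

# Minimal sketch of 2-d dominance, SKY(P), and a naive 3-sided range
# skyline query under minimization in both coordinates.
# NOT the paper's MLR-tree: this scans all of P, so each query costs
# O(N log N) instead of the O(t) expected time achieved in the paper.

from typing import List, Tuple

Point = Tuple[int, int]

def dominates(p: Point, q: Point) -> bool:
    """p dominates q if p is <= q in every coordinate and strictly < in at least one."""
    return p[0] <= q[0] and p[1] <= q[1] and p != q

def skyline(points: List[Point]) -> List[Point]:
    """Return SKY(P): the points of P not dominated by any other point.

    Sweep the points in increasing x-order; a point is on the skyline
    iff its y-value is strictly smaller than every y seen so far.
    """
    result = []
    best_y = float("inf")
    for x, y in sorted(points):
        if y < best_y:
            result.append((x, y))
            best_y = y
    return result

def range_skyline_3sided(points: List[Point], a: int, b: int, d: int) -> List[Point]:
    """Skyline of the points lying in the 3-sided range [a, b] x (-inf, d]."""
    in_range = [(x, y) for x, y in points if a <= x <= b and y <= d]
    return skyline(in_range)

if __name__ == "__main__":
    P = [(1, 9), (2, 4), (3, 7), (4, 3), (6, 6), (7, 1), (8, 5)]
    print(skyline(P))                        # [(1, 9), (2, 4), (4, 3), (7, 1)]
    print(range_skyline_3sided(P, 2, 7, 6))  # [(2, 4), (4, 3), (7, 1)]

The gap this sketch leaves open is exactly what the MLR-tree addresses: answering the 3-sided query by touching only the $t$ reported skyline points (plus predecessor searches for $d$ in the PAM structure) instead of scanning all of $P$.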
