Summarizing Large Query Logs in Ettu (1608.01013v1)

Published 2 Aug 2016 in cs.DB

Abstract: Database access logs are large, unwieldy, and hard for humans to inspect and summarize. In spite of this, they remain the canonical go-to resource for tasks ranging from performance tuning to security auditing. In this paper, we address the challenge of compactly encoding large sequences of SQL queries for presentation to a human user. Our approach is based on the Weisfeiler-Lehman (WL) approximate graph isomorphism algorithm, which identifies salient features of a graph, or, in our case, of an abstract syntax tree. Our generalization of WL allows us to define a distance metric for SQL queries, which in turn permits automated clustering of queries. We also present two techniques for visualizing query clusters, and an algorithm that allows these visualizations to be constructed at interactive speeds. Finally, we evaluate our algorithms in the context of a motivating example: insider threat detection at a large US bank. We show experimentally on real-world query logs that (a) our distance metric captures a meaningful notion of similarity, and (b) the log summarization process is scalable and performant.
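
The core idea in the abstract, a WL-style relabeling of SQL abstract syntax trees that yields a label multiset per query and a pairwise distance between queries, can be sketched roughly as follows. This is an illustrative sketch only, not the Ettu implementation: the tuple-based AST encoding, the hash-and-truncate relabeling, and the Jaccard-style distance are all assumptions made for the example.

```python
from collections import Counter
import hashlib

# Illustrative sketch of WL-style feature extraction over SQL abstract
# syntax trees. The tuple AST encoding, label hashing, and Jaccard
# distance are assumptions for this example, not Ettu's actual code.

def wl_features(node, iterations=2):
    """Collect Weisfeiler-Lehman labels from a rooted AST.

    Each refinement round replaces a node's label with a hash of its old
    label plus the sorted old labels of its children; every label seen at
    every round is recorded, giving a multiset feature vector per query.
    """
    def relabel(n):
        label, children = n
        child_labels = ",".join(sorted(c[0] for c in children))
        new_label = hashlib.md5(f"{label}|{child_labels}".encode()).hexdigest()[:8]
        return (new_label, [relabel(c) for c in children])

    def record(n, counter):
        label, children = n
        counter[label] += 1
        for c in children:
            record(c, counter)

    features = Counter()
    current = node
    for i in range(iterations + 1):
        record(current, features)
        if i < iterations:
            current = relabel(current)
    return features

def wl_distance(tree_a, tree_b, iterations=2):
    """Distance as 1 minus the Jaccard similarity of the WL label multisets."""
    fa = wl_features(tree_a, iterations)
    fb = wl_features(tree_b, iterations)
    union = sum((fa | fb).values())
    if union == 0:
        return 0.0
    return 1.0 - sum((fa & fb).values()) / union

# Hypothetical ASTs for two structurally similar queries:
#   SELECT name  FROM users
#   SELECT email FROM users
q1 = ("SELECT", [("COLUMN:name", []), ("FROM", [("TABLE:users", [])])])
q2 = ("SELECT", [("COLUMN:email", []), ("FROM", [("TABLE:users", [])])])

print(wl_distance(q1, q2))  # small value: same tree shape, one differing leaf
```

A pairwise distance of this form could then feed a standard clustering routine (for example, hierarchical agglomerative clustering over the distance matrix) to group similar queries, which is the kind of automated clustering the abstract describes; the specific clustering choice here is an assumption, not taken from the paper.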

Citations (4)
