- The paper introduces Sensecape, an interactive system that leverages LLMs to facilitate multilevel exploration and dynamic sensemaking.
- It details features such as a canvas view with semantic zoom and a hierarchy view, which together support fluid information structuring and abstraction.
- A within-subject user study (N=12) shows that Sensecape helps users explore more concepts and organize knowledge more hierarchically, reducing cognitive load.
Sensecape: Enabling Multilevel Exploration and Sensemaking with LLMs
Introduction
"Sensecape: Enabling Multilevel Exploration and Sensemaking with LLMs" introduces Sensecape, an interactive system designed to enhance complex information tasks using LLMs. Sensecape specifically addresses the nonlinear nature of complex tasks that require spatial information arrangement. By allowing users to manage information through multilevel abstraction and switch seamlessly between exploration and sensemaking, Sensecape aims to empower users in complex information management.
System Overview
Sensecape is structured around two main views: the canvas view and the hierarchy view. The canvas view acts as a digital whiteboard, letting users add, group, and connect nodes to arrange information spatially. The hierarchy view offers an overview of the resulting structure, letting users navigate across abstraction levels (Figure 1).
Figure 1: An example workflow on canvas view. A user asks Sensecape to generate a list of questions by selecting the node.
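The paper does not publish Sensecape's data model, but a minimal sketch of how a canvas of connected nodes might be represented could look like the following (all type and field names are illustrative assumptions, not taken from the paper):

```typescript
// Hypothetical data model for a Sensecape-style canvas (names are assumptions).
interface CanvasNode {
  id: string;
  kind: "topic" | "response" | "note";  // what the node holds
  content: string;                       // full text shown at the deepest zoom level
  summary?: string;                      // shorter form used at intermediate zoom
  keyword?: string;                      // most compact form used when zoomed far out
  position: { x: number; y: number };    // spatial placement on the canvas
  parentId?: string;                     // grouping: nodes can be nested under a group node
}

interface CanvasEdge {
  id: string;
  source: string;  // id of the source node
  target: string;  // id of the target node
}

interface Canvas {
  id: string;
  title: string;
  nodes: CanvasNode[];
  edges: CanvasEdge[];
}
```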
Canvas View
In the canvas view, users interact with nodes to manage and manipulate information. The Expand Bar offers LLM-powered actions on a selected node, such as generating questions, explanations, and subtopics. A Text Extraction feature additionally helps break lengthy LLM-generated responses into smaller pieces that are easier to organize.
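The paper does not give the exact prompts or client code behind the Expand Bar; the sketch below shows one plausible way to request subtopics for a selected node using the OpenAI Node SDK (the helper name, prompt wording, and parsing are assumptions):

```typescript
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical helper: ask the model for subtopics of the selected node's text.
async function generateSubtopics(nodeText: string, count = 5): Promise<string[]> {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo", // a fast model fits short expansions like this
    messages: [
      {
        role: "user",
        content: `List ${count} subtopics of the following topic, one per line:\n\n${nodeText}`,
      },
    ],
  });
  const text = completion.choices[0].message.content ?? "";
  // Split the response into one subtopic per line, stripping list markers and blanks.
  return text
    .split("\n")
    .map((line) => line.replace(/^[-*\d.\s]+/, "").trim())
    .filter((line) => line.length > 0);
}
```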
A key feature is Semantic Zoom, which ties information granularity to the spatial layout: as users zoom out, the display shifts from detailed content to concise keywords, keeping information overload in check (Figure 2).

Figure 2: Semantic zoom functionality displaying varying granularity of information.
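Semantic zoom amounts to choosing a representation for each node based on the current zoom factor. A minimal sketch of that mapping follows; the thresholds, zoom convention, and field names are assumptions rather than values from the paper:

```typescript
// Pick what to render for a node given the canvas zoom factor.
// Assumed convention: zoom < 1 means zoomed out, zoom >= 1 means zoomed in.
type DetailLevel = "keyword" | "summary" | "full";

function detailForZoom(zoom: number): DetailLevel {
  if (zoom < 0.4) return "keyword"; // far out: show only a concise keyword
  if (zoom < 0.8) return "summary"; // mid-range: show a short summary
  return "full";                    // close up: show the full content
}

function renderText(
  node: { content: string; summary?: string; keyword?: string },
  zoom: number
): string {
  switch (detailForZoom(zoom)) {
    case "keyword":
      return node.keyword ?? node.summary ?? node.content;
    case "summary":
      return node.summary ?? node.content;
    case "full":
      return node.content;
  }
}
```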
Hierarchy View
The hierarchy view provides a macro perspective of the information landscape, visualizing relationships between layers of information that are essential for understanding complex topics. Users can expand, restructure, and navigate the hierarchy, supporting higher-level comprehension and organization (Figure 3).
Figure 3: Hierarchy view; users can add a canvas above or a new hierarchy on the side.
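The hierarchy view can be thought of as a tree whose entries point to canvases. A minimal sketch of such a tree and a traversal that produces an indented outline of the information landscape (all names are assumptions):

```typescript
// Hypothetical hierarchy: each entry references a canvas and may have child entries.
interface HierarchyNode {
  canvasId: string;
  title: string;
  children: HierarchyNode[];
}

// Walk the hierarchy and return each canvas title with its depth,
// e.g. for rendering an indented outline in the hierarchy view.
function outline(node: HierarchyNode, depth = 0): Array<{ title: string; depth: number }> {
  return [
    { title: node.title, depth },
    ...node.children.flatMap((child) => outline(child, depth + 1)),
  ];
}
```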
Implementation
Implemented with React, Sensecape uses the gpt-3.5-turbo and gpt-4 models for its LLM features, trading off between them per feature: gpt-3.5-turbo where response speed matters most, and gpt-4 where more complex prompts call for higher-quality responses.
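The paper notes that the two models serve different purposes but does not list which feature uses which; the sketch below shows how such per-feature routing might look (the feature names and assignments are assumptions):

```typescript
// Hypothetical routing table: quick expansions go to the faster model,
// longer open-ended generation goes to the more capable one.
type Feature = "subtopics" | "questions" | "explanation";

function modelFor(feature: Feature): "gpt-3.5-turbo" | "gpt-4" {
  switch (feature) {
    case "subtopics":
    case "questions":
      return "gpt-3.5-turbo"; // latency-sensitive, short outputs
    case "explanation":
      return "gpt-4";         // longer, more complex responses
  }
}
```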
Evaluation
A within-subject user study with 12 participants was conducted to evaluate Sensecape against a baseline: an integrated environment with a conversational interface and a canvas. Participants explored given topics, organizing the information they gathered to build a deeper understanding.
Findings
- Exploration: Participants explored significantly more concepts with Sensecape than with the baseline interface, averaging 68.3 concepts, a marked increase in the breadth of information covered (Figure 4).
Figure 4: When using Sensecape, participants explored more concepts and organized knowledge more hierarchically.
- Sensemaking: Sensecape supported deeper hierarchical structuring; participants created substantially more levels of abstraction, which helped them build an understanding of complex topics.
- Utility Perception: Participants valued the Expand Bar for directing their exploration effectively, and found Semantic Zoom and the hierarchical organization pivotal in managing cognitive load.
Figure 5: Evaluation of Sensecape's features.
Discussion
The results show that Sensecape enhances complex information exploration and sensemaking, allowing users to engage effectively with nonlinear workflows. This approach could be particularly valuable in academic research and multifaceted planning tasks, where structured cognitive mapping is needed.
Conclusion
Sensecape represents a significant advancement in using LLMs for complex information tasks. By facilitating multilevel exploration and enabling dynamic sensemaking processes, Sensecape demonstrates the potential of LLM-integrated systems to transform how complex information is navigated and understood. Future work could extend collaborative capabilities and integrate more intelligent assistance to further enhance its applicability across broader contexts.