WebLINX: Real-World Website Navigation with Multi-Turn Dialogue

(2402.05930)
Published Feb 8, 2024 in cs.CL, cs.CV, and cs.LG

Abstract

We propose the problem of conversational web navigation, where a digital agent controls a web browser and follows user instructions to solve real-world tasks in a multi-turn dialogue fashion. To support this problem, we introduce WEBLINX - a large-scale benchmark of 100K interactions across 2300 expert demonstrations of conversational web navigation. Our benchmark covers a broad range of patterns on over 150 real-world websites and can be used to train and evaluate agents in diverse scenarios. Due to the magnitude of information present, LLMs cannot process entire web pages in real-time. To solve this bottleneck, we design a retrieval-inspired model that efficiently prunes HTML pages by ranking relevant elements. We use the selected elements, along with screenshots and action history, to assess a variety of models for their ability to replicate human behavior when navigating the web. Our experiments span from small text-only to proprietary multimodal LLMs. We find that smaller finetuned decoders surpass the best zero-shot LLMs (including GPT-4V), but also larger finetuned multimodal models which were explicitly pretrained on screenshots. However, all finetuned models struggle to generalize to unseen websites. Our findings highlight the need for large multimodal models that can generalize to novel settings. Our code, data and models are available for research: https://mcgill-nlp.github.io/weblinx

Figure: Conversational web navigation task showing communication between the instructor and the navigator using natural language and website data.

Overview

  • The paper introduces WebLINX, a benchmark for training and evaluating digital agents on conversational web navigation, together with a retrieval-inspired method called Dense Markup Ranking (DMR) for pruning web pages.

  • WebLINX comprises 100K actions and utterances from 2,300 expert demonstrations across 155 real-world websites, and DMR enables efficient pruning of web pages, improving agent performance.

  • The study evaluates models ranging from small text-only decoders to multimodal LLMs, finding that smaller fine-tuned text-only decoders outperform both the best zero-shot LLMs (including GPT-4V) and larger fine-tuned multimodal models.

  • The paper identifies challenges in generalizing to unseen websites and scenarios, and highlights the need for multimodal models that better integrate visual information.

Introduction

In the rapidly evolving field of conversational web navigation, researchers are pushing the boundaries to create digital agents that can navigate websites and perform tasks based on user instructions. The paper "WebLINX: Real-World Website Navigation with Multi-Turn Dialogue" introduces an innovative approach to this problem. By leveraging LLMs and a method for efficiently pruning web pages, it marks a significant step toward making digital agents more versatile and effective at real-world web navigation tasks.

Benchmarking Conversational Web Navigation

The core contribution of the study is the introduction of WebLINX, a comprehensive benchmark designed for training and evaluating agents on conversational web navigation tasks. It encompasses a dataset of 100K actions and utterances from 2,300 expert demonstrations across 155 real-world websites, covering a broad spectrum of interaction patterns and scenarios.

One of the major challenges in conversational web navigation is efficiently understanding and manipulating the vast amount of information on web pages. Approaches that feed entire HTML pages directly to LLMs are not feasible, given the models' input-size limits and the need for real-time processing. To address this, the researchers developed a retrieval-inspired method called Dense Markup Ranking (DMR), which ranks webpage elements by relevance to the dialogue context. The model can then focus on the pertinent parts of the page, improving both efficiency and performance.
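To make the retrieval-style pruning idea concrete, the following is a minimal sketch of ranking serialized DOM elements against the dialogue context with a dual encoder. The encoder choice, element serialization, and cosine-similarity scoring here are illustrative assumptions, not the paper's exact DMR training or scoring setup.

```python
# A minimal sketch of retrieval-style DOM pruning in the spirit of Dense Markup
# Ranking: embed serialized candidate elements and the dialogue context with the
# same encoder, then keep the top-k most similar elements. The encoder name,
# element serialization, and similarity function are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder dual encoder

def prune_dom(elements: list[str], dialogue_context: str, k: int = 10) -> list[str]:
    """Rank serialized DOM elements against the dialogue context and keep the top-k."""
    element_emb = encoder.encode(elements, convert_to_tensor=True)
    context_emb = encoder.encode(dialogue_context, convert_to_tensor=True)
    scores = util.cos_sim(context_emb, element_emb)[0]  # shape: (num_elements,)
    top = scores.topk(min(k, len(elements)))
    return [elements[i] for i in top.indices.tolist()]

# Example: candidate elements serialized as short tag/attribute/text strings.
elements = [
    '<button id="search-btn">Search</button>',
    '<input name="q" placeholder="Search flights">',
    '<a href="/careers">Careers</a>',
]
context = "user: find me a flight from Montreal to Tokyo next Friday"
print(prune_dom(elements, context, k=2))
```

In this sketch the same encoder embeds both the dialogue context and each element, so ranking reduces to a nearest-neighbor lookup that can run in real time even on large pages.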

Evaluating Model Performance

The paper evaluates a range of models, from small text-only decoders to large multimodal LLMs that process both text and visual inputs. The experiments show that smaller fine-tuned decoders outperform much larger zero-shot LLMs on this task. However, all fine-tuned models struggle to generalize to unseen websites and scenarios, highlighting the need for improvements in robustness and adaptability.
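For concreteness, here is a hedged sketch of how the DMR-selected elements, action history, and dialogue might be flattened into a single text prompt for a text-only decoder. The template, section headers, and action vocabulary shown are illustrative assumptions rather than the paper's exact input format.

```python
# A hedged sketch of assembling dialogue, action history, and pruned elements
# into one text prompt for a text-only model. Field names, layout, and the
# example action names are illustrative assumptions.
def build_prompt(utterances: list[str],
                 actions: list[str],
                 pruned_elements: list[str]) -> str:
    parts = ["# Dialogue"]
    parts += [f"user: {u}" for u in utterances]
    parts.append("# Action history")
    parts += [f"- {a}" for a in actions]
    parts.append("# Candidate elements (pruned)")
    parts += [f"[{i}] {el}" for i, el in enumerate(pruned_elements)]
    parts.append("# Next action (e.g. click(uid), textinput(uid, text), say(text)):")
    return "\n".join(parts)

prompt = build_prompt(
    utterances=["find me a flight from Montreal to Tokyo next Friday"],
    actions=['load("https://example-airline.com")'],  # hypothetical prior action
    pruned_elements=['<input name="q" placeholder="Search flights">'],
)
print(prompt)
```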

A key finding is that multimodal input does not automatically help: the fine-tuned Fuyu-8B and the zero-shot GPT-4V both fall short of the best fine-tuned text-only models, suggesting that current multimodal models may not fully leverage the additional visual information for these tasks. Moreover, fine-tuned models significantly outperform their zero-shot counterparts, underscoring the importance of task-specific fine-tuning.

Challenges and Future Directions

The study underscores several challenges and potential areas for further research. First, models need to better understand and act on the dynamic elements of web pages, such as content that updates after actions like clicks. Second, models must generalize better to novel websites and scenarios, which remains a significant hurdle.

Furthermore, the paper calls for advancements in how multimodal models integrate textual and visual information. Despite the inclusion of screenshots, current models do not consistently translate this additional input into better performance, suggesting a gap in how they process and use multimodal data.

Conclusion

"WEB LINX: Real-World Website Navigation with Multi-Turn Dialogue" makes an important contribution to the field of AI and conversational agents by presenting a robust benchmark for conversational web navigation and evaluating a range of models on this task. While highlighting the effectiveness of fine-tuned text-only decoders, it also outlines significant challenges that must be addressed to enhance the versatility and generalization capabilities of digital agents. The findings from this study set the stage for future research into more advanced models that can seamlessly navigate the complex landscape of the web, making conversational agents more powerful and user-friendly.
