
Abstract

Automated Static Analysis Tools (ASATs) have evolved over time to assist in detecting bugs. However, excessive false warnings can undermine developers' productivity and their confidence in the tools. Previous research efforts have explored learning-based methods to validate the reported warnings. Nevertheless, these methods operate at a coarse granularity, focusing on either long-lived warnings or function-level alerts, and are therefore insensitive to individual bugs. Moreover, they rely on manually crafted features or solely on source-code semantics, neither of which is adequate for effective learning. In this paper, we propose FineWAVE, a learning-based approach that verifies bug-sensitive warnings at a fine-grained granularity. Specifically, we design a novel LSTM-based model that captures the multi-modal semantics of source code and warnings from ASATs and highlights their correlations with cross-attention. To address the scarcity of data for training and evaluation, we collected a large-scale dataset of 280,273 warnings. We conducted extensive experiments on this dataset to evaluate FineWAVE. The experimental results demonstrate the effectiveness of our approach, with an F1-score of 97.79% for reducing false alarms and 67.06% for confirming actual warnings, significantly outperforming all baselines. Moreover, we applied FineWAVE to four popular real-world projects, where it filtered out about 92% of the warnings and helped us find 25 new bugs with minimal manual effort.
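To make the described architecture concrete, the following is a minimal sketch, assuming a PyTorch implementation: two bidirectional LSTM encoders (one per modality, for source-code tokens and warning tokens) joined by cross-attention, feeding a binary classifier that labels each warning as an actual bug or a false alarm. All class names, dimensions, and the pooling choice are illustrative assumptions, not FineWAVE's actual code.

```python
# Hypothetical sketch of the abstract's architecture: per-modality LSTM
# encoders plus cross-attention, followed by a binary warning classifier.
# Names and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn

class WarningVerifier(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 256, heads: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        # Separate encoders for source-code tokens and warning tokens.
        self.code_lstm = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.warn_lstm = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        # Cross-attention: warning states query the code states, highlighting
        # the code regions correlated with the reported warning.
        self.cross_attn = nn.MultiheadAttention(2 * dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, 2)  # actual bug vs. false alarm

    def forward(self, code_ids: torch.Tensor, warn_ids: torch.Tensor) -> torch.Tensor:
        code_h, _ = self.code_lstm(self.embed(code_ids))    # (B, Tc, 2*dim)
        warn_h, _ = self.warn_lstm(self.embed(warn_ids))    # (B, Tw, 2*dim)
        fused, _ = self.cross_attn(warn_h, code_h, code_h)  # (B, Tw, 2*dim)
        return self.classifier(fused.mean(dim=1))           # (B, 2) logits

# Usage with toy token-id inputs (batch of 8 warnings):
model = WarningVerifier(vocab_size=10_000)
logits = model(torch.randint(0, 10_000, (8, 120)),
               torch.randint(0, 10_000, (8, 30)))
```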
