
Abstract

Owing to its powerful automatic feature extraction, deep learning (DL) has been widely used for source code vulnerability detection. However, although DL-based detectors perform well on artificial datasets, their performance on real-world vulnerabilities is unsatisfactory because real-world samples are far more complex. In this paper, we propose to train DL-based vulnerability detection models in a human-learning manner, that is, starting with the simplest samples and gradually transitioning to more difficult knowledge. Specifically, we design a novel framework (Humer) that enhances the detection ability of DL-based vulnerability detectors. To validate the effectiveness of Humer, we select five state-of-the-art DL-based vulnerability detection models (TokenCNN, VulDeePecker, StatementGRU, ASTGRU, and Devign) for our evaluation. The results show that Humer increases the F1 score of these models by an average of 10.5% and enables them to detect up to 16.7% more real-world vulnerabilities. We also conduct a case study that applies these enhanced DL-based detectors to uncover vulnerabilities in real-world open-source products. In total, we discover 281 vulnerabilities not reported in the NVD, of which 98 have been silently patched by vendors in the latest versions of the corresponding products, while 159 still exist in the products.
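
The "human-learning manner" described in the abstract corresponds to curriculum-style training: rank samples by an estimated difficulty and grow the training pool from easy to hard over the course of training. The sketch below illustrates that general idea in PyTorch; the difficulty score, pacing schedule, model, and data here are illustrative assumptions, not Humer's actual components.

```python
# Minimal sketch of curriculum-style ("easy to hard") training.
# The difficulty proxy, pacing schedule, and model below are assumptions
# for illustration only; they do not reproduce Humer's design.
import torch
from torch import nn
from torch.utils.data import DataLoader, Subset, TensorDataset

# Synthetic stand-in for vectorized code samples (e.g., token embeddings) and labels.
X = torch.randn(1000, 64)
y = (X.sum(dim=1) > 0).long()
dataset = TensorDataset(X, y)

# Hypothetical difficulty score: sample norm as a proxy for code-sample complexity.
difficulty = X.norm(dim=1)
order = torch.argsort(difficulty)  # easiest samples first

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

epochs = 10
for epoch in range(epochs):
    # Linear pacing: begin with the easiest 20% of samples, end with the full set.
    frac = min(1.0, 0.2 + 0.8 * epoch / (epochs - 1))
    pool = Subset(dataset, order[: int(frac * len(dataset))].tolist())
    for xb, yb in DataLoader(pool, batch_size=32, shuffle=True):
        optimizer.zero_grad()
        loss_fn(model(xb), yb).backward()
        optimizer.step()
```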
