
Abstract

Modern Python projects often implement computationally intensive functions in native libraries and expose Python interfaces to them for speed; testing these libraries is therefore critical to a project's robustness. One challenge is that existing approaches use coverage to guide test generation, yet native libraries run as black boxes to Python code and yield no execution information. Another is that dynamic binary instrumentation degrades testing performance, since it must monitor both the native libraries and the Python virtual machine (VM). To address these challenges, we propose in this paper an automated test case generation approach that works at the Python code layer. Our insight is that many path conditions in native libraries exist to process input data structures through interactions with the VM. Our approach instruments the Python interpreter to monitor the interactions between native libraries and the VM, derives constraints on the input structures, and then uses these constraints to guide test case generation. We implement the approach in a tool named PyCing and apply it to six widely used Python projects. The experimental results show that, with structure-constraint guidance, PyCing covers more execution paths than existing test cases and state-of-the-art tools. Moreover, with the checkers in the Pytest testing framework, PyCing identifies segmentation faults in 10 Python interfaces and memory leaks in 9. Our instrumentation strategy also has an acceptable impact on testing efficiency.
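To make the core idea concrete, below is a minimal, hypothetical sketch of structure-constraint-guided input generation in the spirit of the abstract. The constraint format, the generator functions, and the use of `sum` as a stand-in target are illustrative assumptions, not PyCing's actual interfaces or subjects.

```python
# Hypothetical sketch: a native function was observed (via interpreter
# instrumentation) checking its argument's structure through C API calls
# such as PyList_Check and PyLong_Check, implying "a list of ints" as the
# happy-path input structure. The constraint encoding below is assumed.
import random

CONSTRAINT = {"type": list, "elem_type": int, "min_len": 0, "max_len": 8}

def gen_conforming(c):
    """Generate an input satisfying the derived structure constraint."""
    n = random.randint(c["min_len"], c["max_len"])
    return [random.randint(-2**31, 2**31 - 1) for _ in range(n)]

def gen_violating(c):
    """Generate inputs that each break one facet of the constraint,
    steering execution into the native library's error-handling paths."""
    yield None              # wrong top-level type
    yield "not a list"      # wrong top-level type
    yield [1.5, 2.5]        # wrong element type
    yield [None]            # wrong element type
    yield [[]]              # nested structure the code may not expect

def fuzz(target, constraint, rounds=100):
    candidates = [gen_conforming(constraint) for _ in range(rounds)]
    candidates += list(gen_violating(constraint))
    for arg in candidates:
        try:
            target(arg)
        except (TypeError, ValueError):
            pass  # expected rejection path in the binding code
        # A segmentation fault in native code would kill the process; in
        # practice each call would run in a child process whose exit
        # status is checked, which is how such crashes are caught.

if __name__ == "__main__":
    fuzz(sum, CONSTRAINT)  # stand-in target: sum accepts an iterable of ints
```

The design intuition is that conforming inputs push execution past the structure checks into the deep paths of the native code, while targeted violations exercise the error-handling paths where crashes and memory leaks often hide.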
