
Abstract

With the rapid development of the Internet, more and more applications (apps) play important roles in various aspects of daily life and industry. Among them, mobile apps and web apps are dominant. To tackle the challenges of ensuring app quality, many approaches have been adopted for app GUI testing, including random and model-based techniques. However, existing approaches still fall short in reaching high code coverage, constructing high-quality models, and achieving generalizability. Moreover, current approaches depend heavily on the execution platform (i.e., Android or Web). Apps on distinct platforms share commonalities in GUI design, which inspires us to propose a platform-independent approach built on computer vision algorithms. In this paper, we propose UniRLTest, a reinforcement learning based approach that uses a universal framework with computer vision algorithms to conduct automated testing on apps from different platforms. UniRLTest extracts GUI widgets from GUI pages, characterizes the corresponding GUI layouts, and embeds the GUI pages as states. It explores apps under the guidance of a newly designed curiosity-driven strategy, which uses a Q-network to estimate the values of specific states and actions so as to encourage more exploration of uncovered pages, without platform dependency. The similarity between state embeddings is used to compute the reward of each exploration step. We conduct an empirical study on 20 mobile apps and 5 web apps, and the results show that UniRLTest outperforms the baselines, especially in the exploration of new states.
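The abstract does not spell out the exact reward or value-update formulas, so the following is only a minimal sketch of the general idea: a curiosity reward that is high when a newly reached GUI state embedding is dissimilar to all previously visited ones, combined with a Q-style value update. The names `curiosity_reward` and `LinearQ`, the use of cosine similarity, and the linear stand-in for the Q-network are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np


def cosine_similarity(a, b):
    """Cosine similarity between two state-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


def curiosity_reward(state_embedding, visited_embeddings):
    """Reward is high when the new GUI state differs from every state seen so far.

    Assumption: reward = 1 - max similarity to visited states; the paper's exact
    formulation may differ.
    """
    if not visited_embeddings:
        return 1.0
    max_sim = max(cosine_similarity(state_embedding, v) for v in visited_embeddings)
    return 1.0 - max_sim


class LinearQ:
    """Linear stand-in for the Q-network: Q(s, a) = w_a . s (illustrative only)."""

    def __init__(self, state_dim, n_actions, lr=0.01, gamma=0.9):
        self.w = np.zeros((n_actions, state_dim))
        self.lr, self.gamma = lr, gamma

    def q_values(self, state):
        return self.w @ state

    def update(self, state, action, reward, next_state):
        # Standard temporal-difference update toward reward + gamma * max_a' Q(s', a').
        target = reward + self.gamma * np.max(self.q_values(next_state))
        td_error = target - self.q_values(state)[action]
        self.w[action] += self.lr * td_error * state


# Sketch of one exploration step: pick the highest-valued action, observe the next
# GUI state embedding, compute the curiosity reward, and update the value estimate.
q = LinearQ(state_dim=64, n_actions=10)
visited = []
state = np.random.rand(64)          # embedding of the current GUI page
action = int(np.argmax(q.q_values(state)))
next_state = np.random.rand(64)     # embedding of the page reached after the action
reward = curiosity_reward(next_state, visited)
q.update(state, action, reward, next_state)
visited.append(next_state)
```

The design choice illustrated here is that exploration is driven purely by how novel the resulting GUI state looks in embedding space, which is what makes the strategy independent of any platform-specific instrumentation.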
