
Abstract

Information extraction (IE) for visually-rich documents (VRDs) has recently achieved state-of-the-art (SOTA) performance thanks to the adaptation of Transformer-based language models, demonstrating the great potential of pre-training methods. In this paper, we present a new approach to improve the capability of language-model pre-training on VRDs. First, we introduce a new query-based IE model that employs span extraction instead of the common sequence-labeling approach. Second, to further extend the span-extraction formulation, we propose a new training task that focuses on modelling the relationships among semantic entities within a document. This task enables target spans to be extracted recursively and can be used either to pre-train the model or as a downstream IE task. Evaluation on three datasets of popular business documents (invoices and receipts) shows that our proposed method achieves significant improvements over existing models. The method also provides a mechanism for accumulating knowledge from multiple downstream IE tasks.
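The abstract does not include implementation details, but the core idea of replacing token-level sequence labeling with query-conditioned span extraction can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch example: the class name SpanExtractionHead, the fusion of token and query representations, and the start/end scoring layers are assumptions made for illustration, not the authors' actual architecture.

```python
import torch
import torch.nn as nn


class SpanExtractionHead(nn.Module):
    """Minimal query-conditioned span extraction head (illustrative sketch).

    Given encoder outputs for the document tokens and a pooled embedding of
    the query (e.g., the field to extract, such as an invoice date), it
    scores every token as a candidate span start and span end.
    """

    def __init__(self, hidden_size: int):
        super().__init__()
        # Score the concatenation of each token state with the query state.
        self.start_scorer = nn.Linear(2 * hidden_size, 1)
        self.end_scorer = nn.Linear(2 * hidden_size, 1)

    def forward(self, token_states: torch.Tensor, query_state: torch.Tensor):
        # token_states: (batch, seq_len, hidden)
        # query_state:  (batch, hidden)
        seq_len = token_states.size(1)
        query_expanded = query_state.unsqueeze(1).expand(-1, seq_len, -1)
        fused = torch.cat([token_states, query_expanded], dim=-1)
        start_logits = self.start_scorer(fused).squeeze(-1)  # (batch, seq_len)
        end_logits = self.end_scorer(fused).squeeze(-1)      # (batch, seq_len)
        return start_logits, end_logits


if __name__ == "__main__":
    batch, seq_len, hidden = 2, 128, 768
    head = SpanExtractionHead(hidden)
    tokens = torch.randn(batch, seq_len, hidden)  # stand-in encoder outputs
    query = torch.randn(batch, hidden)            # stand-in query embedding
    start_logits, end_logits = head(tokens, query)
    # The predicted span is the (start, end) pair with the highest scores.
    print(start_logits.argmax(-1), end_logits.argmax(-1))
```

Compared with sequence labeling, which assigns a tag to every token, this formulation only scores candidate start and end positions for a given query, and, per the abstract, the same span-extraction formulation can be applied recursively to model relationships among the extracted semantic entities.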
