
Abstract

Good OCR results for historical printings rely on the availability of recognition models trained on diplomatic transcriptions as ground truth, which is both a scarce resource and time-consuming to generate. Instead of having to train a separate model for each historical typeface, we propose a strategy that starts from models trained on a combined set of available transcriptions in a variety of fonts. These \emph{mixed models} achieve character accuracy rates above 90\% on a test set of printings from the same period of time but with no representation in the training data, demonstrating that the typography barrier can be overcome by generalizing from a few typefaces to a larger set of (similar) fonts in use over a period of time. The output of these mixed models is then used as a baseline to be further improved by both fully automatic methods and semi-automatic methods involving a minimal amount of manual transcription. In order to evaluate the recognition quality of each model in a series of models generated during the training process in the absence of any ground truth, we introduce two readily observable quantities that correlate well with true accuracy: the \emph{mean character confidence C} (as given by the OCR engine OCRopus) and the \emph{mean token lexicality L} (a distance measure of OCR tokens from modern wordforms that takes historical spelling patterns into account and can be calculated for any OCR engine). Whereas the fully automatic method improves upon the result of a mixed model by only 1-2 percentage points, as few as 100-200 hand-corrected lines lead to much better OCR results, with character error rates of only a few percent. This procedure minimizes the amount of ground truth production and does not depend on the prior construction of a specific typographic model.
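To make the two model-selection signals concrete, the sketch below shows one way they could be computed over a page of OCR output. It is a minimal illustration under stated assumptions, not the authors' implementation: the lexicon, the historical-to-modern spelling patterns, the length normalization of the edit distance, and the way per-character confidences are obtained from the engine are all assumptions made for the example.

```python
# Minimal sketch (assumptions, not the paper's exact metric): estimate the two
# model-selection signals from the abstract -- mean character confidence C and
# mean token lexicality L -- for one page of OCR output.

# Hypothetical historical-to-modern spelling patterns (illustrative examples only).
SPELLING_PATTERNS = [("\u017f", "s"), ("uu", "w"), ("v", "u"), ("ey", "ei")]


def normalize_historical(token: str) -> str:
    """Map common historical graphemes to modern equivalents before comparison."""
    t = token.lower()
    for old, new in SPELLING_PATTERNS:
        t = t.replace(old, new)
    return t


def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]


def token_lexicality(token: str, lexicon: set[str]) -> float:
    """Distance of one OCR token from the closest modern wordform,
    normalized by token length so values are comparable across tokens."""
    norm = normalize_historical(token)
    if not norm or not lexicon:
        return 0.0
    best = min(edit_distance(norm, w) for w in lexicon)
    return best / len(norm)


def mean_lexicality(tokens: list[str], lexicon: set[str]) -> float:
    """Mean token lexicality L over a page; lower means closer to the lexicon."""
    scored = [token_lexicality(t, lexicon) for t in tokens if t.isalpha()]
    return sum(scored) / len(scored) if scored else 0.0


def mean_confidence(char_confidences: list[float]) -> float:
    """Mean character confidence C, assuming per-character posteriors
    (e.g. as reported by OCRopus) have already been extracted."""
    return sum(char_confidences) / len(char_confidences) if char_confidences else 0.0


if __name__ == "__main__":
    # Toy data: a tiny modern lexicon, a few OCR tokens with historical spellings,
    # and made-up per-character confidences.
    lexicon = {"under", "way", "the", "old", "house"}
    tokens = ["vnder", "waye", "thee", "olde", "hou\u017fe"]
    confs = [0.93, 0.88, 0.97, 0.91]
    print(f"L = {mean_lexicality(tokens, lexicon):.3f}, C = {mean_confidence(confs):.3f}")
```

Among a series of model checkpoints, one would then prefer a checkpoint with high C and low L, since both quantities are reported to correlate well with true accuracy in the absence of ground truth.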
