Building an efficient OCR system for historical documents with little training data

Jiří Martínek, Ladislav Lenc and Pavel Král
Neural Computing and Applications (2020)



As the number of digitized historical documents has increased rapidly during the last few decades, efficient methods for information retrieval and knowledge extraction are necessary to make the data accessible. Such methods depend on optical character recognition (OCR), which converts document images into textual representations. Nowadays, OCR methods are often not adapted to the historical domain; moreover, they usually need a significant amount of annotated documents. Therefore, this paper introduces a set of methods that allows performing OCR on historical document images using only a small amount of real, manually annotated training data. The presented complete OCR system includes two main tasks: page layout analysis (including text block and line segmentation) and OCR itself. Our segmentation methods are based on fully convolutional networks, while the OCR approach utilizes recurrent neural networks. Both approaches represent the state of the art in their respective fields. We have created a novel real dataset for OCR from the Porta fontium portal. This corpus is freely available for research, and all proposed methods are evaluated on these data. We show that both the segmentation and OCR tasks are feasible with only a few annotated real data samples. The experiments aim at determining the best way to achieve good performance with the given small set of data. We also demonstrate that the obtained scores are comparable to or even better than those of several state-of-the-art systems. To sum up, this paper shows how to create an efficient OCR system for historical documents that needs only a small amount of annotated training data.
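The abstract mentions that the OCR stage uses recurrent neural networks. Such systems commonly emit a per-timestep character distribution that is decoded with a CTC-style collapse step (the paper itself does not specify its decoder, so this is only an illustrative sketch of the standard greedy variant, with hypothetical names and a conventional blank index of 0):

```python
BLANK = 0  # conventional CTC blank index (an assumption, not from the paper)

def greedy_ctc_decode(frame_argmax, alphabet):
    """Collapse per-frame best labels into a string (greedy CTC decoding).

    frame_argmax: list of int label indices, one per time step,
                  e.g. the argmax over the RNN's softmax outputs.
    alphabet: dict mapping non-blank label indices to characters.
    """
    out = []
    prev = None
    for idx in frame_argmax:
        # Emit a character only when it is not the blank symbol and
        # not an immediate repeat of the previous frame's label.
        if idx != BLANK and idx != prev:
            out.append(alphabet[idx])
        prev = idx
    return "".join(out)

# Example: frames [a, a, blank, b, b] collapse to "ab";
# the blank between two identical labels keeps a doubled letter,
# as in [1, 0, 1] -> "aa".
alphabet = {1: "a", 2: "b", 3: "c"}
print(greedy_ctc_decode([1, 1, 0, 2, 2], alphabet))  # prints "ab"
print(greedy_ctc_decode([1, 0, 1], alphabet))        # prints "aa"
```

This greedy pass is the cheapest decoding option; beam-search decoders (optionally with a language model) trade speed for accuracy on top of the same per-frame outputs.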



@ARTICLE{ncaa2020,
  author        = {Mart\'inek, J. and Lenc, L. and Kr\'al, P.},
  title         = {Building an efficient {OCR} system for historical documents with little training data},
  journal       = {Neural Computing and Applications},
  year          = {2020},
  pages         = {1--19},
  doi           = {10.1007/s00521-020-04910-x},
  document_type = {Article},
  issn          = {0941-0643},
  publisher     = {Springer},
  note          = {Received: 25 December 2019, Accepted: 06 April 2020, Published: 09 May 2020}
}