Poster · Open Access

Nynke van 't Hof; Vera Provatorova; Mirjam Cuper; Evangelos Kanoulas
High-quality OCR (Optical Character Recognition) output has many benefits: documents become more accessible to readers, and NLP tasks can thrive on the data. However, for many reasons, such as the physical condition of the documents, the OCR output of historical documents suffers from a significant number of errors. This study focuses on detecting and correcting these errors after the OCR process has taken place, with a focus on Dutch historical data. We compare the performance of two methods that have frequently been used for this task in recent years: word2vec and BERT. While BERT has been shown to substantially outperform word2vec on OCR post-correction, the reasons behind this performance gap remain under-explored. From the related literature, we collected several general pitfalls of word2vec. This study attempts to find where these pitfalls occur and to examine whether BERT suffers less (or more) from them than word2vec does. This gives insight not only into the advantages and disadvantages of these word embeddings for OCR post-correction (on historical data), but also into the application of state-of-the-art methods to historical data, something these methods have often not been trained on or designed for specifically.
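To make the comparison concrete, here is a minimal sketch (not the authors' code) of how each embedding type can propose corrections for a suspect OCR token. The model name `GroNLP/bert-base-dutch-cased`, the word2vec file `dutch_w2v.kv`, and the example sentence are illustrative assumptions. The sketch also shows one word2vec pitfall of the kind discussed above: an OCR misspelling is typically out-of-vocabulary, so word2vec has no vector for it at all, whereas BERT can mask the token and predict a replacement from its context.

```python
# Sketch: proposing corrections for a suspect OCR token with
# (a) word2vec nearest neighbours and (b) BERT masked-token prediction.
from gensim.models import KeyedVectors
from transformers import pipeline

sentence = "de boekdrukkunst werd in de vijftiende ecuw uitgevonden"
suspect = "ecuw"  # OCR error for "eeuw" (century)

# (a) word2vec: rank vocabulary items by cosine similarity to the suspect
# token; this fails outright when the misspelling is out-of-vocabulary.
w2v = KeyedVectors.load("dutch_w2v.kv")  # hypothetical pretrained vectors
if suspect in w2v.key_to_index:
    print(w2v.most_similar(suspect, topn=5))
else:
    print("OOV: word2vec has no vector for", suspect)

# (b) BERT: mask the suspect token and let the masked language model
# propose replacements based on the surrounding context.
fill = pipeline("fill-mask", model="GroNLP/bert-base-dutch-cased")
masked = sentence.replace(suspect, fill.tokenizer.mask_token)
for cand in fill(masked, top_k=5):
    print(cand["token_str"], round(cand["score"], 3))
```

In practice a post-correction pipeline would combine such candidate lists with string-similarity filtering (e.g. edit distance to the OCR token), but the sketch already shows why contextual prediction sidesteps the out-of-vocabulary problem that static embeddings face.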
| Name | Size |
|---|---|
| Abstract.pdf (md5:68e6cda4da3751b0eb15a2dd737e43e0) | 95.1 kB |
| Poster.pdf (md5:0a02d38d4790895c7723845f9c80f14e) | 387.7 kB |
| | All versions | This version |
|---|---|---|
| Views | 197 | 197 |
| Downloads | 176 | 176 |
| Data volume | 26.4 MB | 26.4 MB |
| Unique views | 177 | 177 |
| Unique downloads | 141 | 141 |