Abstract:
|
[EN] The evaluation of Handwritten Text Recognition (HTR) systems has traditionally used metrics based on the edit distance between HTR and ground truth (GT) transcripts, at both the character and word levels. This is very adequate when the experimental protocol assumes that both GT and HTR text lines are the same, which allows edit distances to be computed independently for each given line. Driven by recent advances in pattern recognition, HTR systems increasingly face the end-to-end page-level transcription of a document, where the precision of locating the different text lines and their corresponding reading order (RO) plays a key role. In such a case, the standard metrics do not take into account the inconsistencies that might appear. In this paper, the problem of evaluating HTR systems at the page level is introduced in detail. We analyse the convenience of using a two-fold evaluation, where the transcription accuracy and the RO goodness are considered separately. Different alternatives are proposed, analysed and empirically compared, both through partially simulated and through real, full end-to-end experiments. Results support the validity of the proposed two-fold evaluation approach. An important conclusion is that such an evaluation can be adequately achieved with just two simple and well-known metrics: the Word Error Rate (WER), which takes transcription sequentiality into account, and the here re-formulated Bag of Words Word Error Rate (bWER), which ignores order. While the latter directly and very accurately assesses intrinsic word recognition errors, the difference between both metrics (ΔWER) gracefully correlates with the Normalised Spearman's Foot Rule Distance (NSFD), a metric which explicitly measures RO errors associated with layout analysis flaws. To arrive at these conclusions, we have introduced another metric called Hungarian Word Error Rate (hWER), based on a here proposed regularised version of the Hungarian Algorithm.
This metric is shown to be always almost identical to bWER, and both bWER and hWER are also almost identical to WER whenever HTR transcripts and GT references are guaranteed to be in the same RO.
|
Thanks:
|
This paper is part of the I+D+i projects PID2020-118447RA-I00 (MultiScore) and PID2020-116813RB-I00a (SimancasSearch), funded by MCIN/AEI/10.13039/501100011033. The first author's research was developed in part within the Valencian Graduate School and Research Network of Artificial Intelligence (valgrAI, co-funded by Generalitat Valenciana and the European Union). The second author is supported by a Maria Zambrano grant from the Spanish Ministerio de Universidades and the European Union NextGenerationEU/PRTR. The third author is supported by grant ACIF/2021/356 from the "Programa I+D+i de la Generalitat Valenciana".
|