I’m using the OCRTesseract API from OpenCV (strictly speaking OpenCVSharp). The problem is that in the font I’m trying to recognize, a lowercase 'l' and an uppercase 'I' look practically identical. Even though I baked a dictionary into my fine-tuned .traineddata, I still get 'CeII' instead of 'Cell'.
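For context, this is roughly what I’m doing today. The binding names are from memory, so treat it as a sketch of my OpenCvSharp usage rather than exact signatures, and the model name and file paths are made up:

```csharp
using System;
using OpenCvSharp;
using OpenCvSharp.Text;

// Rough sketch of my current pipeline (binding names from memory).
using var img = Cv2.ImRead("label.png", ImreadModes.Grayscale);

// The Create factory only seems to accept data path, language,
// char whitelist, OEM and page segmentation mode - nothing else.
var ocr = OCRTesseract.Create(
    "./tessdata",   // folder containing my fine-tuned .traineddata
    "myfont",       // made-up name of the fine-tuned model
    null,           // char whitelist (no help here: 'l' and 'I' are both valid)
    3,              // OEM
    7);             // PSM 7 = treat the image as a single text line

ocr.Run(img, out var text, out _, out _, out _);
Console.WriteLine(text);   // prints "CeII" where the image actually says "Cell"
```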
I’d like to experiment with parameters like segment_penalty_dict_nonword, but the OpenCV Tesseract abstraction does not seem to expose them. Is that really the case? I can’t quite believe there is no way to set these parameters through the OpenCV API.
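To illustrate what I mean: with the standalone Tesseract .NET wrapper (the Tesseract NuGet package, assuming I remember its API correctly) I could set such variables directly, and that’s the kind of knob I’m looking for in the OpenCV abstraction:

```csharp
using System;
using Tesseract;

// What I'd like to be able to do: set Tesseract config variables directly.
// (Standalone Tesseract .NET wrapper, not OpenCvSharp; the values below are
// arbitrary examples, not recommended settings.)
using var engine = new TesseractEngine("./tessdata", "myfont", EngineMode.Default);
engine.SetVariable("segment_penalty_dict_nonword", "2.0");
engine.SetVariable("language_model_penalty_non_dict_word", "0.5");

using var pix = Pix.LoadFromFile("label.png");
using var page = engine.Process(pix);
Console.WriteLine(page.GetText());   // hopefully "Cell" this time
```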
Thanks!!