Processing for text extraction

I am working on a project to extract information from driver’s licenses.
The extraction works well with some images and poorly with others.
I tried different image-processing filters to improve the quality of the extraction. They help with some images but not all, because the images vary in blur, brightness, and so on; I tried many kinds of filters but couldn't find one that works for every image.
So I computed the blur and brightness of all the images, hoping to find a criterion or condition for choosing a filter, but the results were inconclusive.
Any suggestions for deciding which filter I should use?
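For reference, the blur and brightness measurements mentioned above can be sketched like this (a common approach, assuming blur is estimated as variance of the Laplacian and brightness as mean grayscale intensity; this is not necessarily the exact method used):

import cv2
import numpy as np

def blur_and_brightness(image_bgr):
    """Return (blur_score, brightness) for a BGR image.

    blur_score: variance of the Laplacian -- lower means blurrier.
    brightness: mean grayscale intensity in [0, 255].
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blur_score = cv2.Laplacian(gray, cv2.CV_64F).var()
    brightness = gray.mean()
    return blur_score, brightness

Plotting these two scores for the images that extract well versus the ones that don't may reveal the threshold you were looking for.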

It sure would help if you provided example images, including ones that work and ones that don’t.

I used this filter:

import cv2

# Note: cv2.IMREAD_COLOR is a flag for cv2.imread, not a cvtColor
# conversion code, so load the image in color directly
# ("license.jpg" is a placeholder path):
image = cv2.imread("license.jpg", cv2.IMREAD_COLOR)
# Upscale 2x with cubic interpolation
img11 = cv2.resize(image, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
# Non-local-means denoising on the color result
dst = cv2.fastNlMeansDenoisingColored(img11, None, 10, 10, 7, 15)

and this image works.