Acceleration of the TextDetectionModel_DB and TextRecognitionModel models

I followed the OpenCV example samples/dnn/text_detection.cpp.

I ran inference on my own images and the runtime is about 100 ms, which is too slow for my use case. Now I want to accelerate the inference. How can I do this?
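
For context, this is roughly how I set up the detector and measure the runtime, following the sample (the model file, image path, and input size are just my placeholders):

```cpp
// Rough sketch of my detection setup and timing, adapted from
// samples/dnn/text_detection.cpp; file names and sizes are placeholders.
#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/core/utility.hpp>
#include <iostream>

int main()
{
    // DB text detection model (ONNX file name is an example)
    cv::dnn::TextDetectionModel_DB detector("DB_TD500_resnet50.onnx");

    // Same pre-processing as in the sample: scaling, mean subtraction, 736x736 input
    detector.setInputParams(1.0 / 255.0, cv::Size(736, 736),
                            cv::Scalar(122.67891434, 116.66876762, 104.00698793));

    cv::Mat frame = cv::imread("my_image.jpg");

    // Time a single detection call; this is where I see roughly 100 ms
    cv::TickMeter tm;
    tm.start();
    std::vector<std::vector<cv::Point>> detections;
    detector.detect(frame, detections);
    tm.stop();
    std::cout << "Detection time: " << tm.getTimeMilli() << " ms" << std::endl;
    return 0;
}
```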

I found TensorRT, but if I use it, how do I set the model settings such as setBinaryThreshold(binThresh), setPolygonThreshold(polyThresh), setMaxCandidates(maxCandidates), and setUnclipRatio(unclipRatio)?
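
To be clear about which calls I mean, these are the DB post-processing setters as I currently configure them through the OpenCV API (the concrete values are just my current settings, taken from the sample's defaults):

```cpp
#include <opencv2/dnn.hpp>

// The DB post-processing settings I am asking about, configured through the
// OpenCV API as in the sample; the values are just my current configuration.
void configureDbPostprocessing(cv::dnn::TextDetectionModel_DB& detector)
{
    float binThresh = 0.3f;    // binarization threshold for the probability map
    float polyThresh = 0.5f;   // threshold for keeping candidate polygons
    int maxCandidates = 200;   // maximum number of output candidates
    double unclipRatio = 2.0;  // expansion ratio applied to detected regions

    detector.setBinaryThreshold(binThresh)
            .setPolygonThreshold(polyThresh)
            .setMaxCandidates(maxCandidates)
            .setUnclipRatio(unclipRatio);
}
```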