Hello, I got this error after running a Python script that tries to add GPU computing to some OpenCV DNN code. I built OpenCV from source with GPU support, and it does seem to recognize my GPU, but inference still runs on the CPU and I get this error:
checkVersions CUDART version 11020 reported by cuDNN 8100 does not match with the version reported by CUDART 11000
I have done a lot of googling but I don't see any references to this error and am unable to work out what it means. I would be very grateful if someone could walk me through how to fix it. I am using OpenCV 4.5 with CUDA 11.1 and a laptop RTX 2060 with a compute capability of 7.5. My GPU does work with TensorFlow and Darknet.
Yes, I did build it from source so that OpenCV would use my GPU from Python. I do not know how to make those versions match. I do not recall installing multiple versions of CUDA, although it is possible. Where would you recommend I start to fix this issue?
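A reasonable first step is to decode the version integers in the warning and confirm what your OpenCV build actually links against. The helper names below are hypothetical; the decoding rules are assumptions based on how CUDART and cuDNN encode their versions (CUDART: major*1000 + minor*10, so 11020 → 11.2; cuDNN 8.x: major*1000 + minor*100 + patch, so 8100 → 8.1.0):

```python
def decode_cudart(v):
    """Decode a CUDART version int: major*1000 + minor*10 (11020 -> (11, 2))."""
    return v // 1000, (v % 1000) // 10

def decode_cudnn(v):
    """Decode a cuDNN 8.x version int: major*1000 + minor*100 + patch (8100 -> (8, 1, 0))."""
    return v // 1000, (v % 1000) // 100, v % 100

print(decode_cudart(11020))  # (11, 2)  -- version cuDNN was compiled against
print(decode_cudart(11000))  # (11, 0)  -- version reported at runtime
print(decode_cudnn(8100))    # (8, 1, 0)

# If OpenCV is importable, its build summary shows the CUDA/cuDNN
# versions it was compiled with (grep the relevant lines):
try:
    import cv2
    for line in cv2.getBuildInformation().splitlines():
        if "CUDA" in line or "cuDNN" in line:
            print(line.strip())
except ImportError:
    pass  # OpenCV not installed in this environment
```

Comparing the compiled-against versions with what `nvcc --version` and your cuDNN install report should make the mismatch concrete.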
It implies that cuDNN 8.1 requires CUDA 11.2, which according to the release notes is not true (10.2 and above should be supported according to the support matrix).
The same version of a given cuDNN library can be compiled against different NVIDIA® CUDA® Toolkit™ versions. This routine returns the CUDA Toolkit version that the currently used cuDNN library has been compiled against.
so the check looks to be rejecting CUDA 11.0 because cuDNN 8.1 was built against CUDA 11.2. That said, I think this would have been reported somewhere already if it's true.
Anyway, an easy test would be to install cuDNN 8.0.0-8.0.3, as these should have been built against CUDA 11.0. You shouldn't need to rebuild, as cuDNN is dynamically linked.
If I do that, I believe it will break my TensorFlow install? I'm kinda paranoid because the TensorFlow installation was not enjoyable and I want that to continue to work. If I install 8.0.3, does it only work with CUDA 10.2? I have installed CUDA 11. Can I install both simultaneously?
You have several options, but the easiest thing to do is check that this is the problem first. I can't remember exactly how to install cuDNN on Linux, but from memory you just need to set two symlinks. If that's it, you can test whether this is the problem.
If it is, you should have 3 options:
use the new version of cuDNN if it works with tensorflow,
use cuDNN 8.1.0 and install CUDA 11.2 and re-compile OpenCV (or do the same with CUDA 11.1 if a previous version of cuDNN is suitable for tensorflow), or
comment out the check crackwitz pointed you to and re-compile.
[WARN:0] global /home/aoberai/opencv/modules/dnn/src/cuda4dnn/init.hpp (34) checkVersions cuDNN reports version 8003 which does not match with the version 8100 with which OpenCV was built
2.0731723958091663
I got a similar message: [ WARN:0] global g:\lib\opencv\modules\dnn\src\cuda4dnn/init.hpp (42) cv::dnn::cuda4dnn::checkVersions CUDART version 11010 reported by cuDNN 8005 does not match with the version reported by CUDART 11020
The same version of a given cuDNN library can be compiled against different NVIDIA® CUDA® Toolkit™ versions. This routine returns the CUDA Toolkit version that the currently used cuDNN library has been compiled against.
I agree, it's only a warning, and it might be overly cautious to warn about these situations.
The original issue was that OP's code doesn't use the GPU… so I wonder, does OpenCV even see the GPU at runtime, let alone pick it or be forced to use it? I am unfamiliar with the CUDA modules, cuda4dnn in particular. How can that be determined?
Rebuilding after changing the cuDNN version worked. It does appear to be running on the GPU, however it is rather slow. I should be able to run YOLO on OpenCV DNN at around 20 fps but I am getting 2. I have no idea why.
Please check that you have set the backend and target to DNN_BACKEND_CUDA and DNN_TARGET_CUDA (or DNN_TARGET_CUDA_FP16), as shown in this example script.
I think the quoted paragraph is saying that the same version of the cuDNN "source" can be compiled against different versions of the CUDA Toolkit. cuDNN used to have different releases for different minor versions, so I'm not sure the check is too strict. But there is something new since CUDA 11.1:
First introduced in CUDA 11.1, CUDA Enhanced Compatibility provides two benefits:
By leveraging semantic versioning across components in the CUDA Toolkit, an application can be built for one CUDA minor release (such as 11.1) and work across all future minor releases within the major family (such as 11.x).
But this still doesn't explain what's happening. cuDNN built for CUDA 11.2 need not be compatible with CUDA 11.0?
But this is incorrect since CUDA 11.1. The check needs to be updated.
If the user requested CUDA backend for inference, warnings will be issued when no GPU device was detected or the selected device is incompatible.