I have been trying for hours to use CUDA with OpenCV's DNN module in Python, so far without success.
I started with
import cv2
net = cv2.dnn.readNet(PATH_TO_WEIGHTS, PATH_TO_CONFIG)
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
which, at first, was of course expected to fail, since I was using the regular opencv-contrib-python package. I then removed that package from my virtual environment and followed How to use OpenCV DNN Module with NVIDIA GPUs on Linux to compile OpenCV myself. I downloaded and compiled OpenCV, and I did not forget make install. I now have the cv2.cpython-39-x86_64-linux-gnu.so file in /usr/local/lib/python3.9/site-packages/cv2/python-3.9/. I made a cv2.so symlink to that file in the site-packages of my virtual environment's Python.
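Before relying on the symlink, a quick sanity check with the system Python 3.9 that the build targeted can confirm whether the module itself was built with CUDA support (a minimal sketch; the exact output depends on your build):
import cv2
print(cv2.getBuildInformation())             # the NVIDIA CUDA section should report YES
print(cv2.cuda.getCudaEnabledDeviceCount())  # should be >= 1 if a CUDA-capable GPU is visible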
However, when I try to import cv2 in the virtual environment, it does not seem to work: I cannot print cv2.__version__ (AttributeError: module 'cv2' has no attribute '__version__') or do pretty much anything else.
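For what it's worth, here is a quick way to see what Python actually imported; if dir(cv2) comes back nearly empty, I assume the name cv2 is resolving to a stray directory or an empty namespace package instead of the compiled module:
import cv2
print(cv2.__file__)                                          # which file was imported (None for a namespace package)
print(getattr(cv2, "__path__", "regular module"))            # a package or namespace package prints a path list here
print([n for n in dir(cv2) if not n.startswith("_")][:10])   # near-empty output means the real bindings never loaded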
I am afraid that you are missing the point: __version__ is just an example; I cannot do ANYTHING with the module.
Anyway, after about the 10th re-installation, it seems to work. pkg-config --modversion opencv4 was giving me 4.6.0, so the installation itself was successful and the problem was somewhere else. I am not sure exactly where, but in the end I had to build for Python 3.9 outside of my virtual environment. Once it worked for that Python, it also worked in the virtual environment after I updated the NumPy and Python versions there, although that took a lot of time indeed.
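For anyone hitting the same thing, something like this can confirm from inside the virtual environment that the DNN module actually sees the CUDA target (a sketch; the constants come straight from the cv2.dnn bindings):
import cv2
print(cv2.__version__)                        # 4.6.0 here
print(cv2.cuda.getCudaEnabledDeviceCount())   # >= 1 with a visible GPU
targets = cv2.dnn.getAvailableTargets(cv2.dnn.DNN_BACKEND_CUDA)
print(cv2.dnn.DNN_TARGET_CUDA in targets)     # True means the CUDA target is usable with the CUDA backend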
The title of your question is misleading; I've fixed that.
Your Python seems broken, or you made a mess when you tried to install OpenCV for Python.
Remove all of that and start fresh, and show us every single command you executed. Also avoid containers, virtual machines, virtual environments, and so on; that just complicates everything, you aren't ready yet to complicate your life, and nobody willing to help will tolerate needless complications.