The result of the conversion is only 0s!
cv::Mat cpuTmp, cpuMat(32, 32, CV_8UC3, cv::Scalar(255, 0, 0)); // b,g,r
cv::cuda::GpuMat gpuMat, gpuMat16;
gpuMat.upload(cpuMat);
gpuMat.convertTo(gpuMat16, CV_16F); // half-precision conversion on the GPU
gpuMat16.download(cpuTmp);
The result downloaded into cpuTmp contains only zeroes!
Any idea how to solve this, or is there perhaps another way to do this conversion in GPU memory?
PS: I should mention that the OpenCV version I use is 4.5.4.
I don’t think it works on the CPU either.
No, it works fine on the CPU - I have dumped and checked the result. But CV_16F on the GPU produces only 0000000…
Apologies, I was examining with Image Watch.
Appears that when that type was introduced the CUDA functions were not updated to deal with it.
“Appears that when that type was introduced the CUDA functions were not updated to deal with it.”
That’s right - the type code is 7, which was formerly CV_USRTYPE1, as far as I understood while googling for a solution.
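In the meantime, a possible workaround (a sketch, not a tested fix - it assumes a CUDA-capable OpenCV build and accepts an extra CPU pass): do the widening conversion to CV_32F on the GPU, where convertTo is supported, and only the final 32F→16F step on the CPU, where convertTo handles CV_16F:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/cuda.hpp>

// Workaround sketch: convert 8U -> 32F on the GPU, then 32F -> 16F on the CPU.
cv::Mat toHalfViaGpu(const cv::Mat& src)
{
    cv::cuda::GpuMat gpuSrc, gpu32f;
    gpuSrc.upload(src);
    gpuSrc.convertTo(gpu32f, CV_32F);  // this depth is supported on the GPU

    cv::Mat cpu32f, cpu16f;
    gpu32f.download(cpu32f);
    cpu32f.convertTo(cpu16f, CV_16F);  // CPU convertTo handles CV_16F
    return cpu16f;
}
```

This costs a download of the 32-bit intermediate, so it only makes sense if the rest of your pipeline needs the half-precision data on the host anyway.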
cuda modules live in the contrib repo. you could file a bug there (but check for existing issues first).
opencv type codes are about to get an overhaul. the element depth lives in the lower 3 bits so far (CV_CN_SHIFT is 3). I saw talk of extending that space; it’ll break some binary compatibility. they might do that for OpenCV v5, or maybe later.