cv::cvtColor() can convert from YUV to RGB, so I want to know why cv::cuda::cvtColor() cannot do the same. Why doesn't cv::cuda::cvtColor() support 2 channels?
Agreed, the link I posted has this as a first step.
It does; see the link I posted.
See my response above. Additionally, YUV in OpenCV is 3 channels.
In case the link is broken: the example shows the conversion from RGB to YUV on the host (cv::cvtColor()), 3 channels (RGB) to 3 channels (YUV), and then YUV to RGB on the device (cv::cuda::cvtColor()).
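For reference, the per-pixel math behind a 3-channel YUV to RGB conversion looks roughly like this. This is a minimal standalone sketch using BT.601-style coefficients; the exact constants OpenCV uses for COLOR_YUV2RGB may differ slightly:

```cpp
#include <algorithm>
#include <cstdint>

struct Rgb { uint8_t r, g, b; };

// Clamp a floating-point value into the valid 8-bit range.
static uint8_t clamp8(double v) {
    return static_cast<uint8_t>(std::min(255.0, std::max(0.0, v)));
}

// Convert one full-resolution (3-channel) YUV pixel to RGB using
// BT.601-style coefficients; an approximation of what
// cv::cvtColor(COLOR_YUV2RGB) computes per pixel.
Rgb yuvToRgb(uint8_t y, uint8_t u, uint8_t v) {
    const double yd = y, ud = u - 128.0, vd = v - 128.0;
    return {
        clamp8(yd + 1.402 * vd),                    // R
        clamp8(yd - 0.344136 * ud - 0.714136 * vd), // G
        clamp8(yd + 1.772 * ud),                    // B
    };
}
```

Note that every pixel carries its own U and V here, which is why this representation is 3 channels in OpenCV, unlike the packed 2-bytes-per-pixel camera formats.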
Thank you for your patient help. Now, I have a YVYU image. cv::cvtColor() can convert it to RGB, but I want to use cv::cuda::cvtColor(). With cv::cuda::cvtColor(), the program reports the following error:
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.7.0) opencv-4.7.0/opencv_contrib/modules/cudaimgproc/src/color.cpp:2103: error: (-206:Bad flag (parameter or structure field)) Unknown/unsupported color conversion code in function 'cvtColor'
Aborted (core dumped)
My guess is that cv::cvtColor() is considered good enough for YUV to RGB, so there was no need to implement hardware acceleration in cv::cuda::cvtColor().
Is that really the case?
I wonder why two-channel formats can't use GPU acceleration, or whether GPU acceleration would simply not be more efficient than the CPU here.
Not all OpenCV CPU functionality has been implemented in CUDA. This has nothing to do with performance; the CUDA color conversion routines simply implement some of the most commonly required conversions.
Additionally I think, but I may be wrong, that all the compressed YUV formats are stored in a single channel in OpenCV. Have you tested
You may be able to find an NPP implementation of the conversion you require; see modules/cudacodec/src/video_reader.cpp for an example of how to call NPP from OpenCV. If the conversion is not covered by NPP, then it is again probably not that common.
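To illustrate why a packed format like YVYU doesn't map onto a 3-channel conversion: it stores two pixels in four bytes (Y0, V, Y1, U), with the two pixels sharing one U and one V sample. Below is a minimal standalone sketch of unpacking a YVYU buffer into interleaved RGB, again with BT.601-style coefficients rather than OpenCV's exact constants:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Clamp a floating-point value into the valid 8-bit range.
static uint8_t clamp8(double v) {
    return static_cast<uint8_t>(std::min(255.0, std::max(0.0, v)));
}

// Unpack a YVYU (packed 4:2:2) buffer into interleaved 8-bit RGB.
// Each 4-byte group [Y0, V, Y1, U] encodes two horizontally adjacent
// pixels sharing the same chroma; width is assumed to be even.
std::vector<uint8_t> yvyuToRgb(const std::vector<uint8_t>& src,
                               int width, int height) {
    std::vector<uint8_t> rgb(static_cast<size_t>(width) * height * 3);
    for (int row = 0; row < height; ++row) {
        for (int x = 0; x < width; x += 2) {
            size_t i = (static_cast<size_t>(row) * width + x) * 2;
            const uint8_t y0 = src[i],     vch = src[i + 1];
            const uint8_t y1 = src[i + 2], uch = src[i + 3];
            const double ud = uch - 128.0, vd = vch - 128.0;
            for (int k = 0; k < 2; ++k) {
                const double y = (k == 0) ? y0 : y1;
                size_t o = (static_cast<size_t>(row) * width + x + k) * 3;
                rgb[o]     = clamp8(y + 1.402 * vd);                    // R
                rgb[o + 1] = clamp8(y - 0.344136 * ud - 0.714136 * vd); // G
                rgb[o + 2] = clamp8(y + 1.772 * ud);                    // B
            }
        }
    }
    return rgb;
}
```

Because the chroma is shared between pixel pairs, the source buffer averages 2 bytes per pixel, which is why OpenCV treats such data as a 1- or 2-channel Mat rather than a 3-channel one.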
My data is obtained from the camera. Currently I convert a frame into the BGR color space with cv::cvtColor(), and then encode it into a PNG image (4K) with cv::imencode(). However, this takes too long. Is there an efficient way to use hardware acceleration?
My friend, thank you for telling me about an MRE. Actually, the focus of my question is not the code, but whether OpenCV has a way to hardware-accelerate the encoding of PNG or JPG images. Or is there some other technology that can do this, one that uses the power of the GPU to encode images more efficiently than the CPU does?
Maybe my understanding is wrong, but I haven't found an effective way to hardware-encode the pictures.
The reason I suggested an MRE is that I wanted to check where your bottleneck is and understand why you want to save 4K images instead of video.
My client's requirement is to save every frame in PNG format. My camera's frame format is YUV, so my job is to convert YUV to BGR and then encode the data into PNG format. The code is very simple, and the conversion process is as follows: