Thank you for your patient help. Now I have a YVYU image; cv::cvtColor() can convert it to RGB, but I want to use cv::cuda::cvtColor(). With cv::cuda::cvtColor() the program reports the following error:
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.7.0) opencv-4.7.0/opencv_contrib/modules/cudaimgproc/src/color.cpp:2103: error: (-206:Bad flag (parameter or structure field)) Unknown/unsupported color conversion code in function 'cvtColor'
Aborted (core dumped)
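For context, here is a minimal sketch of what I am doing (the frame size and dummy buffer are placeholders; in the real program the YVYU data comes from the camera):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/cudaimgproc.hpp>

int main()
{
    // Placeholder 4K packed YVYU frame (CV_8UC2, 2 bytes per pixel);
    // in the real program this buffer is filled by the camera driver.
    cv::Mat yvyu(2160, 3840, CV_8UC2, cv::Scalar(16, 128));

    // CPU path: this works.
    cv::Mat bgr;
    cv::cvtColor(yvyu, bgr, cv::COLOR_YUV2BGR_YVYU);

    // GPU path: this throws the exception above, because cv::cuda::cvtColor()
    // does not implement the packed 4:2:2 codes (YUY2/YVYU/UYVY).
    cv::cuda::GpuMat d_yvyu(yvyu), d_bgr;
    cv::cuda::cvtColor(d_yvyu, d_bgr, cv::COLOR_YUV2BGR_YVYU);
    return 0;
}
```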
My guess is that for YUV-to-RGB conversion, cv::cvtColor() is good enough and there is no need for the hardware-accelerated cv::cuda::cvtColor().
Is that really the case?
I wonder why the two-channel packed format cannot use GPU acceleration, or why GPU acceleration would not be more efficient than the CPU here.
You may be able to find an NPP implementation of the conversion you require; see modules/cudacodec/src/video_reader.cpp for an example of how to call NPP from OpenCV. If the conversion is not covered by NPP, then it is again probably not that common.
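As a rough, untested sketch of the NPP route (nppiYCbCr422ToRGB_8u_C2C3R expects YUYV byte order, so for YVYU you would need to verify the correct variant against the NPP documentation or swap the chroma bytes first):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/cuda.hpp>
#include <npp.h>

// Untested sketch: convert a packed 4:2:2 frame that is already on the GPU
// to RGB via NPP. The exact NPP variant for YVYU ordering should be checked.
cv::cuda::GpuMat yuv422ToRgbNpp(const cv::cuda::GpuMat& src /* CV_8UC2 */)
{
    cv::cuda::GpuMat dst(src.size(), CV_8UC3);
    NppiSize roi = { src.cols, src.rows };
    nppiYCbCr422ToRGB_8u_C2C3R(src.ptr<Npp8u>(), static_cast<int>(src.step),
                               dst.ptr<Npp8u>(), static_cast<int>(dst.step), roi);
    return dst;
}
```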
My data comes from a camera. Currently I convert each frame into the BGR color space with cv::cvtColor() and then encode it as a PNG image (4K) with cv::imencode(). However, this takes too long; is there an efficient way to use hardware acceleration?
My friend, thank you for telling me about an MRE. Actually, the focus of my question is not the code, but whether OpenCV has a way to hardware-accelerate the encoding of PNG or JPG images. Or is there some other technology that can do this, using the power of the GPU to encode images more efficiently than the CPU does?
Maybe my understanding is wrong, but I have not found an effective way to hardware-encode the images.
My client’s requirement is to save every frame in PNG format. My camera’s frame format is YUV, so my job is to convert YUV to BGR and then encode the data into PNG. The code is very simple, and the conversion process is as follows:
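(A reconstructed sketch of that pipeline; the helper name encodeFrame and the PNG compression level are only illustrative.)

```cpp
#include <vector>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>

// Each camera frame goes through roughly this: YVYU -> BGR, then PNG encoding.
std::vector<uchar> encodeFrame(const cv::Mat& yvyu)
{
    cv::Mat bgr;
    cv::cvtColor(yvyu, bgr, cv::COLOR_YUV2BGR_YVYU);  // CPU color conversion

    std::vector<uchar> png;
    // PNG compression level 0-9: lower is faster but produces larger files.
    std::vector<int> params = { cv::IMWRITE_PNG_COMPRESSION, 3 };
    cv::imencode(".png", bgr, png, params);
    return png;
}
```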