I made a class using GStreamer to get frames from some cameras. I have both MIPI CSI and UVC cameras. This is on a Jetson Nano, so the pipelines use some NVIDIA-specific GstElements. The simplified pipelines:
CSI: nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12 ! nvvidconv ! video/x-raw ! appsink
UVC: v4l2src ! image/jpeg, format=MJPG ! nvv4l2decoder mjpeg=true ! nvvidconv ! video/x-raw ! appsink
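For context, each pipeline is created with gst_parse_launch and the frames come from the appsink. A minimal sketch of the CSI case, simplified from my class (the element name "sink" is just illustrative, not my real code):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

gst_init(nullptr, nullptr);
GError *err = nullptr;
GstElement *pipeline = gst_parse_launch(
    "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12 ! "
    "nvvidconv ! video/x-raw ! appsink name=sink", &err);
GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");

// Play the pipeline, then grab the most recent sample from the appsink.
gst_element_set_state(pipeline, GST_STATE_PLAYING);
GstSample *m_sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));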
These pipelines give me the data in YUV NV12, so I convert with cv::cvtColorTwoPlane using cv::COLOR_YUV2BGR_NV12. The class has three functions: one to play the pipeline, one to decode the last frame received from the pipeline, and one to return the decoded image. In a test program, all of this works fine. When I include the class in my main program, I get the following error for all my cameras:
OpenCV(4.4.0) /tmp/build_opencv/opencv/modules/imgproc/src/color_yuv.dispatch.cpp:409: error: (-2:Unspecified error) in function 'void cv::cvtColorTwoPlaneYUV2BGRpair(cv::InputArray, cv::InputArray, cv::OutputArray, int, bool, int)'
> (expected: 'ysrc.step == uvsrc.step'), where
>     'ysrc.step' is 1920
> must be equal to
>     'uvsrc.step' is 960
The error is raised by the CV_CheckEQ(ysrc.step, uvsrc.step, ""); check in color_yuv.dispatch.cpp.
The check itself is self-explanatory: the Y Mat's step must equal the UV Mat's step. The confusing part is that the same code in my test program doesn't raise the error. I printed the steps with nv12_y.step and nv12_uv.step; they print 1920 and 960 respectively, yet the CV_CheckEQ somehow passes in my test program and only fails in my main application. The decoding code itself should be fine, since the frames display correctly in the test program. Here is the decode code:
GstBuffer *gbuffer = gst_sample_get_buffer(m_sample);
GstMapInfo gmap;
gst_buffer_map(gbuffer, &gmap, GST_MAP_READ);

cv::Mat nv12_y, nv12_uv;
// Y plane: height rows of width bytes each.
nv12_y = cv::Mat(height, width, CV_8UC1, (uchar *) gmap.data);
// Interleaved UV plane, starting right after the Y plane.
nv12_uv = cv::Mat(height / 2, width / 2, CV_8UC1, ((uchar *) gmap.data) + (height * width));
cv::cvtColorTwoPlane(nv12_y, nv12_uv, frame, cv::COLOR_YUV2BGR_NV12);
gst_buffer_unmap(gbuffer, &gmap);
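The step values quoted above come from a debug print just before the conversion (step is bytes per row):

#include <iostream>

std::cout << "Y step: " << nv12_y.step
          << ", UV step: " << nv12_uv.step << std::endl;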
My test code looks like the following in pseudocode:
while (true) {
    play all camera pipelines for a few milliseconds, then pause them
    decode the latest frame from each pipeline
    cv::imshow each frame
}
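In actual C++ the loop is roughly the following sketch. Camera, openAllCameras, and the method names are placeholders for my wrapper class and its three functions, not the real identifiers:

#include <opencv2/highgui.hpp>
#include <chrono>
#include <string>
#include <thread>
#include <vector>

std::vector<Camera> cameras = openAllCameras();  // hypothetical helper
while (true) {
    for (auto &cam : cameras) cam.play();        // start all pipelines
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    for (auto &cam : cameras) cam.pause();       // stop receiving frames
    for (auto &cam : cameras) cam.decodeLastFrame();
    for (size_t i = 0; i < cameras.size(); ++i)
        cv::imshow("cam" + std::to_string(i), cameras[i].image());
    cv::waitKey(1);                              // let HighGUI refresh
}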
My main application does the same thing, but with a lot of other work going on around it. The only real difference should be the timing between the steps; the pipelines and cameras are exactly the same.