OK, so either create the Mat from a data pointer (the data is then shared, not copied), or create the Mat and copy the data over. I'm sure Emgu has the appropriate constructors for C#.
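A rough Emgu sketch of both ways (assuming an 8-bit single-channel 1280x720 frame; the empty buffer stands in for your real data):

```csharp
using System;
using System.Runtime.InteropServices;
using Emgu.CV;
using Emgu.CV.CvEnum;

int width = 1280, height = 720;            // assumed frame size
byte[] buffer = new byte[width * height];  // stand-in for your real 8-bit single-channel data

// Option 1: wrap the existing array (data is shared, not copied).
// The array has to stay pinned for as long as this Mat is in use.
GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
Mat shared = new Mat(height, width, DepthType.Cv8U, 1,
                     handle.AddrOfPinnedObject(), width /* step: bytes per row */);

// Option 2: let the Mat allocate its own memory and copy the data over.
Mat owned = new Mat(height, width, DepthType.Cv8U, 1);
Marshal.Copy(buffer, 0, owned.DataPointer, buffer.Length);

// ... use the Mats, then release the pin once the shared Mat is no longer needed.
handle.Free();
```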
Note: the usual order is YCbCr, but whoever implemented this in OpenCV got the naming wrong and calls it YCrCb. The code probably does the right thing regardless. There really ought to be a pull request fixing this and deprecating the misnamed constants.
OK, so after some digging and a lot of trial and error I'm still facing the same problem: how do I get a Mat object from the data?
crackwitz pointed me in the right direction: judging by the sizes of the Y, Cb and Cr buffers, the frames use chroma subsampling, though I'm not sure whether it's 4:2:0 or something else.
So I tried creating Mat objects from the Y, Cb and Cr channels and then merging them, expecting to get the desired YCbCr Mat object.
But it seems the Merge function expects an array of Mat objects of the same size, which, because of the subsampling, is not the case: the Y buffer is 921600 bytes (1280x720) while the Cb and Cr buffers are 230400 bytes each (640x360).
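(I suppose one way around the size mismatch would be to resize the Cb and Cr planes up to the Y plane's size before merging, roughly like the sketch below with blank placeholder Mats standing in for the real planes, though that's not the route I ended up taking. Also note OpenCV's plain YCrCb conversion assumes full-range values, so it may not match video-range data exactly.)

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

// Blank placeholders standing in for the real planes.
Mat y  = new Mat(720, 1280, DepthType.Cv8U, 1);  // full-resolution luma
Mat cb = new Mat(360, 640, DepthType.Cv8U, 1);   // subsampled chroma
Mat cr = new Mat(360, 640, DepthType.Cv8U, 1);

// Upsample the chroma planes to the luma size so all three Mats match.
Mat cbFull = new Mat();
Mat crFull = new Mat();
CvInvoke.Resize(cb, cbFull, y.Size, 0, 0, Inter.Linear);
CvInvoke.Resize(cr, crFull, y.Size, 0, 0, Inter.Linear);

// Merge into one 3-channel Mat; note OpenCV's constants expect the Y, Cr, Cb channel order.
using (VectorOfMat planes = new VectorOfMat(y, crFull, cbFull))
{
    Mat ycrcb = new Mat();
    CvInvoke.Merge(planes, ycrcb);

    Mat bgr = new Mat();
    CvInvoke.CvtColor(ycrcb, bgr, ColorConversion.YCrCb2Bgr);
}
```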
Got it working; here are my two cents for the newbies to come…
First of all, be sure which format you are handling. You'll save a lot of time!
My case was I420, and I had it available both packed (Y, U and V components stored in a single array) and planar (Y, U and V components stored in separate arrays).
So I ended up creating a Mat object from the packed array, keeping in mind the chroma subsampling used (4:2:0 means 2:1 horizontal and vertical downsampling of the chroma planes), and then converting it to BGR (using the Yuv2BgrI420 color conversion).
Instead of Imshow, I ended up saving the Mat as an Image<Bgr, Byte> object while figuring out which color conversion to use.
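Roughly what that looks like in Emgu CV (a sketch; the 1280x720 size and the empty placeholder buffer stand in for the actual frame source):

```csharp
using System.Runtime.InteropServices;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

int width = 1280, height = 720;                  // assumed frame size
byte[] i420 = new byte[width * height * 3 / 2];  // stand-in for the packed I420 frame data

// I420 is a full-size Y plane followed by quarter-size U and V planes,
// so the whole frame fits in a single-channel Mat with height * 3 / 2 rows.
Mat yuv = new Mat(height * 3 / 2, width, DepthType.Cv8U, 1);
Marshal.Copy(i420, 0, yuv.DataPointer, i420.Length);

// Convert straight to BGR; the I420-aware code handles the 4:2:0 subsampling itself.
Mat bgr = new Mat();
CvInvoke.CvtColor(yuv, bgr, ColorConversion.Yuv2BgrI420);

// Save to disk (instead of Imshow) to check the result while testing conversion codes.
bgr.ToImage<Bgr, byte>().Save("frame.png");
```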