YCbCr video frame to BGR Mat


I’m new to OpenCV and I’m struggling to create BGR Mat objects from YCbCr video frames and display them using Imshow.

I have access to raw frames from a third-party videoconferencing solution through its API.

Among the video frame object’s public properties I have access to both the full YCbCr frame payload and the individual color-plane buffers (arrays of bytes).

The steps I plan to follow are:

  1. YCbCr byte buffers to YCrCb Mat
  2. YCrCb Mat to BGR Mat
  3. Display BGR Mat

The second and third steps don’t seem to be a problem thanks to OpenCV’s CvtColor and Imshow methods, but how can I solve the first one?
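If the three buffers were full resolution (no chroma subsampling), step 1 would amount to reshaping each byte buffer into a plane and stacking the planes. Here is a NumPy sketch of steps 1 and 2 under that assumption, using BT.601 full-range coefficients (Python purely for illustration; in Emgu the same idea maps to a Mat constructor plus CvtColor):

```python
import numpy as np

def planes_to_bgr(y_buf, cb_buf, cr_buf, width, height):
    """Build an HxWx3 YCbCr image from three FULL-RESOLUTION byte
    buffers (step 1), then convert it to BGR (step 2).

    Assumes no chroma subsampling and BT.601 full-range coefficients;
    function name and coefficients are illustrative, not Emgu API.
    """
    y  = np.frombuffer(y_buf,  np.uint8).reshape(height, width).astype(np.float32)
    cb = np.frombuffer(cb_buf, np.uint8).reshape(height, width).astype(np.float32)
    cr = np.frombuffer(cr_buf, np.uint8).reshape(height, width).astype(np.float32)

    # BT.601 full-range YCbCr -> BGR
    b = y + 1.772 * (cb - 128.0)
    g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    r = y + 1.402 * (cr - 128.0)
    return np.clip(np.stack([b, g, r], axis=-1), 0, 255).astype(np.uint8)
```

As the thread shows later, the assumption of full-resolution planes does not hold here, but the reshape-and-stack idea is the core of step 1.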

I’m using C# and the Emgu wrapper, if that helps.

Any help would be appreciated.


it doesn’t; in fact, it’s a problem

this is entirely Emgu / C# specific, don’t expect much help here

have you tried Emgu’s own forum?

ok, so either create the Mat from a data pointer (which is then shared), or create the Mat and copy the data over. I’m sure Emgu has the appropriate constructors for C#
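The shared-vs-copied distinction can be illustrated with NumPy (used here only as a stand-in; the point is the same for a Mat wrapping a data pointer versus one holding its own copy):

```python
import numpy as np

buf = bytearray(8)  # stand-in for a raw frame buffer coming from the API

shared = np.frombuffer(buf, np.uint8)          # wraps the buffer, no copy
copied = np.frombuffer(buf, np.uint8).copy()   # independent copy of the data

buf[0] = 42
assert shared[0] == 42   # the shared view sees the change
assert copied[0] == 0    # the copy does not
```

With the shared variant you must keep the source buffer alive for as long as the Mat is used; the copy is safer but costs a memcpy per frame.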

note: the usual order is YCbCr but whoever implemented this stuff in OpenCV got the names wrong and calls it YCrCb. the code probably does the right thing though. there really ought to be a pull request fixing this and deprecating these broken constants.

be aware of related formats.

  • planar vs interleaved
  • chroma-subsampled or not
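Comparing the plane buffer sizes is a quick way to tell these variants apart. For 4:2:0 subsampling the arithmetic looks like this (a 1280x720 frame is used purely as an example):

```python
# For a WxH frame with 4:2:0 subsampling, the chroma planes are halved
# in both directions, so each is a quarter of the size of the Y plane.
width, height = 1280, 720                 # example resolution (assumption)

y_size  = width * height                  # one byte per pixel
cb_size = (width // 2) * (height // 2)    # quarter-size chroma plane
cr_size = cb_size                         # same for the other chroma plane

total = y_size + cb_size + cr_size        # = width * height * 3 // 2
```

If the chroma buffers are the same size as Y, the frame is not subsampled (4:4:4); if they are a quarter of it, you are looking at 4:2:0.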

Which part of “I’m new to OpenCV and its community” did you not understand? I did not mean to offend anyone…

Anyway, nothing would make me happier than sticking to C++ and being able to use OpenCV directly, but this is a project requirement.

Thank you crackwitz!

Ok, so after some digging and a lot of trial and error I’m still facing the same problem: how to get a Mat object from the data?

crackwitz pointed me in the right direction; now I know (judging by the Y, Cb and Cr buffer sizes) that the frames use chroma subsampling, though I’m not sure whether it’s 4:2:0 or something else.

So I tried creating Mat objects from the Y, Cb and Cr channels and then merging them, expecting to get the desired YCbCr Mat object.

But it seems the Merge function expects an array of Mat objects of the same size, which -because of subsampling- is not the case: the Y buffer is 921600 bytes (1280x720), while the Cb and Cr buffers are 230400 bytes (640x360) each.

Any suggestions? Am I on the right path?


Turns out the format is YUV 4:2:0 planar…

just glue all three planes together, don’t interleave.

see if (some of!) this helps: c - Processing YUV I420 from framebuffer? - Stack Overflow
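The “gluing” amounts to concatenating the raw Y, U and V bytes into one single-channel buffer with height*3/2 rows, which is the layout the I420 color conversions expect. A NumPy sketch of that layout (function name hypothetical):

```python
import numpy as np

def pack_i420(y, u, v, width, height):
    """Concatenate planar Y, U, V byte buffers into the single-channel
    (height * 3/2, width) layout expected by I420 colour conversions."""
    data = np.concatenate([
        np.frombuffer(y, np.uint8),   # full-resolution luma plane
        np.frombuffer(u, np.uint8),   # quarter-size chroma plane
        np.frombuffer(v, np.uint8),   # quarter-size chroma plane
    ])
    return data.reshape(height * 3 // 2, width)
```

In Emgu the equivalent is a single-channel Mat with height*3/2 rows wrapping (or filled with) that concatenated buffer.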


Got it working; just my two cents for newbies to come…

First of all, be sure which format you are handling. You’ll save a lot of time!

My case was I420, and I had it available both in packed (Y, U and V components stored in a single array) and planar (Y, U and V components stored separately) formats.

So I ended up creating a Mat object from the packed array, keeping in mind the chroma subsampling used (4:2:0 means 2:1 horizontal and vertical downsampling), and then converting to BGR (using the Yuv2BgrI420 color conversion).
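For the curious, what an I420-to-BGR conversion does internally can be sketched in NumPy (BT.601 full-range coefficients assumed here; OpenCV’s exact coefficients may differ slightly, and the function name is my own):

```python
import numpy as np

def i420_to_bgr(frame, width, height):
    """Convert one packed I420 frame (bytes, length width*height*3//2)
    to an (H, W, 3) BGR image. Illustrative stand-in for what the
    Yuv2BgrI420 conversion does; BT.601 full-range assumed."""
    data = np.frombuffer(frame, np.uint8)
    y_size = width * height
    c_size = (width // 2) * (height // 2)

    y = data[:y_size].reshape(height, width).astype(np.float32)
    u = data[y_size:y_size + c_size].reshape(height // 2, width // 2)
    v = data[y_size + c_size:].reshape(height // 2, width // 2)

    # nearest-neighbour upsample chroma back to full resolution
    u = np.repeat(np.repeat(u, 2, 0), 2, 1).astype(np.float32) - 128.0
    v = np.repeat(np.repeat(v, 2, 0), 2, 1).astype(np.float32) - 128.0

    b = y + 1.772 * u
    g = y - 0.344136 * u - 0.714136 * v
    r = y + 1.402 * v
    return np.clip(np.stack([b, g, r], axis=-1), 0, 255).astype(np.uint8)
```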

Instead of Imshow, I ended up saving the Mat as an Image&lt;Bgr, Byte&gt; object while figuring out which color conversion to use.

Hope it helps.