I’m grabbing video with the Blackmagic DeckLink SDK and want to convert the video stream into a cv::Mat for further processing.
Because I’m new to programming with OpenCV: what is the best way of doing this?
I tried something like this, but I’m not sure if it is a valid way:
void* data; // untyped pointer to the raw frame data
inputFrame->GetBytes(&data);
if (inputFrame->GetPixelFormat() == bmdFormat8BitYUV) {
    cv::Mat mat = cv::Mat(inputFrame->GetHeight(), inputFrame->GetWidth(), CV_8UC2, data, inputFrame->GetRowBytes());
    cv::cvtColor(mat, outputFrame, CV_BGRA2BGR);
    return true;
}
you understand there’s a Mat constructor that takes a pointer to plain data. good, that is how you should do this. either keep the data valid for as long as the Mat is in use (don’t free/delete it), or explicitly copy the Mat (e.g. with clone()) after construction.
your code claims that CV_8UC2 data is BGRA… I think that’s wrong.
I thought that the code parameter in cvtColor specifies the colorspace into which the output frame should be converted? bmdFormat8BitYUV is the lowest supported pixel format in the DeckLink SDK, and I want to convert this format to BGR. EDIT: my misunderstanding. Of course CV_BGRA2BGR can’t work, since the input is YUV, not BGRA.
@berak
Thank you. But why do you think CV_8UC1 is correct? That means 8-bit unsigned with one channel. Since we have both a luminance and a color component, why only one channel? Of course I can try it, but I want to understand it. If you have a source with helpful information, I’d be grateful.
honestly, i’m only guessing (from “planar” yuv being 8UC1 and height x 1.5).
it’s either 8UC1 with width x 2, or 8UC2 with width x 1,
you have to fiddle & try, i’m afraid
it’s also unclear to me whether inputFrame->GetWidth() is the width of the incoming yuv image or the outgoing bgr one
(try printing them out & see what makes most sense here)
that’s the convention in OpenCV for “weird” image formats. when in doubt, it’s a good bet.
“neat” would be if each pixel had its three or so color values right next to each other in memory. that’d be CV_8UC3.
weird is anything where the channels have different resolutions (chroma subsampling), and also anything that might be planar or otherwise not strictly per-pixel interleaved. most YUV is clearly weird.
in those cases, it makes no sense to use anything other than “it’s a bag of bytes” (CV_8U). you certainly can’t reconstruct a single pixel just by reading adjacent bytes in those cases.
in those cases, the width of the Mat is often the full-resolution width of the image that is represented. the “height” then takes up however many rows are needed to fit the data: true height * 1.5 in the 4:2:0 chroma-subsampling case, and * 2.0 for 4:2:2.