Write Mat::frame to stdout and pipe to ffmpeg

Hello,

OpenCV version : 4.5.2-dev
ffmpeg version : 4.3.2
os version : osx 11.2.3 Big Sur

I’m trying to use OpenCV combined with ffmpeg/ffplay for streaming video.
So far I have tried to encode the frame using cv::imencode(), but ffplay cannot read the output data.

The C++ code:

Mat frame;
while (waitKey(1) < 0){
    video.read(frame);

    // Processing frame

    // output to ffmpeg / ffplay
    std::vector<uchar> buffer;
    imencode(".jpg", frame, buffer);
    for (auto i = buffer.begin(); i != buffer.end(); ++i){
        std::cout << *i;
    }
}

command :

./main | ffplay -f rawvideo -pixel_format bgr24 -video_size 1920x1080 -i pipe:

The output : [C++] OpenCV pipe to ffplay output - Album on Imgur

I also tried doing it like this; it’s “better”, but the image is still not perfect:

Mat frame;
while (waitKey(1) < 0){
    video.read(frame);

    // Processing frame

    // output to ffmpeg / ffplay
    std::string out((char*)frame.data, frame.total() * frame.elemSize());
    std::cout << out;
}

command: ./main | ffplay -f rawvideo -pixel_format bgr24 -video_size 1920x1080 -i pipe:
output: [C++] Opencv pipe with ffplay (2) - Album on Imgur

In Python, it’s working well with this code:

sys.stdout.buffer.write(frame.tobytes())
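
(For reference, the closest C++ equivalent to that Python call is a single binary write rather than a character-by-character loop. A minimal sketch, assuming the Mat is continuous in memory; the helper name writeRawFrame is just an example:)

#include <opencv2/opencv.hpp>
#include <iostream>

// Write one frame as raw bytes to stdout in a single call,
// mirroring Python's sys.stdout.buffer.write(frame.tobytes()).
void writeRawFrame(const cv::Mat& frame){
    CV_Assert(frame.isContinuous()); // clone() first if the Mat is not continuous
    std::cout.write(reinterpret_cast<const char*>(frame.data),
                    frame.total() * frame.elemSize());
}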

Thanks for any help

OK, it was due to the input resolution given to ffplay. Now it is better, but I still have an issue with the color.
I know that the frames OpenCV outputs should be in BGR24, but I still get weird colors.

command :

./main | ffplay -f rawvideo -pixel_format bgr24 -video_size 1280x720 -i pipe:

ffplay_output : [C++] Opencv to ffplay color - Album on Imgur

Does anyone have an idea?

first of all, you have to understand the difference between sending raw bitmap data and sending a compressed image such as a JPEG or PNG.

when you understand that, you should be able to explain why you see “noise” in that first album you gave.
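
for example, if you really do want to pipe the JPEG-encoded output of imencode(), ffplay has to be told to expect an MJPEG stream rather than raw video; something along these lines should work, assuming the piped JPEGs are simply concatenated back to back:

./main | ffplay -f mjpeg -i pipe: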

next… you should check the actual size and number of channels of the data in frame (after video.read(frame);)

it is probably 640 x 480 and you fixed that by changing the VideoCapture resolution.
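
a quick way to check, and to request a specific capture size explicitly, is a sketch like the one below; note the diagnostics go to stderr so they never mix with whatever you write to stdout:

#include <opencv2/opencv.hpp>
#include <iostream>

int main(){
    cv::VideoCapture video(0);
    // request 1280x720; the backend may ignore this and keep its default
    video.set(cv::CAP_PROP_FRAME_WIDTH, 1280);
    video.set(cv::CAP_PROP_FRAME_HEIGHT, 720);

    cv::Mat frame;
    video.read(frame);
    std::cerr << "got " << frame.cols << "x" << frame.rows
              << ", channels = " << frame.channels() << std::endl;
    return 0;
}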

third… you are probably sending BGR color data. check that it looks correct using imshow.

ffmpeg is told to expect bgr24 so this should result in a proper picture. it is not, so either you aren’t sending BGR24 or ffmpeg isn’t interpreting it as BGR24.

you haven’t shown your source code. it is probably causing this. show your source code.
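
one way to narrow it down, independent of the source: dump the stream to a file instead of piping it, check that the file size is an exact multiple of width * height * 3, then play the file with the same rawvideo options; any stray bytes written to stdout will shift every following frame (the file name capture.raw is just an example):

./main > capture.raw
ffplay -f rawvideo -pixel_format bgr24 -video_size 1280x720 capture.raw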

First, thanks for the reply:

Yes, indeed, it was due to the resolution. After changing the resolution I got the third image I sent.
→ But if you look closely at the third picture, the first pixel column of the frame should actually be the last pixel column of the frame. I don’t know why I get this behavior…

Yes, OpenCV is sending BGR color data. The output from imshow is correct.

The values of the variables:
frame.channels() = 3
frame.type() = 16, which corresponds to CV_8UC3

My source code is below:

void process(){  
     VideoCapture video;
     Mat frame;
     video.open(0);
     video.read(frame);
     this->getSize(frame);
     bool isResize = false;
     while (waitKey(1) < 0){
           video.read(frame);
           if(frame.empty()){
              waitKey(this->timeout);
           }

           // Resize frame
           if (isResize){
              resize(frame, frame, Size(this->video_height, this->video_width));
           }

           // output to ffmpeg pipe
           spdlog::get(this->loggerName)->info(frame.channels()); // return 3
           spdlog::get(this->loggerName)->info(frame.type()); // return 16
           this->outputFfmpeg(frame);
           imshow(this->id, frame);
     }         
     video.release();
}

void outputFfmpeg(Mat frame){
    for (size_t i = 0; i < frame.dataend - frame.datastart; i++){
        std::cout << frame.data[i];
    }
}

void getSize(Mat frame){
     this->frame_height = frame.cols;
     this->frame_width  = frame.rows;
     std::string log = "Resolution: " + std::to_string(this->frame_height) + "x" + std::to_string(this->frame_width);
     spdlog::get(this->loggerName)->info(log); // Return 1280x720
}

good.

It seems that ffmpeg/ffplay doesn’t implicitly convert pixel formats, at least not in this situation. I don’t know if it’s supposed to.

you should explicitly tell ffplay to convert, or use cvtColor and send rgb24 instead, which should be easier for ffmpeg to work with.

Thanks, I tried it but it is not working…

Mat toRgb;
cvtColor(frame, toRgb, COLOR_BGR2RGB);
for (size_t i = 0; i < toRgb.dataend - toRgb.datastart; i++){
    std::cout << toRgb.data[i];
}

The output of ffplay with the rgb24 option:


But I found a weird behavior:
My code tree looks like this:

src
├── main.cpp
└── detection
    └── detection.hpp

The output of this code works well only if I put it in main.cpp.
If the code runs inside a class, the output looks like the third picture I sent above.

I’m trying to understand why, but for now I haven’t found it…
Do you have the same behavior if you write the frame to output from inside a class?

OK, I found my problem…

In my main.cpp I was printing some extra data to stdout, and obviously ffplay takes that dummy data into account on its stdin, which shifts everything…

Now that I’ve removed the print, it works well :slight_smile:
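
(For anyone hitting the same issue: keep stdout reserved for the raw frame bytes and send every log to stderr instead. A minimal sketch with spdlog, where the logger name "detection" is just an example:)

#include <opencv2/opencv.hpp>
#include <spdlog/spdlog.h>
#include <spdlog/sinks/stdout_color_sinks.h>
#include <iostream>

int main(){
    // stderr logger: diagnostics never mix with the video bytes on stdout
    auto logger = spdlog::stderr_color_mt("detection");

    cv::VideoCapture video(0);
    cv::Mat frame;
    while (video.read(frame)){
        logger->info("frame {}x{} type {}", frame.cols, frame.rows, frame.type());
        std::cout.write(reinterpret_cast<const char*>(frame.data),
                        frame.total() * frame.elemSize());
    }
    return 0;
}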

Thank you for helping me !