OpenCV: How do you capture a real-time frame from OpenCV VideoCapture?

Hello, I have a question about the following OpenCV code for reading videos. I’m working on a project that involves processing frames from a video using this code.

From what I understand, when the video capture input is a video file, it processes the first frame before moving to the next one, resulting in a delay between frames.

My first question is: if the video capture input is a live camera (designated as ‘0’ in my sample code), does it process frames in real time, as they are captured? Or does it work through the frames in sequence from the moment the camera was opened, even if a frame is no longer current by the time it is processed?

My second question is: if it processes frames with a delay, does this delay accumulate over time, potentially causing a larger and larger delay if the code runs for hours? (I’m planning to implement a concept that needs to run continuously for extended periods.)

My third question is: is there a simple mechanism I can implement to ensure that, if the video capture input is a live camera, it processes the very first frame it accesses in real time and keeps accessing live frames as they are captured? (A mechanism that addresses the issue from my second question.)

[screenshot of the video capture and processing loop]

It’s a bit hard to put into words, but I hope you understand the question. Thanks a lot!

The rate of capture will be determined by either 1) the camera maximum rate, or 2) the delay you cause between grabbing frames (the processing time). If you have a long processing time then it will only capture the next frame after you have completed processing.

If you want to process faster and capture faster, then you must do all the processing in another thread: instead of spending the long processing time in the capture loop, you hand the image off to a separate thread. It can be multiple threads if needed to process fast enough, but each processing thread will increase CPU usage.

If you are new to threads it can be a complex topic, but it is the only way to separate the processing time from the capture rate.
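For what it’s worth, here is a minimal sketch of that hand-off in Python (the `process_frame` body, the camera index 0 and the queue size are placeholders, not from the original post): the capture loop only reads frames and passes them to a worker thread through a small queue, dropping a frame when the worker is still busy so the capture loop is never blocked.

```python
import queue
import threading

import cv2

def process_frame(frame):
    # placeholder for whatever heavy per-frame work you do
    return cv2.GaussianBlur(frame, (31, 31), 0)

def worker(frames):
    while True:
        frame = frames.get()
        if frame is None:            # sentinel: stop the worker
            break
        process_frame(frame)

frames = queue.Queue(maxsize=1)      # keep at most one pending frame
threading.Thread(target=worker, args=(frames,), daemon=True).start()

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()           # capture loop does no heavy work itself
    if not ok:
        break
    try:
        frames.put_nowait(frame)     # hand off; drop the frame if the worker is busy
    except queue.Full:
        pass
frames.put(None)
cap.release()
```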


> The rate of capture will be determined by either 1) the camera maximum rate, or 2) the delay you cause between grabbing frames (the processing time). If you have a long processing time then it will only capture the next frame after you have completed processing.

So you mean to say that the longer the code runs, the more the delay accumulates over time?

you don’t necessarily need threads. the frame rate of a camera is certainly only determined by the camera, not by how quickly or slowly you read frames from it. you might wanna ignore that specific post up there.

we need to see what exactly you do. we can’t make any statements on things you only describe with words. don’t post screenshots of code. post the code itself. I think someone already told you that in a different discussion.


It would not accumulate more delay as time goes on; it would skip all the intermediate frames between the start and the end of processing. The processing thread would skip frames as needed and only pick up a new frame when it has finished.

I was NOT saying that the rate of camera capture was changed by the processing time. I was saying that the total time for each while loop is determined by the combined camera capture time + the processing time as shown in the screenshot code. The time of one round trip in the loop (as shown) will always be capture time + processing time.
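Here is a minimal sketch, assuming the screenshot shows the usual read-then-process loop (this is my reconstruction, not the original code, and `cv2.Canny` is just a stand-in for the long processing), that makes the round-trip timing visible:

```python
import time

import cv2

cap = cv2.VideoCapture(0)
while True:
    t0 = time.perf_counter()
    ok, frame = cap.read()                     # capture time
    if not ok:
        break
    t1 = time.perf_counter()
    edges = cv2.Canny(frame, 50, 150)          # stand-in for long processing
    t2 = time.perf_counter()
    print(f"capture {t1 - t0:.3f}s + processing {t2 - t1:.3f}s "
          f"= {t2 - t0:.3f}s per loop")
cap.release()
```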



It is not true that it will skip all intermediate frames, because the camera places multiple frames in the buffers that it shares with the host. The exact behavior varies between camera models.
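As a hedged illustration of that buffering (note that `CAP_PROP_BUFFERSIZE` is only honored by some backends, and the drain count of 4 below is an arbitrary guess, not something from this thread): you can ask for a small driver-side buffer and/or discard queued frames with `grab()` before decoding the one you actually want.

```python
import cv2

cap = cv2.VideoCapture(0)

# Only some backends honor this; others silently ignore it.
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)

def read_latest(cap, drain=4):
    """Throw away up to `drain` queued frames, then decode the newest one."""
    for _ in range(drain):
        cap.grab()                 # grab() dequeues a frame without decoding it
    return cap.retrieve()          # decode only the last grabbed frame

ok, frame = read_latest(cap)
```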

Also, OpenCV does not release the GIL, so grabbing frames on a separate thread has much less value than you would think.

how did you come to that conclusion?

Maybe I’m wrong. I searched the source code of the related class and around it. It also matches my observation of how Python threads race against each other on grabs from different cameras.

I can see now we are talking about two different methods. In my case (C++) I did not mean that the camera skips frames; that is down to the camera implementation. What I meant was simply that if you have processing on a thread that is not the same thread as capture, then intermediate frames that are captured will be skipped and not processed.

If I am wrong about this then please tell me how.

that simply depends on what code you wrote. if you spawn threads, you are responsible for synchronization. that could be done with a queue, in which case no data is lost that you don’t discard explicitly.

that’s not the only possible method however. you need to learn more about how threading is done, beyond spawning a thread and accessing global variables.
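For contrast with the drop-when-busy sketch earlier in the thread, here is a minimal sketch of the lossless variant being described, assuming an unbounded queue.Queue between a capture thread and the processing loop (`cv2.Canny` again stands in for the heavy work):

```python
import queue
import threading

import cv2

frames = queue.Queue()                 # unbounded: nothing is ever dropped

def capture(frames):
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.put(frame)              # every frame is queued
    frames.put(None)                   # sentinel: end of stream
    cap.release()

threading.Thread(target=capture, args=(frames,), daemon=True).start()

while True:
    frame = frames.get()
    if frame is None:
        break
    edges = cv2.Canny(frame, 50, 150)  # stand-in for the heavy processing
    # if this is consistently slower than the camera, the queue
    # (and the end-to-end latency) grows without bound
```

Whether you drop frames or let the queue grow is exactly the synchronization decision being discussed here.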

I understand the synchronization responsibility.

It sounds to me like you are saying that using a queue will allow capture of 100% of frames, and processing of all of those frames, without spawning any other thread, with each frame processed at the same rate as the capture, as if no processing were done. How is this possible? If this is what you are saying, then I would like to read any references available on the subject. The part I don’t understand is how you can fit a long processing time within the same thread as capture without taking more time than the capture itself.

I did not say that. nobody said that.

I will not play this game. don’t paraphrase me. quote me, with context.

in any case, this is an old thread. I have no interest in rehashing anything. least of all hypotheticals that need clarification.

Thanks for correcting me, @visualbill!

(About the Python GIL: I’m still not aware of OpenCV code releasing it.)