OpenCV. How do you catch a real time frame from OpenCV Video Capture

Hello, I have a question about the following OpenCV code for reading videos. I’m working on a project that involves processing frames from a video using this code.

From what I understand, when the video capture input is a video file, the code fully processes each frame before reading the next one, which introduces a delay between frames.

My first question is: if the video capture input is a live camera (opened as ‘0’ in my sample code), does it process frames in real time, as they are captured? Or does it only start processing from the moment the camera is opened, and fall behind after that?

My second question is, if it processes frames with a delay, does this delay accumulate over time, potentially causing a larger delay if the code runs for hours? (I’m planning to implement a concept that needs to run continuously for extended periods.)

My third question is: is there a simple mechanism I can implement so that, when the input is a live camera, the code processes the very first frame it accesses in real time and keeps reading live frames as they are captured? (i.e. a mechanism that addresses the issue in question 2.)

[screenshot of the code]

It’s a bit hard to put into words, but I hope you understood the question. Thanks a lot!

The rate of capture will be determined by either 1) the camera maximum rate, or 2) the delay you cause between grabbing frames (the processing time). If you have a long processing time then it will only capture the next frame after you have completed processing.

If you want to process faster and capture faster then you must do all processing in another thread, so instead of long processing time you hand off the image to a separate thread. It can be multiple threads if needed to process fast enough but each processing thread will increase CPU usage.
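A minimal sketch of that split, assuming a `cv2.VideoCapture`-style source. To keep it self-contained, a `DummyCapture` class stands in for the real camera (its `read()` just returns a frame counter); the `LatestFrame` holder and the thread names are illustrative, not part of any library:

```python
# Sketch: capture in one thread, process the most recent frame in another.
# DummyCapture stands in for cv2.VideoCapture(0) so this runs anywhere;
# swap it for the real capture object in actual code.
import threading
import time


class DummyCapture:
    """Stand-in for cv2.VideoCapture: read() returns (ok, frame)."""
    def __init__(self):
        self._n = 0

    def read(self):
        time.sleep(0.005)          # pretend the camera delivers ~200 fps
        self._n += 1
        return True, self._n       # the "frame" is just a counter here


class LatestFrame:
    """Single-slot holder: the capture thread overwrites it, so the
    processing thread always sees only the most recent frame."""
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def put(self, frame):
        with self._lock:
            self._frame = frame

    def get(self):
        with self._lock:
            return self._frame


def capture_loop(cap, slot, stop):
    while not stop.is_set():
        ok, frame = cap.read()
        if ok:
            slot.put(frame)        # older unprocessed frames are dropped


cap = DummyCapture()
slot = LatestFrame()
stop = threading.Event()
t = threading.Thread(target=capture_loop, args=(cap, slot, stop))
t.start()

processed = []
for _ in range(5):
    time.sleep(0.05)               # simulate slow per-frame processing
    frame = slot.get()
    if frame is not None:
        processed.append(frame)

stop.set()
t.join()
print(processed)                   # frame numbers jump: intermediates skipped
```

Because the slot only ever holds the newest frame, slow processing drops intermediate frames instead of accumulating a backlog, which is exactly the behaviour you want for a long-running live feed.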

If you are new to threads it can be a complex topic but it is the only way to separate processing time from capture rate.


The rate of capture will be determined by either 1) the camera maximum rate, or 2) the delay you cause between grabbing frames (the processing time). If you have a long processing time then it will only capture the next frame after you have completed processing.

So you mean to say that the longer the code runs, the larger the accumulated delay becomes?

you don’t necessarily need threads. the frame rate of a camera is certainly only determined by the camera, not by how quickly or slowly you read frames from it. you might wanna ignore that specific post up there.

we need to see what exactly you do. we can’t make any statements on things you only describe with words. don’t post screenshots of code. post the code itself. I think someone already told you that in a different discussion.


It would not accumulate more delay as time goes on; it would skip all intermediate frames between the start and end of processing. The processing thread skips frames as needed and only picks up a new frame once it has finished the previous one.

I was NOT saying that the rate of camera capture was changed by the processing time. I was saying that the total time for each while loop is determined by the combined camera capture time + the processing time as shown in the screenshot code. The time of one round trip in the loop (as shown) will always be capture time + processing time.
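That round-trip claim can be illustrated with stand-in delays; the two `sleep` calls below are placeholders for `cap.read()` and your processing code, and the timings are made up for the sketch:

```python
# Illustration: in a plain while loop, each iteration takes
# capture time + processing time, since the two run back to back.
import time

CAPTURE_TIME = 0.02      # pretend cap.read() takes 20 ms
PROCESS_TIME = 0.03      # pretend processing takes 30 ms

iteration_times = []
for _ in range(5):
    t0 = time.perf_counter()
    time.sleep(CAPTURE_TIME)    # stands in for: ok, frame = cap.read()
    time.sleep(PROCESS_TIME)    # stands in for: process(frame)
    iteration_times.append(time.perf_counter() - t0)

avg = sum(iteration_times) / len(iteration_times)
print(f"average loop time: {avg * 1000:.0f} ms")  # roughly 20 + 30 ms
```

So the loop's effective frame rate is bounded by the sum of the two times, even though the camera itself keeps delivering frames at its own rate.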


A post was merged into an existing topic: How would you synchronize two webcams to be acquiring their images at the same times?