Can I keep a grabbed frame for a while?

Hi,
I have a loop that grabs a frame and then processes it. But every now and then I would like to hand a frame to a lower-priority thread/process to display it along with some processing results. Because this is a lower-priority thread/process, it could take a while before it is done with the frame. Is that a problem?

Can I find information somewhere explaining how frame grabbing and buffering are implemented in OpenCV? Is it a cyclic buffer or a pool of buffers? What is the effect of not releasing a buffer for a while? Can we define the number of buffers?

Can I keep a grabbed frame for a while?

yes.

cv::Mat is ref-counted, and any frame returned from VideoCapture::read is backed by new memory. You need not worry about anything.
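
If you want to be belt-and-suspenders about it anyway, here is a rough, untested sketch: clone() forces a deep copy, so the handed-off frame owns its own memory no matter what the backend does with its internal buffers.

```cpp
#include <opencv2/videoio.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);                 // device 0; adjust for your setup
    if (!cap.isOpened()) return 1;

    std::vector<cv::Mat> held;               // frames handed to the slow consumer
    cv::Mat frame;
    for (int i = 0; i < 100 && cap.read(frame); ++i) {
        // ... fast per-frame processing here ...

        if (i % 10 == 0)                     // every now and then...
            held.push_back(frame.clone());   // deep copy: stays valid however
                                             // long the display side keeps it
    }
}
```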

Hmm, maybe I’m old-school, but I think that to be a good programmer it is good to know the underlying structures and techniques.
For instance: if I keep more frames for a longer time, the system may need to do new memory allocations. As a programmer of high-speed systems I do not want that; I want the system to allocate enough space up front so that it does not have to allocate in the middle of some processing.
So, is there a document describing how this works? And are there parameters that we can set, like the number of pre-allocated buffers?
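
To make concrete what I mean, here is a rough, untested sketch of the kind of pre-allocation I am after. It leans on cv::Mat::create() being a no-op when the buffer already has the right size and type, so read() should be able to fill the existing memory; whether the backend really reuses it is exactly the kind of detail I would like documented.

```cpp
#include <opencv2/videoio.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return 1;
    int w = (int)cap.get(cv::CAP_PROP_FRAME_WIDTH);
    int h = (int)cap.get(cv::CAP_PROP_FRAME_HEIGHT);

    const int N = 8;                        // pool size chosen up front
    std::vector<cv::Mat> pool(N);
    for (auto& m : pool)
        m.create(h, w, CV_8UC3);            // allocate all buffers once

    for (long i = 0; ; ++i) {
        cv::Mat& slot = pool[i % N];        // cyclic reuse of the pool
        if (!cap.read(slot)) break;         // should fill the existing buffer
        // ... process slot; it gets overwritten N frames from now ...
    }
}
```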

If you are using OpenCV to manage the camera (using cv::VideoCapture) it manages it for you and allocates new memory for each frame as Crackwitz explained. If you want more control over how the buffers are managed etc., you probably will have to take on all of the camera hand-holding yourself. I have similar requirements as you do, and that’s what I ended up doing.

From a practical standpoint you don’t have to worry about it if you use VideoCapture - the image you are given is backed by new memory, and the buffer gets returned to the camera “immediately”, so you can hold on to the image for as long as you want. If you are concerned about the malloc/memcpy overhead, I would suggest profiling it first - you might find that it really doesn’t take that much time to do.
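
For example, a quick and dirty (untested) way to measure it - note that this includes the time spent blocked waiting on the camera, so treat it as an upper bound on the per-frame overhead:

```cpp
#include <opencv2/videoio.hpp>
#include <chrono>
#include <cstdio>

int main() {
    cv::VideoCapture cap(0);
    cv::Mat frame;
    if (!cap.read(frame)) return 1;          // warm-up: first read pays the alloc

    const int N = 300;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < N; ++i)
        if (!cap.read(frame)) return 1;
    auto t1 = std::chrono::steady_clock::now();

    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count() / N;
    std::printf("average read(): %.3f ms/frame\n", ms);
}
```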

If you are truly resource constrained or have other reasons that require more control, you can manage it on your own (for example using the V4L2 API on Linux). Depending on the camera capabilities you will be able to set up the type and number of buffers and have direct access to all of the camera controls. (I imagine you might be able to control some of this through the OpenCV interfaces.) It’s definitely more work to take it on yourself - I’d just make sure you actually need to do it before committing to that path. For my case having direct control is a necessity, but it sure makes things easier when I can use the VideoCapture interface.
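
As a sketch of what that looks like (untested, Linux only): you ask the driver for a number of buffers with VIDIOC_REQBUFS, and it tells you how many you actually got.

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <cstring>
#include <cstdio>

int main() {
    int fd = open("/dev/video0", O_RDWR);    // adjust device path as needed
    if (fd < 0) { perror("open"); return 1; }

    v4l2_requestbuffers req;
    std::memset(&req, 0, sizeof req);
    req.count  = 8;                          // ask for 8 mmap'ed buffers
    req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;

    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) {
        perror("VIDIOC_REQBUFS");
        close(fd);
        return 1;
    }
    std::printf("driver granted %u buffers\n", req.count);  // may be fewer
    close(fd);
}
```

On the OpenCV side, the closest knob I know of is cap.set(cv::CAP_PROP_BUFFERSIZE, 4), but whether it is honored depends on the backend.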


Thanks for your answer. It is not my goal to write my own VideoCapture class. Once I know exactly how the buffers are handled, I will know what I can do with the current implementation, so I do not have to write my own code for that.

So is there some document describing in more detail how it currently works?

I’m not aware of a document that details the inner workings of the VideoCapture class. It will most likely vary depending on which backend you are using, and probably also on the type of I/O being used. If you need more understanding beyond “OpenCV gets the image, makes a copy of the buffer, releases the buffer to the camera and then returns the copy to you”, then you’ll probably need to look at the code. Unfortunately the OpenCV code isn’t always well commented, so be prepared for that.
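
For what it’s worth, the public API does hint at that flow - read() is documented as grab() plus retrieve():

```cpp
#include <opencv2/videoio.hpp>

int main() {
    cv::VideoCapture cap(0);
    cv::Mat frame;
    if (cap.grab())          // pull the next buffer from the driver
        cap.retrieve(frame); // decode/copy it into your Mat; the driver
                             // buffer can then be re-queued
}
```

If you do go digging, the V4L2 path lives in modules/videoio/src/cap_v4l.cpp.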

-Steve
