cv2.VideoCapture.release() does not free RAM

I use opencv-python==4.4.0.46 with the FFMPEG backend. I am trying to connect to an IP camera, read frames, and then release it. The memory is not freed as I expected. Do you have advice on how to overcome this?

Example code:
import cv2
a = cv2.VideoCapture("rtsp://10.1.1.15:554")
a.read()
a.release()

The memory is not fully released. If I open a new stream, even more memory leaks:

a.open("rtsp://10.1.1.15:554")
a.read()
a.release()

More memory leaked. How can I free it?

try using a new object instead of reusing the old one.
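for example (a minimal sketch; the url is a placeholder):

import cv2 as cv

url = "rtsp://10.1.1.15:554"  # placeholder camera address

for _ in range(10):
    cap = cv.VideoCapture(url)  # fresh object each iteration instead of reusing one
    rv, frame = cap.read()
    cap.release()
    del cap                     # also drop the Python reference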

how exactly does memory consumption behave when you run this? screenshot some resource monitor plot if you can.

import time
import cv2 as cv
url = "rtsp://10.1.1.15:554"

a = cv.VideoCapture(url)
assert a.isOpened()
rv, frame = a.read()
a.release()
count = 1

while True:
    a.open(url)
    assert a.isOpened()
    rv, frame = a.read()
    a.release()
    count += 1
    print(count)
    time.sleep(1)
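if a screenshot is inconvenient, you can print the process RSS each iteration instead. a sketch using psutil (third-party, pip install psutil):

import time
import cv2 as cv
import psutil

url = "rtsp://10.1.1.15:554"
proc = psutil.Process()  # handle to the current process

a = cv.VideoCapture(url)
while True:
    a.open(url)
    assert a.isOpened()
    rv, frame = a.read()
    a.release()
    # resident set size in MB after each open/read/release cycle
    print(f"RSS: {proc.memory_info().rss / 1e6:.1f} MB")
    time.sleep(1)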

Regarding your code:

import time
import cv2 as cv
url = "rtsp://10.1.1.15:554"

a = cv.VideoCapture(url)
assert a.isOpened()
rv, frame = a.read()  # Adds ~23MB to RAM
a.release()  # Removes ~3MB
count = 1

while True:
    a.open(url)
    assert a.isOpened()
    rv, frame = a.read()
    a.release()
    count += 1
    print(count)
    time.sleep(1)

The loop stays at ±3MB and never releases the extra 20MB.

Another experiment (with an additional camera):

url2 = "rtsp://10.1.1.16:554"  # second camera (address assumed for the example)

a = cv.VideoCapture(url)
b = cv.VideoCapture(url2)
assert a.isOpened()
assert b.isOpened()
rv, frame = a.read()  # Adds ~24MB to RAM
rv2, frame2 = b.read()  # Adds ~15MB to RAM
a.release()  # Removes ~1MB
b.release()  # Removes ~3MB
count = 1

while True:  # After 10 iterations it seems to stabilize at +10MB
    a.open(url)
    b.open(url2)
    assert a.isOpened()
    assert b.isOpened()
    rv, frame = a.read()
    rv2, frame2 = b.read()
    a.release()
    b.release()
    count += 1
    print(count)
    time.sleep(1)

How can I free the extra 45MB? Deleting frame and frame2 freed 10MB of it.

That leaves 35MB which cannot be freed without killing the process. I tried releasing a and b and also deleting them, but that does not help. As can be seen, with more cameras the leak would be worse. What can be done here?
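For reference, the release-and-delete attempt looked roughly like this (a sketch continuing the code above; it barely changed the RSS):

import gc

a.release()
b.release()
del a, b, frame, frame2  # drop every Python reference
gc.collect()             # force a garbage collection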

apparently RAM use doesn’t grow from repeated object creation or use. in that case, this is not a leak.

operating systems and runtime libraries are allowed to (and usually do) keep allocated memory for future use even if it’s “freed”.
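on Linux with glibc you can ask the allocator to hand free heap pages back to the OS. a sketch via ctypes (glibc-specific assumption; it may or may not recover much in your case):

import ctypes

libc = ctypes.CDLL("libc.so.6")  # glibc only
libc.malloc_trim(0)  # release free heap memory back to the OS where possible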

Sorry for being unclear and using the term "leak". The problem is that when I add more cameras, more RAM stays unfreed, and it keeps growing even for 50+ cameras. Can you advise how to free this? Thanks

it does not grow over time. we determined that. you may be dissatisfied by the constant RAM usage but it’s constant.

that means even if you connect to 50 cameras concurrently, RAM usage will be constant (times 50). it may be significant but that’s a different problem.

you seem to require specific optimizations. OpenCV is not intended for that.

please directly use ffmpeg, gstreamer, or other libraries.

Thank you for your answers. I do not mean to use all of them concurrently. Even if I use the same VideoCapture for one stream after another, the memory still grows; the only case where it does not is when I reuse the same video address. Consider, for example, editing 1000 video files sequentially: the memory grows whether I use one VideoCapture instance or many. Is that so specific? Is killing the process the only option with OpenCV?
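The only workaround I can think of is to isolate each batch in a short-lived worker process, so the OS reclaims everything when it exits. A minimal sketch using multiprocessing (read_first_frame is a hypothetical per-file task):

import multiprocessing as mp
import cv2 as cv

def read_first_frame(path):
    # all allocations happen in the child process and die with it
    cap = cv.VideoCapture(path)
    rv, frame = cap.read()
    cap.release()

if __name__ == "__main__":
    for path in ["1.mp4", "2.mp4"]:  # example file list
        p = mp.Process(target=read_first_frame, args=(path,))
        p.start()
        p.join()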

you need to make that effect/issue reproducible for others.

does this rely on RTSP streams? or does also this happen with local files?

“edit”? this makes me think you should definitely not use OpenCV for reading and writing videos, but a library specialized for that purpose, such as ffmpeg.
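for example, driving the ffmpeg CLI from Python (a sketch; filenames and codec options are placeholders):

import subprocess

# each ffmpeg run is its own process, so its memory is fully
# returned to the OS when it exits
subprocess.run(
    ["ffmpeg", "-y", "-i", "input.mp4", "-c:v", "libx264", "output.mp4"],
    check=True,
)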

Here is simple code for reproducibility. It was run from a folder with ~30 video files, reading just the first frame from each video and then releasing it. This is just an example, so please don't pick on the details; the same thing happens with RTSP.

import cv2 as cv
import os
for ind, path in enumerate(os.listdir()):
    a = cv.VideoCapture(path)
    assert a.isOpened()
    rv, frame = a.read()  
    a.release()  
    print(ind)

I get a total of 0.5GB of RAM not freed at the end of this simple loop. Any suggestions?

The same behavior occurs if I change it to:

import cv2 as cv
import os
a = cv.VideoCapture()
for ind, path in enumerate(os.listdir()):
    a.open(path)
    assert a.isOpened()
    rv, frame = a.read()  
    a.release()  
    print(ind)

Still getting an extra 0.5GB per ~30 videos.
Thanks for your help!

I just ran that on a directory of ~100 random video files (most H.264, mkv/mp4). the process stayed around 50-70 MB from before the loop, throughout the loop, until the end of the loop.

I’ve given apiPreference=cv.CAP_FFMPEG to be sure. you should try that as well. maybe you are getting a backend other than ffmpeg and that might explain the issues.

I’m doing this on Windows 10, python 3.7, OpenCV v4.5.2 (self-built with VS 2019).

you already mentioned you use a python package of OpenCV v4.4. can you update that package? what package is it, where is it from?

you should consider it possible that you have special video files that might cause this issue. you have told me nothing about them…

The videos are just regular mp4 files, nothing special. I tried adding the argument you suggested:
a = cv.VideoCapture(path, apiPreference=cv.CAP_FFMPEG)

This did not help at all (still 0.5GB+ of extra memory after the loop).
I also tried version 4.5.2.52, which did not change the issue. I am using Ubuntu 18.04 LTS and Python 3.8.10, with opencv-python from pip.

Any other recommendations on what to do?

there are still a few differences between our systems (windows/linux, python 3.7/3.8, …) but I find it unlikely that those cause the issue.

set the environment variable OPENCV_VIDEOIO_DEBUG to 1 and run the program. it’ll say a little more than before… I doubt it’ll say anything useful though. please do post what it says in your situation. run it without setting apiPreference. it should work through backends automatically.

perhaps try and see if CAP_GSTREAMER behaves differently?
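i.e. something like this, setting the variable before cv2 is imported (export OPENCV_VIDEOIO_DEBUG=1 in the shell works as well):

import os
os.environ["OPENCV_VIDEOIO_DEBUG"] = "1"  # must be set before cv2 is imported

import cv2 as cv
a = cv.VideoCapture("1.mp4")  # backend probing is now logged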

OPENCV_VIDEOIO_DEBUG output:

[ WARN:0] global /tmp/pip-req-build-eirhwqtr/opencv/modules/videoio/src/cap.cpp (126) open VIDEOIO(FFMPEG): trying capture filename='1.mp4' ...
[ WARN:0] global /tmp/pip-req-build-eirhwqtr/opencv/modules/videoio/src/cap.cpp (138) open VIDEOIO(FFMPEG): created, isOpened=1
0
[ WARN:0] global /tmp/pip-req-build-eirhwqtr/opencv/modules/videoio/src/cap.cpp (126) open VIDEOIO(FFMPEG): trying capture filename='2.mp4' ...
[ WARN:0] global /tmp/pip-req-build-eirhwqtr/opencv/modules/videoio/src/cap.cpp (138) open VIDEOIO(FFMPEG): created, isOpened=1
1
[ WARN:0] global /tmp/pip-req-build-eirhwqtr/opencv/modules/videoio/src/cap.cpp (126) open VIDEOIO(FFMPEG): trying capture filename='3.mp4' ...
[ WARN:0] global /tmp/pip-req-build-eirhwqtr/opencv/modules/videoio/src/cap.cpp (138) open VIDEOIO(FFMPEG): created, isOpened=1
2
[ WARN:0] global /tmp/pip-req-build-eirhwqtr/opencv/modules/videoio/src/cap.cpp (126) open VIDEOIO(FFMPEG): trying capture filename='4.mp4' ...
[ WARN:0] global /tmp/pip-req-build-eirhwqtr/opencv/modules/videoio/src/cap.cpp (138) open VIDEOIO(FFMPEG): created, isOpened=1
3
[ WARN:0] global /tmp/pip-req-build-eirhwqtr/opencv/modules/videoio/src/cap.cpp (126) open VIDEOIO(FFMPEG): trying capture filename='5.mp4' ...
[ WARN:0] global /tmp/pip-req-build-eirhwqtr/opencv/modules/videoio/src/cap.cpp (138) open VIDEOIO(FFMPEG): created, isOpened=1
4
....................(the same happens for all videos).................

Regarding CAP_GSTREAMER, I think I do not have the plugin:

[ WARN:0] global /tmp/pip-req-build-eirhwqtr/opencv/modules/videoio/src/cap.cpp (126) open VIDEOIO(GSTREAMER): trying capture filename='1.mp4' ...
[ WARN:0] global /tmp/pip-req-build-eirhwqtr/opencv/modules/videoio/src/cap.cpp (186) open VIDEOIO(GSTREAMER): backend is not available (plugin is missing, or can't be loaded due dependencies or it is not compatible)
Traceback (most recent call last):
  File "<input>", line 6, in <module>
AssertionError

Do you think recompiling with the GStreamer backend might help here?
Thank you

I can’t predict that. if we’re down to trying things blindly, it might. it would be better to actually determine why memory keeps piling up, where that happens in the code. and it would be even better if anyone could reproduce this anywhere, because right now, I’m the only other one in this thread, and I can’t reproduce it, but then I don’t run ubuntu and I don’t run the package from PyPI.

I’d suggest valgrind but that is known to produce false positives…

I am not very familiar with valgrind, but I tested it on 3 videos (it takes a long time to run): the same experiment, just reading one frame from each and then releasing. The summary of the output:

==26196== HEAP SUMMARY:
==26196==     in use at exit: 4,400,406 bytes in 24,975 blocks
==26196==   total heap usage: 357,185 allocs, 332,210 frees, 1,171,894,278 bytes allocated
==26196== 
==26196== Searching for pointers to 24,975 not-freed blocks
==26196== Checked 25,692,696 bytes

==26196== LEAK SUMMARY:
==26196==    definitely lost: 24,586 bytes in 470 blocks
==26196==    indirectly lost: 0 bytes in 0 blocks
==26196==      possibly lost: 4,160,685 bytes in 23,899 blocks
==26196==    still reachable: 215,135 bytes in 606 blocks
==26196==                       of which reachable via heuristic:
==26196==                         stdstring          : 8,755 bytes in 138 blocks
==26196==         suppressed: 0 bytes in 0 blocks
==26196== Reachable blocks (those to which a pointer was found) are not shown.
==26196== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==26196== 
==26196== Use --track-origins=yes to see where uninitialised values come from
==26196== ERROR SUMMARY: 16113 errors from 8462 contexts (suppressed: 0 from 0)

Do you have a suggestion of what to look for (the output is huge)?
Does this summary tell us anything about the problem?

I agree that it would be helpful if someone else recreated this. For me it happens on at least 3 different machines, but all of them have the same installations.

Thanks

the summary tells us a little.

“possibly” isn’t “definitely” but I’ll take that as an indication.

4 MB could be about one Full HD frame, or several smaller frames. I would expect to see more than that. you said ~0.5 GB for ~30 videos, which means up to ~16 MB per video/instance. perhaps those test videos have a smaller resolution than the average video in your data set.

if you have a full log, feel free to post that. I’ll look at it.

if you haven’t already, I’d suggest trying latest 3.4 instead of 4.x. if that works correctly, that might help you immediately.

if 3.4 leaks too…

you could test different 4.x (or even 3.x) versions of OpenCV and narrow down in which version/release the behavior started. start with 4.0. maybe that behaves correctly. then pick a version in the middle between a working and a broken version.

I don’t know if PyPI keeps many old versions. the github for the package looks like it has some older ones: Releases · opencv/opencv-python · GitHub

4.0 split off from 3.x at some point but there have been 3.x releases and 4.x releases concurrently. it is not the case that 3.x ended and then 4.x began.

I tried the oldest 4.x, the newest 3.x, and the oldest 3.x (4.1.2.30, 3.4.14.53, 3.4.8.29). All display similar behavior. I also ran valgrind on 5 videos and got the same output (although, watching the RAM, it did consume more). It might be that I have to recompile instead of using PyPI, but since I do not know what the problem is, any configuration change is a shot in the dark :confused:

okay, so either those various versions are all broken or the package isn't at fault… both are possible.

you said you could replicate this on multiple machines, but they’re all identical/clones? it would be useful if you could replicate the issue on a system that’s somewhat different from those.

you could file a bug on Issues · opencv/opencv-python · GitHub (unless there is one already; I didn’t look)