Strategy for avoiding memory leak when using cv::cvtColor

Hardware platform: Jetson Nano
OpenCV version: 4.5.0

I’m using cv::cvtColor() to convert images from the RGBA to the BGR color space, and there appears to be a memory leak associated with cv::cvtColor(). Several discussion threads outside of the OpenCV forum have advocated pre-allocating memory for the destination buffer, but so far that strategy does not appear to be working for me.

The code uses cv::cvtColor() in a thread in a do-forever loop. Prior to the do-forever loop, we create a large buffer:

const uint32_t numberOfPixels = streamResolution.width() * streamResolution.height();
const uint32_t rgbaChannels = 4;
const uint32_t bgrChannels = 3;
uint8_t* largeBuffer = new uint8_t [2 * ((rgbaChannels + bgrChannels) * numberOfPixels)];

Within the do-forever loop, we have the following code block:

cv::Mat imgbuf = cv::Mat(streamResolution.height(), streamResolution.width(), CV_8UC4, pdata, params.pitch[0]);
cv::Mat bgr    = cv::Mat(streamResolution.height(), streamResolution.width(), CV_8UC3, largeBuffer);
cv::cvtColor(imgbuf, bgr, cv::COLOR_RGBA2BGR);

To detect the memory leak, I use top. If the invocation of cv::cvtColor() is commented out, no memory leak occurs.

The “large buffer” should be more than big enough to keep cv::cvtColor() from needing to allocate memory, yet the apparent leak persists.

Has anyone found a strategy for preventing this type of memory leak?

terrible idea. I see no justification for this. “this” being the use of new.

I need to see those discussions and those people for myself.

these people might not know what they are doing.

once we’ve determined what to do with these Mat objects, you might have to present your methodology for determining a “memory leak”.

if you’re concerned about allocations, you need not. Mat is perfectly capable of living for a while. cvtColor will use the memory of an existing Mat, when given as dst. if the shape is okay, there will be no reallocation.

please let me emphasize that I want to see those discussions you had. I’m disinclined to correct your misconceptions without knowing where/who they come from. this may be a case of “chinese whispers”. I need to see who you discussed this with and what their original words were.

Here is a typical example of what I have seen elsewhere:

To detect the memory leak, I just examine the output of top and watch the memory usage grow while cvtColor() is being invoked. If I comment out the invocation of cvtColor(), the memory usage does not grow over time.

See also here for a related observation.

by how much over what time span?

I need the exact code. a MRE. not a couple of pieces and an expectation that the reader fill in whatever.

that doesn’t address leaks or writing into an allocated Mat. that addresses in-place operation, which is not what you’re attempting.

that is likely related to an OpenCL context issue. if that turns out to be your problem, then it’s already been reported and expecting people to reinvestigate this would be a waste of time. you should determine if that is your issue or not. disable OpenCL at runtime and check.

The following example code does not reproduce the memory leak that I am observing in my application:

#include <thread>
#include <termios.h>
#include <sys/ioctl.h>
#include <opencv2/imgproc.hpp>
#include <iostream>
#include <atomic>
#include <cstdint>

std::atomic<bool> terminateImageProcessingThread{false};
std::atomic<bool> imageProcessingThreadTerminated{false};

void imageProcessingThread();
int getKeyboardKeyCode();

int main(int argc, char* argv[]) {
    std::cout << "Press any key to exit..." << std::endl;
    std::thread ipt(imageProcessingThread);
    for (;;) {
        if (getKeyboardKeyCode()) {
            terminateImageProcessingThread = true;
            while (!imageProcessingThreadTerminated) {}
            break;
        }
    }
    ipt.join();
    std::cout << "Done!" << std::endl;
    return 0;
}

void imageProcessingThread() {
    const int height = 480;
    const int width = 640;
    uint64_t loopcounter = 0;
    while (!terminateImageProcessingThread) {
        cv::Mat source = cv::Mat(height, width, CV_8UC4);
        cv::Mat destination = cv::Mat(height, width, CV_8UC3);
        cv::cvtColor(source, destination, cv::COLOR_RGBA2BGR);
        ++loopcounter;
    }
    std::cout << "Image processing thread terminated after processing " << loopcounter << " images" << std::endl;
    imageProcessingThreadTerminated = true;
}

int getKeyboardKeyCode() {
    static const int STDIN = 0;
    static bool initialized = false;

    if (! initialized) {
        // Use termios to turn off line buffering
        termios term{};
        tcgetattr(STDIN, &term);
        term.c_lflag &= ~ICANON;
        tcsetattr(STDIN, TCSANOW, &term);
        setbuf(stdin, nullptr);
        initialized = true;
    }

    int bytesWaiting;
    ioctl(STDIN, FIONREAD, &bytesWaiting);
    return bytesWaiting;
}

Since my application’s image processing thread maps the source cv::Mat object to a memory buffer that is allocated by an NVIDIA API, it would appear that there is some memory pointer/allocation problem that I’ll have to resolve through further analysis.

Thanks for your help on this issue!