Using OpenCV with OIDN

Hi there!
Noob question here, sorry.
Is it possible to convert an OpenCV Mat to a buffer readable by Intel Open Image Denoise, and back?
I was thinking about writing a temp file, then reading it in OIDN, then writing again, and reading back again in OpenCV, but that doesn't seem efficient.

that’s for sure not necessary, as you can extract a Mat’s data pointer, and create a new Mat from arbitrary pointers on the way back

do you have an OIDN code example, so we can estimate what exactly is required here?

(i only found this:)


Thank you so much for your time!

I am a real noob at programming, I am a designer trying to guerrilla some stuff =X so bear with me XD

I did try using the mat, like so:

	src2.convertTo(src2, CV_32FC3, 1.0, 0.0);
	src.convertTo(src, CV_32FC3, 1.0, 0.0);

	//savePFM(src, "c:/tmp/temp.pfm");

	// Create an Intel Open Image Denoise device
	oidn::DeviceRef device = oidn::newDevice();
	device.commit();

	// Create a filter for denoising a beauty (color) image using optional auxiliary images too
	oidn::FilterRef filter = device.newFilter("RT"); // generic ray tracing filter
	filter.setImage("color", (float*)src.data, oidn::Format::Float3, width, height); // beauty
	//filter.setImage("albedo", albedoPtr, oidn::Format::Float3, width, height); // auxiliary
	//filter.setImage("normal", normalPtr, oidn::Format::Float3, width, height); // auxiliary
	filter.setImage("output", (float*)src2.data, oidn::Format::Float3, width, height); // denoised beauty
	filter.set("hdr", true); // beauty image is HDR
	filter.commit();

	// Filter the image
	filter.execute();

	// Check for errors
	const char* errorMessage;
	if (device.getError(errorMessage) != oidn::Error::None)
		printf("Error: %s\n", errorMessage);

	src2.convertTo(src2, CV_32FC4, 1.0, 0.0);
	src.convertTo(src, CV_32FC4, 1.0, 0.0);

OIDN is executing, and doing something, but the image is corrupted, so I must be doing something very wrong on the way in or out.

result image:

what are src and src2, originally?
how do you get those?

Those are from adobe after effects, they are read like so:

	int width = ae->output->width;
	int height = ae->output->height;
	cv::Mat src = CAEcv::WorldToMat(ae, ae->input);

with this function

static cv::Mat WorldToMat32(PF_EffectWorldPtr world, A_long addX = 0, A_long addY = 0)
{
	if (addX < 0) addX = 0;
	if (addY < 0) addY = 0;
	int w = world->width;
	int wt = world->rowbytes / sizeof(PF_PixelFloat);
	int h = world->height;
	int mw = w + addX * 2;
	int mh = h + addY * 2;

	cv::Mat ret(cv::Size(mw, mh), CV_32FC4);
	ret = cv::Scalar(0, 0, 0, 0);
	CV32FC4_Pixel* mData = (CV32FC4_Pixel*)ret.data;
	PF_PixelFloat* wData = (PF_PixelFloat*)world->data;

	A_long matPos = addX + addY * mw;
	A_long wldPos = 0;

	for (A_long y = 0; y < h; y++)
	{
		for (A_long x = 0; x < w; x++)
		{
			A_long matPosx = matPos + x;
			A_long wldPosx = wldPos + x;
			mData[matPosx].blue = wData[wldPosx].blue;
			mData[matPosx].green = wData[wldPosx].green;
			mData[matPosx].red = wData[wldPosx].red;
			mData[matPosx].alpha = wData[wldPosx].alpha;
		}
		matPos += mw;
		wldPos += wt;
	}
	return ret;
}

In this case they are 32-bit floats (and always will be for this use case)

due to this, src has 4 channels, instead of 3

I am converting them using

	src2.convertTo(src2, CV_32FC3, 1.0, 0.0);
	src.convertTo(src, CV_32FC3, 1.0, 0.0);

// doing oidn stuff
// then converting back

	src2.convertTo(src2, CV_32FC4, 1.0, 0.0);
	src.convertTo(src, CV_32FC4, 1.0, 0.0);

Should I be doing something else?

convertTo() does not change the channel count, only the mat’s ‘depth’ (U8 → 32F)


src.convertTo(src, CV_32F);

and simply preallocate src2 like:

Mat src2(src.size(), CV_32FC3);

I did use the following now instead:

	cv::cvtColor(src2, src2, cv::COLOR_RGBA2RGB, 3); 
	cv::cvtColor(src, src, cv::COLOR_RGBA2RGB, 3);
//do stuff
	cv::cvtColor(src2, src2, cv::COLOR_RGB2RGBA, 4);
	cv::cvtColor(src, src, cv::COLOR_RGB2RGBA, 4);

It seems to “work”

Even though the image is not corrupted now, it’s not denoising haha.

but I guess it’s no longer an OpenCV problem

Thank you SO VERY MUCH


try converting from [0…256] to [0…1]:

src.convertTo(src, CV_32F, 1.0/256);

(just a guess … )


Hi! Thanks Berak!!

The AE input is already a float, so that was not the problem.

The problem was that the render used to test it was made with a Gaussian filter, which does not denoise because of pixel correlation.

I redid the frame using a box filter and it’s now denoising properly!
Thanks a ton!

no, i meant: scale by 1.0/256
