Copy DirectX 12 texture to GpuMat

Hello,

I’m trying to copy a DirectX 12 texture to a GpuMat with a pure GPU memory transfer.
In the OpenCV documentation I haven’t seen any reference to DirectX 12, just 10 or 11.
So I copy the texture into CUDA memory and wrap that memory in a GpuMat.

void getExternalMemory(DX12Texture& d3d12Texture)
{
	// Describe the shared D3D12 resource so CUDA can import it.
	cudaExternalMemoryHandleDesc memDesc{};
	memDesc.type = cudaExternalMemoryHandleTypeD3D12Resource;
	memDesc.handle.win32.handle = d3d12Texture.handle; // NT handle to the shared resource
	memDesc.size = d3d12Texture.memSize;
	memDesc.flags = cudaExternalMemoryDedicated;

	cudaError_t cudaError = cudaImportExternalMemory(&m_externalMemory, &memDesc);
	CV_Assert(cudaError == cudaSuccess);
}
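For reference, d3d12Texture.handle and d3d12Texture.memSize would come from the D3D12 side roughly like this (a minimal sketch, not my exact code; the helper names are illustrative and error handling is omitted):

HANDLE createSharedHandle(ID3D12Device* device, ID3D12Resource* resource)
{
	// The resource must have been created with D3D12_HEAP_FLAG_SHARED,
	// otherwise CreateSharedHandle fails.
	HANDLE sharedHandle = nullptr;
	device->CreateSharedHandle(resource, nullptr, GENERIC_ALL, nullptr, &sharedHandle);
	return sharedHandle; // stored as d3d12Texture.handle
}

UINT64 getAllocationSize(ID3D12Device* device, ID3D12Resource* resource)
{
	// Size of the whole allocation, which is what CUDA expects for a
	// dedicated import (cudaExternalMemoryDedicated).
	D3D12_RESOURCE_DESC desc = resource->GetDesc();
	return device->GetResourceAllocationInfo(0, 1, &desc).SizeInBytes; // stored as d3d12Texture.memSize
}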

void mapMemory(cv::cuda::GpuMat& mat, DX12Texture& d3d12Texture)
{
	// Map the imported D3D12 allocation to a CUDA device pointer.
	void* pData = nullptr;
	cudaExternalMemoryBufferDesc buffDesc{};
	buffDesc.offset = 0;
	buffDesc.size = d3d12Texture.memSize;

	cudaError_t cudaError = cudaExternalMemoryGetMappedBuffer(&pData, m_externalMemory, &buffDesc);
	CV_Assert(cudaError == cudaSuccess);

	//auto format = cv::directx::getTypeFromDXGI_FORMAT(d3d12Texture.resource->GetDesc().Format);
	auto format = cv::directx::getTypeFromDXGI_FORMAT(31); // D3DFMT_A2B10G10R10 is the same as DXGI_FORMAT_R10G10B10A2_UNORM

	// Wrap the mapped pointer (no copy; step defaults to Mat::AUTO_STEP),
	// then deep-copy into the output mat.
	cv::cuda::GpuMat(static_cast<int>(d3d12Texture.resource->GetDesc().Height),
		static_cast<int>(d3d12Texture.resource->GetDesc().Width),
		format, pData).copyTo(mat);

	// Buffers mapped from external memory are released with cudaFree.
	cudaFree(pData);
}
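And for completeness, the imported memory also has to be released at some point — a minimal teardown sketch, assuming m_externalMemory was imported as above:

void releaseExternalMemory()
{
	if (m_externalMemory)
	{
		// Counterpart of cudaImportExternalMemory: release the imported
		// D3D12 allocation once all mapped buffers have been cudaFree'd.
		cudaDestroyExternalMemory(m_externalMemory);
		m_externalMemory = nullptr;
	}
}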

My result is:

[screenshot of the incorrect output]
I don’t know what I missed.
Do you have any idea?
Thanks.
Sorry for my English.

For information, the description of the DirectX 12 texture is:
alignment = 65536
width = 1936
height = 1066
depth = 1
mip level = 1
format = DXGI_FORMAT_R10G10B10A2_UNORM

true ;(

How did you get that image?

Is that your only choice? (It does not fit any known OpenCV format.)

Also, I’d think you’d need an OpenCV enum for the ‘type’ param, like CV_8UC3, not the DX ‘format’.

The texture comes from the Unreal back buffer and the format is
DXGI_FORMAT_R10G10B10A2_UNORM.

I saw on another forum that DXGI_FORMAT_R10G10B10A2_UNORM is equivalent to D3DFMT_A2B10G10R10, which seems to be supported by OpenCV.
For the type I use getTypeFromDXGI_FORMAT to convert to an OpenCV type.
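(Side note worth double-checking: D3DFMT_A2B10G10R10 = 31 belongs to the old D3D9 D3DFORMAT enum, while getTypeFromDXGI_FORMAT expects a DXGI_FORMAT value, where R10G10B10A2_UNORM = 24. A minimal sketch of the distinction, assuming the cv::directx module is available:

#include <opencv2/core/directx.hpp>

// 31 means D3DFMT_A2B10G10R10 only in the D3D9 enum; in the DXGI enum
// the same number is a different format, so each converter needs the
// value from its own enum.
int typeFromD3D9 = cv::directx::getTypeFromD3DFORMAT(31);   // D3DFMT_A2B10G10R10
int typeFromDXGI = cv::directx::getTypeFromDXGI_FORMAT(24); // DXGI_FORMAT_R10G10B10A2_UNORM

Either call should return -1 if the format is not in OpenCV’s mapping table.)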

OK, so the type resolves to CV_8UC4 (directx.cpp:103) [solved, sorry I didn’t get it earlier]…

IMO, that would mean OpenCV still expects a B8G8R8X8 bit layout or similar
(your memory does not align channels to bytes).

Any chance you can get a more ‘OpenCV-friendly’ layout from Unreal?

I changed the Unreal configuration; now the output format is better understood by OpenCV.
Now the colors are correct but the image is still “blurred”.

I saw an OpenCV/Unreal example, but they use cv::Mat. I would like to avoid CPU memory.

Sorry for my English.

Strides/steps/whatever don’t seem to match. Mess around with that.

Perhaps simplify first. I see indications that you’re trying to work with 10-bit data.

Make this work with the usual 8-bit-per-channel formats. When that works, you can figure out how those special formats work.

Strides/steps/whatever don’t seem to match. Mess around with that.

Agreed, but in the last picture the GpuMat type is CV_8UC4 and the DirectX texture source is DXGI_FORMAT_B8G8R8A8_UNORM
(CV_8UC4 = getTypeFromDXGI_FORMAT(DXGI_FORMAT_B8G8R8A8_UNORM)).

@roy_c, @crackwitz is talking about the stride. It looks like you are using the default size_t step = Mat::AUTO_STEP, but you mentioned the DirectX 12 texture has an alignment of 65536?

I don’t set the step when I create the GpuMat, so step has the default value Mat::AUTO_STEP.
alignment = 65536 is part of the DirectX texture description retrieved by
TextureResource->GetDesc(), but the Microsoft documentation is very minimal and just says:

“Specifies the alignment.”

I don’t know anything about DirectX, but I would guess that if you have a 2D texture there is a row alignment parameter, meaning that each row is pitched with a step size >= width. If you are directly mapping the memory and pitch != width, then you need to find out what the pitch is and pass it to the GpuMat.
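Something like this, for illustration (an untested sketch; rowPitch must be whatever pitch in bytes DirectX actually uses for the texture, >= width * 4 for CV_8UC4):

cv::cuda::GpuMat wrapMapped(void* pData, int width, int height, size_t rowPitch)
{
	// Pass the pitch explicitly instead of letting Mat::AUTO_STEP assume
	// densely packed rows (step == width * elemSize).
	return cv::cuda::GpuMat(height, width, CV_8UC4, pData, rowPitch);
}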

I found a solution:
make the DirectX texture with ROW_MAJOR as the layout,
then find the right row pitch.
(Very weird with DirectX:
with a 1936 × 1066 texture in R8G8B8A8, the pitch is 7808 bytes, i.e. 1936 × 4 = 7744 plus 64 bytes of row padding, which looks like rows rounded up to a multiple of 128 bytes;
with a 1920 × 1080 texture, the pitch is exactly 1920 × 4 = 7680 bytes, already a multiple of 128.)
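For reference, the pitch can also be queried from D3D12 instead of measured by hand — a sketch using GetCopyableFootprints. (The 65536 alignment discussed earlier is D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT, the placement alignment of the whole resource, not the row pitch. Also note GetCopyableFootprints pads pitches to D3D12_TEXTURE_DATA_PITCH_ALIGNMENT = 256 bytes, which would give 7936 for a 1936-wide RGBA8 texture rather than the 7808 observed above, so verify it against what the CUDA mapping actually shows.)

size_t getRowPitch(ID3D12Device* device, ID3D12Resource* resource)
{
	// Ask the device for the copyable layout of subresource 0; for a
	// ROW_MAJOR texture, Footprint.RowPitch is the row step in bytes.
	D3D12_RESOURCE_DESC desc = resource->GetDesc();
	D3D12_PLACED_SUBRESOURCE_FOOTPRINT footprint{};
	device->GetCopyableFootprints(&desc, 0, 1, 0, &footprint, nullptr, nullptr, nullptr);
	return footprint.Footprint.RowPitch;
}

With the pitch known, the GpuMat can be constructed with that explicit step, and copyTo() handles the row padding.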
