I’m trying to copy a DirectX 12 texture to a GpuMat with a pure GPU memory transfer.
In the OpenCV documentation I haven’t seen any reference to DirectX 12, only 10 and 11.
So I import the texture into CUDA memory and hand that memory to a GpuMat.
void getExternalMemory(DX12Texture& d3d12Texture)
{
    // Import the shared D3D12 resource handle as CUDA external memory.
    cudaExternalMemoryHandleDesc memDesc{};
    memset(&memDesc, 0, sizeof(memDesc));
    memDesc.type = cudaExternalMemoryHandleTypeD3D12Resource;
    memDesc.handle.win32.handle = d3d12Texture.handle;
    memDesc.size = d3d12Texture.memSize;
    memDesc.flags = cudaExternalMemoryDedicated;
    cudaError_t cudaError = cudaImportExternalMemory(&m_externalMemory, &memDesc);
}
void mapMemory(cv::cuda::GpuMat& mat, DX12Texture& d3d12Texture)
{
    // Map the imported external memory as a plain device buffer.
    void* pData = nullptr;
    cudaExternalMemoryBufferDesc buffDesc{};
    memset(&buffDesc, 0, sizeof(buffDesc));
    buffDesc.offset = 0;
    buffDesc.size = d3d12Texture.memSize;
    cudaExternalMemoryGetMappedBuffer(&pData, m_externalMemory, &buffDesc);
    //auto format = cv::directx::getTypeFromDXGI_FORMAT(d3d12Texture.resource->GetDesc().Format);
    // 31 = D3DFMT_A2B10G10R10, which is equivalent to DXGI_FORMAT_R10G10B10A2_UNORM
    auto format = cv::directx::getTypeFromDXGI_FORMAT(31);
    // Wrap the mapped pointer (no step given, so Mat::AUTO_STEP) and copy it out.
    cv::cuda::GpuMat(static_cast<int>(d3d12Texture.resource->GetDesc().Height),
                     static_cast<int>(d3d12Texture.resource->GetDesc().Width),
                     format, pData).copyTo(mat);
    // Buffers returned by cudaExternalMemoryGetMappedBuffer are freed with cudaFree.
    cudaFree(pData);
}
I don’t know what I’m missing.
Do you have an idea?
Thanks, and sorry for my English.
For information, the description of the DirectX 12 texture is:
alignment = 65536
width = 1936
height = 1066
depth = 1
mip level = 1
format = DXGI_FORMAT_R10G10B10A2_UNORM
The texture comes from the Unreal Engine back buffer and its format is
DXGI_FORMAT_R10G10B10A2_UNORM.
I saw on another forum that DXGI_FORMAT_R10G10B10A2_UNORM is equivalent to D3DFMT_A2B10G10R10, which seems to be supported by OpenCV.
For the type, I use getTypeFromDXGI_FORMAT to convert to an OpenCV type.
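One thing worth double-checking here: the two enums don’t share values (DXGI_FORMAT_R10G10B10A2_UNORM is 24 in the DXGI_FORMAT enum, while 31 is D3DFMT_A2B10G10R10 in the legacy D3DFORMAT enum), so a hard-coded 31 passed to getTypeFromDXGI_FORMAT actually selects a different format. A minimal sketch of a guarded lookup; the CV_8UC4 fallback is my own assumption, not something OpenCV provides:

#include <opencv2/core.hpp>
#include <opencv2/core/directx.hpp>

// D3DFORMAT values belong to cv::directx::getTypeFromD3DFORMAT;
// DXGI_FORMAT values belong to cv::directx::getTypeFromDXGI_FORMAT.
int cvTypeFromDxgiFormat(int dxgiFormat)
{
    int type = cv::directx::getTypeFromDXGI_FORMAT(dxgiFormat);
    if (type < 0)
    {
        // A negative return means OpenCV does not support the format.
        // There is no CV_* depth for 10-bit channels, so as an assumption
        // we view the packed 32-bit pixels as CV_8UC4 and unpack later.
        type = CV_8UC4;
    }
    return type;
}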
The strides/steps don’t seem to match. Mess around with that.
Agreed, but in the last picture the GpuMat type is CV_8UC4 and the DirectX texture source is DXGI_FORMAT_B8G8R8A8_UNORM
(CV_8UC4 = getTypeFromDXGI_FORMAT(DXGI_FORMAT_B8G8R8A8_UNORM)).
@roy_c, @crackwitz is talking about the stride. It looks like you are using the default size_t step = Mat::AUTO_STEP, but you mentioned the DirectX 12 texture has an alignment of 65536?
I don’t set the step when I create the GpuMat, so step has the default value Mat::AUTO_STEP.
alignment = 65536 is part of the DirectX texture description retrieved by
TextureResource->GetDesc(), but the Microsoft documentation for it is very minimal.
I don’t know anything about DirectX, but I would guess that for a 2D texture there is a row alignment parameter, meaning each row is pitched to a step size >= the row width in bytes. If you are directly mapping the memory and the pitch != width, then you need to find out what the pitch is and pass it to the GpuMat.
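In GpuMat terms that means using the constructor overload that takes an explicit step in bytes instead of relying on Mat::AUTO_STEP. A minimal sketch; the wrapper name is mine, and pitchBytes stands for whatever row pitch the texture really uses:

#include <opencv2/core/cuda.hpp>

// Wrap already-mapped device memory in a GpuMat with an explicit step.
// AUTO_STEP assumes tightly packed rows (step == cols * elemSize()),
// which is wrong whenever the texture rows are padded.
cv::cuda::GpuMat wrapPitchedDeviceMemory(void* devPtr, int width, int height,
                                         int cvType, size_t pitchBytes)
{
    // The last constructor argument is the step in bytes between rows.
    return cv::cuda::GpuMat(height, width, cvType, devPtr, pitchBytes);
}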
I found a solution:
create the DirectX texture with D3D12_TEXTURE_LAYOUT_ROW_MAJOR as its layout,
then find the correct row pitch (a sketch for querying it follows below).
The pitch looked very weird at first, but it is just the row size in bytes rounded up to D3D12’s 256-byte pitch alignment (D3D12_TEXTURE_DATA_PITCH_ALIGNMENT):
with a 1936 * 1066 texture in R8G8B8A8, 1936 * 4 = 7744 bytes rounds up to a pitch of 7936 bytes;
with a 1920 * 1080 texture, 1920 * 4 = 7680 bytes is already a multiple of 256, so the pitch stays 7680 bytes.
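Rather than working the padding out by hand, the pitch can be queried from the device with GetCopyableFootprints. A minimal sketch, assuming device and resource are the ID3D12Device and ID3D12Resource the texture was created from:

#include <d3d12.h>

// Query the row pitch D3D12 actually uses for subresource 0.
// For ROW_MAJOR textures this is the row size in bytes rounded up to
// D3D12_TEXTURE_DATA_PITCH_ALIGNMENT (256 bytes).
size_t queryRowPitch(ID3D12Device* device, ID3D12Resource* resource)
{
    D3D12_RESOURCE_DESC desc = resource->GetDesc();
    D3D12_PLACED_SUBRESOURCE_FOOTPRINT footprint{};
    device->GetCopyableFootprints(&desc, 0, 1, 0, &footprint,
                                  nullptr, nullptr, nullptr);
    return footprint.Footprint.RowPitch;
}

The value it returns is what the GpuMat in mapMemory above should receive as its step argument in place of the implicit Mat::AUTO_STEP.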