Is it possible to output CUDA disparities in datatypes other than uint8?

Hello all,

I am trying to create disparity maps using OpenCV's createStereoBM function. Doing it on the CPU works fine, but I run into problems with the CUDA version (see the code below).

import cv2

stereo = cv2.cuda.createStereoBM(
    numDisparities=numDisparities,
    blockSize=blockSize
)

imgL_gpu = cv2.cuda_GpuMat()
imgR_gpu = cv2.cuda_GpuMat()
imgL_gpu.upload(imgL)
imgR_gpu.upload(imgR)

# Create a CUDA stream
stream = cv2.cuda_Stream()

disparity_gpu = stereo.compute(imgL_gpu, imgR_gpu, stream=stream)

# GpuMat has .type(), not a numpy-style .dtype
# print(f"disp type: {disparity_gpu.type()}")
disparity = disparity_gpu.download(stream=stream)
stream.waitForCompletion()  # the download is asynchronous on the stream

The resulting disparity maps look very strange and wrong. I realized that the CUDA version outputs disparity maps as uint8, while the CPU version outputs int16. I am working with very large images, and the maximum disparity we measured (by "hand" in XnView) was 216 pixels, so in theory the 255 limit of uint8 should be enough. However, the CPU version is known to scale its output disparity map by a factor of 16 (it returns fixed-point values with 4 fractional bits), and I am unsure whether the GPU version does this as well. If it does, that could be the cause of the problem.
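
For reference, this is how I understand the CPU output. It is only a sketch: the 16x fixed-point scaling is what the CPU StereoBM documentation describes, and it reuses numDisparities, blockSize, imgL, imgR and the downloaded disparity array from the code above.

import cv2
import numpy as np

# CPU StereoBM returns CV_16S fixed-point disparities (4 fractional bits),
# so true disparity in pixels = raw value / 16.
stereo_cpu = cv2.StereoBM_create(numDisparities=numDisparities, blockSize=blockSize)
disp_cpu_raw = stereo_cpu.compute(imgL, imgR)         # int16, scaled by 16
disp_cpu = disp_cpu_raw.astype(np.float32) / 16.0     # true pixel disparities

print("CPU range:", disp_cpu.min(), disp_cpu.max())
print("GPU range:", disparity.min(), disparity.max())  # uint8 map from above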

My question is two-fold:

  1. Is there a way to make the GPU version output disparity maps in int16, float32, or something similar instead of uint8, to make sure it handles larger disparities well?
  2. Given the uint8 disparity map it outputs, how should I pre- or post-process it to make it match the CPU version (see the sketch after this list)? The documentation on this is very scarce; I've already looked.
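
To make question 2 concrete, this is the kind of conversion I have in mind, under the assumption that the GPU map holds unscaled pixel disparities (disp_cpu_raw and disparity come from the snippets above):

import cv2
import numpy as np

# If the GPU map is unscaled pixel disparities (my assumption), this should
# put it on the same 16x fixed-point scale as the CPU output:
disp_gpu_as_cpu = disparity.astype(np.int16) * 16

# Alternatively, normalize both maps to 8-bit just for side-by-side viewing:
disp_cpu_vis = cv2.normalize(disp_cpu_raw, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8U)
disp_gpu_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8U)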

See the attached picture for how this looks. Left: input (left) image of the stereo pair. Middle: CPU disparity map. Right: GPU disparity map.
(ignore the titles above the images)

Thank you for your time.