I have a LIDAR synchronized to the left camera of a binocular stereo rig (the left camera is paired with a second camera in a stereo setup). I have projected the lidar points onto the image plane of the left camera, so I have depth values for those pixels.
I want to convert these depth values to disparity values. How do I do this? I thought it was a simple transformation, but I think I am wrong.
can you show how you do that?
like, a 1-to-1 (dense) correspondence?
or rather a pixel value for each (sparse) lidar point?
and what do you want the disparity for?
while the formula itself is pretty easy:
depth = (baseline * focal_length) / disparity
so, in reverse:
disparity = (baseline * focal_length) / depth
again, we'd probably need to see your lidar->pixel mapping to come up with some idea of how to estimate the disparity for the remaining pixels.
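as a minimal sketch of the sparse case, here's the reverse formula applied only where a lidar point landed. the baseline and focal length values are made-up placeholders; substitute your rig's calibration. one caveat: this formula assumes a rectified stereo pair, and `depth` must be z along the optical axis of the rectified left camera, not the raw lidar range.

```python
import numpy as np

# Hypothetical calibration values -- replace with your rig's actual numbers.
baseline = 0.12        # meters, distance between the two camera centers
focal_length = 700.0   # pixels, focal length of the rectified left camera

# Sparse depth map: one z-depth (meters) per projected lidar point;
# 0 marks pixels with no lidar return.
depth = np.zeros((4, 4), dtype=np.float64)
depth[1, 2] = 10.0
depth[3, 0] = 5.0

# disparity = (baseline * focal_length) / depth, only where depth is valid.
disparity = np.zeros_like(depth)
valid = depth > 0
disparity[valid] = (baseline * focal_length) / depth[valid]

print(disparity[1, 2])  # ~8.4  (0.12 * 700 / 10)
print(disparity[3, 0])  # ~16.8 (0.12 * 700 / 5)
```

pixels without a lidar return keep disparity 0 here; densifying them is a separate (interpolation / completion) problem.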