Erroneous point cloud generated by cv2.reprojectImageTo3D()

I’m trying to extract depth information from a scene using a stereo fisheye camera pair, and I’m having trouble generating a valid point cloud from my disparity map.

I’ve been able to calibrate my cameras and perform image rectification using cv2.fisheye.stereoCalibrate() and cv2.fisheye.stereoRectify() respectively, and I have valid undistortion and rectification maps for cv2.remap().

I’m using cv2.StereoSGBM_create() to generate a disparity map that looks somewhat noisy but still seems like a reasonable starting point. However, when I generate a point cloud with cv2.reprojectImageTo3D(), the output looks completely wrong:

It’s wildly unlike the disparity map, much worse than could be attributed to noise. It certainly isn’t useful as a 3D representation.

The Q matrix I’m given by cv2.fisheye.stereoRectify() is:

[[   1.            0.            0.         -317.47177241]
 [   0.            1.            0.         -427.29921546]
 [   0.            0.            0.          425.32021226]
 [   0.            0.            0.71428543   -0.        ]]
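For reference, cv2.reprojectImageTo3D() applies Q as a projective transform: each pixel (x, y) with disparity d maps to Q @ [x, y, d, 1]ᵀ, divided through by the resulting W component. A quick numpy sketch using the matrix above (the pixel coordinates and disparity here are arbitrary, not from the real data):

```python
import numpy as np

# The Q matrix returned by cv2.fisheye.stereoRectify(), copied from above
Q = np.float64([[1, 0, 0,          -317.47177241],
                [0, 1, 0,          -427.29921546],
                [0, 0, 0,           425.32021226],
                [0, 0, 0.71428543,    0.        ]])

# reprojectImageTo3D computes [X, Y, Z, W]^T = Q @ [x, y, d, 1]^T per
# pixel and returns (X/W, Y/W, Z/W); with this Q the depth works out to
# Z = Q[2,3] / (Q[3,2] * d), i.e. inversely proportional to disparity.
x, y, d = 100.0, 200.0, 20.0  # arbitrary pixel and disparity
X, Y, Z, W = Q @ np.array([x, y, d, 1.0])
point = np.array([X, Y, Z]) / W
print(point)
```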

I’ve tried generating point clouds using other Q matrices with no discernible difference in the point cloud, and I’m now at a total loss for what to look at next. Any help is much appreciated.


You could provide usable data. npy/npz files would be good, and source code (link to it if it lives elsewhere).

OK - if you visit this link you can download an example left and right image and an NPZ file with the required parameters for the code below. Put them in a folder called data and you should be able to use this code to generate a disparity map and point cloud similar to what I’ve posted above.

import cv2
from matplotlib import pyplot as plt
import numpy as np

# Define path and filename for output file
PATH = './data/'
OUTPUT_FILE = 'point_cloud.ply'

# Function to create point cloud file
# From
def create_output(vertices, colors, filename):
	colors = colors.reshape(-1, 3)
	vertices = np.hstack([vertices.reshape(-1, 3), colors])

	# The PLY header lines must not be indented and must end with 'end_header'
	ply_header = '''ply
format ascii 1.0
element vertex %(vert_num)d
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
end_header
'''
	with open(filename, 'w') as f:
		f.write(ply_header % dict(vert_num=len(vertices)))
		np.savetxt(f, vertices, '%f %f %f %d %d %d')

# Load example images from cameras and prepare rectified / trimmed dicts
image = {
	'left' : cv2.imread(PATH + 'left_fisheye.jpg'),
	'right': cv2.imread(PATH + 'right_fisheye.jpg')
}
image_rectified = {}
image_trimmed = {}

# Load calibration parameters
pars = np.load(PATH + 'rectification_pars.npz')

K = {
	'left' : pars['K1'],
	'right': pars['K2']
}
D = {
	'left' : pars['D1'],
	'right': pars['D2']
}
rvecs = pars['rvecs']
tvecs = pars['tvecs']
im_size = pars['im_size']
output_size = pars['output_size']

# Initialise remaining dicts
R = {}
P = {}
map1 = {}
map2 = {}

# Fisheye stereo rectification
R['left'], R['right'], P['left'], P['right'], Q = cv2.fisheye.stereoRectify(
	K1=K['left'], D1=D['left'],
	K2=K['right'], D2=D['right'],
	R=rvecs, tvec=tvecs,
	imageSize=tuple(im_size),
	flags=cv2.CALIB_ZERO_DISPARITY,  # flag value assumed; truncated in the post
	newImageSize=tuple(output_size))

# Manual image cropping parameters
# Trim the black regions from rectified pincushion images and correct vertical offset
crop = {
	'w': 600,
	'h': 467,
	'x': 33,
	'y': 132,
	'v': {
		'left' : 0,
		'right': 33
	}
}

# Perform undistortion and rectification
for cam in ['left', 'right']:
	# Computes undistortion and rectification maps
	map1[cam], map2[cam] = cv2.fisheye.initUndistortRectifyMap(
		K[cam], D[cam], R[cam], P[cam], tuple(output_size), cv2.CV_16SC2)

	# Rectify input image
	image_rectified[cam] = cv2.remap(
		image[cam], map1[cam], map2[cam], interpolation=cv2.INTER_LINEAR)

	# Trim rectified image
	image_trimmed[cam] = image_rectified[cam][
						 crop['y'] + crop['v'][cam]:crop['y'] + crop['v'][cam] + crop['h'],
						 crop['x']:crop['x'] + crop['w']
	]

	cv2.imwrite(PATH + '{}_rectified.jpg'.format(cam), image_rectified[cam])
	cv2.imwrite(PATH + '{}_trimmed.jpg'.format(cam), image_trimmed[cam])

# Create SGBM object
stereo = cv2.StereoSGBM_create(
	P1=8 * 3 * 4 ** 2,
	P2=32 * 3 * 4 ** 2
)

# Compute disparity map and point cloud
disparity_map = stereo.compute(image_trimmed['left'], image_trimmed['right']).astype(np.float32) / 16.0  # SGBM returns fixed-point disparities scaled by 16
points_3D = cv2.reprojectImageTo3D(disparity_map, Q)

# Show disparity map
plt.imshow(disparity_map, 'gray')
plt.show()

# Remove INF values from point cloud
points_3D[points_3D == float('+inf')] = 0
points_3D[points_3D == float('-inf')] = 0

# Mask out pixels at the minimum disparity (i.e. no valid match / no depth)
mask_map = disparity_map > disparity_map.min()

# Mask colors and points
colors = cv2.cvtColor(image_trimmed['left'], cv2.COLOR_BGR2RGB)
output_points = points_3D[mask_map]
output_colors = colors[mask_map]

# Generate point cloud
output_file = PATH + OUTPUT_FILE
create_output(output_points, output_colors, output_file)

print('All done!')

I’m aware that the input images are far from ideal and I’m working separately to improve them, but as far as I can tell the disparity map is reasonable given the inputs, while the point cloud seems to be totally off base.
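Incidentally, one general StereoSGBM gotcha worth ruling out here (synthetic numbers below, not taken from the real data): compute() returns fixed-point int16 disparities scaled by 16, so reprojecting the raw map treats every disparity as 16x larger than it really is and compresses all depths toward the camera:

```python
import numpy as np

# StereoSGBM.compute() returns int16 disparities in fixed point, scaled
# by 16; divide by 16.0 before passing the map to reprojectImageTo3D().
raw_disparity = np.array([[320, 160, 80]], dtype=np.int16)  # synthetic SGBM output
true_disparity = raw_disparity.astype(np.float32) / 16.0    # 20, 10, 5 pixels

# Depth is proportional to 1/disparity, so the raw map puts every point
# 16x closer to the camera than it should be.
f_times_baseline = 600.0  # placeholder value, not from the calibration
depth_raw = f_times_baseline / raw_disparity.astype(np.float32)
depth_true = f_times_baseline / true_disparity
print(depth_true / depth_raw)  # 16.0 for every pixel
```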

After a lot more experimentation and digging around, it turns out that the Q matrix provided by cv2.fisheye.stereoRectify() is, basically, wrong. I’m not sure what’s different about my workflow that causes this issue (presumably other people are able to obtain valid Q matrices?), but my issue was resolved by using a Q matrix of the following form:

Q = np.float32([[1, 0, 0, 0],
				[0, -1, 0, 0],
				[0, 0, f, 0],
				[0, 0, 0, 1]])

where f is 1 / focal_length, though in this case it was just manually tuned to give good depth values.
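For comparison, the Q matrix that the (non-fisheye) cv2.stereoRectify() documentation describes is built from the rectified focal length, principal point, and baseline. A sketch of constructing it by hand; the numeric values below are placeholders loosely based on the matrix posted earlier, not real calibration output:

```python
import numpy as np

# Q layout documented for cv2.stereoRectify(); depth then follows
# Z = f * Tx / disparity. cx_right differs from cx only when
# CALIB_ZERO_DISPARITY is not used during rectification.
def manual_q(f, cx, cy, tx, cx_right=None):
	if cx_right is None:
		cx_right = cx
	return np.float32([
		[1, 0, 0, -cx],
		[0, 1, 0, -cy],
		[0, 0, 0, f],
		[0, 0, -1.0 / tx, (cx - cx_right) / tx],
	])

# Placeholder numbers: a baseline of -1.4 reproduces the 0.714 entry in
# the matrix from cv2.fisheye.stereoRectify(), since -1 / -1.4 = 0.714...
Q = manual_q(f=425.32, cx=317.47, cy=427.30, tx=-1.4)
print(Q)
```

With those placeholder values the result has the same shape as the matrix stereoRectify returned above, which suggests the layout itself matches the documentation and the divergence lies in the values fed into it.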