We are currently working on an autonomous vehicle project and are testing the code in the CARLA simulator. I have a vehicle in the simulation with two cameras mounted on it at a fixed distance from each other. The cameras are not rotated relative to each other, i.e. the rotation between them is the identity matrix, and the distance between them is 1 meter, so the translation vector is [0, 1, 0]. I captured images from both cameras, extracted features with ORB, and matched the 2D points with a brute-force matcher. I wrote a triangulation function to convert the matched points into 3D points, but the resulting points are consistently extremely small or otherwise meaningless. I cannot figure out where I went wrong; I would be very happy if you could help.
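One detail I am not sure about: CARLA uses a left-handed x-forward, y-right, z-up frame, while OpenCV's camera model uses x-right, y-down, z-forward, so an offset given as [0, 1, 0] in CARLA coordinates would point along the camera's x-axis. A small sketch of that change of basis (the matrix below is my assumption of the usual CARLA-to-OpenCV mapping, not something from my code):

```python
import numpy as np

# CARLA world/vehicle axes: x forward, y right, z up.
# OpenCV camera axes:       x right, y down, z forward.
# Assumed change of basis: (x, y, z)_carla -> (y, -z, x)_cam.
carla_to_cam = np.array([[0.0, 1.0,  0.0],
                         [0.0, 0.0, -1.0],
                         [1.0, 0.0,  0.0]])

baseline_carla = np.array([0.0, 1.0, 0.0])   # 1 m to the right, CARLA frame
baseline_cam = carla_to_cam @ baseline_carla
print(baseline_cam)                          # baseline along the camera x-axis
```

If that mapping is right, the baseline I feed into the projection matrices should lie along the camera x-axis rather than y.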
My code :
```python
import cv2
import numpy as np


def triangulate_points(self, points1, points2):
    # Fixed translation vector between the cameras:
    # the horizontal distance between them is 1.0 meters
    T = np.array([0, 1.0, 0]).reshape(3, 1)

    # Camera parameters
    image_size_x = 640  # image width in pixels
    image_size_y = 480  # image height in pixels
    fov = 110           # field of view in degrees

    # Intrinsic parameters:
    # 'f' is the focal length derived from the field of view
    f = image_size_x / (2 * np.tan(fov * np.pi / 360))
    # 'cx' and 'cy' are the coordinates of the principal point (image center)
    cx = image_size_x / 2
    cy = image_size_y / 2
    # 'K' is the intrinsic matrix containing these parameters
    K = np.array([[f, 0, cx],
                  [0, f, cy],
                  [0, 0, 1]])

    # Rotation and projection matrices
    # 'R' is the rotation matrix; identity, since the cameras are not rotated
    R = np.eye(3)
    # 'P1' and 'P2' are the projection matrices of the two cameras
    P1 = np.hstack((K, np.zeros((3, 1))))  # first camera  [K|0]
    P2 = np.hstack((K, T))                 # second camera [K|T] (R is identity)

    # Triangulate the matched points
    try:
        # Reshape and convert the points to floating point
        points1 = points1.reshape(-1, 2).astype(float)
        points2 = points2.reshape(-1, 2).astype(float)

        # Triangulate using OpenCV's function
        points4D = cv2.triangulatePoints(P1, P2, points1.T, points2.T)
        points4D = points4D.astype(float)

        # Keep only points with a non-zero homogeneous coordinate (w != 0)
        mask = points4D[3] != 0
        if np.any(mask):
            points4D[:, mask] /= points4D[3, mask]
            points3D = points4D[:3, mask].T
            print(f"Number of valid 3D points after triangulation: {len(points3D)}")
            return points3D
        else:
            print("Warning: No valid points after triangulation.")
            return None
    except cv2.error as e:
        print(f"Error in triangulate_points: {e}")
        return None
```
Results :
3D Points:

```
[[-9.37852011e-06 -2.23542121e-03  9.46576977e-06]
 [-1.06378488e-05 -2.23494570e-03  8.89392812e-06]
 [-1.06853828e-05 -2.23560213e-03  9.69321238e-06]
 [-1.07964941e-05 -2.23669609e-03  1.08969354e-05]
 [-1.17854147e-05 -2.23596270e-03  1.01176338e-05]
 [-1.18196214e-05 -2.23617017e-03  1.07832534e-05]
 [-1.27338944e-05 -2.23878979e-03  1.28062113e-05]
 [-1.60020040e-05 -2.24149623e-03  1.45932123e-05]
 [-1.42502834e-05 -2.23880773e-03  1.28377051e-05]
 [-1.08929527e-05 -2.23514820e-03  9.07740518e-06]
 [-1.66937412e-05 -2.24150611e-03  1.46076301e-05]
 [-1.29284274e-05 -2.23894660e-03  1.30487023e-05]
 [-1.93861653e-08 -2.23171009e-03  6.68271828e-07]
 [-1.30256227e-06 -2.23492214e-03  1.04985299e-05]
 [-8.75291297e-06 -2.23512443e-03  8.77200646e-06]
 [-3.06790332e-06 -2.23660554e-03  1.34786405e-05]...
```
I have tried various methods, but I could not get any correct results.