I have a use case where I have 3D points, project them with the fisheye camera model, and then read the colors of the corresponding 2D pixels to propagate this information back to the 3D points. The 2D images are captured with a fisheye lens, hence the fisheye projection method. The camera is moving, so the rotation and translation vectors change between frames, while K and D stay the same.
I was wondering whether it is possible to separate this into two steps: create the undistortion maps once with initUndistortRectifyMap to speed up the fisheye-related transformations, and then separately project the points linearly onto the already-undistorted image using the rotation and translation vectors.
Would this work, or am I missing something? If it does work, what would the point projection onto the undistorted image look like? I guess it would be cv2.projectPoints instead of its counterpart from the fisheye module, with the old K matrix and a zero D matrix? A rough sketch of what I have in mind is below, after my current code.
import numpy as np
import cv2
# Example camera intrinsic matrix K (3x3); the fisheye functions expect float values
K = np.array([
    [500, 0, 320],  # fx, 0, cx
    [0, 500, 240],  # 0, fy, cy
    [0, 0, 1]       # 0, 0, 1
], dtype=np.float64)
# Fisheye distortion coefficients D (4 values: k1, k2, k3, k4)
D = np.array([0.1, -0.05, 0.001, 0.0001])
# Rotation vector (3x1) and translation vector (3x1)
R = np.array([[0.0], [0.0], [0.0]]) # No rotation (identity)
t = np.array([[0.0], [0.0], [5.0]]) # Translation along Z-axis
# Define 3D points in space; cv2.fisheye.projectPoints expects shape (N, 1, 3)
points_3d = np.array([
    [1.0, 1.0, 10.0],
    [2.0, -1.0, 8.0],
    [-1.0, 1.0, 6.0],
    [0.5, 0.5, 5.0]
]).reshape(-1, 1, 3)
# Project 3D points to 2D using cv2.fisheye.projectPoints
points_2d, _ = cv2.fisheye.projectPoints(points_3d, R, t, K, D)
# Example image (let's create a dummy image)
img_height, img_width = 480, 640
image = np.random.randint(0, 255, (img_height, img_width, 3), dtype=np.uint8)
# Round the 2D points to integer pixel coordinates
points_2d_rounded = np.round(points_2d).astype(int)

# Get the pixel value for each projected point
for i, (x, y) in enumerate(points_2d_rounded.reshape(-1, 2)):
    # Check whether the projected point falls within the image bounds
    if 0 <= x < img_width and 0 <= y < img_height:
        pixel_value = image[y, x]  # note: image is indexed as (row, col) = (y, x)
        # ...this color would be propagated back to 3D point i
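
And this is roughly what I imagine the two-step variant would look like. It is only a sketch (untested), and it assumes the undistorted image is built with the original K reused as the new camera matrix, which may not be the right choice:

# Step 1 (done once, since K and D are fixed): build the fisheye undistortion maps.
# Reusing the original K as the new camera matrix P is my assumption here;
# a different new camera matrix could be passed instead.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (img_width, img_height), cv2.CV_16SC2)

# Per frame: undistort the fisheye image once...
undistorted = cv2.remap(image, map1, map2, interpolation=cv2.INTER_LINEAR)

# ...then project the 3D points with the plain pinhole model:
# cv2.projectPoints with the same K, a zero distortion vector,
# and the per-frame rotation/translation.
points_2d_lin, _ = cv2.projectPoints(points_3d, R, t, K, np.zeros(5))

# Sample colors from the undistorted image instead of the raw fisheye image
for x, y in np.round(points_2d_lin).reshape(-1, 2).astype(int):
    if 0 <= x < img_width and 0 <= y < img_height:
        pixel_value = undistorted[y, x]

The intent is that this gives the same colors per 3D point as the single-step version above, with the fisheye-related work done once per image (and the maps built only once) rather than inside every point projection.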