# Get pixel values from undistorted image

I have a use case where I have 3D points, project them with the fisheye camera model, and then read the colors of the corresponding 2D pixels to propagate this information back to the 3D points. The 2D images are collected with a fisheye lens, hence the fisheye point-projection method. The camera is moving, so the rotation and translation vectors change while K and D stay the same.

I was wondering whether it is possible to separate this into two steps: create the undistortion maps with initUndistortRectifyMap to speed up the fisheye-related transformations, and separately project the points linearly onto the already-undistorted image using the rotation and translation vectors.

Would this work, or am I missing something? If it worked, what would the point projection onto the undistorted image look like? I guess it would be cv.projectPoints instead of its counterpart from the fisheye module, with the old K matrix and a zero D matrix? Or am I missing something here?

```python
import numpy as np
import cv2

# Example camera intrinsic matrix K (3x3); must be float for the fisheye API
K = np.array([
    [500.0, 0.0, 320.0],  # fx, 0, cx
    [0.0, 500.0, 240.0],  # 0, fy, cy
    [0.0, 0.0, 1.0]       # 0, 0, 1
])

# Fisheye distortion coefficients D (k1..k4)
D = np.array([0.1, -0.05, 0.001, 0.0001])

# Rotation vector (3x1) and translation vector (3x1)
R = np.array([[0.0], [0.0], [0.0]])  # no rotation (identity)
t = np.array([[0.0], [0.0], [5.0]])  # translation along the Z-axis

# 3D points; cv2.fisheye.projectPoints expects shape (N, 1, 3)
points_3d = np.array([
    [1.0, 1.0, 10.0],
    [2.0, -1.0, 8.0],
    [-1.0, 1.0, 6.0],
    [0.5, 0.5, 5.0]
]).reshape(-1, 1, 3)

# Project the 3D points to 2D (distorted) pixel coordinates
points_2d, _ = cv2.fisheye.projectPoints(points_3d, R, t, K, D)

# Example image (a dummy image to sample from)
img_height, img_width = 480, 640
image = np.random.randint(0, 255, (img_height, img_width, 3), dtype=np.uint8)

# Round the 2D points to integer pixel indices
points_2d_rounded = np.round(points_2d).astype(int)

# Get the pixel value for each projected point
for x, y in points_2d_rounded.reshape(-1, 2):
    # Check that the projected point falls within the image bounds
    if 0 <= x < img_width and 0 <= y < img_height:
        pixel_value = image[y, x]  # note: image indexing is (row, col) = (y, x)
```

I think what you are trying to do is optimize the runtime performance of projecting 3D points into the camera image (in distorted image coordinates). I don’t have a lot of experience with the fisheye distortion model, but assuming the functions work similarly to the standard distortion model, I think what you are proposing would work.

For the standard distortion model, I would approach it something like this:

1. Call initUndistortRectifyMap() to create the maps that encode the mapping from undistorted image space to distorted image space. (One-time computation.)
2. Call projectPoints(), but pass an empty mat for the distortion coefficients. This projects the points using R, T, and K, but does not compute a distorted position (this is the runtime cost you save with this approach).
3. Using the undistorted image location and the distortion maps, determine the corresponding distorted image coordinate, which you can then use to index into the distorted image.

I’m pretty sure this would work with the standard distortion model, but I don’t know enough about the fisheye model to say for sure. For example, you might run into issues with very wide FOV images and the map sizes, or you might have to use the standard projectPoints() function to get undistorted image points (the documentation for fisheye.projectPoints() doesn’t mention passing an empty vector for the distortion coefficients).

If it doesn’t work for the fisheye model, maybe try the standard functions and use the RATIONAL model - that can handle quite a bit of distortion and might be worth a shot.

How many points are you projecting? I’m a little surprised there is a need to optimize the runtime performance of projectPoints.

Good luck!