I’m trying to implement a nodal offset calibration algorithm.
I have the following setup:
- Screen displaying a randomly moving and rotating checkerboard
- Webcam taking snapshots of the screen
- A ‘virtual camera’ taking snapshots of the same image (which is really just the warpPerspective function)
I have calibrated the webcam lens and I’m using its camera matrix for the virtual camera as well. I also know the distance between the webcam and the screen in millimeters; that distance is converted into pixels using the physical size of my screen and then used to translate the image away from the virtual camera (see z_dist below).
Here is the code:
import cv2
import numpy as np

def proj_matrix_3d(width, height):
    """2D -> 3D projection matrix: lifts image coordinates onto the plane z = 0, centred on the image."""
    return np.array([[1, 0, -width / 2],
                     [0, 1, -height / 2],
                     [0, 0, 0],
                     [0, 0, 1]])

# Render the next generated checkerboard frame through the virtual camera.
img = gen.next()
proj_3d = proj_matrix_3d(monitor.width, monitor.height)
# Chain: lift to 3D, push the plane z_dist pixels away from the camera, project with the intrinsics.
mat = gen.camera_matrix @ translation_matrix_3d(0, 0, gen.z_dist) @ proj_3d
img2 = cv2.warpPerspective(img, mat, (monitor.width, monitor.height),
                           borderMode=cv2.BORDER_CONSTANT, borderValue=(255, 0, 0, 255))
cv2.imshow('window', img2)
cv2.waitKey(250)
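For completeness, this is roughly what translation_matrix_3d and the camera matrix padding look like (a simplified sketch, not a verbatim copy of my code — the point is just that the shapes chain as 3×4 · 4×4 · 4×3 to give the 3×3 matrix warpPerspective expects):

def translation_matrix_3d(x, y, z):
    """Standard 4x4 homogeneous translation."""
    return np.array([[1, 0, 0, x],
                     [0, 1, 0, y],
                     [0, 0, 1, z],
                     [0, 0, 0, 1]])

# calibrated_K is the 3x3 intrinsic matrix from cv2.calibrateCamera,
# padded with a zero column so the whole product reduces to a 3x3 homography.
camera_matrix = np.hstack([calibrated_K, np.zeros((3, 1))])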
z_dist is calculated as follows:
pixels_per_mm = monitor.width / monitor.width_mm
self.z_dist = mm_from_camera * pixels_per_mm
where mm_from_camera is the value measured and entered by the user.
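As a sanity check with made-up numbers (not my real measurements), the conversion looks like this:

# Hypothetical values, just to illustrate the unit conversion:
pixels_per_mm = 1920 / 527        # ~3.64 px per mm for a 1920 px wide, 527 mm wide panel
z_dist = 600 * pixels_per_mm      # camera 600 mm from the screen -> ~2186 px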
The distance and the camera matrix are the same in both cases. However, the virtual camera’s snapshot doesn’t fit into the screen (the original image before applying the 3D transformations does). I expect the perspective to align with that of the webcam snapshot.
I’m at a complete loss. It seems like it could be an issue with how the distance is calculated, but I don’t know what could be wrong here.
For reference, here is the exact same image captured by the webcam and by the virtual camera:
Edit: I have since tried adding the extrinsic parameters from the camera calibration to the formula. It still doesn’t work properly.
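What I tried was roughly along these lines (a sketch, not my exact code; rvec/tvec here stand for one rotation/translation pair returned by cv2.calibrateCamera or cv2.solvePnP, and where exactly they belong in the chain is part of what I’m unsure about):

R, _ = cv2.Rodrigues(rvec)               # 3x3 rotation matrix from the rotation vector
Rt = np.eye(4)
Rt[:3, :3] = R
Rt[:3, 3] = tvec.ravel()                 # translation is in the calibration units (mm), not pixels
mat = gen.camera_matrix @ Rt @ translation_matrix_3d(0, 0, gen.z_dist) @ proj_3d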