Hi everyone, I am new here and I am running into some trouble with camera calibration.
I am going to synthesize frames in Unity3D as a dataset (like MultiviewX) for further CNN training.
As a first step, I need to calibrate the cameras to obtain their intrinsics and extrinsics. Yes, there are mathematical methods to derive these directly, but it would be better if I could get the result from OpenCV.
To perform the calibration, I set up the scene with many cubes (like the figure below).
Then the world- and screen-space coordinates of the cubes can be retrieved easily via gameObject.transform.position and Camera.WorldToScreenPoint(). Finally, I get pairs of corresponding world and screen points.
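For reference, here is roughly how the exported pairs are packed into the format `cv2.calibrateCamera` expects (a simplified sketch; `world_pts` and `screen_pts` stand for the raw lists dumped from Unity):

```python
import numpy as np

# world_pts: list of (x, y, z) tuples from gameObject.transform.position
# screen_pts: list of (x, y) tuples from Camera.WorldToScreenPoint()
points_3d = [np.asarray(world_pts, dtype=np.float32)]   # list of (N, 3) arrays, one per view
points_2d = [np.asarray(screen_pts, dtype=np.float32)]  # list of (N, 2) arrays, same order
```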
In Python, I use:

```python
import cv2

# Initial guess for the intrinsics, later refined by calibrateCamera
cameraMatrix = cv2.initCameraMatrix2D(points_3d, points_2d,
                                      (IMAGE_HEIGHT, IMAGE_WIDTH))
retval, cameraMatrix, distCoeffs, rvecs, tvecs = cv2.calibrateCamera(
    points_3d, points_2d, (IMAGE_HEIGHT, IMAGE_WIDTH),
    cameraMatrix, None, flags=cv2.CALIB_USE_INTRINSIC_GUESS)
```
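Here `retval` is the RMS reprojection error in pixels, which gives a quick quality measure per camera. As a cross-check, the error can also be computed by hand (a sketch using the variables from the calls above):

```python
# Reproject the 3D points with the estimated parameters and
# compare against the observed 2D points.
total_err = 0.0
for i in range(len(points_3d)):
    proj, _ = cv2.projectPoints(points_3d[i], rvecs[i], tvecs[i],
                                cameraMatrix, distCoeffs)
    err = cv2.norm(points_2d[i], proj.reshape(-1, 2), cv2.NORM_L2)
    total_err += err ** 2 / len(points_3d[i])
print("RMS reprojection error:", (total_err / len(points_3d)) ** 0.5)
```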
These two calls take the corresponding 3D world points and 2D screen points and perform the calibration. Theoretically, the 6 identical static cameras in the scene should produce the same (or at least very close) intrinsics. However, in my case only the first 4 of them get a good calibration:

```
9.3530752008341096e+02  0.                      9.6000011291213366e+02
0.                      9.3530728269909184e+02  5.4000007545182882e+02
0.                      0.                      1.
```
The remaining 2 cameras fail:

```
1.9522217456988033e+03  0.                      5.3950000000000000e+02
0.                      1.9522217456988033e+03  9.5950000000000000e+02
0.                      0.                      1.
```
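Reading both results as the standard pinhole matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]], the parameters can be compared side by side (`good_K` / `failed_K` are just placeholder names for the two matrices above):

```python
def intrinsics(K):
    # K is a 3x3 camera matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
    return dict(fx=K[0][0], fy=K[1][1], cx=K[0][2], cy=K[1][2])

print(intrinsics(good_K))    # fx ≈ fy ≈ 935.31, (cx, cy) ≈ (960.0, 540.0)
print(intrinsics(failed_K))  # fx = fy ≈ 1952.22, (cx, cy) = (539.5, 959.5)
```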
I am confused about this: why is there a discrepancy between cameras that should be identical?