Calibrate from Unity3D model data

Hi everyone, I am new here and I am having some trouble with calibration.

I am synthesizing frames in Unity3D as a dataset (like MultiviewX) for further CNN training.
As a first step, I need to calibrate each camera to obtain its intrinsics and extrinsics. Yes, there are mathematical ways to derive them directly, but it would be better if I could get the result from OpenCV.

To perform the calibration, I set up the scene with many cubes (like the figure below).

Then the world and screen space coordinates of the cubes can be retrieved easily via gameobject.transform.position and Camera.WorldToScreenPoint(). This gives me pairs of corresponding world and screen points.
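For reference, a minimal sketch (NumPy only, with made-up point values) of the data layout `cv2.calibrateCamera` expects: Python lists with one array per view, 3D points of shape (N, 3) and 2D points of shape (N, 2); float32 arrays are the safe choice:

```python
import numpy as np

# Hypothetical correspondences for one camera (one "view"):
# N non-coplanar cube corners in world space and their screen projections.
world_pts = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 1.0, 2.0],
                      [3.0, 1.0, 0.5],
                      [2.0, 2.0, 1.0],
                      [1.0, 3.0, 2.5]], dtype=np.float32)   # shape (N, 3)
screen_pts = np.array([[960.0, 540.0],
                       [1010.0, 545.0],
                       [955.0, 480.0],
                       [1120.0, 500.0],
                       [1060.0, 430.0],
                       [1005.0, 390.0]], dtype=np.float32)  # shape (N, 2)

points_3d = [world_pts]    # one entry per view
points_2d = [screen_pts]   # same length, same per-view point count

assert points_3d[0].dtype == np.float32 and points_3d[0].shape == (6, 3)
assert points_2d[0].dtype == np.float32 and points_2d[0].shape == (6, 2)
```

The point values above are placeholders, not taken from the actual scene.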

In Python, I run:

```python
cameraMatrix = cv2.initCameraMatrix2D(points_3d, points_2d, (IMAGE_HEIGHT, IMAGE_WIDTH))
retval, cameraMatrix, distCoeffs, rvecs, tvecs = cv2.calibrateCamera(
    points_3d, points_2d, (IMAGE_HEIGHT, IMAGE_WIDTH),
    cameraMatrix, None, flags=cv2.CALIB_USE_INTRINSIC_GUESS,
)
```
These take the corresponding 3D world and 2D screen points and perform the calibration. Theoretically speaking, the 6 identical static cameras in the scene should yield the same (or at least very close) intrinsics. However, in my case only the first 4 of them produce a good calibration:

```
9.3530752008341096e+02 0.                     9.6000011291213366e+02
0.                     9.3530728269909184e+02 5.4000007545182882e+02
0.                     0.                     1.
```

The remaining 2 cameras fail:

```
1.9522217456988033e+03 0.                     5.3950000000000000e+02
0.                     1.9522217456988033e+03 9.5950000000000000e+02
0.                     0.                     1.
```

I am confused about this: why is there this discrepancy?

look at the cx, cy

what do you notice?

Thanks for your reply.
It seems something got X and Y reversed.
The distribution for the failed cameras does follow a pattern (the bottom quarter):
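One way such a swap could arise (an assumption, for a 1920x1080 image, which is consistent with the good cameras' cx=960, cy=540): if the image size tuple is passed as (height, width) instead of (width, height), a default principal point at the image center lands at the transposed coordinates, which match the failed cameras' cx, cy exactly:

```python
# Assumed resolution: 1920x1080.
IMAGE_WIDTH, IMAGE_HEIGHT = 1920, 1080

# Default principal point at the image center, (n - 1) / 2 per axis.
def center(size):
    return ((size[0] - 1) / 2.0, (size[1] - 1) / 2.0)

cx_ok, cy_ok = center((IMAGE_WIDTH, IMAGE_HEIGHT))    # size as (width, height)
cx_bad, cy_bad = center((IMAGE_HEIGHT, IMAGE_WIDTH))  # size tuple swapped

print(cx_ok, cy_ok)    # 959.5 539.5
print(cx_bad, cy_bad)  # 539.5 959.5  <- the failed cameras' cx, cy
```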

maybe the intrinsic guess was initialized wrong.

maybe it’s having trouble dealing with non-planar calibration targets (usually checkerboards are used), or requires an additional flag for that.

or you give it points that shouldn’t be in view, or are behind the camera?

or something else is going on.

in principle, a single “view” with enough points should be enough for calibration.

the shadows of all those boxes don’t make neat lines on the ground. why’s that? just an optical illusion, different rows making shadows almost in line with each other?

Thanks for your reply.
I think the initial guess is wrong. I will upload the data tomorrow for further checking.

Here is the corresponding data:
The 2D points and 3D points of each camera are saved in the Cali folder
The calibration result is saved in the calibration folder
Running the Python script calibrateByCali.py regenerates the calibration results.

It seems that a single view cannot accomplish the calibration…
I finally got the correct calibration by using CalibrateTool, which generates a virtual chessboard in different views.