Inconsistent camera calibration results

Thank you for your reply.

I realized that for the larger pattern, cv2.findChessboardCorners() only detects the corners in 2 of my non-robot images, so I am using the medium pattern for the rest of this comment.
I had read that document before posting my question; my notes on each of its points are below.

Also, I have updated the link with a visualization of the detected corners: calibration medium pattern - Google Drive

I have now run the calibration 3 times: once with only the 9 robot images, once with only the 6 non-robot images, and once with all 15 images combined. Here are the results:

code:

import cv2
import numpy as np
import glob

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# Parameters
pattern_size = (6, 9)  # Number of inner corners per chessboard row and column
square_size = 0.0835/7  # Square size in meters (~11.93 mm)

# Prepare object points, like (0,0,0), (1,0,0), (2,0,0) ..., (5,8,0)
objp = np.zeros((pattern_size[0]*pattern_size[1], 3), np.float32)
objp[:,:2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1,2)
objp *= square_size

# Arrays to store object points and image points
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.


# Load your images and robot poses here
images = glob.glob('mid_pattern/all/calibration_image_*.png')
print(images)
R_gripper2base = rotation_matrices    # Gripper (end-effector) to robot-base rotations, loaded elsewhere
t_gripper2base = translation_vectors  # Gripper (end-effector) to robot-base translations, loaded elsewhere
ii = 0
print('total number of images: ', len(images))
for fname in images:
    
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, pattern_size, None)

    # If found, add object points, image points (after refining them)
    if ret:
        ii += 1
        print(fname)
        objpoints.append(objp)

        corners2 = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        imgpoints.append(corners2)

        # Draw and display the corners
        cv2.drawChessboardCorners(img, pattern_size, corners2, ret)
        cv2.imshow('img', img)
        cv2.imwrite('{}.png'.format(ii), img)
        print('{}.png'.format(ii))
        cv2.waitKey(50)

cv2.destroyAllWindows()
print(ii, 'images used for calibration')
# Calibrate camera using the object and image points
rms_error, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)

mean_error = 0
for i in range(len(objpoints)):
    imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
    error = cv2.norm(imgpoints[i], imgpoints2, cv2.NORM_L2) / len(imgpoints2)
    mean_error += error

print("total error: {}".format(mean_error / len(objpoints)))

case 1: only 9 robot images
rms_error: 0.2344
mean error: 0.030167432
intrinsic matrix:
array([[1.16197151e+03, 0.00000000e+00, 1.14025087e+03],
       [0.00000000e+00, 1.20612005e+03, 6.76111628e+02],
       [0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])

case 2: only 6 non-robot images
rms_error: 0.7238
mean error: 0.0945285869945198
intrinsic matrix:
array([[1.07204959e+03, 0.00000000e+00, 1.09128501e+03],
       [0.00000000e+00, 1.07310275e+03, 6.24612207e+02],
       [0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])

case 3: all 15 images
rms_error: 0.5081
mean error: 0.0585292
intrinsic matrix:
array([[1.07054446e+03, 0.00000000e+00, 1.11565251e+03],
       [0.00000000e+00, 1.07011719e+03, 6.28861017e+02],
       [0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])

Here are my notes on each of the 12 points in the article you shared:
1, 2, 3: The images I took with the medium calibration pattern, where each square is 11.92 mm, seem to fit the criteria here. (Note: I have tried 11.90-11.93 mm as the square size in the code, and the cv2.calibrateCamera() results are almost identical in all of these cases, so the precision of the square size is not the issue. That is expected anyway: uniformly scaling the object points only rescales the estimated translations and leaves the intrinsics unchanged.)

4: The images taken on the robot do not have a lot of variation, and that is going to cause problems for me even after I resolve this issue (details here: Eye-to-hand calibration for a 4 DOF robot arm). That's why I added those 6 extra images with very different orientations.
5, 6: I think the images are good here as well, and the low reprojection error confirms it.
9: The camera is fixed and not moving, and the robot is robust and precise in its movements. For the images not taken on the robot, I did my best to keep my hand still, and the images visually look good to me.
10, 12: The only thing I have not done is a per-feature error analysis (a sketch of what I mean is right after this list). With such a low reprojection error, I don't think the problem lies here.

11: I was aware of this; that is why I asked the second question at the end of my original post.
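For completeness, this is the kind of per-feature check I have in mind. A minimal sketch, reusing objpoints, imgpoints, rvecs, tvecs, mtx and dist from the calibration code above:

# Per-corner reprojection residuals for each image; a single corner with
# a much larger error than the rest would point to a detection problem.
for i in range(len(objpoints)):
    proj, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
    residuals = np.linalg.norm(imgpoints[i].reshape(-1, 2) - proj.reshape(-1, 2), axis=1)
    print('image {}: mean {:.4f} px, max {:.4f} px at corner {}'.format(
        i, residuals.mean(), residuals.max(), int(residuals.argmax())))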

My questions are still:

  1. These results are still far from the factory ZED camera calibration parameters. What should I do to bring them closer? It seems like I am doing all the steps correctly: the images have variation, the corner visualization looks good, the reprojection error is low, etc.

  2. Assuming I get perfect results here, what should I do for the 4 DOF robot camera calibration, knowing that I cannot get very different orientations because the robot only rotates around its Z axis?
    This is the link to that question: Eye-to-hand calibration for a 4 DOF robot arm
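As a rough way to quantify that limitation, here is an untested sketch, assuming R_gripper2base holds the 3x3 numpy rotation matrices from above, that checks how far the relative rotation axes between pose pairs stray from the Z axis:

import itertools

# Relative rotation between every pair of robot poses; if every axis is
# close to (0, 0, +/-1), all the rotation variation is about the robot's
# Z axis and the hand-eye problem is degenerate along that axis.
for (i, Ri), (j, Rj) in itertools.combinations(enumerate(R_gripper2base), 2):
    R_rel = Ri.T @ Rj
    angle = np.arccos(np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0))
    axis = np.array([R_rel[2, 1] - R_rel[1, 2],
                     R_rel[0, 2] - R_rel[2, 0],
                     R_rel[1, 0] - R_rel[0, 1]])
    if np.linalg.norm(axis) > 1e-9:
        axis /= np.linalg.norm(axis)
        print('poses {}-{}: {:.1f} deg about axis {}'.format(
            i, j, np.degrees(angle), np.round(axis, 2)))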