I was trying to use an asymmetric circle grid for calibration, with no success.
For checkerboard calibration, cv2.findChessboardCorners easily detects my pattern in almost all images.
But when trying to calibrate with an asymmetric circle grid, cv2.findCirclesGrid never detects my grid, no matter how much I fiddle with the blobDetector params. The detector itself does find all the circle-grid dots in most images (plus a few other blobs further away in the room).
Have you been able to pull off calibration with an asymmetric circle grid?
I’m using the same 10.9 inch tablet for showing the grids in both cases, under the same lighting conditions; I’ve tried many distances and angles, practically all screen brightness levels, and so on, but the code reading the webcam that watches the tablet never detects the grid the tablet is showing.
I attach one of the circle grids I have tried … for which my function call looks like the following, and I played quite a bit with a few of the detector params while at it.
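(The detector thresholds and the grid size in this sketch are illustrative placeholders rather than my exact values:)
import cv2
# gray: a grayscale webcam frame showing the tablet (placeholder)
params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 50          # placeholder threshold
params.maxArea = 20000       # placeholder threshold
params.filterByCircularity = True
params.minCircularity = 0.6  # placeholder threshold
detector = cv2.SimpleBlobDetector_create(params)
found, centers = cv2.findCirclesGrid(
    gray, (4, 11), None,     # placeholder (cols, rows)
    cv2.CALIB_CB_ASYMMETRIC_GRID | cv2.CALIB_CB_CLUSTERING,
    detector)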
I went for the (asymmetric) circle grid initially since I read it may offer a shorter path to calibration (fewer good images needed on average), but more so because it lends itself to incremental calibration much more naturally.
Just in case you have any comments on succeeding with asymmetric circle grids, or on the attached grid …
Like you, I struggled for a long time with the inability to calibrate asymmetrical circle patterns.
After confirming that the entire pattern was asymmetrical, with an odd number of columns and an even number of rows, I was able to calibrate it normally by adjusting the Circle Diameter and Diagonal Spacing.
I attached the pattern with the size changed, which calibrated successfully.
It seems the optimal ratio of Circle Diameter to Spacing is approximately 50% to 70%; the diameter/spacing in the attached image is 60 mm / 100 mm.
The calibration settings for this pattern are 4 rows and 29 columns.
It’s only thanks to your post that I came to realize what the dimensions passed to the detection function findCirclesGrid should be, and with that accounted for, this pattern now works for me:
I’m glad your pattern image was detected successfully!
I also struggled for a long time before I could get the pattern image recognized.
For calibration, it is recommended to use about 10 to 20 images that cover every corner of the camera's field of view, following the best practices below; I am still experimenting with this myself.
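As a rough way to check that coverage (assuming you collect the detected points of the accepted images into a list, like imgpoints in the script further down), something like this can report how much of the frame your samples actually touch:
import numpy as np
def coverage_ratio(imgpoints, image_size, grid=(8, 8)):
    # Fraction of image grid cells hit by at least one detected pattern
    # point across all accepted views; aim for a value close to 1.0.
    w, h = image_size
    hit = np.zeros((grid[1], grid[0]), dtype=bool)
    for corners in imgpoints:
        pts = corners.reshape(-1, 2)
        ix = np.clip((pts[:, 0] / w * grid[0]).astype(int), 0, grid[0] - 1)
        iy = np.clip((pts[:, 1] / h * grid[1]).astype(int), 0, grid[1] - 1)
        hit[iy, ix] = True
    return hit.mean()
# e.g. print(coverage_ratio(imgpoints, (img_width, img_height)))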
Regarding size, the following link is also recommended:
Actually, debugging while checking whether the pattern image is being recognized made things go much more smoothly!
When you look at the blobs in the image, you can notice when unnecessary things are being picked up, so I recommend visually confirming that the pattern is actually being recognized.
It’s a bit long, but here’s some test code for calibration with images in a single folder.
import cv2
import numpy as np
import glob
import os
import yaml
import json
import tqdm
import sys
# === Choose pattern type from command-line arguments ===
# Usage: python3 calibrate_gui.py [1: chessboard | 2: circles | 3: asymmetric_circles] [cols] [rows] [blob_detector: 0 or 1]
if len(sys.argv) < 4:
print('Usage: python3 calibrate_gui.py [1: chessboard | 2: circles | 3: asymmetric_circles] [cols] [rows] [blob_detector: 0 or 1]')
print('Example: python3 calibrate_gui.py 1 7 6')
print('This script requires at least 3 arguments: pattern option, number of columns, and number of rows.')
sys.exit(1)
pattern_option = int(sys.argv[1])
cols = int(sys.argv[2])
rows = int(sys.argv[3])
pattern_size = (cols, rows)
blob_detector = int(sys.argv[4]) if len(sys.argv) > 4 else 0
if pattern_option == 1:
pattern_type = 'chessboard'
elif pattern_option == 2:
pattern_type = 'circles'
elif pattern_option == 3:
pattern_type = 'asymmetric_circles'
else:
print('Invalid pattern option! Use 1, 2, or 3.')
sys.exit(1)
print(f'Pattern type: {pattern_type}, Size: {pattern_size}')
# === Directory settings ===
image_folder = './recordings'
image_pattern = os.path.join(image_folder, '*.jpg')
result_folder = os.path.join(image_folder, 'result')
os.makedirs(result_folder, exist_ok=True)
output_yaml = os.path.join(result_folder, 'calibration_result.yaml')
output_json = os.path.join(result_folder, 'calibration_result.json')
output_npz = os.path.join(result_folder, 'calibration_result.npz')
output_image = os.path.join(result_folder, 'undistorted_example.jpg')
# === Prepare 3D object points ===
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
if pattern_type == 'asymmetric_circles':
    # Asymmetric circle grid: every other row is shifted by half a column step,
    # and rows are sqrt(3)/2 apart (hexagonal packing of the circles)
    for i in range(pattern_size[1]):
        for j in range(pattern_size[0]):
            objp[i * pattern_size[0] + j, 0] = j * 1.0 + 0.5 * (i % 2)
            objp[i * pattern_size[0] + j, 1] = i * (np.sqrt(3) / 2)
else:
    # Chessboard and symmetric circle grid: points lie on a regular square lattice
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objpoints = []
imgpoints = []
# === Load images ===
images = glob.glob(image_pattern)
print(f'Found {len(images)} images.')
# === Configure Blob Detector ===
params = cv2.SimpleBlobDetector_Params()
params.filterByCircularity = True
params.minCircularity = 0.7
params.filterByArea = True
params.minArea = 100
params.maxArea = 10000
params.filterByInertia = True
params.minInertiaRatio = 0.5
detector = cv2.SimpleBlobDetector_create(params)
# === Loop with progress bar ===
for fname in tqdm.tqdm(images, desc='Processing images'):
img = cv2.imread(fname)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret = False
corners = None
# === Perform Blob detection ===
if blob_detector:
# Blob detection
keypoints = detector.detect(gray)
print(f"Detected {len(keypoints)} blobs")
# Draw blobs for verification
im_with_keypoints = cv2.drawKeypoints(gray, keypoints, None, (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imshow("Blobs", im_with_keypoints)
cv2.waitKey(0)
cv2.destroyAllWindows()
if pattern_type == 'chessboard':
ret, corners = cv2.findChessboardCorners(gray, pattern_size, None)
if ret:
cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
(cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.1))
elif pattern_type == 'circles':
ret, corners = cv2.findCirclesGrid(gray, pattern_size, flags=cv2.CALIB_CB_SYMMETRIC_GRID)
elif pattern_type == 'asymmetric_circles':
ret, corners = cv2.findCirclesGrid(gray, pattern_size, flags=cv2.CALIB_CB_ASYMMETRIC_GRID)
if ret:
objpoints.append(objp)
imgpoints.append(corners)
cv2.drawChessboardCorners(img, pattern_size, corners, ret)
cv2.imshow('Detected Pattern', img)
cv2.waitKey(200)
else:
print(f'[WARN] Pattern not found in {fname}')
cv2.destroyAllWindows()
# === Run Calibration ===
if len(objpoints) < 5:
print('ERROR: Not enough valid images for calibration (need at least 5).')
sys.exit()
print('\nRunning calibration...')
ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
objpoints, imgpoints, gray.shape[::-1], None, None
)
# === Output Results ===
print('\n=== Calibration Result ===')
print(f'Reprojection error: {ret}')
print('\nCamera matrix:\n', camera_matrix)
print('\nDistortion coefficients:\n', dist_coeffs.ravel())
print('\n===========================')
yaml_data = {
'pattern_type': pattern_type,
'pattern_size': pattern_size,
'reprojection_error': float(ret),
'camera_matrix': camera_matrix.tolist(),
'distortion_coefficients': dist_coeffs.ravel().tolist()
}
with open(output_yaml, 'w') as f_yaml:
yaml.dump(yaml_data, f_yaml)
with open(output_json, 'w') as f_json:
json.dump(yaml_data, f_json, indent=4)
np.savez(output_npz,
camera_matrix=camera_matrix,
distortion_coefficients=dist_coeffs,
rvecs=rvecs,
tvecs=tvecs)
print(f'Calibration parameters saved to:\n {output_yaml}\n {output_json}\n {output_npz}')
# === Undistort test image ===
# Select test image (use "*test*.jpg" first if exists)
test_images = glob.glob(os.path.join(image_folder, '*test*.jpg'))
if len(test_images) > 0:
test_img_path = test_images[0]
print(f'[INFO] Using TEST image: {test_img_path}')
else:
test_img_path = images[0]
print(f'[INFO] No TEST image found. Using first image: {test_img_path}')
# Load image
test_img = cv2.imread(test_img_path)
h, w = test_img.shape[:2]
new_camera_matrix, roi = cv2.getOptimalNewCameraMatrix(
camera_matrix, dist_coeffs, (w, h), 1, (w, h)
)
# Undistort
undistorted_img = cv2.undistort(test_img, camera_matrix, dist_coeffs, None, new_camera_matrix)
# Save
original_output_image = output_image.replace('undistorted_example.jpg', 'original_example.jpg')
cv2.imwrite(original_output_image, test_img)
cv2.imwrite(output_image, undistorted_img)
print(f'Original image saved to: {original_output_image}')
print(f'Undistorted example image saved to: {output_image}')
# === Perspective warp (only applied to chessboard) ===
if pattern_type == 'chessboard':
gray = cv2.cvtColor(test_img, cv2.COLOR_BGR2GRAY)
ret, corners = cv2.findChessboardCorners(gray, pattern_size, None)
if ret:
print('[INFO] Perspective transform in progress...')
square_size = 50
dst_points = np.zeros((pattern_size[0] * pattern_size[1], 2), np.float32)
dst_points[:, 0] = np.tile(np.arange(pattern_size[0]), pattern_size[1]) * square_size
dst_points[:, 1] = np.repeat(np.arange(pattern_size[1]), pattern_size[0]) * square_size
H, _ = cv2.findHomography(corners, dst_points)
out_w = int(pattern_size[0] * square_size)
out_h = int(pattern_size[1] * square_size)
warped_img = cv2.warpPerspective(test_img, H, (out_w, out_h))
perspective_output_image = output_image.replace('.jpg', '_perspective.jpg')
cv2.imwrite(perspective_output_image, warped_img)
print(f'Perspective corrected image saved to: {perspective_output_image}')
cv2.imshow('Perspective Corrected Image', warped_img)
else:
print('[WARN] Could not detect corners in test image for perspective transform.')
# === Display results (exit with "q" key) ===
cv2.imshow('Original Image', test_img)
cv2.imshow('Undistorted Image', undistorted_img)
print('\nDisplaying images. Press "q" to exit.')
while True:
key = cv2.waitKey(0) & 0xFF
if key == ord('q'):
break
cv2.destroyAllWindows()
Thanks, I was actually asking too much. It’s obvious that getting to a good set of grid samples requires a certain amount of strategy, coming from a deeper understanding of all the target parameters of the calibration. The usual voodoo says things like “make sure to have 10-15 degrees of tilt variation” and so on, but I find this part of what you pointed to (the calib.io best practices) a potentially more explainable element to employ:
Analyse the individual reprojection errors. Their direction and magnitude should not correlate with position, i.e. they should point chaotically in all directions.
At first glance this seems like a metric that’s invariant enough to be (hopefully) generically applicable.
My initial thought is that each distortion coefficient may warrant additional criteria in a strategy for gauging whether our set of images is sufficiently varied, but I have to think it through some more.
Otherwise it’s possible to get a good RMS that is only the result of not presenting enough sample variety to the optimizer.
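As a note to self, a rough sketch of that per-point residual check, reusing the outputs of a calibrateCamera run like the one in your script (objpoints, imgpoints, camera_matrix, dist_coeffs, rvecs, tvecs):
import numpy as np
import cv2
# Collect, per detected point, its image position (x, y) and its
# reprojection residual (dx, dy), so direction vs. position can be inspected.
rows = []
for objp_i, imgp_i, rvec, tvec in zip(objpoints, imgpoints, rvecs, tvecs):
    proj, _ = cv2.projectPoints(objp_i, rvec, tvec, camera_matrix, dist_coeffs)
    pts = imgp_i.reshape(-1, 2)
    rows.append(np.hstack([pts, pts - proj.reshape(-1, 2)]))
rows = np.vstack(rows)  # columns: x, y, dx, dy
# Crude first look: if residual direction and magnitude do not correlate
# with position, the x/y vs dx/dy entries here should be near zero.
print(np.corrcoef(rows.T))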
I’ll think it through and simplify it.
Thanks for your code; I already use similar techniques to avoid a black-box research process.
Please don’t go to the trouble of dissecting this; it’s more of a note to myself now.
There are a lot of elements to it. It’s best to understand as much as possible about the nature of the solving process that calibration is.
Having realized how to properly provide the grid dimensions, I easily get an RMS of around 200 at a camera resolution of 4096x2160 after each calibration session of 100 accepted views; a nice start, but ChatGPT thinks 1-2 px can typically be reached by a good enough calibration.
You can get that reprojection RMS (at least in my experiments, in the ~200 range) even if your calibration is very off, if you didn’t provide sufficiently enabling samples in terms of distance and tilt. This is easy to confirm in practice by calibrating with only images taken from a single position, holding your board steady at one pose.
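(A cheap way to see this kind of under-constraining, again assuming objpoints/imgpoints lists like those in the script above: calibrate on two disjoint subsets of the accepted views and compare the recovered intrinsics; if the RMS stays similar but fx/fy or the distortion terms swing wildly, the views aren’t constraining the solution.)
import cv2
def calibrate_subset(idx, objpoints, imgpoints, image_size):
    # Run a full calibration on a subset of the accepted views
    rms, K, dist, _, _ = cv2.calibrateCamera(
        [objpoints[i] for i in idx], [imgpoints[i] for i in idx],
        image_size, None, None)
    return rms, K, dist
n = len(objpoints)
for name, idx in [('even views', range(0, n, 2)), ('odd views', range(1, n, 2))]:
    rms, K, dist = calibrate_subset(list(idx), objpoints, imgpoints, (4096, 2160))
    print(f'{name}: rms={rms:.2f} fx={K[0, 0]:.1f} fy={K[1, 1]:.1f} k1={dist.ravel()[0]:.4f}')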
I think the difficulty of reaching a good RMS in some settings, like a webcam setup, is why “camera calibration” is seldom done without ground-truth ranges such as you have in a lens or camera production environment, where you can also throw away the lens if you don’t manage to get a plausible calibration for it.
I’m happy to learn otherwise; meanwhile I’m getting down to an RMS of 130 without applying almost any image selection logic whatsoever (which is lazy).