Uniqueness of cameraMatrix from calibration


I am attempting to perform a ChArUco calibration per OpenCV: Calibration with ArUco and ChArUco. I have borrowed and modified code from the internet [Camera calibration using CHARUCO — Scientific Python: a collection of science oriented python examples documentation], with the exception that I am using four different calibration patterns (properly printed at the correct sizes) from calib.io rather than the pattern provided with that code.

I created a set of images for each calibration pattern and am properly creating each of the boards (AFAIK) at runtime to feed to the calibration routine. I am an experienced programmer and have reviewed the code and run it with breakpoints; it appears to be doing the “right thing” given my understanding of the OpenCV functions (which is at a beginner level, honestly)… That is, it properly finds the corners and IDs of the ArUco markers and provides them to the calibrateCameraCharuco function.

My expectation is that the camera intrinsic parameters (returned as cameraMatrix) should be relatively… the same… between all four sets of calibration images. But instead, they are quite different. My expectation is correct, yes?

I’m not sure what exactly to ask… where am I going wrong? I’m using the latest OpenCV from the Anaconda conda-forge repository.

Thanks for any & all help.

please show us your pictures and other data. you need to provide everything required for debugging, or some of that at least. the tutorial code is unlikely to have issues.

Edited: here are the images, shared on Google Drive. They are taken with a Logitech BRIO 4K.
Looking at these, not all are spectacular. I recall noticing that the camera would at times pause and refocus, and some pictures were taken before the refocus was complete - you can tell, they’re blurry…

The camera matrices obtained are:

709.172  0        2176.41
0        1510.16  1294.61

4915.34  0        614.306
0        5360.43  2819.08

1456.88  0        991.235
0        4938.38  3456.08

497.565  0        1489.41
0        3674.71  1767.23

I can post the distortion coefficients if needed but I figured mtx was most important. I have omitted the final 1×3 row of each matrix which of course is just (0, 0, 1).

I will post code soon, or tomorrow… Thanks again.

Original message: [Hey, thanks for a response at all… that doesn’t always happen on forums! I will post materials ASAP tomorrow.]

Here’s the code… If you compare this to the link I first posted, you’ll notice that I “functionalized” the two most important parts of the tutorial code so that I could call them repeatedly. I also prevented it from passing the flags as set (including cv2.CALIB_FIX_ASPECT_RATIO)… I was just trying different things (it didn’t help)… Let me know what else I can provide, and thanks again for any & all insight.

from numpy import savez
import numpy as np
import cv2
from os import environ, mkdir, walk
from os.path import isdir
from glob import glob

def read_chessboards(images, aruco_dict, board):
    """Charuco base pose estimation."""
    allCorners = []
    allIds = []
    decimator = 0
    # sub-pixel corner refinement criteria
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.00001)

    for im in images:
        print("=> Processing image {0}".format(im))
        frame = cv2.imread(im)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners, ids, rejectedImgPoints = cv2.aruco.detectMarkers(gray, aruco_dict)

        if len(corners) > 0:
            # refine detected marker corners to sub-pixel accuracy
            for corner in corners:
                cv2.cornerSubPix(gray, corner,
                                 winSize = (3,3),
                                 zeroZone = (-1,-1),
                                 criteria = criteria)
            res2 = cv2.aruco.interpolateCornersCharuco(corners, ids, gray, board)
            if res2[1] is not None and res2[2] is not None and len(res2[1]) > 3 and decimator % 1 == 0:
                allCorners.append(res2[1])
                allIds.append(res2[2])
        decimator += 1

    imsize = gray.shape
    return allCorners, allIds, imsize

def calibrate_camera(allCorners, allIds, imsize, board):
    """Calibrates the camera using the detected corners."""
    cameraMatrixInit = np.array([[ 1000.,    0., imsize[0]/2.],
                                 [    0., 1000., imsize[1]/2.],
                                 [    0.,    0.,           1.]])

    distCoeffsInit = np.zeros((5,1))
    #flags = (cv2.CALIB_RATIONAL_MODEL)
    (ret, camera_matrix, distortion_coefficients0,
     rotation_vectors, translation_vectors,
     stdDeviationsIntrinsics, stdDeviationsExtrinsics,
     perViewErrors) = cv2.aruco.calibrateCameraCharucoExtended(
                      charucoCorners=allCorners,
                      charucoIds=allIds,
                      board=board,
                      imageSize=imsize,
                      cameraMatrix=cameraMatrixInit,
                      distCoeffs=distCoeffsInit,
     #                flags=flags,
                      criteria=(cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_COUNT, 10000, 1e-9))

    return ret, camera_matrix, distortion_coefficients0, rotation_vectors, translation_vectors

myname = environ['COMPUTERNAME']
workdir = './working/'
datadir = './imdata/'
if not isdir(workdir):
    mkdir(workdir)
if not isdir(datadir):
    mkdir(datadir)

# NOTE: the last two directory names were truncated in the original post;
# inferred here from the board definitions below
mydirs = ('IMG-5x7_20,16_DICT6x6', 'IMG-5x7_25,19_DICT6x6',
          'IMG-5x7_32,25_DICT6x6', 'IMG-8x11_20,16_DICT4x4')
dict4 = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
dict6 = cv2.aruco.Dictionary_get(cv2.aruco.DICT_6X6_50)
mydict = (dict6, dict6, dict6, dict4)
myboard = (cv2.aruco.CharucoBoard_create(5, 7, 20, 16, dict6),
    cv2.aruco.CharucoBoard_create(5, 7, 25, 19, dict6),
    cv2.aruco.CharucoBoard_create(5, 7, 32, 25, dict6),
    cv2.aruco.CharucoBoard_create(8, 11, 20, 16, dict4))
for i, (curdir, curdict, curboard) in enumerate(zip(mydirs, mydict, myboard)):
    images = []
    dirnam = f'imdata/CalibrationImageSets/{curdir}'
    for root, _, _ in walk(dirnam, followlinks=False):
        for imgnam in glob(f'{root}/*.jpg') + glob(f'{root}/*.png'):
            images.append(imgnam)
    allCorn, allIds, imsize = read_chessboards(images, curdict, curboard)
    ret, mtx, dist, rvecs, tvecs = calibrate_camera(allCorn, allIds,
        imsize, curboard)
    savez(workdir + f'{myname}-CamCal{i}.npz', ret=ret, mtx=mtx, dist=dist,
        rvecs=rvecs, tvecs=tvecs)

any change in focus will mess everything up, especially focal lengths.

fix that focus, i.e. disable autofocus. then set the focus manually to a value that’s giving sharp images for the distances you need. don’t be surprised if that turns out to be a value of 0…2 from a range of 0…255 (Logitech C920).
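A minimal sketch of that, assuming the driver honors the properties. The helper name is mine; the numeric IDs are the real values of cv2.CAP_PROP_AUTOFOCUS (39) and cv2.CAP_PROP_FOCUS (28), hard-coded so it works with any object exposing set()/get():

```python
CAP_PROP_FOCUS = 28      # cv2.CAP_PROP_FOCUS
CAP_PROP_AUTOFOCUS = 39  # cv2.CAP_PROP_AUTOFOCUS

def lock_focus(cap, focus_value):
    """Turn off autofocus, pin a manual focus value, and read both back,
    since some drivers silently ignore unsupported settings."""
    cap.set(CAP_PROP_AUTOFOCUS, 0)        # 0 = manual focus
    cap.set(CAP_PROP_FOCUS, focus_value)  # device-specific units
    return cap.get(CAP_PROP_AUTOFOCUS), cap.get(CAP_PROP_FOCUS)
```

with a real camera: cap = cv2.VideoCapture(0); lock_focus(cap, 2) - then verify the returned values match what you asked for.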

those camera matrices look like you can safely discard them. focal lengths are usually equal because sensor pixels are square. yours aren’t even close, and they aren’t consistent either.

any time you hold the pattern parallel to the image plane, that’s a waste of a photo.

just throw all those pictures away. then read Calibration Best Practices – calib.io thoroughly.

you should start with ideal camera matrices. cx,cy should be (w-1)/2, (h-1)/2 and then calculate fx=fy from a manual measurement:

  • place an object of known length (yard stick, or your pattern) a known distance away from the camera
  • take picture
  • measure width of object in pixels (pick in photoshop or wherever)
  • calculate f [px] = width [px] * distance [m] / width [m]

so if you have a pattern with 20mm squares, and you can measure a length of 11 squares, that’s 220 mm (0.22m). if you put that 1.0 meter away, and measure in the picture that this is 594 pixels, you calculate 594 * 1.0/0.22 = 2700.

that’s good enough for a first estimate
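the calculation above, as a one-liner (the function name is mine):

```python
def focal_length_px(width_px, distance_m, width_m):
    """f [px] = measured width [px] * distance [m] / true width [m]."""
    return width_px * distance_m / width_m

# 11 squares of a 20 mm pattern = 0.22 m, placed 1.0 m away,
# measuring 594 px wide in the photo:
f = focal_length_px(594, 1.0, 0.22)   # about 2700
```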

they say the brio has “adjustable” field of view, 65/78/90 degrees diagonally. at full 4K resolution, I’d expect a focal length of about 3500/2700/2200 or thereabouts. it’s hard to be sure because that’s assuming no lens distortion (impossible to guess)

Superb. Thank you for the guidance, I will try this out soon. I’m still going to go for four sets, and they should yield consistent results, yeah?

Oh and I’ve already interacted with Logitech support, their FOV “adjustment” is just interpolation on the 90° FOV so I’ve set the camera to that FOV.

Hi there. I’m back. If you can again critique this process, I’d appreciate it.

New calibration images: CalibrationImages2 - Google Drive . I must admit this is still a little quick & dirty; I wanted to see if it would improve… I did follow (I think?) best practices #1-#8. I didn’t mount the board (I held it), but I did remove a few bad images.


BRIO set to not autofocus, no auto exposure, no auto white-balance: all manual. Focus appears to be good from 4" - 12" (region of use). HDR off. No filters. 90° FOV.

Four focal length images at 2", 4", 6", 8" of a 20mm / 16mm ChArUco board. These images are also in the directory above. From these images (using MATLAB imtool()):

Distance (mm)   Width of 20mm box (px)   dist(mm)/width(mm)   f [px]
101.6 (4")      412                      5.08                 2092
152.4 (6")      283                      7.62                 2156
203.2 (8")      220                      10.16                2235
                                         Mean f [px]          2161

Initial camera matrix:

2161 0 1919.5
0 2161 1079
0 0 1

I realized that the code above actually had cx, cy swapped (on the webpage too) so I fixed that (didn’t make much difference)… The calibration photo sets are of 20mm/16mm and 32mm/25mm ChArUco boards.
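For reference, a sketch of building the initial guess with the (width, height) order kept explicit, so the cx/cy swap can’t sneak back in (the function name is mine; numpy is the only dependency):

```python
import numpy as np

def initial_camera_matrix(f_px, width, height):
    """Ideal pinhole guess: fx = fy = f_px, principal point at the image
    centre, i.e. cx = (w-1)/2 and cy = (h-1)/2.

    Note: gray.shape in the tutorial code is (height, width), which is
    how cx and cy ended up swapped there.
    """
    return np.array([[f_px,  0.0, (width - 1) / 2.0],
                     [ 0.0, f_px, (height - 1) / 2.0],
                     [ 0.0,  0.0, 1.0]])

K = initial_camera_matrix(2161, 3840, 2160)   # BRIO at full 4K
```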

This yielded:

512.903 0 1380.12
0 512.903 1403.11
0 0 1

661.402 0 1743.22
0 661.402 1866.18
0 0 1

…so… better? I think? Next steps are (I guess?) I’ll…

  1. get a mount for the calibration target
  2. do whatever suggestions you have for better images? Change my target size?
  3. be more precise in my measurements (I wasn’t terrible this time, but it was quick & dirty as I mentioned before)…
  4. Sacrifice an old webcamera to the great camera gods in the sky?

I’ve got to be honest here: I really thought this was going to be easier. But hey, I’m a theoretician, not an experimentalist, and I bet that’s why I had this idiotic idea that it would be easy.

the sheet of paper does not lie flat in some pictures. it must.

use double-sided sticky tape. don’t just clamp the sheet of paper onto a clipboard or whatever.

Thank you. Any concerns about how the “measured” fx, fy are so different from the iterated fx, fy? Is this due to the sample image problems? And shouldn’t the cx, cy be closer to the actual image center? It is confusing to me that they would diverge so much.

More photos on Wednesday.

Edit: Other research is pressing… this may take until next week. Thanks for your continued help. I’ll DEFINITELY get back to this, just not as quickly as I hoped.

I see that you are having trouble calibrating your camera, but I don’t see anything obviously wrong. Yes, a flatter target will help, but I don’t think that is the reason for your wildly varying focal lengths. If you did your estimates correctly and got 2161, the calibration results of 512 and 661 suggest something in the calibration process is going pretty wrong. I would suggest drawing the detected markers to your input images and saving them so you can see what’s going on. Also I find it helpful to draw the calibration target chessboard corners to your images with the camera matrix/distortion coefficients and the corresponding rvec/tvec - sometimes it can give hints.

The first question I have is why your focal length estimates are so different than the calibration estimates? An obvious reason would be if the images were created with different settings - like one was full res and the other was down-sampled. (Or the FOV adjustment), but it looks like you’ve got that under control.

The second question is why your two calibration runs produce such different results. The image center and the focal length are quite different, which is pretty fishy. Again, I suggest calling drawDetectedMarkers onto the images and see what that looks like:

In your readChessboards function, after detectMarkers, call drawDetectedMarkers. I’d also probably draw the rejectedImgPoints to the image (using a different color)

After interpolateCornersCharuco, I would draw the point (res2[1]) to the image.

(And maybe also draw your corners before/after cornerSubPix)

You are right, this shouldn’t be so hard.

Christoph, Steve, thank you again for your support here.

We created some gatorboard-backed calibration targets, measured them again (yep they are printing out at the right size), mounted them on our optics breadboard setup, made sure the BRIO autofocus was disabled, and started taking pictures.

During the process, I would move the target some… and noticed that the damn BRIO was still changing focus. So even with autofocus turned off via Logitech software, the thing is still changing its focal length. I strongly suspect this is the problem.

I need to code up drawing the detected markers on a set of processed images to make sure that is going OK… and I’m gonna try to code up some way to adjust (and constantly set) a specific focal length in my acquisition loop via the set(cv2.CAP_PROP_FOCUS, focus_value) method of a VideoCapture object… maybe if I am setting it before every freaking acquisition it will work.

If it doesn’t, I’ll roof test the BRIO, get a cheaper webcam, and start over.

Unless you have a specific reason to use that or other webcams (for example, you are developing software that you want “anyone” to be able to use with an off-the-shelf camera), I would consider getting a different camera with a fixed lens - it makes things a lot easier. If you have an appetite for that option, I might be able to recommend a lens if you tell me your requirements - I have evaluated many dozens of lenses and have notes on most of them.

Be aware that, even if you can “lock” the focus setting via software, the actual focal length might vary from day to day or maybe minute to minute. I presume your webcam employs a voice-coil focusing system, and I have run into some repeatability issues on similar cameras. While testing one camera (voice coil focus, OV5640 sensor) I found that the focal length is pretty consistent, but does change over time (maybe thermal?). The bigger problem I ran into was significant image center shift over the first 30-90 seconds of operation. I attributed this to thermal changes as the camera heated up from use, but I don’t know if it was related to the voice coil focusing or something else about the specific camera. I don’t recall exact numbers, but I think the image center was shifting 5-10 pixels, which was more than enough to blow through my error budget.

If you stick with the webcam, you might want to query the CAP_PROP_FOCUS and CAP_PROP_AUTOFOCUS in addition to setting them. I seem to remember running into problems when setting the manual focus value and it not “sticking”. My memory is really foggy here, but, for example, if the camera reverts to auto-focus mode it might ignore any manual focus commands. Again, a camera that can’t possibly change the focus has advantages.

If you get desperate, you might be able to glue (epoxy) the lens in a fixed position. Get it focused how you want and carefully glue the lens around the perimeter, leave it alone for a lot longer than 5 minutes (overnight).


Steve, thanks.

Indeed during my thought process for revising my acquisition loop it occurred to me that the VideoCapture object should also have a get() method and indeed it does. So my loop gets the focus (and autofocus) setting values before entering the loop and uses these values to “re-set” them just prior to each read() call. I also set up two more key == ord() checks to see if I could adjust / improve the focus on the calibration board prior to any image acquisition. Of course, whatever my focus setting will be during acquisition will need to be used from then on.

Thank you for your comments on warmup. Interesting. Indeed if there are metal or glass parts then as they warm they will change slightly in dimension… which will affect focus! Argh!

I have some meetings but hope to test this this afternoon. If it doesn’t work, I’m all ears for the best fixed-focal-length webcam!! We need to capture tags from approximately 2"-12" away from the camera. We had assumed that higher resolution would yield better precision (less uncertainty) in pose, so we went with this 4K BRIO beast. We’d prefer the highest resolution fixed-focal-length webcam possible, if you know what that might be.

I don’t use webcams, so I can’t recommend anything. It’s probably hard to find a fixed focus webcam.

Higher resolution is better in theory, everything else being equal, but your end result is going to depend on a lot of factors. The quality of the optics is going to have a significant impact on image quality, which in turn will affect your calibration accuracy and the detection accuracy of the tags. Cheaper optics can suffer from chromatic aberrations, usually worse the further you get from the center of the image…the image gets “muddy” and your feature localization accuracy suffers, etc. Another consideration is that higher resolution almost always means physically smaller pixels on the sensor. Smaller pixels lead to more noise, and image quality can suffer. A lot of progress has been made in this area in recent years, and I’m not an expert, but I still prefer larger pixels when I can get them. The list goes on. Resolution isn’t a panacea. Also if processing speed / performance is a consideration, higher res is going to come with a penalty.

Good luck with your project. I think you can probably get something working with a webcam, but that wouldn’t be my choice.

Hey Steve

I definitely prefer larger pixels for acquisition (i.e. an APS-sized or larger sensor). We have a few DSLRs around that we’re checking to see if we can stream video with, but if you have non-webcam suggestions, I’m all ears.


I mostly work with custom cameras, so I’m not very familiar with the commercial offerings these days. I’ve used Point Grey cameras in the past - they had a lot of choices (sensor size, packaging, connectivity), and were well priced - but they appear to have been purchased by FLIR, so the price is almost certainly not low anymore. I’ve had some luck with Basler cameras, too, but I seem to recall that some of the features are only accessible through their front end (or maybe with a library they provide?) - so that might be a factor. There are also a blue million bare board options out there - many just rehashes of some original design, it seems. I do all of my camera control directly through V4L2, so I can’t speak to how well supported these are via OpenCV video capture interface, but I would expect all of the standard features to work fine.

OK, another attempt, another set of crappy calibration matrices. They are more consistent using the code that constantly calls the set() method to supposedly maintain focus but I could still see the camera trying to change as the process went forward.

We’re going to halt work until we figure out an appropriate camera (and the next round of funding comes in - April!).

In the meantime Steve if you have time I would appreciate seeing a working setup. I will reach out to you via a direct message. Thanks.