Do I even need camera calibration for length measurements with one camera?

Sounds reasonable. I will take two sets of calibration pictures and try to compare them :slight_smile:

I would really appreciate any textbook recommendations or anything similar.

Sentences like this are really offensive in my opinion!
(I don’t want to be mean, and I really appreciate your replies and your help. But I don’t see you supporting your arguments with any literature; it’s more your “wisdom”. I don’t want to challenge you, but I would still highly appreciate it if you could provide a textbook name and a page.)

I value that you are pointing out misunderstandings and flaws in my argumentation and in my writing. That’s why I am here, and I guess that’s the reason forums like this even exist in the first place.

@Steve_in_Denver Thanks for your insights.

Regarding your concerns about the image: I created the image myself. I am explaining that a real pinhole camera has a circle of confusion (CoC) and that this CoC also depends on the distance of the projected object, so objects farther away appear sharper. I didn’t intend to show any optical distortion.
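For reference, the standard thin-lens blur-circle relation behind that claim (with aperture diameter $A$, focal length $f$, focused distance $s_f$, and object distance $s$):

$$ c = \frac{A\, f\, \lvert s - s_f \rvert}{s\,(s_f - f)} $$

For $s < s_f$ the blur shrinks as the object moves farther away; beyond the focus distance it grows again, approaching the constant $A f / (s_f - f)$.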

I am going to wrap my head around the rest of the topics.

FYI: a quick summary of why refocusing is bad, for future readers:


In the top-left picture, the lens is out of focus.
The top-right picture shows how you need to move the lens (or sensor) in order to be in focus. The focal length of a lens stays fixed, but the focus distance changes, and so do the camera intrinsics.

In the bottom picture you can observe that the field of view changes, even though it’s a prime lens!

Thanks to @Steve_in_Denver and @crackwitz for pointing that out.

Furthermore, I wasn’t able to find that particular explanation in any literature or calibration documentation. Maybe it’s regarded as common knowledge in the field of imaging? But you would help me out greatly if you could point me to a book covering this, because I am in need of a good primary source.

TL;DR: Refocusing changes the intrinsics, so turn on manual focus.

Correct. But if you know the Z value (in camera coordinates) of the world point, then you can determine the X and Y values. Or if you know the world plane that it is on, you can determine X, Y, Z for the world point.
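To make that concrete, here is a minimal sketch of the known-Z case (plain pinhole model without distortion; the intrinsic values are placeholders, not from any real calibration):

```python
import numpy as np

# Placeholder intrinsics (use your calibrated values)
fx, fy = 3000.0, 3000.0   # focal lengths in pixels
cx, cy = 4752.0, 3168.0   # principal point in pixels

def backproject(u, v, Z):
    """Recover the camera-frame 3D point for pixel (u, v) at known depth Z."""
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.array([X, Y, Z])

# A pixel 500 px right of the principal point, at a depth of 1 m:
print(backproject(cx + 500.0, cy, Z=1.0))   # ~[0.167, 0.0, 1.0] (metres)
```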

And if you can move your camera…

That looks like an interesting article - I hadn’t seen that before.

I might want to retract the statement about different distances. I don’t have any strong intuition about why this is important, so it seems that I might just be repeating things I have read (as opposed to stating things I know.) It is important to have different depths within an individual image (angled target), but I’m not sure that different depths among the input images buys you anything fundamental. Maybe the advice is just a trick to help ensure you get enough samples with enough density? If you do find a reference that provides justification, please share.


Just a quick thing that came into my mind today:

Smartphone cameras and the Sony camera I am using have optical image stabilization (OIS). I guess this shifts the optical center?!

I guess that this may be caused by the active image stabilization.
I did 4 new calibrations using 8 pictures each, with OpenCV standard settings (see the pipeline sketch below).
(Camera settings: 1/200 s, f/22, ISO 32000, focus distance 1 m) (re := reprojection error)
2 with OIS off:

  • 4454 and 4531 (1) (8 of 8 images detected) (re = 4.66)
  • 4521 and 2828 (2) (8 of 8 images detected) (re = 4.71)

2 with OIS on:

  • 4740 and 4154 (3) (7 of 8 images detected) (re = 4.03)
  • 4605 and 4090 (4) (8 of 8 images detected) (re = 4.61)

“Expected”: v_0x = 4752 and v_0y = 3168.

With my “experiment” I would have expected the image stabilization to have an impact, but the data doesn’t seem to show that. Furthermore, even when keeping the test variables as close to constant as possible, there is quite a lot of fluctuation.

Possible reasons:

  • I took 8 random (differing) pictures each time, but not always in the same 8 poses.
  • 8 pictures may be too few.
  • Focusing manually by moving the camera might leave the images too blurry.

I would be grateful for any advice.
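For reference, this is roughly the pipeline behind those numbers (a minimal sketch with OpenCV defaults; the board dimensions, square size, and file paths are placeholders, not my actual setup):

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)   # inner corners of the checkerboard (placeholder)
SQUARE = 0.025     # square size in metres (placeholder)

# One shared set of 3D object points for the planar board
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points = [], []
for path in glob.glob("calib/*.jpg"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        continue  # a "rejected" image: not all corners were located
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

re, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("re =", re, "| principal point:", K[0, 2], K[1, 2])
```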

Furthermore:
Does someone have any experience with optical image stabilization with respect to calibration? (I have now turned this function off on my camera. But on a smartphone I am not aware of any way to turn it off, so I guess calibrating a smartphone camera with OIS may yield interesting results.) Also, I guess one can’t guarantee that the sensor will stay fixed when (re-)starting the camera. Some cameras check their system on startup and move the sensor (e.g. the Olympus Pen E-P1). If this is the case, the sensor may be repositioned (at least a bit) after a restart of the camera.

Example:

P.S.: The Charuco code is under construction :wink:

I don’t know if this is relevant, but maybe someone out there might find it interesting:

Following:

Point 3, “high feature count”: I tried a denser checkerboard (26×39, to be precise).

(Camera settings: 1/200 s, f/22, ISO 32000, focus distance 1 m, OIS off) (re := reprojection error)


  • -5124 and 2462 (7/8 detected)

And I did another one:

  • 4582 and 2917 (8/8 detected)

The modeled camera centers seem to vary a lot depending on the pictures :confused:

Not a good result, since estimating a distortion model is crucial for my purpose of measuring.
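Since the distortion model feeds directly into measuring, this is roughly how I plan to use it once the calibration is stable (a sketch; K and dist stand in for real calibrateCamera outputs, and the pixel/depth values are made up):

```python
import cv2
import numpy as np

# Placeholder calibration outputs (use the real calibrateCamera results)
K = np.array([[3000.0,    0.0, 4752.0],
              [   0.0, 3000.0, 3168.0],
              [   0.0,    0.0,    1.0]])
dist = np.zeros(5)  # k1, k2, p1, p2, k3

def pixel_to_metric(u, v, Z):
    """Undistort pixel (u, v), then back-project it at the known depth Z."""
    pts = np.array([[[u, v]]], dtype=np.float64)
    # Without a P argument, undistortPoints returns normalized coordinates
    x, y = cv2.undistortPoints(pts, K, dist)[0, 0]
    return np.array([x * Z, y * Z, Z])

# Length between two points lying at the same known depth of 0.4 m
a = pixel_to_metric(4000.0, 3000.0, Z=0.4)
b = pixel_to_metric(5500.0, 3000.0, Z=0.4)
print("length:", np.linalg.norm(a - b), "m")   # 1500 px / 3000 px * 0.4 m = 0.2 m
```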

@Steve_in_Denver I still didn’t manage to get a working Charuco detector script for Python :confused:

that is a wavy sheet of paper just lying around.

why do you think that’s not gonna end in bad data?

I’m too lazy to check but there’s a high chance that that was mentioned already, either in this thread, or in any linked articles.

Fair point!

I performed another test, using my phone as the checkerboard, which I would consider quite flat :smiley:
(Camera settings: 1/200 s, f/22, ISO 32000, focus distance 400 mm, OIS off)
Getting:

  • 4529 and 3194 (7/8 images)
  • 4481 and 2916 (6/8)
  • 4797 and 2979 (6/8)

I didn’t expect that the (quite small) bulging of the paper would have such a huge impact! The results show much more consistent values. I highly appreciate your valuable tip!

Example set:

The “rejected” image:


I don’t see a flaw in it; it might be too blurry.
Definition of “rejected”: the algorithm couldn’t find all corners.

Also, I noticed the following tradeoff between filling the image with the checkerboard and being able to move the camera freely:
I changed the focus distance from 1 m to 400 mm so that the checkerboard fills the image area, but then some angles and viewpoints aren’t possible, because the checkerboard wouldn’t fit in the frame anymore. (That’s the reason I am working towards a Charuco board!)

But I may try going up to 500 mm and increasing the number of pictures, then “filling” every corner part by part (first the checkerboard on the left side of the image, then on the right, to cover all corners). What do you think of this approach?

(Explanation picture:)

So: rather fill the total image in every shot, risking that some angles are not possible? Or have more freedom in perspective, but need to take multiple pictures to fill the frame?

Furthermore:
Do you have any practical experience regarding this? (Theoretical hints are welcome too.)

I would recommend using the Charuco calibration pattern and functions since you don’t need to see the full target in your calibration images. It makes life so much easier.

I would probably have more out-of-plane rotation in at least some of your images. You might have depth-of-field issues at some point, but again, this isn’t an issue with the Aruco calibration: if some points are too blurry to detect, it doesn’t matter. An out-of-plane rotation angle of somewhere between 20 and 40 degrees seems to work best in my experience.

Don’t worry about filling any given image with the calibration pattern; instead, worry about getting full coverage of your camera FOV across all images.
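One quick way to check that coverage (just a sketch; `all_corners` stands for whatever per-image corner arrays your detector returns): histogram the detected corners over the sensor and look for empty cells.

```python
import numpy as np

def coverage_report(all_corners, image_size, bins=(8, 6)):
    """Histogram detected corner locations over the sensor to expose gaps."""
    pts = np.vstack(all_corners)   # stack the (N_i, 2) pixel arrays
    hist, _, _ = np.histogram2d(
        pts[:, 0], pts[:, 1], bins=bins,
        range=[[0, image_size[0]], [0, image_size[1]]])
    empty = int(np.sum(hist == 0))
    print(f"{empty} of {bins[0] * bins[1]} grid cells have no corner samples")
    return hist
```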


Thank you for your reply and the valuable tips.

Yes, you are totally right. But sadly I can’t get the Charuco code working in Python.
Do you have a working Python script for OpenCV 4.10.0 by any chance?
(Any other version would be fine too, as I am able to switch.)

I already asked for help here and here.
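In case it helps future readers, this is the 4.7+-style API I am currently trying (a sketch only; the board geometry and dictionary are placeholders that must match the printed target exactly):

```python
import glob
import cv2

# Placeholder board definition -- must match the printed target exactly
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_100)
board = cv2.aruco.CharucoBoard((5, 7), 0.03, 0.022, dictionary)
detector = cv2.aruco.CharucoDetector(board)

obj_points, img_points = [], []
for path in glob.glob("calib/*.jpg"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # detectBoard returns (charucoCorners, charucoIds, markerCorners, markerIds)
    ch_corners, ch_ids, _, _ = detector.detectBoard(gray)
    if ch_ids is None or len(ch_ids) < 4:
        continue  # partial views are fine, but a few corners are needed
    # Map the detected Charuco corners to their known 3D board coordinates
    obj, img = board.matchImagePoints(ch_corners, ch_ids)
    obj_points.append(obj)
    img_points.append(img)

re, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection error:", re)
```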

You are referring to “steeper” pictures? That was the tradeoff I described: steep angles vs. not filling the whole image plane.

Okay, then I will prefer this approach and try to cover as much of my FOV with the pattern as possible.

P.S.: One may ask why I am not using 450 mm as the focus distance. The answer is that I have markings on my dial at 400 and 500, so I can repeat measurements there; where 450 lies would just be a guess, since the scale is non-linear.