Hello,
I am trying to use the OpenCV functions to calibrate a camera (extrinsic and intrinsic parameters: distortion coefficients, focal lengths, camera center, etc.).
However, I am not using several images of a checkerboard of known pattern and size at a finite distance, but rather a crosshair projected by a collimator.
A point (the crosshair center) is projected at infinity by a collimator. The camera is mounted on a goniometer stage. I have recorded a set of (X,Y) coordinates corresponding to the crosshair center, and the corresponding (thetaX, thetaY) goniometer angles.
When using the OpenCV camera calibration functions, I need to build the objectPoints giving the object coordinates in the real 3D world; however, in this case the points do not correspond to a physical object.
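For reference, the standard checkerboard procedure I would otherwise follow looks roughly like this (a minimal sketch only, with placeholder pattern size, square size and file names):

```python
import numpy as np
import cv2

# Textbook checkerboard calibration, shown only for reference: this is the part
# I cannot reproduce, because my "object" is a collimated crosshair at infinity.
pattern_size = (9, 6)        # inner corners of a hypothetical checkerboard
square_size = 10.0           # mm, hypothetical

# 3D object points for a flat board lying in the Z = 0 plane
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

object_points = []   # 3D points, one array per image
image_points = []    # detected 2D corners, one array per image

for fname in ["img01.png", "img02.png"]:     # placeholder file names
    img = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(img, pattern_size)
    if found:
        object_points.append(objp)
        image_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, img.shape[::-1], None, None)
```

My problem is what to put in object_points when the "object" is only a direction at infinity rather than a set of 3D coordinates.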
Does anyone have an idea how to proceed?
Thanks for your help!
The physical setup you describe isn’t clear to me. I’m imagining something based on your description, but I’m not confident that what I’m envisioning is the same as what you have. Maybe some pictures or diagrams?
If I am envisioning the correct physical setup, my instinct says that there isn’t a direct way to use the calibration algorithms in OpenCV to operate on the data you have collected. It seems that you are relying on the goniometer stage as the basis for your ground truth data, but I wonder about the rotation axes and the nodal point of the camera - it seems like there is an unknown translation from camera origin to the rotation axes that would have to be determined somehow. Maybe this can be calibrated independently (or simultaneously with the camera calibration), but it doesn’t seem to me like that could be readily adapted to the standard camera calibration functions. (It probably could be solved with similar approaches used in the camera calibration functions, but you’d be walking your own road, I think.)
Maybe now is a good time to ask why you are trying to calibrate the camera this way instead of just using the tried and true methods that are readily available? I’m not saying you don’t have a good reason for it, but if you could share some details about the fundamental problem you are trying to solve, maybe there are other approaches worth considering?
Hello Steve,
Thanks for the answer.
I’ll share a diagram tomorrow, but the principle is quite simple:
The target (the crosshair) is at the focal plane of a lens (200 mm focal length in this case, but it can be different). The image of the target is therefore at infinity and is projected onto the camera sensor by the camera objective. The sensor + objective assembly is what I'm trying to calibrate.
I am trying to calibrate this way because the objective is a satellite payload, for which no image is sharp unless the target is several kilometers away…
Therefore I am unable to image a physical object, only a projected image, whose position on the sensor I can adjust by changing the angle of incidence (with the goniometer).
As you said, the rotation and translation between the sensor and the center of rotation can be measured, either optically or mechanically.
I’ll send a diagram tomorrow to illustrate.
Have a nice day,
Hello,
I'm back with the diagram and measurement data.
So as I was saying, a target is projected to infinity by a lens. An objective + camera system, mounted on a goniometer stage, produces an image of it.
The image produced on the sensor is as follows:
An image is saved for many different goniometer angles, which moves the crosshair across the sensor. In the end, we have a set of angles (thetaX, thetaY), each paired with a crosshair center (px, py), as shown below:
Those coordinates suffer from the system distortion (which is very small in typical designs) and would be the input we use to construct all the camera parameters.
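To make explicit what I would expect: assuming an ideal pinhole camera whose axes coincide with the goniometer axes, and neglecting the cross-coupling between the two rotations, the centers should follow roughly px ≈ cx + fx * tan(thetaX) and py ≈ cy + fy * tan(thetaY). The departures of the measured centers from that ideal mapping are the distortion I would like to recover.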
could you produce a plot of all the intersection points you’ve determined?
generally we can work better with data. screenshots may help in communication but do nothing for debugging.
hi,
Yes of course, I wasn’t expecting someone to directly dig into the data.
Here they are in a CSV file, with the columns arranged as:
INDEX, IMAGENAME, PX, PY, THETAX, THETAY
where PX, PY are the X, Y coordinates of the crosshair center and THETAX, THETAY are the corresponding goniometer angles.
The file is available following this link:
https://drive.google.com/file/d/1Up1jrgS9m8RpU3VzRvYVrfBYDEYj1HVe/view?usp=sharing
For visualization, here is a plot of all the crosshair centers measured on the sensor.
The sensor is deliberately not fully scanned; we only want to assess the calibration over a cropped region.
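In case it helps for digging into the data, something like this should load and plot the file (a sketch; it assumes the CSV has a header row with the column names above, and the file name is a placeholder):

```python
import numpy as np
import matplotlib.pyplot as plt

# Load the measurement file described above:
# INDEX, IMAGENAME, PX, PY, THETAX, THETAY
data = np.genfromtxt("crosshair_centers.csv", delimiter=",",
                     names=True, dtype=None, encoding="utf-8")

px, py = data["PX"], data["PY"]
thetax, thetay = data["THETAX"], data["THETAY"]

# Crosshair centers on the sensor
plt.figure()
plt.scatter(px, py, s=5)
plt.gca().set_aspect("equal")
plt.gca().invert_yaxis()      # image convention: y grows downward
plt.xlabel("px [pixel]")
plt.ylabel("py [pixel]")
plt.title("Measured crosshair centers")
plt.show()
```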
that looks fairly regular. I see some slight rotation. I see some deviations in the grid, which may be due to plotting inaccuracy (rounding to whole pixels in the library, or bad subpixel rendering, which is likely).
usually, people fail to get samples in the corners of the view. that’s where distortion from the lens is worst, and that is where the distortion model, or rather, the numerical optimization, will introduce even worse distortion, if there are no points to constrain it.
if you want reliable results in a certain region, you need to calibrate on more than that region.
another matter: OpenCV’s calibrations always assume both a projective camera model and lens distortion… i.e. they estimate both the focal length and the distortion coefficients.
for that (the focal length), they require perspective foreshortening. part of the calibration is to estimate the pose of the calibration pattern. that only works reliably with clearly evident perspective foreshortening. that means the pattern has to be presented at an angle relative to the optical axis, i.e. some out of plane rotation.
your setup shows a frontal/top-down view. no foreshortening. you might not even have a focal length, i.e. the lens is telecentric/orthographic.
you’ll probably have to formulate and optimize/solve your own distortion model.
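a rough sketch of what I mean, as a starting point only. it assumes a plain pinhole + radial distortion model, that the goniometer angles map to ray directions via tan(theta), and that any misalignment between the goniometer and camera axes is negligible. the file name, parameter names and initial guesses are made up:

```python
import numpy as np
from scipy.optimize import least_squares

# measured data: goniometer angles and crosshair centers, from the CSV above
data = np.genfromtxt("crosshair_centers.csv", delimiter=",",
                     names=True, dtype=None, encoding="utf-8")
px, py = data["PX"], data["PY"]
thetax = np.deg2rad(data["THETAX"])   # assuming the angles are stored in degrees
thetay = np.deg2rad(data["THETAY"])

def project(params, thetax, thetay):
    # assumed forward model: collimated beam at angles (thetax, thetay)
    # -> ideal pinhole projection -> simple radial distortion (k1, k2)
    fx, fy, cx, cy, k1, k2 = params
    x = np.tan(thetax)               # normalized image coordinates
    y = np.tan(thetay)
    r2 = x**2 + y**2
    d = 1 + k1 * r2 + k2 * r2**2     # radial distortion factor
    return fx * x * d + cx, fy * y * d + cy

def residuals(params, thetax, thetay, px, py):
    u, v = project(params, thetax, thetay)
    return np.concatenate([u - px, v - py])

# initial guess: rough focal length in pixels, principal point near image center
p0 = np.array([5000.0, 5000.0, 1024.0, 1024.0, 0.0, 0.0])
result = least_squares(residuals, p0, args=(thetax, thetay, px, py))
fx, fy, cx, cy, k1, k2 = result.x
print("rms residual [px]:", np.sqrt(np.mean(result.fun**2)))
```

if that converges to something sensible, extend the model (tangential terms, a rotation between goniometer and sensor axes, etc.) and judge it by the residuals.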
A few comments.
- The image locations in your data are integers. You should be able to estimate the intersection point to a subpixel location. I’m not sure how you are getting the points, but if I were doing it I’d be shooting for a minimum of one digit after the decimal place (see the sketch after this list for one way to do it).
- To use the standard calibration function in OpenCV directly, you would have to (somehow) assign world points to the corresponding image locations. I’m not sure if I’m thinking about it correctly, but it seems to me that you would have to already know the lens parameters (along with the angles of the goniometer).
- I tend to agree with crackwitz on this - you’ll most likely have to come up with your own model and a way to solve for its parameters with the data you have.
- If it’s going to observe the earth, maybe calibrate it after it’s launched? You can get images of physical things with known 3D locations and use those as your input to the calibration algorithms. You’ll need a range of depths, so a mountain range would probably be a good place.
- Would it be useful to only be able to calibrate the distortion? Maybe there is a way to get the distortion model if you have a good estimate of the focal length / image center?
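Regarding the subpixel point, here is the kind of approach I had in mind (just a sketch: it assumes you can already collect pixel coordinates along each crosshair arm, e.g. intensity-weighted centroids per image row/column, which I'm glossing over):

```python
import numpy as np

def fit_line(points):
    """Total-least-squares line fit: returns a point on the line and a unit direction."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # principal direction of the point cloud via SVD
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def intersect(p1, d1, p2, d2):
    """Intersection of two 2D lines given as point + direction."""
    # solve p1 + t1*d1 = p2 + t2*d2 for t1, t2
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, p2 - p1)
    return p1 + t[0] * d1

# points_h, points_v: (N, 2) arrays of (x, y) pixel coordinates sampled along
# the horizontal and vertical crosshair arms (hypothetical input)
# p_h, d_h = fit_line(points_h)
# p_v, d_v = fit_line(points_v)
# center = intersect(p_h, d_h, p_v, d_v)   # subpixel crosshair center
```

The fitted intersection should comfortably give you a digit or two after the decimal point if the arms span enough pixels.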
It’s an interesting problem. I wish I had better ideas to offer.