Hi, I have a question about the principal point and the (cx, cy) entries in the intrinsic matrix K. First, let me describe my application scenario. Look at figure 1 below: the camera's Z axis (optical axis) points up, so the camera can rotate 360 degrees about the Z axis. In this case, the image corners corresponding to the same world point (homonymous points) should lie on a circular path centered at the principal point in the image coordinate frame. After calibration, I took two images (see figure 2 below), before (X1 axis) and after (X2 axis) rotating 180 degrees, with the Z axis pointing up, i.e. the optical axis perpendicular to the ceiling. In this case the image is rotated 180 degrees about the principal point.
By finding the homonymous points, we can find the true principal point and compare it with the (cx, cy) from the intrinsic matrix K.
If pixel 1 has coordinates (u1, v1) and pixel 2 has (u2, v2), then the true principal point is ((u1+u2)/2, (v1+v2)/2). Comparing this with (cx, cy) in K gives the offset of the principal point found by calibration.
Here is one of my results:
P1: (484, 557)
P2: (385, 183)
Mean of P1, P2: (434.5, 370)
(cx, cy) in K: (434.1, 362.2)
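For reference, here is the same arithmetic as a minimal Python check (values copied from the result above):

```python
# Midpoint of the two homonymous points vs. the calibrated (cx, cy):
p1 = (484.0, 557.0)
p2 = (385.0, 183.0)

mid = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)   # (434.5, 370.0)
offset = (mid[0] - 434.1, mid[1] - 362.2)          # about (0.4, 7.8)
print(mid, offset)
```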
This discrepancy confuses me. Can you give me some advice?
I would also like to know how to improve the calibration accuracy so that the mean of P1 and P2 coincides with (cx, cy) in K, because if it does, my application will give better results.
First: you are doing good work by running experiments like this to reinforce and/or challenge your understanding. What you learn will deepen that understanding and strengthen your intuition. This is very valuable and worthwhile work.
Some thoughts:
The optical axis of the camera is defined by (cx, cy) and the sensor plane; how can you rotate about this axis if you don't already know (cx, cy)?
Even if you do know (cx, cy), how do you physically rotate the camera so that this point on the sensor does not move? I think a pure rotation about the optical axis will be very difficult to achieve physically - there will be some translation in the plane.
Thanks for your reply! First, I had already realized that problem, so I kept the camera sensor horizontal using a level (gradienter). Suppose the optical axis is perpendicular to the ceiling and we ignore translation; is the conclusion above then valid - should the mean of P1 and P2 equal (cx, cy) in K?
If I understand your experiment correctly, and you are able to execute it well (the rotation is about the axis perpendicular to the sensor plane at (cx, cy), etc.), then I think your logic is correct - the image locations of a world point projected at different rotation angles should form a circle centered at (cx, cy).
In your experiment you took two pictures with a 180 degree rotation about Z between them and averaged the points. That average should be fairly close to your (cx, cy) point. Your result is pretty close, but you want it to be closer.
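You can sanity-check that circle claim synthetically. Here is a minimal sketch using OpenCV's projectPoints; the intrinsics and the world point are made-up values, and square pixels (fx == fy) are assumed - with fx != fy the locus is an ellipse, though still centered at (cx, cy):

```python
import numpy as np
import cv2

# Made-up intrinsics (square pixels, fx == fy) and a made-up world point.
K = np.array([[800.0,   0.0, 434.1],
              [  0.0, 800.0, 362.2],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)                       # no distortion in this sketch
X = np.array([[0.3, 0.2, 2.0]])          # point in the camera frame at angle 0

pts = []
for deg in range(0, 360, 10):
    # Pure rotation about the camera Z (optical) axis, zero translation.
    rvec = np.array([0.0, 0.0, np.deg2rad(float(deg))])
    img, _ = cv2.projectPoints(X, rvec, np.zeros(3), K, dist)
    pts.append(img.ravel())
pts = np.asarray(pts)

# All points are equidistant from (cx, cy), i.e. they lie on one circle.
r = np.linalg.norm(pts - np.array([434.1, 362.2]), axis=1)
print(r.min(), r.max())                  # equal up to floating-point noise
```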
The thoughts that come to mind for me are:
How did you calibrate the camera initially, and is your confidence in the result justified? (How repeatable is the result, what is the calibration error, did you use enough input images at appropriate angles to the camera, etc.?)
How well did you conduct the experiment? Was the rotation about an axis parallel to the optical axis of the camera (this seems difficult to guarantee)? Was the rotation angle exactly 180 degrees?
Did you undistort the image points using the calibrated distortion model, or did you use the raw (distorted) point locations? (A sketch follows this list.)
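If the raw points were used, something like the following maps them back through the distortion model before averaging. This is only a sketch; K and dist here are placeholders standing in for your actual calibration output:

```python
import numpy as np
import cv2

# Placeholders standing in for your actual calibration output:
K = np.array([[800.0,   0.0, 434.1],
              [  0.0, 800.0, 362.2],
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3

raw = np.array([[[484.0, 557.0]],
                [[385.0, 183.0]]])               # raw (distorted) pixels

# P=K keeps the output in pixel coordinates instead of normalized ones.
und = cv2.undistortPoints(raw, K, dist, P=K).reshape(-1, 2)
print(und.mean(axis=0))                          # compare with (cx, cy)
```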
I would consider doing the same experiment but taking many pictures (say 36, at 10 degree rotation increments) and extracting image points for multiple features in each image. Then fit a circle to the image points of each feature you tracked. They should all give circle centers that are pretty close together. If not, something is wrong, and looking at how the various circles differ might provide clues as to what.
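As a sketch of what that circle fit could look like, here is a simple algebraic (Kasa) least-squares fit; fit_circle and the synthetic pts are illustrative, not from your data:

```python
import numpy as np

def fit_circle(pts):
    """Algebraic (Kasa) least-squares circle fit.

    Solves x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c);
    the center is (-a/2, -b/2) and r = sqrt(cx^2 + cy^2 - c).
    """
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    return cx, cy, np.sqrt(cx**2 + cy**2 - c)

# Illustrative data: points on a circle centered at (434.1, 362.2).
t = np.deg2rad(np.arange(0, 360, 10))
pts = np.column_stack([434.1 + 150 * np.cos(t),
                       362.2 + 150 * np.sin(t)])
print(fit_circle(pts))   # ~ (434.1, 362.2, 150.0)
```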