A few comments.
-
The image locations in your data are integers. You should be able to estimate the intersection points to subpixel accuracy. I’m not sure how you are getting the points, but if I were doing it I’d be shooting for at least one digit after the decimal point.
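If you can’t improve the detection itself, something like cv2.cornerSubPix can refine integer locations, assuming your intersections look locally like checkerboard-style saddle points (if they are line crossings, fitting the two lines and intersecting them analytically also gives subpixel results). A minimal sketch, with a synthetic image so it runs standalone:

```python
import cv2
import numpy as np

# Synthetic checkerboard-style corner at roughly (64.3, 64.7) so the snippet
# runs standalone; substitute your real image and detected points.
yy, xx = np.mgrid[0:128, 0:128].astype(np.float32)
img = (((xx - 64.3) * (yy - 64.7)) > 0).astype(np.float32) * 255
gray = cv2.GaussianBlur(img, (7, 7), 2).astype(np.uint8)

pts = np.array([[[64, 64]]], dtype=np.float32)  # integer initial guess
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 40, 1e-3)
refined = cv2.cornerSubPix(gray, pts, (11, 11), (-1, -1), criteria)
print(refined.ravel())  # refined subpixel location, ~ (64.3, 64.7)
```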
-
To use the standard calibration function in OpenCV (cv2.calibrateCamera) directly, you would have to (somehow) assign world points to the corresponding image locations. I’m not sure if I’m thinking about it correctly, but it seems to me that to construct those world points you would have to already know the lens parameters (along with the angles of the goniometer).
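To make that concrete, here is roughly what cv2.calibrateCamera wants to be fed - every image point paired with a 3D world point. Everything below is synthesized placeholder data; in your setup the 3D side is exactly what’s missing:

```python
import cv2
import numpy as np

# Planar 9x6 grid of world points (chessboard-style), z = 0.
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

K_true = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
obj_pts, img_pts = [], []
for ry in (-0.3, 0.0, 0.3):  # three synthetic poses of the target
    rvec = np.array([0.2, ry, 0.0])
    tvec = np.array([-4.0, -3.0, 12.0])
    proj, _ = cv2.projectPoints(objp, rvec, tvec, K_true, None)
    obj_pts.append(objp)                       # known 3D world points
    img_pts.append(proj.astype(np.float32))    # matching image points

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, (640, 480), None, None)
print(rms)
print(K)
```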
-
I tend to agree with crackwitz on this - you’ll most likely have to come up with your own model and a way to solve for its parameters with the data you have.
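As a rough illustration of the shape such a solve could take (not your actual model - the pinhole plus two-term radial distortion below is a placeholder, and I’m assuming each measurement pairs the two goniometer angles with an observed image point), a generic least-squares fit:

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, theta, phi):
    f, cx, cy, k1, k2 = params
    # Ray direction from the goniometer angles (one possible convention).
    x = np.tan(theta)
    y = np.tan(phi)
    r2 = x**2 + y**2
    d = 1 + k1 * r2 + k2 * r2**2  # radial distortion factor
    return np.stack([f * d * x + cx, f * d * y + cy], axis=-1)

def residuals(params, theta, phi, uv):
    return (model(params, theta, phi) - uv).ravel()

# Synthetic measurements so the snippet runs; replace with your data.
rng = np.random.default_rng(0)
theta = rng.uniform(-0.3, 0.3, 50)
phi = rng.uniform(-0.3, 0.3, 50)
true = np.array([800.0, 320.0, 240.0, -0.1, 0.02])
uv = model(true, theta, phi) + rng.normal(0, 0.1, (50, 2))

fit = least_squares(residuals, x0=[700.0, 300.0, 220.0, 0.0, 0.0],
                    args=(theta, phi, uv))
print(fit.x)  # recovered f, cx, cy, k1, k2
```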
-
If it’s going to observe the Earth, maybe calibrate it after it’s launched? You can get images of physical features with known 3D locations and use those as input to the calibration algorithms. You’ll need a range of depths, so a mountain range would probably be a good subject.
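A sketch of what that could look like, with synthetic stand-ins for the surveyed landmarks. One OpenCV detail worth knowing: with non-coplanar object points, cv2.calibrateCamera can’t self-initialize the intrinsics, so you have to supply a guess and set CALIB_USE_INTRINSIC_GUESS:

```python
import cv2
import numpy as np

rng = np.random.default_rng(1)
K_true = np.array([[1000.0, 0, 512], [0, 1000.0, 512], [0, 0, 1]])

obj_pts, img_pts = [], []
for i in range(3):  # three views of terrain with a good spread of depths
    pts3d = rng.uniform([-800, -800, 2000], [800, 800, 6000],
                        (40, 3)).astype(np.float32)
    rvec = np.array([0.0, 0.0, 0.05 * i])
    proj, _ = cv2.projectPoints(pts3d, rvec, np.zeros(3), K_true, None)
    obj_pts.append(pts3d)
    img_pts.append(proj.astype(np.float32))

# Non-planar points: supply an initial intrinsic guess.
K_guess = np.array([[900.0, 0, 500], [0, 900.0, 500], [0, 0, 1]])
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, (1024, 1024), K_guess, None,
    flags=cv2.CALIB_USE_INTRINSIC_GUESS)
print(rms)
print(K)
```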
-
Would it be useful to be able to calibrate only the distortion? Maybe there is a way to get the distortion model if you already have a good estimate of the focal length / image center.
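OpenCV does let you pin parameters you think you already know - CALIB_FIX_FOCAL_LENGTH and CALIB_FIX_PRINCIPAL_POINT hold those fixed while the distortion terms are fit. A synthetic sketch (all values are placeholders):

```python
import cv2
import numpy as np

rng = np.random.default_rng(2)
K_fixed = np.array([[1000.0, 0, 512], [0, 1000.0, 512], [0, 0, 1]])
dist_true = np.array([-0.15, 0.05, 0, 0, 0])  # distortion to recover

obj_pts, img_pts = [], []
for i in range(3):
    pts3d = rng.uniform([-500, -500, 2000], [500, 500, 5000],
                        (30, 3)).astype(np.float32)
    proj, _ = cv2.projectPoints(pts3d, np.array([0.0, 0.0, 0.1 * i]),
                                np.zeros(3), K_fixed, dist_true)
    obj_pts.append(pts3d)
    img_pts.append(proj.astype(np.float32))

# Trust the focal length and image center; solve only for distortion.
flags = (cv2.CALIB_USE_INTRINSIC_GUESS | cv2.CALIB_FIX_FOCAL_LENGTH
         | cv2.CALIB_FIX_PRINCIPAL_POINT)
rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, (1024, 1024),
                                         K_fixed.copy(), None, flags=flags)
print(dist.ravel())  # should land close to dist_true
```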
It’s an interesting problem. I wish I had better ideas to offer.