SolvePnP and world-relative rotation

Hello.
I am using SolvePnP, and the rotation seems (to me) to be relative to the camera's "look at" orientation instead of to the camera axes, which are fixed to the world. So, for example, to make the rotation 0 I need to point my object directly at the camera, whereas I'd like the angle to be absolute.
Can you please explain how I can get the rotation relative to the camera axes?

[image attached]

Thanks,
Greg

no, the object pose is always relative to the camera frame.

there is no “world” in this relation. there is no “lookat”.

if the object’s axis is not parallel to the camera’s optical axis, it will have rotation.
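
for concreteness, a sketch with made-up numbers of what a pose means:

    // sketch, made-up values: the pose (rvec, tvec) maps object coordinates
    // into camera coordinates as p_cam = R * p_obj + t
    var rvec = new Matrix<double>(new double[,] { { 0 }, { 0.35 }, { 0 } }); // ~20 deg about camera Y
    var tvec = new Matrix<double>(new double[,] { { 0 }, { 0 }, { 10 } });   // 10 units in front

    var R = new Matrix<double>(3, 3);
    CvInvoke.Rodrigues(rvec, R);

    var pObj = new Matrix<double>(new double[,] { { 1 }, { 0 }, { 0 } });    // a point on the object
    Matrix<double> pCam = R * pObj + tvec;                                   // same point, camera frame

the same rvec comes out no matter where tvec puts the object.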

please present an MRE with data. synthetic data is preferred because it can be made easy to read.

Hello. I want to provide the best info to get feedback, but I'm not sure how I can provide an MRE for this, because I'm really talking about a desired effect. The problem is getting the angle relative to the camera's optical axis (which I call the absolute angle in the picture), while I am getting the angle relative to the camera-to-object direction, as in the image (which I call the camera angle).

I am using only SolvePnP followed by CvInvoke.Rodrigues, so this is pretty standard code, and it works. The problem is about the desired effect, so I'm not sure an MRE will help here.

Can you let me know what you think?
Thanks

the problem is still: you don't have the required data, the camera pose in world coordinates, and computer vision alone cannot tell you; it would need GPS data or similar to know where your scene is "in the world".

however, once you have this information, it's just another matrix multiplication (and that's where we would need your MRE).
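
for illustration (a sketch; every name below is invented, and the world pose of the camera has to come from your other sensors):

    // illustration only, all names invented here. the camera pose in the world
    // (RworldCam, tworldCam) must come from outside vision: GPS/IMU, a survey, etc.
    Matrix<double> RworldCam = new Matrix<double>(3, 3);   // camera orientation in the world
    Matrix<double> tworldCam = new Matrix<double>(3, 1);   // camera position in the world
    Matrix<double> RcamObj   = new Matrix<double>(3, 3);   // Rodrigues(rvec) from solvePnP
    Matrix<double> tcamObj   = new Matrix<double>(3, 1);   // tvec from solvePnP

    // composing poses is just matrix multiplication:
    Matrix<double> RworldObj = RworldCam * RcamObj;
    Matrix<double> tworldObj = RworldCam * tcamObj + tworldCam;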

I see I have expressed myself incorrectly. I am not concerned with world coordinates, but with the optical axis, as shown in the post below. That and only that.

I am calling the methods below, which are very standard code. My problem is that the rotation (rotation_rod) is not relative to the optical axis, but rather behaves as if the axes were looking at the object.

    bool success = CvInvoke.SolvePnP(objectPoints, imagePoints, camera_matrix, dist_coeffs,
        rotation_vector, translation_vector, false, Emgu.CV.CvEnum.SolvePnpMethod.IPPESquare);

    // convert the rotation vector into a 3x3 rotation matrix
    Mat rotation_rod = new Mat(new Size(3, 3), Emgu.CV.CvEnum.DepthType.Cv64F, 1);
    CvInvoke.Rodrigues(rotation_vector, rotation_rod);
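
To read the rotation as plain angles I decompose the matrix - a sketch, assuming a ZYX (yaw-pitch-roll) Euler convention:

    // sketch: decompose rotation_rod (3x3, CV_64F) into ZYX Euler angles
    // (R = Rz(yaw) * Ry(pitch) * Rx(roll)); convention assumed here,
    // valid away from gimbal lock
    Matrix<double> Rm = new Matrix<double>(3, 3);
    rotation_rod.CopyTo(Rm);

    double yaw   = Math.Atan2(Rm[1, 0], Rm[0, 0]);  // about camera Z
    double pitch = Math.Asin(-Rm[2, 0]);            // about camera Y
    double roll  = Math.Atan2(Rm[2, 1], Rm[2, 2]);  // about camera X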

Here is a better visualisation. I'd like to have the angle in the red axes (camera), but instead I am getting it in the blue axes, which are constructed so that Z points at the object.
[image: red (camera) axes vs. blue axes with Z pointing at the object]

the camera’s direction IS its optical axis. you talk as if those were different things.

the rvec of a pose expresses the object’s rotation relative to the camera frame. it does not express the object’s position.

an object’s orientation is different from its position.

this is all explained in most textbooks on computer vision. it might even be explained in opencv documentation. please obtain proper materials.

I am only talking about the angle/rotation. I drew red and blue axes, and my problem is that the object rotation from SolvePnP behaves as if it were measured against the blue axes instead of the red ones. The blue axes are constructed as if Z pointed directly at the object, while the red ones are fixed to the camera. That is the problem I am trying to solve, but it doesn't seem to have any solution.

I added the angle so it is clear what I am getting (blue) vs. what I want to have (red).

[image: same diagram with the angles marked]

Are the magnitudes of the angles shown in the drawing representative of what you are getting in your experiment? In the drawing it looks like solvePnP is giving you an angle of about 20 degrees, and you are expecting 0 degrees.

I’m asking if the magnitudes are representative (as opposed to being exaggerated to illustrate the problem) because if the angles are small, it’s possible that the effect you are seeing is related to the optical axis of the camera not being aligned with the nominal axis of the physical camera / enclosure.

But maybe I'm getting ahead of myself. First, can you confirm that you have calibrated the camera (intrinsics) before calling solvePnP? If so, inspect the camera matrix and see how far the calibrated image center (Cx, Cy) is from the nominal image center ((width-1)/2, (height-1)/2). Consider this delta, along with the calibrated focal length (represented in pixels), to get a sense of how far off the optical axis is from the mechanical axis (for lack of a better term).
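
Something like this (a sketch - camera_matrix is assumed to be your calibrated 3x3 Matrix<double>, and width/height your image size):

    // sketch: how far does the calibrated principal point sit from the
    // nominal image center, expressed as an angle?
    double fx = camera_matrix[0, 0], fy = camera_matrix[1, 1];
    double cx = camera_matrix[0, 2], cy = camera_matrix[1, 2];

    double dx = cx - (width - 1) / 2.0;    // principal point offset in pixels, horizontal
    double dy = cy - (height - 1) / 2.0;   // and vertical

    double degX = Math.Atan2(dx, fx) * 180.0 / Math.PI;  // optical-axis offset, degrees
    double degY = Math.Atan2(dy, fy) * 180.0 / Math.PI;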

Another way to come at this is to move your object left/right and see how the rotation from solvePnP changes. I think you are supposing the angle changes depending on where the object is - try it and see (while only translating). If it does change, then post the code and input images you are using, along with the camera intrinsics / distortion model.
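
If it helps, here's a rough sketch of that experiment with synthetic data (everything below is made up for illustration - the intrinsics, the square, the translations):

    using System;
    using Emgu.CV;
    using Emgu.CV.Structure;
    using Emgu.CV.Util;

    // a unit square facing the camera, translated sideways only.
    // point order is the one SOLVEPNP_IPPE_SQUARE requires.
    var objectPoints = new VectorOfPoint3D32F(new[] {
        new MCvPoint3D32f(-0.5f,  0.5f, 0), new MCvPoint3D32f( 0.5f,  0.5f, 0),
        new MCvPoint3D32f( 0.5f, -0.5f, 0), new MCvPoint3D32f(-0.5f, -0.5f, 0) });

    var K = new Matrix<double>(new double[,] { { 800, 0, 320 }, { 0, 800, 240 }, { 0, 0, 1 } });
    var noDist = new Mat();  // no lens distortion in this synthetic case

    foreach (double x in new[] { -2.0, 0.0, 2.0 })
    {
        var rvecTrue = new Matrix<double>(new double[,] { { 0 }, { 0 }, { 0 } });   // no rotation
        var tvecTrue = new Matrix<double>(new double[,] { { x }, { 0 }, { 10 } });  // slide along X

        var imagePoints = new VectorOfPointF();
        CvInvoke.ProjectPoints(objectPoints, rvecTrue, tvecTrue, K, noDist, imagePoints);

        var rvec = new Mat();
        var tvec = new Mat();
        CvInvoke.SolvePnP(objectPoints, imagePoints, K, noDist, rvec, tvec,
            false, Emgu.CV.CvEnum.SolvePnpMethod.IPPESquare);

        var r = new Matrix<double>(3, 1);
        rvec.CopyTo(r);
        Console.WriteLine($"x = {x}: rvec = ({r[0, 0]:F4}, {r[1, 0]:F4}, {r[2, 0]:F4})");
    }

If solvePnP really measured rotation against the camera-to-object direction, rvec would grow as x moves away from zero; measured against the camera frame, it should stay near (0, 0, 0) for all three positions.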

Not to harp on the bit about calibrating the intrinsics, but it is important. In addition to calibrating the image center and focal length, you will get distortion coefficients. If your lens has noticeable distortion, you will absolutely need to correct your image points before feeding them to solvePnP.

As for the MRE, it might not always be obvious how it will help, but if you provide code and images for what you are doing (however you are getting the results you believe are bogus), it can reveal so many things that might be more difficult to discover through a back and forth conversation.

Thanks for your reply, Steve. I will conduct checks as you suggested.

I said

If your lens has noticeable distortion, you will absolutely need to correct your image points before feeding them to solvePnP.

To clarify, you just need to pass the calibrated camera matrix and distortion coefficients to solvePnP - you don't need to manually correct the points that you pass in. If you do pass in undistorted points, you'll need to pass an empty vector for the distortion coefficients (so the undistortion isn't applied twice).
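
In code, the two options might look like this (a sketch reusing the variable names from your snippet; "undistorted" is a name I made up):

    // Option 1: pass the distortion coefficients and let solvePnP correct the points.
    CvInvoke.SolvePnP(objectPoints, imagePoints, camera_matrix, dist_coeffs,
        rotation_vector, translation_vector);

    // Option 2: undistort the points yourself, then pass EMPTY distortion
    // coefficients so the correction is not applied a second time. Passing
    // camera_matrix as P keeps the undistorted points in pixel coordinates.
    var undistorted = new VectorOfPointF();
    CvInvoke.UndistortPoints(imagePoints, undistorted, camera_matrix, dist_coeffs,
        null, camera_matrix);
    CvInvoke.SolvePnP(objectPoints, undistorted, camera_matrix, new Mat(),
        rotation_vector, translation_vector);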

I hope that is clear.