# Homography from projection matrix

Hello,
I would like to know if it's possible to obtain the 3x3 homography matrix from a 4x4 projection matrix. I obtained the 4x4 projection matrix by multicalibration, and now I need the homography matrix for warping the images.

my math is rusty so beware.

I see two possibilities. You could put four points in the scene, project them through the 4x4 matrix, and then feed the point pairs into `getPerspectiveTransform`.

that’s probably the most straightforward method.

It should also be possible to “squeeze” the 4x4 matrix into 3x3, because in a homography Z is 0 (X-Y plane object)… but that’s Z in object space. You’d need to multiply an object pose matrix onto your 4x4 matrix, or maybe even “insert” some stuff between factors of your matrix… here’s where my experience gets thin. I explored this once, but that was years ago.

Hi,
thanks for your advice. I tried the first one, but it gave bad results for the homography. I don’t really understand the second one. What is an object pose matrix?

Computer graphics: think OpenGL or Direct3D. Object (model) vertices are transformed into world space (and then into camera space) using matrices.

When you say that you tried something and it “didn’t work”, that’s a useless statement, because you haven’t given all the details we need to help you. Anticipate what others need to know. Don’t make it like pulling teeth.

I picked 4 points of the first frame, which are defined by their x and y values, and put them into a vector, let’s call it Vec1. Since I use a stereo camera, I can also get the z value of each point, so each picked point is defined by x, y, z and 1. After that I multiplied my projection matrix, which is a 4x4 matrix, with each picked point of Vec1 in order to obtain the points of the second frame. Then I took the x and y values of the second frame and put them into a vector, Vec2. In the end I gave Vec1 and Vec2 as input to the function getPerspectiveTransform() to obtain the homography. When I used the function warpPerspective(), the image was very distorted after warping.

source code please. words are imprecise.

x and z in pixel coordinates and y in meters? that won’t work.
edit: I meant to say x,y and z…

you should post data (pictures) to work with.

```cpp
// x,y coordinates of my first frame
Point2f p1_1 = { 294.332, 204.907 };
Point2f p1_2 = { 568.93, 153.152 };
Point2f p1_3 = { 568.93, 153.152 };
Point2f p1_4 = { 322.765, 202.582 };
vector<Point2f> Vec1 = { p1_1, p1_2, p1_3, p1_4 };
vector<Point2f> Vec2;
for (int i = 0; i < Vec1.size(); i++) {
    Point3f p3d;
    p3d.x = Vec1[i].x;
    p3d.y = Vec1[i].y;
    p3d.z = depth_img1.at<double>(Vec1[i]); // z coordinate at point x,y
    double arr[4] = { p3d.x, p3d.y, 1., p3d.z };
    Mat p4d(Size(1, 4), CV_64F, arr);
    Mat p_img2 = ProjectionMat * p4d; // compute points of the second frame
    Point2f pt2d = { (float)p_img2.at<double>(0,0), (float)p_img2.at<double>(1,0) }; // pick the x and y coordinates
    Vec2.push_back(pt2d);
}
Mat H = getPerspectiveTransform(Vec2, Vec1);
```

Hi Fuchs, let’s review the basics

There is a 2D space in pixel scale; points in homogeneous coordinates are 3-vectors, often with last element 1.

There is a 3D real space in meters or mm; points in homogeneous coordinates are 4-vectors, often with last element 1.

A homography (aka perspective transformation) is a 3x3 matrix, mapping from 2D to 2D. The resultant last element often differs from 1.

A pose matrix (aka Euclidean transformation, or rototranslation) is a 4x4 matrix, mapping 3D points from one reference system to another. Usually it maps from an arbitrary “world” reference system to the camera reference system.

A projection matrix is a 3x4 matrix, mapping 3D points from an arbitrary reference system in real space to 2D points in the image reference system. The projection matrix mapping 3D points in the camera reference system can be constructed from the 3x3 camera matrix by adding a fourth column of zeros. The projection matrix for another 3D reference system can be constructed by multiplying the above by a 4x4 rototranslation matrix.
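That construction, sketched in plain C++ (hypothetical names, no OpenCV): pad the camera matrix with a zero column, then multiply by the pose matrix.

```cpp
#include <array>

using Mat3  = std::array<std::array<double, 3>, 3>;
using Mat4  = std::array<std::array<double, 4>, 4>;
using Mat34 = std::array<std::array<double, 4>, 3>;

// P = [K | 0] * T : pad the 3x3 camera matrix K with a zero fourth
// column, then multiply by the 4x4 pose (rototranslation) matrix T.
// The zero column cancels T's last row, so k only runs over 0..2.
Mat34 projectionMatrix(const Mat3& K, const Mat4& T) {
    Mat34 P{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 3; ++k)
                P[i][j] += K[i][k] * T[k][j];
    return P;
}
```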

Once you have your 3x4 projection matrix mapping from the world reference system in m or mm to the image coordinate system in pixels, you can get the 3x3 homography matrix by stripping the third column. This homography will map the XY plane in the world reference system to the image.

Stripping the 3rd column is magic; no one really knows how it works, but some have pointed out that if Z is always 0 in the 3D point, that column is useless.
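The magic, sketched in plain C++ (hypothetical names, no OpenCV): for a point on the Z = 0 plane, P·(X, Y, 0, 1)ᵀ never touches P’s third column, so removing it changes nothing.

```cpp
#include <array>

using Mat34 = std::array<std::array<double, 4>, 3>;
using Mat3  = std::array<std::array<double, 3>, 3>;

// Drop column index 2 of the 3x4 projection matrix (the column that
// multiplies Z). For points with Z = 0 that column contributes nothing,
// so the remaining 3x3 matrix is the plane-to-image homography.
Mat3 stripThirdColumn(const Mat34& P) {
    Mat3 H{};
    for (int r = 0; r < 3; ++r) {
        H[r][0] = P[r][0];
        H[r][1] = P[r][1];
        H[r][2] = P[r][3];   // the former fourth column moves left
    }
    return H;
}
```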

I wish you luck, magic, and a solid knowledge of the underlying math.

Let me know if you get it working.

That’s my code. proRMat is my 4x4 matrix:

```cpp
Point2f p1_1 = { 294.332, 204.907 };   // picking four pts of the first frame
Point2f p1_2 = { 568.93, 153.152 };
Point2f p1_3 = { 568.93, 153.152 };
Point2f p1_4 = { 322.765, 202.582 };
vector<Point2f> Vec1 = { p1_1, p1_2, p1_3, p1_4 };
vector<Point3f> p3d;
vector<Point2f> Vec2;
for (int i = 0; i < Vec1.size(); i++) {
    Point3f p_tmp;
    p_tmp.x = Vec1[i].x;
    p_tmp.y = Vec1[i].y;
    p_tmp.z = DimgR.at<double>(Vec1[i]); // get the z coordinate from the depth image
    double arr[4] = { p_tmp.x, p_tmp.y, p_tmp.z, 1. };
    Mat p1_4d(Size(1, 4), CV_64F, arr);

    Mat p2_4d = proRMat * p1_4d;

    Point2f p2_tmp = { (float)p2_4d.at<double>(0,0), (float)p2_4d.at<double>(1,0) };

    Vec2.push_back(p2_tmp);
}
```

Hi Alejandro,
do you start counting at 0 when you say 3rd column?
I have the following pose matrix:
[0.9676308, 0.22736079, 0.10953418, -0.8434149;
0.087446354, 0.10507571, -0.99061203, 9.7156219;
-0.23673572, 0.9681251, 0.081792623, 0.55466884;
0, 0, 0, 1]
And now I stripped out the last row and the column [0.109…, -0.9906…, 0.0081]
But the results aren’t really good.

I don’t know why, but the admin blocks the code I want to post here.

@Fuchs

No, the third column has index 2. That’s the one multiplied by Z, the one we don’t need.

But you must do that on the 3x4 projection matrix, not on the 4x4 pose matrix.

Yes, it’s odd you aren’t allowed to write code. Try beginning each line with 4 spaces.
```cpp
Point2f p1_1 = { 294.332, 204.907 };
Point2f p1_2 = { 568.93, 153.152 };
Point2f p1_3 = { 568.93, 153.152 };
Point2f p1_4 = { 322.765, 202.582 };
vector<Point2f> Vec1 = { p1_1, p1_2, p1_3, p1_4 };
vector<Point3f> p3d;
vector<Point2f> Vec2;
for (int i = 0; i < Vec1.size(); i++) {
    Point3f p_tmp;
    p_tmp.x = Vec1[i].x;
    p_tmp.y = Vec1[i].y;
    p_tmp.z = DimgR.at<double>(Vec1[i]); // get the z coordinate from the depth image
    double arr[4] = { p_tmp.x, p_tmp.y, p_tmp.z, 1. };
    Mat p1_4d(Size(1, 4), CV_64F, arr);
    Mat p2_4d = proRMat * p1_4d;

    Point2f p2_tmp = { (float)p2_4d.at<double>(0,0), (float)p2_4d.at<double>(1,0) };

    Vec2.push_back(p2_tmp);
}
```

And then I tried `Mat H = getPerspectiveTransform(Vec2, Vec1);`

Oh I see, so to obtain the projection matrix I need the camera matrix.
In my example it’s
[2631.7708, 0, 624.08966;
0, 5541.3672, 349.4791;
0, 0, 1]
Now I add a fourth column to that matrix, so I get:
[2631.7708, 0, 624.08966, 0;
0, 5541.3672, 349.4791, 0;
0, 0, 1, 0]
And now I need to transform it into the image coordinate system? Am I right?

@Fuchs

You are right.

There’s something odd in your camera matrix, at least to me. I believe your image is 1280x720 pixels. Are those pixels rectangular? They can be, but they are usually square.

For square pixels, fx and fy have the same value. In your case fy is almost 2*fx. That’s strange to me, but there are a lot of cameras I don’t know about.

Yes, it’s 1280x720 pixels. It is an Intel RealSense D435. The pixels are square, 3 µm x 3 µm pixel size. I obtained the camera matrix by doing the multicalibration method of OpenCV. Do I need to divide fx and fy by the pixel size to get the projection matrix in image coordinates?

Weird. That shouldn’t happen. This forum has some automated spam defenses that have false positives occasionally. A picture of the source will do in this circumstance (usually text is preferred).

@Fuchs

Your problem is about one camera only. If you are into it, try to calibrate only one camera and see if fx = fy.