Perspective transform on single coordinate

I created a transform matrix using findHomography() and used warpPerspective() to warp the image; so far so good.
But now I have a point on the original image, and I want to know the equivalent coordinate on the warped image. I thought it would be as simple as multiplying by the inverse transform matrix:

[x2]          [x1]
[y2] = H^-1 * [y1]
[1 ]          [1 ]

So I get the coords of the contour center in a matrix and then transform:
M = cv2.moments(cB)
# homogeneous column vector [x, y, 1] of the contour centroid
coord = [[int(M["m10"] / M["m00"])], [int(M["m01"] / M["m00"])], [1]]
warpedCoord = np.matmul(np.linalg.inv(h), coord)
warpedCoord = warpedCoord * (1 / warpedCoord[2][0])  # EDIT: this line was missing before
cv2.circle(oImg, (int(warpedCoord[0][0]), int(warpedCoord[1][0])), 5, (255, 0, 0), -1)

However, the results are off:

What am I doing wrong?

From OpenCV: Geometric Image Transformations
The function warpPerspective transforms the source image using the specified matrix:
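dst(x, y) = src( (M11*x + M12*y + M13) / (M31*x + M32*y + M33),
                 (M21*x + M22*y + M23) / (M31*x + M32*y + M33) )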

I tried it out:
ccBT = [[(ih[0][0]*ccB[0][0] + ih[0][1]*ccB[1][0] + ih[0][2]) / (ih[2][0]*ccB[0][0] + ih[2][1]*ccB[1][0] + ih[2][2])],
        [(ih[1][0]*ccB[0][0] + ih[1][1]*ccB[1][0] + ih[1][2]) / (ih[2][0]*ccB[0][0] + ih[2][1]*ccB[1][0] + ih[2][2])],
        [1]]

But the result is still off (I'll post an image in the next reply since I can only upload one image per post).


Here is an image of the result of the second method in the previous reply

in python with numpy, matrix multiplication isn’t simply *. it is np.dot or np.matmul or the @ infix operator.

it is also a bad idea to expand matrix multiplication into huge expressions. treat it as a basic operation, a black box. that “expression” in the docs is nothing more than matrix multiplication followed by the usual division (coordinates are (x,y,1) * w) since a homography happens in a projective space. that expression however is absolutely useless and even harmful to understanding. it is code, it obscures reality.
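for example, applying a homography to a single point is just this (a sketch; the helper name is mine):

import numpy as np

def apply_homography(H, x, y):
    # matrix multiplication, then the usual divide by w
    v = H @ np.array([x, y, 1.0])
    return v[0] / v[2], v[1] / v[2]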

then, please post plain images (not screenshots) and your homography matrix.


EDIT: I think the next post is much clearer; this one is TL;DR

Thanks crackwitz, here is all the relevant info (I believe):

Original quadrilateral ‘Q’:
[[ 233. 276.]
[ 919. 210.]
[1167. 814.]
[ 40. 773.]]

Trued quadrilateral ‘TQ’:
[[ 0. 0.]
[706. 0.]
[706. 706.]
[ 0. 706.]]

Transform Matrix ‘H’:
[[ 2.25471460e+00 8.75573275e-01 -7.67006726e+02]
[ 3.94879833e-01 4.10435705e+00 -1.22480955e+03]
[ 6.88273492e-04 2.26888508e-03 1.00000000e+00]]

Transform Matrix (inverted) ‘invH’:
[[ 4.32488665e-01 -1.64356117e-01 1.30416774e+02]
[-7.77780993e-02 1.74836513e-01 1.54485105e+02]
[-1.21200915e-04 -2.83561997e-04 5.59728642e-01]]

Point ‘P’:
[[536], [423], [1]]

Transformed Point ‘TP’:
[[780.93336898]
[498.24655164]
[ 1. ]]

To get TP from P (this must be wrong):
TP = np.matmul(np.linalg.inv(H), P)
TP = TP * (1 / TP[2][0])

Feeding the original quadrilateral through that formula, the results are all off by varying amounts, so there must be another matrix multiplication I need to do.
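One thing that is easy to rule out is the inversion itself. A quick sanity check (a sketch; it just multiplies the posted H and invH, which should give roughly the identity matrix if the inverse was computed correctly):

import numpy as np

H = np.array([[ 2.25471460e+00,  8.75573275e-01, -7.67006726e+02],
              [ 3.94879833e-01,  4.10435705e+00, -1.22480955e+03],
              [ 6.88273492e-04,  2.26888508e-03,  1.00000000e+00]])

invH = np.array([[ 4.32488665e-01, -1.64356117e-01,  1.30416774e+02],
                 [-7.77780993e-02,  1.74836513e-01,  1.54485105e+02],
                 [-1.21200915e-04, -2.83561997e-04,  5.59728642e-01]])

# H @ invH should be (approximately) the 3x3 identity
print(H @ invH)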

Here is the original img:


Transformed img (it’s an extra 300px wider so you can see the transformed points):

EDIT: I think I figured it out: I was using the inverse of the transform matrix when I should have been using the non-inverted transform matrix! DURR.
I also found this, which was kind of interesting and useful: opencv - Displaying stitched images together without cutoff using warpAffine - Stack Overflow
So yeah I’ll check for sure when I have access to my camera, but I think it’s SOLVED
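In the meantime, a minimal check of that idea (a sketch: cv2.perspectiveTransform applies the matrix and does the divide by w internally; H and P are the values posted above):

import numpy as np
import cv2

H = np.array([[ 2.25471460e+00,  8.75573275e-01, -7.67006726e+02],
              [ 3.94879833e-01,  4.10435705e+00, -1.22480955e+03],
              [ 6.88273492e-04,  2.26888508e-03,  1.00000000e+00]])

# P as a (1, 1, 2) float array, the shape perspectiveTransform expects
P = np.array([[[536.0, 423.0]]])

# forward map: original image -> warped image (no inverse)
TP = cv2.perspectiveTransform(P, H)
print(TP)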

I have made some code that makes the problem clearer: it runs findHomography() and then uses the resulting matrix to transform each corner of the original quadrilateral. I would hope the result would be the trued quadrilateral, but the results are off:

import numpy as np
import cv2

# Original quadrilateral q
q = np.array([[233, 276],
              [919, 210],
              [1167, 814],
              [40, 773]], dtype=np.float32)

# Trued quadrilateral tq
tq = np.array([[0, 0],
               [706, 0],
               [706, 706],
               [0, 706]], dtype=np.float32)

# Transform matrix h
h, maskg = cv2.findHomography(q, tq, cv2.RANSAC)

# Warped, wonky quadrilateral (should be true)
wq = np.zeros((4, 3, 1), dtype=np.float32)

# for each x,y coordinate of the original quadrilateral q:
for i, p in enumerate(q):
    # put the coordinate in homogeneous column form
    v = np.array([[p[0]],
                  [p[1]],
                  [1]], dtype=np.float32)
    # transform
    v2 = np.matmul(np.linalg.inv(h), v)  ########### something wrong
    v2 = v2 / v2[2][0]                   ########### something wrong
    wq[i, :] = v2

print(wq)
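For reference, a minimal fixed version of the loop above, assuming the earlier EDIT is right (multiply by h itself, not its inverse, to go from the original quadrilateral to the trued one):

import numpy as np
import cv2

q = np.array([[233, 276], [919, 210], [1167, 814], [40, 773]], dtype=np.float32)
tq = np.array([[0, 0], [706, 0], [706, 706], [0, 706]], dtype=np.float32)

h, maskg = cv2.findHomography(q, tq, cv2.RANSAC)

wq = np.zeros((4, 3, 1), dtype=np.float32)
for i, p in enumerate(q):
    v = np.array([[p[0]], [p[1]], [1]], dtype=np.float32)
    v2 = np.matmul(h, v)  # forward matrix, no inverse
    v2 = v2 / v2[2][0]    # homogeneous divide
    wq[i, :] = v2

print(wq[:, :2, 0])  # should come out (close to) tq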