Homography with only 2 ArUco markers as a reference

Hello everyone,

I’m just a newbie in computer vision, but I’m trying to improve myself :slight_smile:
I’m currently facing a problem and I don’t know how to deal with it.
I have a target picture that I want to project (via a homography) onto a paper board containing ArUco markers, just like what can be seen on this site: https://learnopencv.com/augmented-reality-using-aruco-markers-in-opencv-c-python
However, I have a strong constraint that only allows me to have 2 markers instead of 4.
One of the markers is at the top-left corner and the other one is somewhere between the 2 right corners of the ABCD frame I want to project onto (see picture below).


What kind of operation should I use to get my 4 frame points in the camera preview from the markers’ reference points, knowing that I of course know the distances between the markers and the points A, B, C, D on my sheet of paper?
Hope my question is clear … :crazy_face:
Thanks in advance

The homography function needs at least 4 point correspondences.
The linked tutorial uses the first corner of every marker to get the 4 points. However, if you take all 4 corners of each marker, you can obtain a total of 8 points from the 2 markers. That should be enough to compute the homography. You just have to know the exact positions of the marker corners w.r.t. the ABCD points.
The downside of this approach is that the corners of a single marker are quite close together, so corner detection errors will have more effect on the result.
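For example (placeholder numbers only, use your own measurements): if each marker is 30 units wide, one has its top-left corner at (0, 0) on the sheet and the other at (150, 0), the 8 sheet points could be written down like this:

import numpy as np

def marker_corners(x, y, size=30.0):
    # corners in the order returned by cv2.aruco.detectMarkers:
    # top-left, top-right, bottom-right, bottom-left
    return [[x, y], [x + size, y], [x + size, y + size], [x, y + size]]

# 8 points on the sheet, in the same order as the detected image corners
sheetPoints = np.array(marker_corners(0, 0) + marker_corners(150, 0),
                       dtype=np.float32)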

Thanks kbarni for your answer!
You’re right, I did not consider using all 8 points of my 2 markers to get a better result.
However, my problem is more how to get A, B, C, D (the destination points for my homography) knowing my 8 points when I move the sheet of paper in front of the camera.
I guess there should be a function that allows me to compute A, B, C, D by taking into account the axes of the detected markers.
Of course I know the distances between the markers and the A, B, C, D points on my sheet of paper, but I need the A, B, C, D positions w.r.t. the markers in the camera picture.
Hope you understand what I mean …

The function you are looking for is findHomography().

By detecting the ArUco markers and extracting the positions of the 8 corners in the image, you get imagePoints. Knowing the actual positions of the corners on the plane (world coordinates), you get objectPointsPlanar.

Mat H = findHomography(objectPointsPlanar, imagePoints);

Using the output matrix H, which contains the homography transform, in combination with the known positions of the ABCD corners (world coordinates) objectPointsABCD, you get the positions of the ABCD points in the image.

perspectiveTransform(objectPointsABCD, imagePointsABCD, H);

Also remember that this solution only applies to planar objects (Z == 0). If you want to place the ArUco markers or the ABCD corners at a different Z, you would need to use the solvePnP() function in combination with projectPoints(). In that case you will also need to calibrate the camera.
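In Python, the same two calls would look roughly like this (just a sketch; both point arrays are assumed to be float32 numpy arrays of shape (N, 1, 2)):

import cv2

# objectPointsPlanar: the 8 marker corners measured on the sheet (any unit)
# imagePoints:        the same 8 corners detected in the camera frame (pixels)
H, mask = cv2.findHomography(objectPointsPlanar, imagePoints)

# objectPointsABCD: the ABCD corners measured on the sheet (same unit)
imagePointsABCD = cv2.perspectiveTransform(objectPointsABCD, H)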

Thanks FilipBaas!!
This indeed seems to be what I’m looking for.
I suppose I only have to provide 4 points (out of the 8 I have from the markers) to the findHomography function for objectPointsPlanar, right?
something like:
np.array([[x1,y1], [x2,y2], [x3,y3], [x4,y4]])
with 1-2 linked to marker1 and 3-4 linked to marker2 …
By the way, imagePoints are defined in pixels, but what unit should be used for objectPointsPlanar?
Provided the markers are perfectly aligned with the [AB] segment, would it be too risky and inaccurate to only use a single marker?

findHomography() is able to calculate the transformation from any number of points (4 or more). The more points you use, the better the result will be. So do not worry about using both ArUco markers; they will make the estimation more robust and accurate.

objectPointsPlanar can be defined in any units you prefer (mm, km, inches, miles); just use the same units for objectPointsABCD, and perspectiveTransform() will then give you the corresponding positions in the image.

OK thanks!
I’m a bit ashamed to say it, but I was not able to correctly pass the input points to the findHomography function.
I tried different things …

To define my planar points corresponding to the markers on the sheet of paper, I tried this:

realWorldMarkerCorners = np.array([[0, 0], [0, 30], [30, 30], [30, 0]])

To get my image points, I tried this:

markerCorners, markerIds, rejectedCandidates = cv2.aruco.detectMarkers(frame, dictionary, parameters=parameters)
index = np.squeeze(np.where(markerIds==85))
markerCorners = np.squeeze(markerCorners[index[0]])

but the function returned an error:
"Bad argument (The input arrays should be 2D or 3D point sets) in findHomography"

I should definitely dig deeper into numpy arrays :thinking:

By the way, as my ABCD rectangle can extend to the left of the marker, can I use negative coordinates for its planar points?

try

realWorldMarkerCorners.shape = (-1, 1, 2)

In the C++ API, a point set is commonly passed as a cv::Mat that is a column vector of multi-channel elements. This reshape gives the array a column-vector shape with 2-channel elements when it is translated from numpy into the cv::Mat expected by the OpenCV function.

The -1 means “fill in the number of rows”, the 1 means one column, and the 2 is the number of “channels”.

OpenCV is a bit peculiar there.
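Applied to the snippets above, it could look roughly like this (frame, dictionary and parameters are assumed to be set up as in the earlier post; marker ID 85 and the 30-unit marker size are taken from it):

import numpy as np
import cv2

# Marker corners on the sheet, as float32 in the (N, 1, 2) shape OpenCV expects
# (their order must match the order of the detected image corners below)
realWorldMarkerCorners = np.array([[0, 0], [0, 30], [30, 30], [30, 0]],
                                  dtype=np.float32)
realWorldMarkerCorners.shape = (-1, 1, 2)

# Detected corners of the marker with ID 85, reshaped the same way
markerCorners, markerIds, rejectedCandidates = cv2.aruco.detectMarkers(
    frame, dictionary, parameters=parameters)
index = np.squeeze(np.where(markerIds == 85))
imagePoints = np.squeeze(markerCorners[index[0]]).astype(np.float32)
imagePoints.shape = (-1, 1, 2)

H, mask = cv2.findHomography(realWorldMarkerCorners, imagePoints)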

Hi crackwitz,

You were right, this solved my problem!! :clap:
I did not understand the whole thing, but it works.
I finally decided to use 2 markers instead of 1 and it is much more stable now.

So to sum up, in my case I use:

  1. findHomography to get matrix1 for “real world markers” to “image markers”
  2. perspectiveTransform to get the projection of “my real world ABCD” to “image world ABCD”
  3. findHomography again to get matrix2 for “my picture to project” to “image world ABCD”
  4. warpPerspective to apply my picture to the current frame based on matrix2

I did not think I would need 2 x findHomography but I don’t see how to do it differently …
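For steps 3 and 4, a rough sketch of what I do (overlayImage is just a placeholder name for the PNG I want to project, loaded as a 3-channel BGR image; frame and imagePointsABCD come from the earlier steps):

import numpy as np
import cv2

h, w = overlayImage.shape[:2]
# Corners of the picture to project; the order must match A, B, C, D
pictureCorners = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]],
                          dtype=np.float32).reshape(-1, 1, 2)

# Step 3: homography from the picture to its place in the camera frame
matrix2, _ = cv2.findHomography(pictureCorners, imagePointsABCD)

# Step 4: warp the picture into the frame and paste it over the camera image
warped = cv2.warpPerspective(overlayImage, matrix2,
                             (frame.shape[1], frame.shape[0]))
mask = np.zeros(frame.shape[:2], dtype=np.uint8)
cv2.fillConvexPoly(mask, imagePointsABCD.astype(np.int32).reshape(-1, 2), 255)
frame[mask > 0] = warped[mask > 0]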

However, I’m now encountering 2 new problems: :pensive:

  1. the frame rate is just horrible (probably due to a huge processing time for each frame)
  2. the picture that is being projected with warpPerspective (which is a PNG) is completely chopped, just as if the resolution were very low

Just a clarification for point 1:
I use a Raspberry Pi 3B in console mode (to lower CPU usage) and Python with the “picamera” package for preview and overlays, and OpenCV for image processing, of course.

Hi there,

Any updates on that please?
I’m still stuck on the problems described in my previous post :confused:

Thanks :wink: