Influence of interpolation on measuring with homography

Hello,

I am aiming to measure objects within a plane.
So starting with this:

[image: original photo of the scene]

I am able to transform it to this:

[image: perspective-warped result]

(For testing purposes I used an arbitrary quadrilateral with known corner points.)

So now I am able to measure angles and distances in this plane. (At least the measuring process is simplified; I guess one could also measure in the original image with enough effort and maths.)

Back to my topic: the “cv.warpPerspective” function interpolates pixels. (The homography maps pixel coordinates, which can be considered integers, to real numbers (floats). So e.g. pixel (102, 506) gets “warped” to (300.5, 736.9), and the values at the integer positions 300 and 737 then have to be interpolated from the surrounding “neighbors”.)
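For reference, what I am doing is roughly this (a minimal sketch; the file name and all coordinates are made up):

```python
import cv2 as cv
import numpy as np

img = cv.imread("input.png")  # made-up file name

# Four reference points in the source image and their target positions
# in the rectified plane (all coordinates here are made up).
src = np.float32([[102, 506], [870, 480], [910, 1040], [130, 1085]])
dst = np.float32([[0, 0], [800, 0], [800, 600], [0, 600]])

H = cv.getPerspectiveTransform(src, dst)

# warpPerspective maps each *destination* pixel back through the
# inverse homography and interpolates in the source image; the
# interpolation method is chosen via the flags argument.
for flag in (cv.INTER_NEAREST, cv.INTER_LINEAR,
             cv.INTER_CUBIC, cv.INTER_LANCZOS4):
    warped = cv.warpPerspective(img, H, (800, 600), flags=flag)
```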

Now my question(s):
(1) Since the picture gets “stretched” more in some regions than in others, I am afraid that this interpolation introduces a measurement error.

(2) Does anyone have experience with how this influences the measurement?

(3) Does it introduce errors at all?

(4) And which interpolation method would be the most exact/preferred, and why?

(5) Which resolution should I choose for the final picture?

My guess would be that the maximum error is a one-pixel “square” region, because that’s the “uncertainty” of the warping ((123, 234) → (100.6, 300.2) could land at 100/101 and 300/301). Might that be true?

Some comments

  1. If the intention was to warp the image so the printed page was rectangular after warping, the result wasn’t very good. I’m assuming the four squares near the corners of the paper form a rectangle. The angles at the top left/bottom right were 85 deg / 95 deg. I’m sure you know that, but it leads to my next comment.
  2. The perspective warp itself is probably a much larger source of error than interpolation. I would focus on getting good data for your perspective warp and not worry too much about the effects of interpolation at this point.
  3. Ideally you should be locating your features in sub-pixel units, not whole pixels. I’m not sure your quad vertices would work well with cornerSubPix (try it?), but there are ways to get better than mouse-click resolution when estimating the image locations of the vertices. (Fit lines to the edges and compute their intersections? See the sketch after this list.)
  4. I would try to compute my perspective transform from many more than 4 points if possible. Use some method that discards outliers and computes the transform using only the inliers.
  5. If you can find the features in the original (perspective-distorted) image and then transform just the point locations (instead of warping the whole image and finding features there), you can avoid any error that might be induced by the image warp. Of course you have to be careful that your feature localization works well (accurately) in the perspective-distorted space.
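A minimal sketch of points 3 and 4 combined (the image name, the rough clicks, and the reference coordinates are all placeholders):

```python
import cv2 as cv
import numpy as np

gray = cv.imread("scene.png", cv.IMREAD_GRAYSCALE)  # placeholder image

# Point 3: refine rough (e.g. mouse-clicked) vertex estimates to
# sub-pixel accuracy; cornerSubPix iterates on the local gradients.
rough = np.float32([[102, 506], [870, 480],
                    [910, 1040], [130, 1085]]).reshape(-1, 1, 2)
criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 40, 0.001)
refined = cv.cornerSubPix(gray, rough, (11, 11), (-1, -1), criteria)

# Point 4: with many correspondences (N >> 4), estimate the homography
# robustly; RANSAC discards outliers and fits using only the inliers.
src_pts = refined.reshape(-1, 2)                                # N x 2
dst_pts = np.float32([[0, 0], [800, 0], [800, 600], [0, 600]])  # N x 2
H, inlier_mask = cv.findHomography(src_pts, dst_pts, cv.RANSAC, 3.0)
```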

Once again: that sheet of paper isn’t lying flat at all.

Thank you for pointing that out, but please consider these pictures just a “proof of concept”. These won’t be my final images! I am going to use a detection algorithm.

Hello Steve,

thank you very much for your detailed reply.

1: Yes, that was my intention. These pictures were just meant as an example, but I guess I should have chosen better ones. Which software did you use to calculate the angles?

2: Okay, so I will focus on the accurate detection of the points of interest and not dig further into interpolation.

3: Is there any feature you could recommend from your experience? Best would be one that is stable and already implemented in OpenCV. Maybe 4 ArUco markers in the corners?

4: I am planning on using a big ChArUco board. This gives me many of these desired references.
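For reference, the corner detection I have in mind would look roughly like this (a sketch assuming the OpenCV >= 4.7 aruco API; the dictionary and board dimensions are made up):

```python
import cv2 as cv

# Made-up board layout: 7x5 squares, 40 mm squares, 30 mm markers.
aruco_dict = cv.aruco.getPredefinedDictionary(cv.aruco.DICT_5X5_100)
board = cv.aruco.CharucoBoard((7, 5), 0.04, 0.03, aruco_dict)

img = cv.imread("scene.png", cv.IMREAD_GRAYSCALE)
detector = cv.aruco.CharucoDetector(board)
charuco_corners, charuco_ids, marker_corners, marker_ids = \
    detector.detectBoard(img)
# charuco_corners come back at sub-pixel precision and can serve as
# the many reference points for the homography.
```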

Your point 5 sounds very interesting, but I wasn’t able to follow all of it. As far as I understand, you mean that I should find the features, calculate the homography, and apply it to the point locations, without undistorting the whole image in the first place?
Is that right? Could you explain it, maybe using an example?
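My current understanding in code would be something like this (a minimal sketch; `H` stands in for the homography estimated from the references, and the feature coordinates are made up):

```python
import cv2 as cv
import numpy as np

H = np.eye(3)  # placeholder for the homography from the references

# Feature locations found in the ORIGINAL, perspective-distorted image.
pts = np.float32([[412.3, 518.7], [655.1, 530.2]]).reshape(-1, 1, 2)

# Transform only the point coordinates into the rectified plane; no
# image resampling takes place, so no interpolation happens at all.
rectified = cv.perspectiveTransform(pts, H)

# Measure directly between the transformed points.
dist = np.linalg.norm(rectified[0, 0] - rectified[1, 0])
```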