Pac-man Effect when using warpPerspective

I’ve bumped your user level by one. maybe that helps with the post limit.

if you need quick calibration, you could get multiple boards, plus some sticks or long and narrow boards, to help you fix those charucos in a defined arrangement relative to each other.

that’s then like one big charuco board, so the views no longer need to overlap so much that a whole board is visible in each overlapping region.

or maybe get a tarp and have the pattern printed on it, or stick your (larger) calibration patterns onto it.

I’m not affiliated with them but maybe hit up calib.io because making calibration patterns is their business.

We’ll find a bigger version if we manage to get a working proof of concept. We don’t need too much precision though, as long as the end results (stitched images) overlap enough that lines touch each other (even by one pixel). That seems to be possible even with the current board; the issue is indeed the pacman effect. The problem is that we cannot manually define the polygon for the right part of the output, as the camera position might suddenly change.

I’m just trying to play around with the current (v4.9) aruco in OpenCV… and I just ran into an issue that shouldn’t happen. might be due to the Python API, I don’t know. the docs are very unhelpful about this. I submitted a bug. let’s see what they’ll make of it. OpenCV’s aruco module being broken in random ways has been its perpetual state for years. one thing gets fixed, another thing gets broken again. and the docs… oh, the docs…

that’s with the “new” API anyway. the “deprecated” one works just fine. :man_facepalming:

I got far enough with the “old” aruco API that I have the board’s pose. now I need to find a way to generate a homography from that.
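
for reference, a minimal sketch of that step, assuming rvec/tvec from the board pose estimate and a 3x3 camera matrix K (these names are placeholders, not from the original post): for points on the board plane Z = 0, the projection collapses to a homography.

import cv2
import numpy as np

# hypothetical inputs: rvec, tvec = board pose, K = 3x3 camera matrix
R, _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix
# for board points with Z = 0: s*(u,v,1)^T = K @ [r1 r2 t] @ (X,Y,1)^T,
# so K @ [r1 r2 t] is the homography from the board plane to the image
H_board_to_image = K @ np.column_stack((R[:, 0], R[:, 1], tvec.reshape(3)))
H_image_to_board = np.linalg.inv(H_board_to_image)  # rectifies the image onto the board plane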

I assumed some moderate focal length of 2000 pixels for your pictures. I can’t know if that’s accurate.
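
in case anyone wants to reproduce that: a guessed camera matrix under this assumption, with the principal point at the image center (img is a placeholder for the input picture).

import numpy as np

f = 2000.0  # assumed focal length in pixels, not calibrated
h, w = img.shape[:2]
K = np.array([[f, 0.0, w / 2],
              [0.0, f, h / 2],
              [0.0, 0.0, 1.0]])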

I found the writeup I wrote about a specific issue that comes up when generating homographies from 3D rotations. since I juggle 4x4 matrices there, I think some of the thinking can be transferred to generating homographies under arbitrary 3D transformations.
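
for reference, the core relation in the pure-rotation case: a camera rotation R induces the homography H = K · R · K⁻¹ between the two views. a sketch, with K and R as placeholders:

import numpy as np

# homography between two views that differ only by a camera rotation R,
# sharing the same 3x3 camera matrix K
H = K @ R @ np.linalg.inv(K)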

I will give that a try. Thank you so much!

Edit:
Using the rotation matrix to calculate the homography does not work for this use case, sadly.

I also gave vanishing points a go, but I doubt that’s an option as not all cameras are parallel to the horizon.

I may have explained that wrong. I’m not even sure that vanishing points (of lines) are an adequate explanation. perhaps “vanishing line” (of a plane) comes closer to the truth.

imagine 2D space. a camera and its viewing frustum just look like this ASCII “art”:

<

this camera maps 2D space onto 1D space.

imagine the camera looking at a plane. the 2D equivalent of a plane (which divides 3D space in two) is a line (which divides 2D space in two). the camera looks at an infinite line.

<   /

usually, that plane/line would intersect the viewing frustum entirely ahead of the camera.

if the plane/line is positioned just right, it also intersects the frustum behind the camera. that is the case whenever you can see the “horizon” in the picture.

<   _

that’s when that “pacman” thing happens. what is supposed to be behind the camera is, mathematically, mapped just the same as everything in front of it.

in computer graphics, those cases are handled by figuring out what part of the plane (line) lies in front of the camera, and only drawing that part.

anyway, I think there are ways of managing this issue. they just need to be figured out. it might be as simple as investigating the homography to derive the line splitting the image into “ahead of” and “behind” the camera. masking the “behind” part out of the output is then just an exercise.
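
a sketch of that idea, assuming a 3x3 homography H mapping source pixels to the output (H and src are placeholders): the third row of H gives each source pixel’s homogeneous w. the line w = 0 is the “horizon”, and w < 0 is “behind the camera”, so the source can be masked before warping.

import numpy as np

h, w = src.shape[:2]
xs = np.arange(w).reshape(1, -1)
ys = np.arange(h).reshape(-1, 1)
ww = H[2, 0] * xs + H[2, 1] * ys + H[2, 2]  # homogeneous w of each source pixel under H
masked = src.copy()
masked[ww <= 0] = 0  # blank everything that would map to behind the camera, then warp this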


I had the same problem and found this post on Google. I will give a short answer below for anyone who comes here like I did.

The cause

The part of the source image that goes behind the camera may get projected to the other side of the output image.

The solution

The ultimate solution would be for cv2.warpPerspective to include an option to exclude the points that get projected to the other side of the image (or precisely, those points whose third homogeneous coordinate becomes negative under the transform).

The workaround

Mask out the part of the dst image that gets projected to the other side.

Below is one way to do it in Python.

import cv2
import numpy as np

warp = ...  # a 3x3 matrix mapping src to dst
dst = cv2.warpPerspective(src, warp, size)
h, w = dst.shape[:2]
inv = np.linalg.inv(warp)  # the third row of the inverse gives each dst pixel's homogeneous w
ww = inv[2, 0] * np.arange(w).reshape(1, -1) + inv[2, 1] * np.arange(h).reshape(-1, 1) + inv[2, 2]
dst[ww <= 0] = 0  # w <= 0 means the corresponding source point lies behind the camera
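
Note that a homography is only defined up to scale, including a negative one, so which side of the dividing line counts as “in front” depends on the overall sign of the matrix. If the wrong half gets masked, flip the inequality.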

Hi! I know it’s been a while since this post was created and since your reply. But if anyone has the same issue or is wondering what the solution was: I’ve published the code and an explanation of the approach on GitHub: