Pac-man Effect when using warpPerspective


I’m using a ChArUco board to calibrate three cameras. The input of the cameras will be used to create a stitched top-down view. So far, everything seems to work fine, except for one part: the perspective warping.

Whenever I try to warp the images, I seem to get the “pacman effect”.

Pacman Effect: The belief that someone attempting to go over the edge of the flat Earth would teleport to the other side.

I have made sure that all points are in the right order, so that shouldn’t be the problem.

You can check the code at Calibration · GitHub.

Would greatly appreciate any insights or suggestions on what could be causing this problem.
Thanks in advance!

An example of what the points look like:

The source points are green, and the destination points are red.

the homography matrix got mangled.

that usually happens because:

  • not enough features
  • bad features
  • bad matching
  • bad filtering of matches
  • bad homography estimation from matches


The code currently uses 4 corners (for both the source and destination) to calculate the homography. The corners I have seem to be correct and in the right order. Is this an issue on my side, or is there a bug within OpenCV causing this behaviour?

that was a lot of code and none of it looks like a war crime, as I’m flying over it at high altitude.

in that picture with the red and green dots, I see green dots 0,1,2,3, but only red dots 1,2,3. where’s red dot 0?

do those pictures even belong together? I see a circular painted line in one picture but not the other. homographies don’t turn straight lines into curves, or curves into straight lines.

The red dot 0 is below the green dot 0. They are indeed different images (same input size, same camera position, just a different location). The same issue happens with the other image.

This is the image that is warped in the first example. I also just noticed it’s a different camera, but each camera gets its own matrix. The picture with the corners is just there to show the transformation being done.

taking the ground view picture with the charuco and the four pairs of points… IF you mapped from one set to the other, you’d get a possibly wildly warped view, but it will have the board presented top-down in the indicated area.

if you applied that homography the other way (inverted), or the pairing was flipped (red → green or green → red), then at least you’d see part of the board and part of the road, in the other quad.

your first exercise should be to take a picture of the board or anything else that’s rectangular, then pick/define the corners of the quadrilateral in that picture, then define the corners in a desired top-down view (i.e. literally rectangular), and then figure out the code to perspective warp that so it works.

assuming it’s some vehicle with a bunch of cameras looking out at weird angles, I’d recommend taping a big grid on the ground, and using that for “calibration” and fusion of the perspectives into one big ground plane composite.

then take pictures from all cams, find those known points, and pair them up with points in the model ground plane.

the point of this is to get points that properly span the whole view. that little charuco you got there will give you a lot of uncertainty because its image (in the camera picture) is so small.

the cameras should be calibrated intrinsically and the pictures unwarped (lens distortion), or the optics should be tolerably close to good.

I’ll give that picture with the charuco a try. I think that’s got enough info to get a halfway decent ground plane out of it.

As you can see in the top right, the board itself is also warped. This is likely due to the board not being great, but it should still work somewhat decently.
Since I can only upload one image at a time, here is part 1.

Part 2, the same image.

The warped version of this image (see post 2, the one with the dots):

Regarding the charuco board, it’s a proof of concept. Once we are sure a system like this can work, we will invest in a better and bigger board.

This is possible:

The board warps of course if you’re just a pixel off with your picked points. you look at that board from near ground level. that’s a severe warp, numerically very difficult. you need better data. wider baseline, i.e. your reference points need to span the view a lot better.

I’d recommend running ECC refinement on this homography. You will need to prepare a picture that shows the board in its intended view, as clean data, with all the arucos matching the ones on the cardboard. the ECC refinement will then gradient-descend the homography so the warped picture matches the model picture.

I also suspect that you didn’t tape the printout onto the cardboard across the printout’s entirety, or that the cardboard is warped even just a little, or both.

use double-sided tape for this, not any glue that would soften the paper up and make it lose dimension.

get cardboard from the hardware store. not plywood, that’s too unruly thanks to its grain. MDF is good. other materials are also good. plexiglass/acrylic can be quite flat. or literal glass. but that’s heavy and may break easily.

So the way I do it right now is simply not possible? Also, what is causing it to have that “pacman effect”?

the pacman effect is a mathematical fact of life. at the center of this X shape, that’s a vanishing point. the “other” side just results from the math.

this “other side”, if you follow it back into the picture, is literally not on your ground plane.

you’d want to place the vanishing point and “other side” off-picture, or mask it off before compositing.
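That mask can be computed directly from the homography, without guessing. In homogeneous coordinates a source pixel (x, y) maps with denominator w = h31·x + h32·y + h33; the line w = 0 is what gets sent to infinity (the vanishing line), and everything with w < 0 is the “other side”. A sketch with a made-up H whose third row puts that line inside the frame:

```python
import numpy as np

# Hypothetical homography -- substitute your own. The third row is what
# matters here: w = 0.004*y - 1.2 changes sign at row y = 300.
H = np.array([[1.0, 0.3, -40.0],
              [0.1, 1.8, -60.0],
              [0.0, 0.004, -1.2]])

h31, h32, h33 = H[2]
height, width = 720, 1280
ys, xs = np.mgrid[0:height, 0:width]

# Sign of the homogeneous denominator for every source pixel.
w = h31 * xs + h32 * ys + h33
valid_mask = (w > 0).astype(np.uint8) * 255  # 255 where the warp is sane
```

Warp `valid_mask` with the same H (or multiply it into the source first) and the pacman half drops out of the composite.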

Is it possible to calculate the vanishing point? Or calculate where the “properly” warped part is?

the way you do it right now is causing you very very hard problems. I would recommend revising the approach.

you could calculate homographies from poses, and estimate the poses using a different pattern that’s 3D. that’s just arucos placed onto the three walls of a perfect corner.

calculating a homography from a camera’s pose is something I managed to do at one time but I’m not practiced with it. somewhere on SO, there’s an answer by me. I might see if I can find it.

Yeah, I tried calculating it from poses before, and it didn’t work out well. Hence why I went for this approach of simply getting some corners and warping them to the real-world corners. While I do get the pacman effect, the part on the ground plane does seem to be correct. I don’t need the results to be perfect, as long as the lines are somewhat on top of each other.

It seems I’m limited to X amount of posts per day. So I cannot make any further replies now.

The issue is that we might need to re-calibrate multiple times, and even within a very short time span sometimes. So manually defining the corners is not an option, hence the charuco board. Getting an improved version of the charuco board is definitely an option though!

pacman effect: you could manually define the polygon for the “right” half of the output, for each output. do that generously. the black area needs handling for the composite anyway. more on that if you need it. in short: extend the source to 4-channel, 4th channel all 255, then warp it all. the warp result will have a 4th channel being a pretty good alpha channel that you can use for compositing.

if you aren’t married to the handy charuco board, I really would urge you to get a roll of white tape (or even retro-reflective!) and a tape measure or other ranging instrument. anything that helps make right angles is also helpful (laser line thingies).

to start, find a flat surface and put down four little marks in a 10x10 meter square. make sure the angles are right angles (diagonal is 14.1… meters). put the vehicle in the middle. see if the cameras can pick those little dots up (or they’re too small/viewed too shallowly). see if that spans enough, or too much, of the view. adjust. then subdivide so that each camera sees points spanning a good area of its view. they needn’t be in a rectangle either. any four points that you can identify are good.