Perspective Transformed Image Edge Becomes Crisp

Hi~!

I am trying to apply a perspective transform to an image.

I followed the exact steps on this tutorial page on OpenCV.

If you look at the warped image closely, diagonal lines become really crisp.

I used the very same code in my project:

// srcImgData is an ImageData object (e.g. from a canvas 2D context);
// width/height give the output size, and srcCoords/dstCoords are flat
// [x1, y1, x2, y2, x3, y3, x4, y4] arrays holding the four corner points
const src = cv.matFromImageData(srcImgData);
const dst = new cv.Mat();
const dstSize = new cv.Size(width, height);
const srcCoordsConverted = cv.matFromArray(4, 1, cv.CV_32FC2, srcCoords);
const dstCoordsConverted = cv.matFromArray(4, 1, cv.CV_32FC2, dstCoords);
const transformData = cv.getPerspectiveTransform(srcCoordsConverted, dstCoordsConverted);

cv.warpPerspective(src, dst, transformData, dstSize, cv.INTER_LINEAR, cv.BORDER_CONSTANT, new cv.Scalar());

and got this image

and this one also has very crisp edges.

Is there a way to make the edges smoother??

You can always tailor the code to meet your preferences, for example by changing the interpolation method from linear to INTER_LANCZOS4, which should give smoother results, but is also slower.
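For illustration, a minimal sketch against your snippet above (same variables, untested): only the interpolation flag changes.

// same call as before, only with Lanczos interpolation instead of bilinear
cv.warpPerspective(src, dst, transformData, dstSize, cv.INTER_LANCZOS4, cv.BORDER_CONSTANT, new cv.Scalar());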

Hmm… cv.INTER_LANCZOS4 doesn’t seem to change the quality that much :frowning:

“crisp” is somewhat ambiguous here. “crisp” is usually a desirable quality, the opposite of “blurry”. when you speak of jagged/aliased lines, “crisp” is not the word you should use if you want to get your point across.

OpenCV does blend with a borderValue. the default is 0, so the image would have nice antialiased edges against a black background.

I think the javascript variant of OpenCV usually deals with RGBA images… so an all-zero borderValue would represent transparent black. you need to try transparent white, i.e. (255, 255, 255, 0). that, blended with the source image, looks better against a white background.
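as a minimal sketch against your snippet above (assuming the source really is 4-channel RGBA; untested), only the borderValue changes:

// transparent white: white color channels, zero alpha
const border = new cv.Scalar(255, 255, 255, 0);
cv.warpPerspective(src, dst, transformData, dstSize, cv.INTER_LINEAR, cv.BORDER_CONSTANT, border);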

OpenCV is not alpha-aware. it does not understand alpha blending, or what an image needs to be like to be alpha-blendable. it merely does interpolation on all channels.

the issue is that in these edge pixels, “background” shouldn’t be mixed in. only the alpha channel should change. OpenCV doesn’t do that, it mixes all channels. that means whatever borderValue you have will get mixed into those edge pixels.

that is why you got some black shining through in those edge pixels.

some python, not sure if that’s illuminating or not. just imagine that this happens to BOTH the alpha channel AND the color channels.

>>> import numpy as np, cv2 as cv
>>> im = np.full((4,4), 99, 'uint8'); M = np.eye(3); M[0:2,2] = (2.5,2.5); cv.warpAffine(src=im, dsize=(10,10), M=M[:2])
array([[ 0,  0,  0,  0,  0,  0,  0,  0,  0,  0],
       [ 0,  0,  0,  0,  0,  0,  0,  0,  0,  0],
       [ 0,  0, 25, 50, 50, 50, 25,  0,  0,  0],
       [ 0,  0, 50, 99, 99, 99, 50,  0,  0,  0],
       [ 0,  0, 50, 99, 99, 99, 50,  0,  0,  0],
       [ 0,  0, 50, 99, 99, 99, 50,  0,  0,  0],
       [ 0,  0, 25, 50, 50, 50, 25,  0,  0,  0],
       [ 0,  0,  0,  0,  0,  0,  0,  0,  0,  0],
       [ 0,  0,  0,  0,  0,  0,  0,  0,  0,  0],
       [ 0,  0,  0,  0,  0,  0,  0,  0,  0,  0]], dtype=uint8)

>>> im = np.full((4,4), 99, 'uint8'); M = np.eye(3); M[0:2,2] = (2.5,2.5); cv.warpAffine(src=im, dsize=(10,10), M=M[:2], borderMode=cv.BORDER_CONSTANT, borderValue=99)
array([[99, 99, 99, 99, 99, 99, 99, 99, 99, 99],
       [99, 99, 99, 99, 99, 99, 99, 99, 99, 99],
       [99, 99, 99, 99, 99, 99, 99, 99, 99, 99],
       [99, 99, 99, 99, 99, 99, 99, 99, 99, 99],
       [99, 99, 99, 99, 99, 99, 99, 99, 99, 99],
       [99, 99, 99, 99, 99, 99, 99, 99, 99, 99],
       [99, 99, 99, 99, 99, 99, 99, 99, 99, 99],
       [99, 99, 99, 99, 99, 99, 99, 99, 99, 99],
       [99, 99, 99, 99, 99, 99, 99, 99, 99, 99],
       [99, 99, 99, 99, 99, 99, 99, 99, 99, 99]], dtype=uint8)

Wow!!

That’s awesome! Thanks for the suggestion!

Also, thanks for explaining the concept of “crisp” in OpenCV.

It works great now :slight_smile:

Hmm… not related to this topic, but

in the example below,

let src = cv.imread('canvasInput');
let dst = new cv.Mat();
let dsize = new cv.Size(src.rows, src.cols);
// (data32F[0], data32F[1]) is the first point
// (data32F[2], data32F[3]) is the second point
// (data32F[4], data32F[5]) is the third point
// (data32F[6], data32F[7]) is the fourth point
let srcTri = cv.matFromArray(4, 1, cv.CV_32FC2, [56, 65, 368, 52, 28, 387, 389, 390]);
let dstTri = cv.matFromArray(4, 1, cv.CV_32FC2, [0, 0, 300, 0, 0, 300, 300, 300]);
let M = cv.getPerspectiveTransform(srcTri, dstTri);
// You can try more different parameters
cv.warpPerspective(src, dst, M, dsize, cv.INTER_LINEAR, cv.BORDER_CONSTANT, new cv.Scalar());
cv.imshow('canvasOutput', dst);
src.delete(); dst.delete(); M.delete(); srcTri.delete(); dstTri.delete();

how do I wait until the process is done?

How do I know if OpenCV finished drawing? I don’t think putting await works here.

await cv.warpPerspective(src, dst, M, dsize, cv.INTER_LINEAR, cv.BORDER_CONSTANT, new cv.Scalar());

it’s simple serial code. it’s done when it’s done. one statement executes after another.
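to illustrate, a minimal sketch with the same variables as your example (untested): whatever you put after the warp call only runs once the warp has returned, so there is nothing to wait for.

cv.warpPerspective(src, dst, M, dsize, cv.INTER_LINEAR, cv.BORDER_CONSTANT, new cv.Scalar());
// warpPerspective is synchronous: by the time this line runs, dst is fully computed
cv.imshow('canvasOutput', dst);
console.log('warp finished', dst.cols, dst.rows);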

no, await doesn’t work there.

that’s a general javascript question. if you need help with the semantics of javascript, better ask where people deal with javascript specifically.


hmm… alright :slight_smile:

but is there a way to find out when it’s done?

Well, hmm, whether async/await works with warpPerspective really depends on how the function is written, so I thought this might be the right forum to ask :frowning:

Anyway, thank you for your help! The crisp part worked out really fine! :smiley: