Canny differences between Python & Android

I’m trying to use Canny Edge detection in an Android App.

I tested it in Python first using the steps below:

import cv2 as cv
import numpy as np

# img is the input image (RGB), loaded elsewhere
gray = cv.cvtColor(img, cv.COLOR_RGB2GRAY)
kernel = np.ones((5, 5), np.uint8)
dilate = cv.morphologyEx(gray, cv.MORPH_CLOSE, kernel, iterations=30)
blurred = cv.GaussianBlur(dilate, (5, 5), 0)
edges = cv.Canny(blurred, 50, 150)

This resulted in a solid outline of the piece of paper.

However, when I run similar steps in the Android app in Kotlin using this code:

val tmp = Mat(tempImage.width, tempImage.height, CV_8UC1)
Utils.bitmapToMat(tempImage, tmp)
// Convert to grey
cvtColor(tmp, tmp, COLOR_RGB2GRAY)
// Dilate (morphological close)
val kernel = Mat.ones(5, 5, CvType.CV_8U)
val point = Point(-1.0, -1.0)
morphologyEx(tmp, tmp, MORPH_CLOSE, kernel, point, 30)
// Blur
GaussianBlur(tmp, tmp, Size(5.0, 5.0), 0.0)
// Canny
Canny(tmp, tmp, 50.0, 150.0)

The resulting image is as follows:

[Android output image]

which is a dashed outline of the page instead of a solid one.

The images up to the Canny step are identical across the two.

Python and Android are both using OpenCV 4.8.

W/H are swapped; it should be:

Mat(H, W, type)

since OpenCV Mats take (rows, cols), i.e. (height, width). Also, this would throw an exception if tmp really were CV_8UC1 only (Utils.bitmapToMat probably reallocated it!).
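For illustration, a minimal Kotlin sketch of what that would look like, assuming tempImage is the source Bitmap as in the code above:

import org.opencv.android.Utils
import org.opencv.core.Mat
import org.opencv.imgproc.Imgproc

// Minimal sketch, assuming tempImage is the source Bitmap.
// Let Utils.bitmapToMat allocate the Mat itself; it fills it as CV_8UC4 (RGBA),
// so a pre-sized CV_8UC1 Mat would get reallocated anyway.
val tmp = Mat()
Utils.bitmapToMat(tempImage, tmp)

// If you do pre-allocate, OpenCV Mats take (rows, cols) = (height, width):
// val tmp = Mat(tempImage.height, tempImage.width, CvType.CV_8UC4)

// bitmapToMat yields RGBA, so convert to grey with the 4-channel code:
Imgproc.cvtColor(tmp, tmp, Imgproc.COLOR_RGBA2GRAY)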

Thanks, it didn’t make a difference.
Here are the image comparisons, step by step.

An MRE is required. A screenshot of a bunch of plots may be a tolerable illustration, but it’s neither input nor output data.

The axis labels in your small screenshot of that plot are barely readable. I can just make out that your picture is about 3000 by 4000 pixels.

Your code does not contain any means to display or save your data. Since you didn’t disclose how you made your picture, we are left to guess.

I’m guessing you used imshow(), and you’re wondering why the single-pixel lines in your high-resolution image don’t show up completely on your screen, which is smaller than the picture.

Yes, that’s a deficit of imshow(): it uses nearest-neighbor resampling, the cheapest option. When downscaling, that will entirely drop some pixel rows and columns of the image to be displayed.
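For a screen-sized preview that keeps thin lines visible, you can downscale explicitly with area resampling before displaying. A rough Kotlin sketch for the Android side (the helper name and the 1000-pixel limit are arbitrary assumptions):

import org.opencv.core.Mat
import org.opencv.core.Size
import org.opencv.imgproc.Imgproc

// Hypothetical helper: shrink a full-resolution edge map for preview.
// INTER_AREA averages blocks of source pixels, so single-pixel edges fade
// instead of vanishing the way they do with nearest-neighbor resampling.
fun previewScale(edges: Mat, maxSide: Double = 1000.0): Mat {
    val scale = maxSide / maxOf(edges.width(), edges.height())
    val preview = Mat()
    Imgproc.resize(edges, preview, Size(), scale, scale, Imgproc.INTER_AREA)
    return preview
}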

If you save the image (NOT a screenshot), then look at it and zoom in, you’ll see it’s all there.
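On the Android side, a rough sketch of saving the full-resolution result straight from the Mat (the output path is an assumption; use whatever writable directory your app actually has):

import org.opencv.core.Mat
import org.opencv.imgcodecs.Imgcodecs

// Write the full-resolution Canny output directly to a file, then pull it off
// the device and zoom in; every detected edge pixel will be in the file.
fun saveEdges(edges: Mat, path: String): Boolean =
    Imgcodecs.imwrite(path, edges)  // e.g. path = "${filesDir}/edges_android.png" (assumed)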

The problem itself, as well as the difficulty in debugging it, is caused by you sharing and looking at small representations of the actual data.