[opencv.js] support for findHomography()

can anyone point me to a working example of calling findHomography() using opencv.js? I believe it is part of the subset.


example here: iCollect

I’m coming to the conclusion that findHomography() doesn’t work in the current 4.5.1 version of opencv.js. The homography matrix is always empty when using RANSAC.

moving on to test JSFeat to see if it’s a better solution.

pick knnMatch and AKAZE descriptors. that site’s defaults don’t work, and maybe the site is broken: repeated “align” works the first time, then fails.

wow, you are correct. I just fixed the defaults for the example.

however, the homography (h) matrix is formed now, but when applied as the transformation in warpPerspective() it doesn’t seem to work.

//Reference: https://docs.opencv.org/master/da/d54/group__imgproc__transform.html#gaf73673a7e8e18ec6963e3774e6a94b87
            let image_B_final_result = new cv.Mat();
            cv.warpPerspective(im1, image_B_final_result, h, im2.size());
            cv.imshow('imageAligned', image_B_final_result);

here is the code below:


// This code is based on the article here: https://learnopencv.com/image-alignment-feature-based-using-opencv-c-python/
// You can see the c++ commented from the article above the Javascript

       function Align_img() {

            let detector_option = document.getElementById('detector').value;
            let match_option = document.getElementById('match').value;
            let matchDistance_option = document.getElementById('distance').value;
            let knnDistance_option = document.getElementById('knn_distance').value;


            //im2 is the original reference image we are trying to align to
            let im2 = cv.imread(image_A_element);
            //im1 is the image we are trying to line up correctly
            let im1 = cv.imread(image_B_element);

            //17            Convert images to grayscale
            //18            Mat im1Gray, im2Gray;
            //19            cvtColor(im1, im1Gray, CV_BGR2GRAY);
            //20            cvtColor(im2, im2Gray, CV_BGR2GRAY);
            let im1Gray = new cv.Mat();
            let im2Gray = new cv.Mat();
            cv.cvtColor(im1, im1Gray, cv.COLOR_BGRA2GRAY);
            cv.cvtColor(im2, im2Gray, cv.COLOR_BGRA2GRAY);

            //22            Variables to store keypoints and descriptors
            //23            std::vector<KeyPoint> keypoints1, keypoints2;
            //24            Mat descriptors1, descriptors2;
            let keypoints1 = new cv.KeyPointVector();
            let keypoints2 = new cv.KeyPointVector();
            let descriptors1 = new cv.Mat();
            let descriptors2 = new cv.Mat();

            //26            Detect ORB features and compute descriptors.
            //27            Ptr<Feature2D> orb = ORB::create(MAX_FEATURES);
            //28            orb->detectAndCompute(im1Gray, Mat(), keypoints1, descriptors1);
            //29            orb->detectAndCompute(im2Gray, Mat(), keypoints2, descriptors2);

            if (detector_option == 0) {
                var orb = new cv.ORB(5000);
            } else if (detector_option == 1) {
                var orb = new cv.AKAZE();
            }

            orb.detectAndCompute(im1Gray, new cv.Mat(), keypoints1, descriptors1);
            orb.detectAndCompute(im2Gray, new cv.Mat(), keypoints2, descriptors2);

            //31            Match features.
            //32            std::vector<DMatch> matches;
            //33            Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
            //34            matcher->match(descriptors1, descriptors2, matches, Mat());

            let good_matches = new cv.DMatchVector();

            if(match_option == 0){//match
                var bf = new cv.BFMatcher(cv.NORM_HAMMING, true);
                var matches = new cv.DMatchVector();
                bf.match(descriptors1, descriptors2, matches);

                //36            Sort matches by score
                //37            std::sort(matches.begin(), matches.end());
                //39            Remove not so good matches
                //40            const int numGoodMatches = matches.size() * GOOD_MATCH_PERCENT;
                //41            matches.erase(matches.begin()+numGoodMatches, matches.end());
                console.log("good_matches: ", good_matches);
                console.log("matches.size: ", matches.size());
                for (let i = 0; i < matches.size(); i++) {
                    if (matches.get(i).distance < matchDistance_option) {
                        good_matches.push_back(matches.get(i));
                    }
                }
                if(good_matches.size() <= 3){
                    alert("Less than 4 matches found!");
                    return;
                }
            }
            else if(match_option == 1) { //knnMatch
                var bf = new cv.BFMatcher();
                var matches = new cv.DMatchVectorVector();

                bf.knnMatch(descriptors1, descriptors2, matches, 2);

                for (let i = 0; i < matches.size(); ++i) {
                    let match = matches.get(i);
                    let dMatch1 = match.get(0);
                    let dMatch2 = match.get(1);
                    if (dMatch1.distance <= dMatch2.distance * knnDistance_option) {
                        good_matches.push_back(dMatch1);
                    }
                }
            }

            //44            Draw top matches
            //45            Mat imMatches;
            //46            drawMatches(im1, keypoints1, im2, keypoints2, matches, imMatches);
            //47            imwrite("matches.jpg", imMatches);
            let imMatches = new cv.Mat();
            let color = new cv.Scalar(0,255,0, 255);
            cv.drawMatches(im1, keypoints1, im2, keypoints2, good_matches, imMatches, color);
            cv.imshow('imageCompareMatches', imMatches);

            //50            Extract location of good matches
            //51            std::vector<Point2f> points1, points2;
            //53            for( size_t i = 0; i < matches.size(); i++ )
            //54            {
            //55                points1.push_back( keypoints1[ matches[i].queryIdx ].pt );
            //56                points2.push_back( keypoints2[ matches[i].trainIdx ].pt );
            //57            }
            let points1 = [];
            let points2 = [];
            for (let i = 0; i < good_matches.size(); i++) {
                points1.push(keypoints1.get(good_matches.get(i).queryIdx ).pt );
                points2.push(keypoints2.get(good_matches.get(i).trainIdx ).pt );
            }

            //59            Find homography
            //60            h = findHomography( points1, points2, RANSAC );
            let mat1 = cv.matFromArray(points1.length, 3, cv.CV_32F, points1);
            let mat2 = cv.matFromArray(points2.length, 3, cv.CV_32F, points2); //32FC2
            console.log("mat1: ", mat1, "mat2: ", mat2);
            //Reference: https://docs.opencv.org/3.3.0/d9/d0c/group__calib3d.html#ga4abc2ece9fab9398f2e560d53c8c9780
            let h = cv.findHomography(mat1, mat2, cv.RANSAC);
            if (h.empty())
            {
                alert("homography matrix empty!");
                return;
            }
            else{console.log("h:", h);}

            //62          Use homography to warp image
            //63          warpPerspective(im1, im1Reg, h, im2.size());
            //Reference: https://docs.opencv.org/master/da/d54/group__imgproc__transform.html#gaf73673a7e8e18ec6963e3774e6a94b87
            let image_B_final_result = new cv.Mat();
            cv.warpPerspective(im1, image_B_final_result, h, im2.size());
            cv.imshow('imageAligned', image_B_final_result);

            matches.delete();
            bf.delete();
            orb.delete();
            descriptors1.delete();
            descriptors2.delete();
            keypoints1.delete();
            keypoints2.delete();
            im1Gray.delete();
            im2Gray.delete();
            h.delete();
            image_B_final_result.delete();
            mat1.delete();
            mat2.delete();
        }

the homography didn’t work with AKAZE and knnMatch if you delete the objects at the end of the script

if you look at the console.log you will see the matrix as this:

h:
Mat {$$: {…}}
	cols:    [Exception: BindingError: cannot call emscripten binding method Mat.cols getter on deleted object]
	data:    [Exception: BindingError: cannot call emscripten binding method Mat.data getter on deleted object]
	data8S:  [Exception: BindingError: cannot call emscripten binding method Mat.data8S getter on deleted object]
	data16S: [Exception: BindingError: cannot call emscripten binding method Mat.data16S getter on deleted object]
	data16U: [Exception: BindingError: cannot call emscripten binding method Mat.data16U getter on deleted object]
	data32F: [Exception: BindingError: cannot call emscripten binding method Mat.data32F getter on deleted object]
	data32S: [Exception: BindingError: cannot call emscripten binding method Mat.data32S getter on deleted object]
	data64F: [Exception: BindingError: cannot call emscripten binding method Mat.data64F getter on deleted object]
	matSize: [Exception: BindingError: cannot call emscripten binding method Mat.matSize getter on deleted object]
	rows:    [Exception: BindingError: cannot call emscripten binding method Mat.rows getter on deleted object]
	step:    [Exception: BindingError: cannot call emscripten binding method Mat.step getter on deleted object]
	$$: {ptrType: RegisteredPointer, ptr: undefined, count: {…}, smartPtr: undefined}
	__proto__: ClassHandle

however, if I don’t delete the objects at the end of the script, I don’t get errors in the matrix; the homography matrix forms, but it does not transform the image in warpPerspective().

commenting out:

            /*
            matches.delete();
            bf.delete();
            orb.delete();
            descriptors1.delete();
            descriptors2.delete();
            keypoints1.delete();
            keypoints2.delete();
            im1Gray.delete();
            im2Gray.delete();
            h.delete();
            image_B_final_result.delete();
            mat1.delete();
            mat2.delete();
             */

here is the 3x3 matrix that forms:

h:
Mat {$$: {…}}
cols: 3
data: Uint8Array(72)
data8S: Int8Array(72)
data16S: Int16Array(36)
data16U: Uint16Array(36)
data32F: Float32Array(18)
data32S: Int32Array(18)
data64F: Float64Array(9)
matSize: Array(2)
rows: 3
step: Array(2)
$$: {ptrType: RegisteredPointer, ptr: 35449296, count: {…}}
__proto__: ClassHandle

please elaborate.

that is just its structure, not the actual contents of the matrix (coefficients).
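also, the dev tools console expands object properties lazily, so if the script calls h.delete() before you expand the logged object, every getter reports a deleted object (that is what your earlier dump showed). logging a plain copy of the values shows the actual contents; a sketch:

    // snapshot the 3x3 coefficients before any .delete() calls
    console.log("h:", Array.from(h.data64F));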

Sorry, by “it doesn’t work” I meant that the matrix forms, but when you use it in warpPerspective() as outlined in the example at OpenCV: Basic concepts of the homography explained with code (see the C++ code), the resulting image gets created but seems to be blank (a white 600x600 image). I put a box in the screenshot below for where the image is to be displayed as a result of the warpPerspective() call.

here is a screenshot. just go to iCollect and press ‘align’, then F12 (in Chrome) to get dev tools, and you can open the h matrix and navigate through it.

to make it easier to view I printed the matrix out in console.log
you can see it here in this screenshot:
[screenshot: the h matrix printed in the console]

access issue. the data is float64, not float32. that website decodes the matrix h as float32. step array indicates that the data is 8-byte-strided (so float64).
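for what it’s worth, findHomography() returns CV_64F, so reading the result through the data64F view gives the actual coefficients. a minimal sketch, assuming the mat1/mat2 from the code above:

    // h from cv.findHomography() is CV_64FC1 (type 6), so read it via data64F, not data32F
    let h = cv.findHomography(mat1, mat2, cv.RANSAC);
    console.log("h type:", h.type());                          // 6 == CV_64FC1
    console.log("h (row-major 3x3):", Array.from(h.data64F));  // 9 doubles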


the float64 interpretation of the matrix still looks weird so there was probably another issue between matching descriptors and findHomography, in between steps or in either step.

mat1 contains a lot of NaN values, so that is a problem. I don’t know what data type it’s supposed to be. step says 4-byte but that’s all I can see. seems to be a 28-byte pattern, not sure what that should mean.

so it must be how I’m forming mat1 and mat2; it must not be CV_32F:

let mat1 = cv.matFromArray(points1.length, 3, cv.CV_32F, points1);

reading here OpenCV: Camera Calibration and 3D Reconstruction

it says:

srcPoints Coordinates of the points in the original plane, a matrix of the type CV_32FC2 or vector Point2f
dstPoints Coordinates of the points in the target plane, a matrix of the type CV_32FC2 or a vector Point2f

but when I use cv.CV_32FC2 instead of cv.CV_32F as the third argument to cv.matFromArray, the homography fails with an unhandled exception, and there are still NaNs in mat1.

also, it could be how I am forming the points1 and points2 arrays (see the two different ways below). Neither works, but it leads me to believe I may not be formulating the points1 and points2 arrays correctly.

	let points1 = [];
	let points2 = [];
	for (let i = 0; i < good_matches.size(); i++) {

		//Option 1:
		//points1.push(keypoints1.get(good_matches.get(i).queryIdx ).pt );
		//points2.push(keypoints2.get(good_matches.get(i).trainIdx ).pt );                
		//  This array elements look like this:
		//    {x: 190.8971710205078, y: 298.37286376953125}
		//Option 2:
		//  This array elements look like this:
		//    Point {x: 190.8971710205078, y: 298.37286376953125}
		points1.push( new cv.Point(keypoints1.get(good_matches.get(i).queryIdx).pt.x, keypoints1.get(good_matches.get(i).queryIdx).pt.y));
		points2.push( new cv.Point(keypoints2.get(good_matches.get(i).trainIdx).pt.x, keypoints2.get(good_matches.get(i).trainIdx).pt.y));
	}

        //I've also tried 2 and 3 for the second argument to matFromArray() below
	let mat1 = cv.matFromArray(points1.length, 2, cv.CV_32F, points1);
	let mat2 = cv.matFromArray(points2.length, 2, cv.CV_32F, points2); 
	console.log("mat1: ", mat1, "mat2: ", mat2);
	let h = cv.findHomography(mat1, mat2, cv.RANSAC);

I think it is a type issue somehow, so I am testing types in the console log; they were not consistent. So as a test, I changed mat1 and mat2 to cv.CV_8U (even though the doc says they need to be 32FC2):

let mat1 = cv.matFromArray(points1.length, 2, cv.CV_8U, points1);
let mat2 = cv.matFromArray(points2.length, 2, cv.CV_8U, points2);
let h = cv.findHomography(mat1, mat2, cv.RANSAC);

and I use the gray image in warpPerspective, and now I’m getting a result, but it’s a bad result:

cv.warpPerspective(im1Gray, image_B_final_result, h, im2.size());

I uploaded this to iCollect

no NaN in mat1 this time.

those homography coefficients still look like nonsense. still type issues. it contains float64 data. see stride, it’s 8 and 24, that’s one f64 per element and three per row.

the values interpreted as f64 are also wrong, not type-wise but in magnitude. they’re all near 0 except for the mandatory 1 in the bottom right.

you want to see something that is basically an identity matrix, with some rotation in the top-left 2x2 and some translation in the top-right 2x1 part. the sample picture is just rotation and translation, no scaling, shearing or projection, so that’s it.
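for a concrete picture, this is roughly the structure to expect for a view that only differs by a rotation θ and a translation (tx, ty) (an illustration, not output from that page):

    // expected form of h for pure rotation + translation:
    // [ [ cosθ, -sinθ, tx ],
    //   [ sinθ,  cosθ, ty ],
    //   [    0,     0,  1 ] ]
    // so the top-left 2x2 entries should be on the order of ±1, not all near zero.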

check that your points arrays make sense. those coordinates should not be u8 type. figure out what shapes you can make: an Nx1 vector of CV_32FC2 would be a python/numpy shape of (rows, 1, 2). alternatively, these functions should support single-channel two-column matrices, so 32FC1 or 32F, but shape (N, 2).
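a sketch of what that could look like with your variable names (untested, so treat it as an assumption):

    // matFromArray() wants a flat array of plain numbers, so interleave x,y
    // and build an N x 1 two-channel (CV_32FC2) matrix
    let flat1 = [];
    let flat2 = [];
    for (let i = 0; i < good_matches.size(); i++) {
        let p1 = keypoints1.get(good_matches.get(i).queryIdx).pt;
        let p2 = keypoints2.get(good_matches.get(i).trainIdx).pt;
        flat1.push(p1.x, p1.y);
        flat2.push(p2.x, p2.y);
    }
    let mat1 = cv.matFromArray(good_matches.size(), 1, cv.CV_32FC2, flat1);
    let mat2 = cv.matFromArray(good_matches.size(), 1, cv.CV_32FC2, flat2);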

this opencv.js stuff needs its matrices to come with some prettyprinting. digging around in seemingly variant objects is tiresome. I don’t want to guess the contained type and I don’t want to look up the matrix’s shape before interpreting a flat list of numbers.
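in the meantime, a small helper takes some of the guesswork out of it. this is a hypothetical sketch, not something opencv.js ships:

    // dump a cv.Mat's shape, channel count, type number and a typed view of its data
    function dumpMat(name, m) {
        let view = m.data;                              // raw bytes as a fallback
        if (m.depth() === cv.CV_32F) view = m.data32F;
        if (m.depth() === cv.CV_64F) view = m.data64F;
        if (m.depth() === cv.CV_32S) view = m.data32S;
        console.log(name + ": " + m.rows + "x" + m.cols + ", " + m.channels() +
                    " channel(s), type=" + m.type(), Array.from(view));
    }
    dumpMat("h", h);   // e.g. "h: 3x3, 1 channel(s), type=6" followed by 9 doubles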


@crackwitz - I understand most of your comments but the one I’m struggling with is this: “check that your points arrays make sense. those coordinates should not be u8 type. figure out what shapes you can make”

In stepping through each step’s data (see below), in which step do you think I’m screwing up the data type: 3, 4, 6 or 7?

in steps 1-3 the data is 8U, and then in steps 4 and 6 the data goes to something that looks like 32F: “258.982177734375”.

The console.log for https://icollect.money/opencv_align# lines up with the comments below.

STEP 1: READ IN IMAGES

Images read in channels=4 type:24 cols:600 rows:600 depth:0 colorspace:RGBA or BGRA type:CV_8U

STEP 2: CONVERT IMAGES TO GRAYSCALE

Images converted to BGRA2GRAY channels=1 type:0 cols:600 rows:600 depth:0 colorspace:GRAY type:CV_8U

STEP 3: DETECT FEATURES & COMPUTE DESCRIPTORS

Used AKAZE to find keypoints1 & keypoints2 and descriptors1 & descriptors2

descriptors1 - cols:61 rows:303 type:0 depth:0 channels=1 colorspace:GRAY type:CV_8U

Here is an example of the resulting data of the descriptors Mat:
		Mat {$$: {…}}
		cols: 61
		data: Uint8Array(18483)
			[0 … 9999]
				[0 … 99]
				0: 180
				1: 6
				2: 244
				3: 161
				etc...
		data8S: Int8Array(18483)
		data16S: Int16Array(9241)
		data16U: Uint16Array(9241)
		data32F: Float32Array(4620)
		data32S: Int32Array(4620)
		data64F: Float64Array(2310)
		matSize: Array(2)
		rows: 303
		step: Array(2)
			0: 61
			1: 1
			length: 2
descriptors2 - cols:61 rows:266 type:0 depth:0 channels=1 colorspace:GRAY type:CV_8U

Step 4: Match Features

Used knnMatch to find matches and then built an array of good matches.  Here is an example of the resulting data:

good_matches:  	[0] {queryIdx: 0, trainIdx: 15, imgIdx: 0, distance: 174.21824645996094}
				[1] {queryIdx: 1, trainIdx: 13, imgIdx: 0, distance: 219.84767150878906}
				....
				[196] 

STEP 5: DRAW TOP MATCHES AND OUTPUT IMAGE TO SCREEN
STEP 6: EXTRACT LOCATION OF GOOD MATCHES AND BUILD POINT1 and POINT2 ARRAYS

build points1[] and points2[] arrays.  Here is an example of the data in points1[]:
		[0 … 99]
		0: {x: 371.2216796875, y: 258.982177734375}
		1: {x: 348.1711730957031, y: 262.8340759277344}
		2: {x: 376.22442626953125, y: 266.4711608886719}
		3: {x: 300.205810546875, y: 274.12786865234375}
		4: {x: 197.02357482910156, y: 314.02392578125}

STEP 7: CREATE MAT1 and MAT2 FROM POINT1 and POINT2 ARRAYS

building mat1 and mat2 to feed to findHomography using cv.matFromArray(points1.length, 2, cv.CV_32F, points1);
        
resulting mats have: channels=1 type:5 cols:2 rows:196 depth:5 colorspace:GRAY type:CV_32F
		cols: 2
		data: Uint8Array(1568)
			[0 … 99]
				0: 0
				1: 0
				2: 192
				3: 127
		data8S: Int8Array(1568)
		data16S: Int16Array(784)
		data16U: Uint16Array(784)
		data32F: Float32Array(392)
			[0 … 99]
				0: NaN
				1: NaN
				2: NaN
		data32S: Int32Array(392)
		data64F: Float64Array(196)
			[0 … 99]
				0: 2.247117487993712e+307
				1: 2.247117487993712e+307
				2: 2.247117487993712e+307
		matSize: Array(2)
		rows: 196
		step: Array(2)
			0: 8
			1: 4
			length: 2

STEP 8: CALCULATE HOMOGRAPHY USING MAT1 and MAT2

		h = cv.findHomography(mat1, mat2, cv.RANSAC);
	result:
			h = [ 	0.002668781266586141 , -0.008741017900773997 , 2.027788870199133
					0.000003222589240634258 , -0.0000024563896431059195 , 0.0004692269844962694
					0.0023526158100672843 , -0.0047952311495643545 , 1 ]

CV_8UC4 is encoded as 24, so that sounds consistent.

edit: never mind this, channel and element type are already there. for your convenience, see if you can figure out the mapping from the number back to the type: the element type is encoded in the lower three bits (0b000 here, i.e. 8U), the number of channels in the higher bits, (nch-1) << 3 == 0b11 << 3 == 24. it should be “CV_MAKETYPE” or something similar in the opencv source/docs, but it’s probably not exposed in opencv.js.

some json: {"0": "CV_8UC1", "1": "CV_8SC1", "2": "CV_16UC1", "3": "CV_16SC1", "4": "CV_32SC1", "5": "CV_32FC1", "6": "CV_64FC1", "8": "CV_8UC2", "9": "CV_8SC2", "10": "CV_16UC2", "11": "CV_16SC2", "12": "CV_32SC2", "13": "CV_32FC2", "14": "CV_64FC2", "16": "CV_8UC3", "17": "CV_8SC3", "18": "CV_16UC3", "19": "CV_16SC3", "20": "CV_32SC3", "21": "CV_32FC3", "22": "CV_64FC3", "24": "CV_8UC4", "25": "CV_8SC4", "26": "CV_16UC4", "27": "CV_16SC4", "28": "CV_32SC4", "29": "CV_32FC4", "30": "CV_64FC4"}
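a small decoder along those lines, using the bit layout described above (my own naming, nothing official):

    // map an OpenCV type number back to a readable name, e.g. 24 -> "CV_8UC4"
    function describeType(t) {
        const depths = ["8U", "8S", "16U", "16S", "32S", "32F", "64F"];
        const depth = t & 7;            // element type lives in the lower three bits
        const channels = (t >> 3) + 1;  // remaining bits hold (channels - 1)
        return "CV_" + depths[depth] + "C" + channels;
    }
    console.log(describeType(24));  // CV_8UC4
    console.log(describeType(13));  // CV_32FC2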

looks good.

I believe 61 bytes might be correct for an akaze descriptor. source says 486 bits, which is 60.75 bytes. it’s a binary descriptor, so U8 is right.

303 and 266 rows mean that many keypoints and descriptors for the two images, respectively. I wouldn’t call that GRAY though, it’s binary data.

note that the descriptors contain no position information (no coordinates). that comes separately in the keypoint objects.

looks plausible. I like that this gives you actual objects with sensible fields (query/trainIdx and distance!). 197 matches is also a good sign, relative to how many descriptors you started with.

let’s assume that worked, since we saw that it worked before.

“good” matches require Lowe’s ratio test, which requires flann and the two nearest matches for every query.

what you have is just the nearest/best match and from that you can’t judge the quality of a match.

let’s ignore that for now. RANSAC should handle this very neat synthetic situation. we’ll see.

there I see issues. type 5 is CV_32F(C1) and single channel. docs say to give CV_32FC2. see if you can make a single-column two-channel matrix. the API should like that a lot better.

I am puzzled as to why the data32F dump shows NaNs. that shouldn’t be there. you just copy those x and y values from step 6, right?

step 8 should work, as soon as the input data isn’t full of NaNs.


I am puzzled as to why the data32F dump shows NaNs. that shouldn’t be there. you just copy those x and y values from step 6, right?

here is how I build the point arrays prior to sending them to the cv.matFromArray() to build the Mat:

            let points1 = [];
            let points2 = [];
            for (let i = 0; i < good_matches.size(); i++) {
                points1.push( new cv.Point(keypoints1.get(good_matches.get(i).queryIdx).pt.x, keypoints1.get(good_matches.get(i).queryIdx).pt.y));
                points2.push( new cv.Point(keypoints2.get(good_matches.get(i).trainIdx).pt.x, keypoints2.get(good_matches.get(i).trainIdx).pt.y));
            }

here is an example of the beginning of point1 array:

[0 … 99]
	0: Point
		x: 371.2216796875
		y: 258.982177734375
		__proto__: Object
	1: Point {x: 348.1711730957031, y: 262.8340759277344}
	2: Point {x: 376.22442626953125, y: 266.4711608886719}
	3: Point {x: 300.205810546875, y: 274.12786865234375}
	4: Point {x: 197.02357482910156, y: 314.02392578125}
	5: Point {x: 183.9698486328125, y: 324.82049560546875}

and there are no NaNs in either the points1 or points2 array.

see if you can make a single-column two-channel matrix. the API should like that a lot better.

OK, I changed matFromArray to create type=13 (CV_32FC2) with 1 column:

let mat1 = cv.matFromArray(points1.length, 1, cv.CV_32FC2, points1);
let mat2 = cv.matFromArray(points2.length, 1, cv.CV_32FC2, points2); 

I still get the NaNs in the Mat data32F dump. I did try

mat1 = cv.patchNaNs(mat1, 0.0);

but that API doesn’t exist in the JS version. I made these changes on the hosted page https://icollect.money/opencv_align#

I’d say matFromArray can’t handle you giving it objects, i.e. lists of cv.Point. it probably wants plain arrays containing only numbers, not objects. you need to find out how smart or dumb matFromArray is, and what precisely it is specified to handle.
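if that’s the case, the difference would look roughly like this (assumed behavior, worth verifying):

    // objects are coerced to NaN when copied into the typed array; plain numbers are copied as-is
    cv.matFromArray(2, 1, cv.CV_32FC2, [{x: 1, y: 2}, {x: 3, y: 4}]);  // likely a Mat full of NaNs
    cv.matFromArray(2, 1, cv.CV_32FC2, [1, 2, 3, 4]);                  // a valid 2x1 two-channel Mat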

yes, opencv.js gets little to no maintenance. if you are displeased, as you should be, feel welcome to look for issues on the github or open one about its non-existent documentation.

not displeased; as devs we feel around in the dark quite a bit. I’ve read and poked at the alternatives. opencv.js would be the most powerful javascript library for this sort of thing out there if more people used it, but I think most try, are disappointed, and move on. You are correct that the lack of docs makes it difficult; what makes it more difficult, though, is the lack of experience with the javascript library, and that takes time and effort. I’ve also filed a bug in the past on opencv.js and it hasn’t moved other than being checked in, so I was thinking there was no one working on the code at this time.

in regards to your point about the array, I thought the same thing and also tried this:

let points1 = [];
let points2 = [];
	for (let i = 0; i < good_matches.size(); i++) {
		points1.push(keypoints1.get(good_matches.get(i).queryIdx ).pt );
		points2.push(keypoints2.get(good_matches.get(i).trainIdx ).pt );
	}

and it creates this:

[0 … 99]
	0:
		x: 371.2216796875
		y: 258.982177734375
		__proto__: Object
	1: {x: 348.1711730957031, y: 262.8340759277344}
	2: {x: 376.22442626953125, y: 266.4711608886719}
	3: {x: 300.205810546875, y: 274.12786865234375}

but it gives the same result. I uploaded that just now in case you were interested.

also, regarding your comment that “good” matches require Lowe’s ratio test (which requires flann and the two nearest matches for every query), and that what I have is just the nearest/best match, from which you can’t judge the quality of a match:

I was following this code: OpenCV: Feature Matching with FLANN, and here is how I’m doing it. But DMatchVectorVector() doesn’t seem to take any arguments like cv.FLANNBASED, as outlined in the example code.

	let bf = new cv.BFMatcher();
	let matches = new cv.DMatchVectorVector();	
	bf.knnMatch(descriptors1, descriptors2, matches, 2);

	let counter = 0;
	for (let i = 0; i < matches.size(); ++i) {
		let match = matches.get(i);
		let dMatch1 = match.get(0);
		let dMatch2 = match.get(1);
		if (dMatch1.distance <= dMatch2.distance * parseFloat(knnDistance_option)) {
			good_matches.push_back(dMatch1);
			counter++;
		}
	}

where is the opencv.js code? is this it: opencv/modules/js/src at master · opencv/opencv · GitHub? I wasn’t finding functions in there that I’m using, like drawMatches(), so I didn’t think I was in the right place.


it’s the right place. all the magic uses emscripten to compile the actual code (C++) into js. there is no actual js source code of these APIs.

I don’t understand a lot of the design decisions in that module. data structures such as DMatchVectorVector could just be matrices or vectors/lists. having structures for that is needless complication.

I think you’ll understand the principle of these APIs and what data they spit out better if you explore them in python (or C++ if you have to). with that understanding I think you could better navigate and investigate how opencv.js wants it done.


OK, I’m very close. The result image is a bit twisted but it looks pretty good.

I changed how I built the points1 and points2 arrays.

	let points1 = [];
	let points2 = [];
	for (let i = 0; i < good_matches.size(); i++) {
		points1.push(keypoints1.get(good_matches.get(i).queryIdx ).pt.x );
		points1.push(keypoints1.get(good_matches.get(i).queryIdx ).pt.y );
		points2.push(keypoints2.get(good_matches.get(i).trainIdx ).pt.x );
		points2.push(keypoints2.get(good_matches.get(i).trainIdx ).pt.y );
	}

I updated iCollect
