@crackwitz - I understand most of your comments, but the one I’m struggling with is this: “check that your points arrays make sense. those coordinates should not be u8 type. figure out what shapes you can make”
In stepping through each step’s data (see below), in which step do you think I’m screwing up the datatype: 3, 4, 6, or 7?
In steps 1–3 the data is 8U, and then in steps 4 and 6 the data becomes something that looks like 32F, e.g. “258.982177734375”.
The console.log output for https://icollect.money/opencv_align# lines up with the comments below.
STEP 1: READ IN IMAGES
Images read in channels=4 type:24 cols:600 rows:600 depth:0 colorspace:RGBA or BGRA type:CV_8U
STEP 2: CONVERT IMAGES TO GRAYSCALE
Images converted to BGRA2GRAY channels=1 type:0 cols:600 rows:600 depth:0 colorspace:GRAY type:CV_8U
STEP 3: DETECT FEATURES & COMPUTE DESCRIPTORS
Used AKAZE to find keypoints1 & keypoints2 and descriptors1 & descriptors2
descriptors1 - cols:61 rows:303 type:0 depth:0 channels=1 colorspace:GRAY type:CV_8U
Here is an example of the resulting data of the descriptors Mat:
Mat {$$: {…}}
cols: 61
data: Uint8Array(18483)
[0 … 9999]
[0 … 99]
0: 180
1: 6
2: 244
3: 161
etc...
data8S: Int8Array(18483)
data16S: Int16Array(9241)
data16U: Uint16Array(9241)
data32F: Float32Array(4620)
data32S: Int32Array(4620)
data64F: Float64Array(2310)
matSize: Array(2)
rows: 303
step: Array(2)
0: 61
1: 1
length: 2
descriptors2 - cols:61 rows:266 type:0 depth:0 channels=1 colorspace:GRAY type:CV_8U
STEP 4: MATCH FEATURES
Used knnMatch to find matches and then built an array of good matches. Here is an example of the resulting data:
good_matches: [0] {queryIdx: 0, trainIdx: 15, imgIdx: 0, distance: 174.21824645996094}
[1] {queryIdx: 1, trainIdx: 13, imgIdx: 0, distance: 219.84767150878906}
....
[196]
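For reference, the good-match filtering can be sketched in plain JS, operating on match objects shaped like the ones logged above. This assumes a standard Lowe ratio test on knnMatch results with k = 2 and a 0.75 threshold (both assumptions, not taken from my actual code):

```javascript
// Sketch: filter knnMatch results (k = 2) with a Lowe ratio test.
// Each inner array holds the two nearest matches for one query descriptor.
function filterGoodMatches(knnMatches, ratio = 0.75) {
  const good = [];
  for (const [best, second] of knnMatches) {
    if (best.distance < ratio * second.distance) {
      good.push(best);
    }
  }
  return good;
}

// Example, using match objects shaped like the logged ones:
const knn = [
  [{ queryIdx: 0, trainIdx: 15, distance: 174.2 }, { queryIdx: 0, trainIdx: 3, distance: 300.1 }],
  [{ queryIdx: 1, trainIdx: 13, distance: 219.8 }, { queryIdx: 1, trainIdx: 7, distance: 225.0 }],
];
const good = filterGoodMatches(knn);
// Only the first pair passes: 174.2 < 0.75 * 300.1, while 219.8 >= 0.75 * 225.0
```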
STEP 5: DRAW TOP MATCHES AND OUTPUT IMAGE TO SCREEN
STEP 6: EXTRACT LOCATION OF GOOD MATCHES AND BUILD POINT1 and POINT2 ARRAYS
Built points1[] and points2[] arrays. Here is an example of the data in points1[]:
[0 … 99]
0: {x: 371.2216796875, y: 258.982177734375}
1: {x: 348.1711730957031, y: 262.8340759277344}
2: {x: 376.22442626953125, y: 266.4711608886719}
3: {x: 300.205810546875, y: 274.12786865234375}
4: {x: 197.02357482910156, y: 314.02392578125}
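For context, this extraction step is essentially an index lookup: queryIdx indexes the keypoints of image 1 and trainIdx indexes the keypoints of image 2, and each keypoint carries its coordinates in a .pt field. A sketch in plain JS with mock keypoint objects (the array shapes are assumptions for illustration; in opencv.js the keypoints live in a KeyPointVector accessed via .get(i)):

```javascript
// Sketch: pull matched coordinates out of the two keypoint lists.
function matchedPoints(goodMatches, kps1, kps2) {
  const points1 = goodMatches.map(m => ({ x: kps1[m.queryIdx].pt.x, y: kps1[m.queryIdx].pt.y }));
  const points2 = goodMatches.map(m => ({ x: kps2[m.trainIdx].pt.x, y: kps2[m.trainIdx].pt.y }));
  return { points1, points2 };
}

// Example with mock keypoints:
const kps1 = [{ pt: { x: 371.22, y: 258.98 } }, { pt: { x: 348.17, y: 262.83 } }];
const kps2 = [{ pt: { x: 10, y: 20 } }, { pt: { x: 30, y: 40 } }];
const goodMatches = [{ queryIdx: 1, trainIdx: 0 }];
const { points1, points2 } = matchedPoints(goodMatches, kps1, kps2);
// points1 = [{ x: 348.17, y: 262.83 }], points2 = [{ x: 10, y: 20 }]
```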
STEP 7: CREATE MAT1 and MAT2 FROM POINT1 and POINT2 ARRAYS
Built mat1 and mat2 to feed to findHomography, using cv.matFromArray(points1.length, 2, cv.CV_32F, points1);
resulting mats have: channels=1 type:5 cols:2 rows:196 depth:5 colorspace:GRAY type:CV_32F
cols: 2
data: Uint8Array(1568)
[0 … 99]
0: 0
1: 0
2: 192
3: 127
data8S: Int8Array(1568)
data16S: Int16Array(784)
data16U: Uint16Array(784)
data32F: Float32Array(392)
[0 … 99]
0: NaN
1: NaN
2: NaN
data32S: Int32Array(392)
data64F: Float64Array(196)
[0 … 99]
0: 2.247117487993712e+307
1: 2.247117487993712e+307
2: 2.247117487993712e+307
matSize: Array(2)
rows: 196
step: Array(2)
0: 8
1: 4
length: 2
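I suspect the NaNs come from this call: matFromArray expects a flat array of numbers, but points1 here is an array of {x, y} objects, and coercing an object to a number yields NaN, which matches the data32F dump above. A flattening sketch in plain JS (the cv call itself is left as a comment since it needs opencv.js loaded):

```javascript
// Sketch: matFromArray wants a flat [x0, y0, x1, y1, ...] array of numbers.
// An {x, y} object coerced to a number is NaN, which would explain the dump:
const coerced = Number({ x: 371.2, y: 258.9 }); // NaN

const points1 = [
  { x: 371.2216796875, y: 258.982177734375 },
  { x: 348.1711730957031, y: 262.8340759277344 },
];
const flat1 = points1.flatMap(p => [p.x, p.y]);
// flat1 = [371.2216796875, 258.982177734375, 348.1711730957031, 262.8340759277344]

// Then (untested here, since it needs opencv.js):
// const mat1 = cv.matFromArray(points1.length, 2, cv.CV_32F, flat1);
```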
STEP 8: CALCULATE HOMOGRAPHY USING MAT1 and MAT2
h = cv.findHomography(mat1, mat2, cv.RANSAC);
result:
h = [ 0.002668781266586141, -0.008741017900773997, 2.027788870199133,
      0.000003222589240634258, -0.0000024563896431059195, 0.0004692269844962694,
      0.0023526158100672843, -0.0047952311495643545, 1 ]
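One way to sanity-check an h like this (a plain-JS sketch, independent of opencv.js): apply the 3x3 matrix to a point from image 1 and see whether it lands near the matching point in image 2. A valid homography maps corresponding points onto each other; an identity matrix should leave points unchanged.

```javascript
// Sketch: apply a 3x3 homography (row-major, 9 elements) to a 2D point.
function applyHomography(h, x, y) {
  const w = h[6] * x + h[7] * y + h[8];
  return {
    x: (h[0] * x + h[1] * y + h[2]) / w,
    y: (h[3] * x + h[4] * y + h[5]) / w,
  };
}

// An identity homography leaves points unchanged:
const id = [1, 0, 0, 0, 1, 0, 0, 0, 1];
const p = applyHomography(id, 371.22, 258.98);
// p = { x: 371.22, y: 258.98 }
```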