Nn_index.h:71: error: (-215:Assertion failed) in Java

I have the below error in Java. However, it only occurs when I evaluate many frames in parallel in many threads and the load level rises above 30:

opencv-4.6.0/modules/flann/include/opencv2/flann/nn_index.h:71: error: (-215:Assertion failed) queries.cols == veclen() in function ‘knnSearch’

hmm, careful. is it actually thread-safe ?

however, the error complains that your query size did not match the train data.

can you show how you call it ?
add more of your own checks for this (and propagate any errors up the call stack) ?
what kind of data are you indexing / searching ?

My code, yes. It has been tested several times and uses known and proven mechanisms in Java. Besides, if it were a threading problem on my side, it would also have to occur in other places.

Ok, I have not been able to interpret that, as I have not yet looked into OpenCV's C++ code. I will check this with the corresponding debug output.

Here is the corresponding (somewhat shortened) call:

MatOfKeyPoint keypoints1 = new MatOfKeyPoint();
Mat descriptors1 = new Mat();
detector.detectAndCompute(img1, new Mat(), keypoints1, descriptors1);
// descriptors2 is computed the same way from the second frame (img2)
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.FLANNBASED);
List<MatOfDMatch> knnMatches = new ArrayList<>();
matcher.knnMatch(descriptors1, descriptors2, knnMatches, 2);

what kind of data are you indexing / searching ?

I simply have 2 images from a video that I compare, i.e. the keypoints determined from them. It is ALWAYS the same video in the test, so the results are ALWAYS the same.

All OpenCV operations (see code above) take place in the same thread: the detection of the keypoints, the computation of their descriptors, and their matching. The code is just executed massively in parallel (in many threads) for many frames.

Which data exactly should I check and output?

The error occurs when

descriptors1.size().width != descriptors2.size().width

I’ll see if I can find out if that happens.
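That width comparison can be factored into a small guard that is checked before every knnMatch call, so a mismatch is skipped or logged instead of tripping the native assertion. A minimal sketch; the helper class and method names are mine, not part of OpenCV:

```java
// Hypothetical guard helper: FLANN's knnSearch asserts that the query and
// train descriptor matrices have the same number of columns (veclen), so
// verify the widths up front and skip the frame pair instead of crashing.
public class DescriptorGuard {
    public static boolean widthsMatch(int cols1, int cols2) {
        // both matrices must be non-empty and have an identical descriptor length
        return cols1 > 0 && cols1 == cols2;
    }
}
```

In the calling code this would be used as `if (!DescriptorGuard.widthsMatch(descriptors1.cols(), descriptors2.cols())) { /* skip this frame pair */ }`.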


I have observed in my tests that the error does not occur if I do not create the detector in the individual worker threads (which would let me react flexibly to individual frames), but instead define it once when initialising the threads.

SURF detector = SURF.create(...)

However, this means that I have to use the same detector settings for all frames.

This means that I have far fewer detectors for a parallel evaluation, and they are already created when the work starts.
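One way to keep per-thread detectors while still guaranteeing each one is fully constructed before use is a ThreadLocal factory. A sketch under the assumption that each detector is only ever touched by the thread that owns it; the generic `PerThread` wrapper is my own, and in the real code the Supplier would be `() -> SURF.create(100, 3, 3, true, false)`:

```java
import java.util.function.Supplier;

// Sketch: each worker thread lazily receives its own fully-constructed
// instance; withInitial runs the factory once per thread, on first get().
public class PerThread<T> {
    private final ThreadLocal<T> local;

    public PerThread(Supplier<T> factory) {
        this.local = ThreadLocal.withInitial(factory);
    }

    public T get() {
        return local.get();
    }
}
```

This keeps the flexibility of per-thread instances without racing on a half-initialised object, because construction happens inside the thread that will use it, before the first call.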

My assumption: under high computing load, the detector is not yet completely constructed when it is already being used in the corresponding thread.

Could this be a BUG?

See also: Thread Safety for SURF.create(...)

which settings would you vary here ? like ‘extended’ ?
(64 vs 128 feature length)
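For reference, SURF's descriptor length is fixed by the extended flag: 64 floats by default, 128 when extended is true. A trivial helper (naming is my own) makes the relationship explicit:

```java
public class SurfInfo {
    // SURF descriptors have 64 elements by default and 128 in extended mode;
    // "extended" is the 4th argument of
    // SURF.create(hessianThreshold, nOctaves, nOctaveLayers, extended, upright).
    public static int descriptorLength(boolean extended) {
        return extended ? 128 : 64;
    }
}
```

So two detectors that disagree on this flag produce descriptor matrices of different widths, which is exactly the mismatch the FLANN assertion reports.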

Yes, because otherwise the point pairs are not matched correctly.

SURF detector = SURF.create(100, 3, 3, true, false)

But I always use the SAME detector in an analysis.

As written in the post: if I create the detector beforehand, there are no errors, even if I triple the computing load (to approx. 80).

Here is the (shortened) code:

SURF detector = SURF.create(100, 3, 3, true, false);
detector.detectAndCompute(img1, new Mat(), keypoints1, descriptors1);
detector.detectAndCompute(img2, new Mat(), keypoints2, descriptors2);
if (descriptors1.size().width != descriptors2.size().width) {
    // descriptor lengths differ -> matching would fail
}

again, does that mean, different threads may have ‘extended’ or not ? (and thus different descriptor size ?)

Maybe I misunderstood you. By “extended” I meant the SURF.create parameter.

All threads and detectors still use the same parameters.

Each video is started in its own thread and processed completely in it. This thread starts further threads (one per frame) to evaluate the individual frames; it doesn’t nest any deeper than that. To make the detection more flexible, I originally created the detector in each sub-thread and used it there. Since this too often led to errors that could not be traced (only through the load average), I now create a single detector in the main thread, which is then used by all sub-threads. This works perfectly.
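The layout described above can be sketched as follows: one object is fully created in the main thread before any worker starts, and every frame task then uses that single shared instance. This is only a structural sketch of the reported working setup, with a plain String standing in for the SURF detector, since the OpenCV docs do not explicitly guarantee that concurrent detectAndCompute calls on one instance are safe:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the reported working layout: the "detector" is constructed up
// front in the main thread, then shared by all frame-worker tasks.
public class SharedDetectorPipeline {
    public static List<Integer> processFrames(int frameCount) {
        final String detector = "SURF(100, 3, 3, true, false)"; // stand-in, built before workers start
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Integer>> futures = new ArrayList<>();
        for (int frame = 0; frame < frameCount; frame++) {
            final int f = frame;
            // in the real code each task would call detector.detectAndCompute(...) here
            futures.add(pool.submit(() -> detector.length() + f));
        }
        List<Integer> results = new ArrayList<>();
        for (Future<Integer> fut : futures) {
            try {
                results.add(fut.get()); // collect per-frame results in submission order
            } catch (InterruptedException | ExecutionException e) {
                throw new RuntimeException(e);
            }
        }
        pool.shutdown();
        return results;
    }
}
```

The key property is that the shared object is published to the workers only after its constructor has completely finished, which matches the observation that the crashes disappeared once the detector was no longer created inside the sub-threads.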