My programming, yes. I have tested it several times, and it uses known and proven Java mechanisms. Besides, the error would then also have to occur in other places.
OK, I have not been able to interpret that, as I have not yet looked into the C++ code of OpenCV. I will check this with the corresponding debug output.
Here is the corresponding call (somewhat shortened):
// detector was created earlier, e.g. SURF detector = SURF.create(...);
MatOfKeyPoint keypoints1 = new MatOfKeyPoint();
Mat descriptors1 = new Mat();
detector.detectAndCompute(img1, new Mat(), keypoints1, descriptors1);
// descriptors2 is computed the same way from img2

DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.FLANNBASED);
List<MatOfDMatch> knnMatches = new ArrayList<>();
matcher.knnMatch(descriptors1, descriptors2, knnMatches, 2);
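With k = 2, knnMatch returns the two nearest neighbours for each query descriptor, which is typically followed by Lowe's ratio test to discard ambiguous matches. The filtering logic itself is sketched below on plain distance pairs, without OpenCV types; the class and method names are placeholders, not part of the code above:

```java
import java.util.ArrayList;
import java.util.List;

public class RatioTest {
    // Keep a match only if its best distance is clearly smaller than the
    // second-best distance (Lowe's ratio test, common threshold ~0.7).
    static List<Integer> filterMatches(float[][] knn, float ratio) {
        List<Integer> kept = new ArrayList<>();
        for (int i = 0; i < knn.length; i++) {
            float best = knn[i][0];
            float second = knn[i][1];
            if (best < ratio * second) {
                kept.add(i); // index of the query descriptor that survived
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        // Each row: {distance to nearest, distance to second nearest}.
        float[][] knn = { {0.2f, 0.9f}, {0.5f, 0.55f}, {0.1f, 0.8f} };
        System.out.println(filterMatches(knn, 0.7f)); // rows 0 and 2 pass
    }
}
```

In the real code the same test would run over the `knnMatches` list, comparing `match.distance` of the first and second `DMatch` per entry.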
[quote]
what kind of data are you indexing / searching?
[/quote]
I just have two images from a video that I compare, i.e. the keypoints determined from them. It is ALWAYS the same video in the test, so the results are ALWAYS the same.
The OpenCV calls (see code above) always take place within the same thread: the detection of the keypoints, the computation of their descriptors, and their matching. This code is simply executed massively in parallel (in many threads) for many frames.
In my tests I have observed that the error does not occur if I do not create the detector in the individual threads (which would let me react flexibly to individual frames), but instead define it once when initialising the threads:
SURF detector = SURF.create(...)
However, this means that I have to use the same detector settings for all frames. It also means that I have far fewer detectors for a parallel evaluation, and they are all created at the start of the work.
My assumption: under high computing load, the detector is not yet fully constructed when it is first used in the corresponding thread.
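The workaround described above amounts to safe publication: constructing the detector before any worker thread starts, so that Thread.start() establishes a happens-before edge and every worker sees the fully initialised object. A minimal sketch of that pattern, using a placeholder Detector class standing in for SURF (the class and its parameter are assumptions for illustration):

```java
import java.util.concurrent.CountDownLatch;

public class SafePublication {
    // Placeholder standing in for the OpenCV SURF detector.
    static class Detector {
        final double hessianThreshold;
        Detector(double t) { this.hessianThreshold = t; }
        String detect(String frame) { return "keypoints(" + frame + ")"; }
    }

    public static void main(String[] args) throws InterruptedException {
        // Create the detector ONCE, before any worker thread starts.
        // Thread.start() happens-before the started thread's first action,
        // so every worker is guaranteed to see the constructed object.
        final Detector shared = new Detector(400.0);

        int frames = 4;
        CountDownLatch done = new CountDownLatch(frames);
        for (int i = 0; i < frames; i++) {
            final int frame = i;
            new Thread(() -> {
                System.out.println(shared.detect("frame" + frame));
                done.countDown();
            }).start();
        }
        done.await(); // wait until all frame threads have finished
    }
}
```

Creating the detector inside each worker gives no such guarantee on the native side, which would fit the observed behaviour.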
Maybe I misunderstood you. By “extended” I meant the SURF.create parameter.
All threads and detectors (still) use the same parameters.
Each video is started in its own thread and processed completely within it. This thread starts further threads (one per frame) to evaluate the individual frames; it doesn't get any deeper than that. To make the detection more flexible, I had generated the detector in each sub-thread and used it there. Since this too often led to errors that could not be traced (only noticed through the load average), I now create only one detector in the main thread, which is then used by all sub-threads. This works perfectly.
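The thread layout described above can be sketched as follows, again with a placeholder Detector instead of the real OpenCV one, and a thread pool standing in for the per-frame threads (all names here are illustrative assumptions):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class VideoPipeline {
    // Placeholder for the OpenCV detector; created once in the main thread.
    static class Detector {
        String evaluate(int frame) { return "frame" + frame + ":ok"; }
    }

    public static void main(String[] args) throws Exception {
        Detector detector = new Detector(); // single shared instance

        // One task per frame; the pool plays the role of the sub-threads.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<String>> results = new ArrayList<>();
        for (int frame = 0; frame < 8; frame++) {
            final int f = frame;
            results.add(pool.submit(() -> detector.evaluate(f)));
        }
        // Collect results in submission order, regardless of execution order.
        for (Future<String> r : results) {
            System.out.println(r.get());
        }
        pool.shutdown();
    }
}
```

The trade-off matches the post: one shared, pre-built detector is safe but fixes the settings for all frames, whereas per-frame detectors would allow per-frame settings at the cost of the construction race described above.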