Dense features?

I’d like to look for similarities and differences in scenes with large lighting changes. I suspect comparing features (SIFT, ORB, … ) might work. Is there support for dense features?

what makes you think so? do you have some links supporting that idea?

no more; the dense keypoint detector vanished with 2.x. use your own “grid” of keypoints instead of running a detector.
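e.g., a minimal sketch of the grid idea (file name, grid step, and keypoint size are made-up values, tune them):

```python
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# place keypoints on a regular lattice instead of running a detector
step = 16  # grid spacing in pixels, an arbitrary choice
kps = [cv2.KeyPoint(float(x), float(y), float(step))
       for y in range(step // 2, img.shape[0], step)
       for x in range(step // 2, img.shape[1], step)]

orb = cv2.ORB_create()
kps, desc = orb.compute(img, kps)  # compute() only, no detect()
# desc: one 32-byte ORB descriptor per grid point that survives
# the border filter
```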

however, there are descriptors designed for dense computation, like DAISY
(no need for sparse keypoints)
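DAISY lives in opencv_contrib (xfeatures2d); a sketch of computing it on the same kind of grid, assuming opencv-contrib-python is installed:

```python
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

step = 16  # same illustrative grid as above
kps = [cv2.KeyPoint(float(x), float(y), float(step))
       for y in range(step // 2, img.shape[0], step)
       for x in range(step // 2, img.shape[1], step)]

daisy = cv2.xfeatures2d.DAISY_create()  # default parameters
kps, desc = daisy.compute(img, kps)     # one float descriptor per point
```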


My only evidence is that the sparse (keypoint-based) implementations have a hard time finding features in a lot of my IR images. I assume some detector threshold is not being met because of some property of the IR images.
In my case, I expect the background to be mostly stationary, so I’m interested in comparing the descriptors computed at the same points in both images.
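Something like this sketch is what I have in mind (untested on IR; the file names, grid step, and the DAISY choice are assumptions): compute descriptors at identical grid points in both frames and compare them per point.

```python
import cv2
import numpy as np

def grid_descriptors(img, step=16):
    # a fixed lattice, so point i is the same pixel location
    # in every same-sized image
    kps = [cv2.KeyPoint(float(x), float(y), float(step))
           for y in range(step // 2, img.shape[0], step)
           for x in range(step // 2, img.shape[1], step)]
    daisy = cv2.xfeatures2d.DAISY_create()  # needs opencv-contrib-python
    return daisy.compute(img, kps)

# hypothetical file names for the two lighting conditions
a = cv2.imread("frame_before.png", cv2.IMREAD_GRAYSCALE)
b = cv2.imread("frame_after.png", cv2.IMREAD_GRAYSCALE)

kps, da = grid_descriptors(a)
_, db = grid_descriptors(b)
dist = np.linalg.norm(da - db, axis=1)  # one distance per grid point
# small distance: similar local structure (background under a lighting
# change); large distance: the content at that point actually changed
```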

IR is comparatively featureless by physics, especially the thermal kind. analysis can’t just hallucinate what isn’t there.

actually, this is far too broad. what exactly are you trying to find out?

I’d like to find new objects (new foreground blobs) and track them in color cameras with IR capabilities. Shadows, lighting changes, and switches from color to IR should not cause lighting itself to be flagged as foreground.

Many of the evaluation sets ignore this (extremely) common scenario, and pretty much all of the stock background subtraction algorithms can’t tell the difference between FG and BG when there are large lighting changes (e.g. someone turns on a light), and in particular when cameras switch to IR mode.

I’m thinking that looking at feature similarities may help distinguish background from foreground.
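Continuing the grid sketch above (the threshold and the cell filling are illustrative, not a tested recipe): turn the per-point distances into a candidate foreground mask, so points whose local structure survives the lighting change stay background.

```python
import numpy as np

def foreground_mask(kps, dist, shape, step=16, thresh=0.5):
    # mark the grid cell around each high-distance point as candidate FG;
    # thresh is a made-up value, tune per camera and descriptor
    mask = np.zeros(shape, np.uint8)
    for kp, d in zip(kps, dist):
        if d > thresh:
            x, y = int(kp.pt[0]), int(kp.pt[1])
            mask[max(y - step // 2, 0):y + step // 2,
                 max(x - step // 2, 0):x + step // 2] = 255
    return mask

# mask = foreground_mask(kps, dist, a.shape)  # kps/dist from the sketch above
```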

As a rule of thumb, if you can make sense of it, you should expect computer vision to be able to too.