Lucas Kanade implementation

I would like to know how OpenCV’s implementation of the Lucas-Kanade algorithm returns the location of the tracked points in the next frame. If I’m not mistaken, the algorithm computes the flow vectors in the x and y directions at the points of interest. Are those flow vectors simply added to the input coordinates and returned?

Yes, that’s essentially how optical flow works. There is no deeper state, history, or “model” of these points: it’s just a set of points in one frame, the local neighborhood around each point in that frame, and their best matches in the next frame. The coordinates you get back are the new absolute positions, i.e. the input positions plus the estimated displacement.
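Here is a minimal Python sketch (the file names and parameter values are placeholders, not from this thread) showing what that means with `cv2.calcOpticalFlowPyrLK`: the `nextPts` it returns are coordinates in the second frame, so the per-point flow vector is simply `nextPts - prevPts`:

```python
import numpy as np
import cv2

# Placeholder inputs: two consecutive grayscale frames
prev_frame = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)
next_frame = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

# Points of interest in the first frame, shape (N, 1, 2), dtype float32
prev_pts = cv2.goodFeaturesToTrack(prev_frame, maxCorners=100,
                                   qualityLevel=0.01, minDistance=7)

# next_pts holds the estimated positions of those points in the second frame
next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_frame, next_frame,
                                                 prev_pts, None,
                                                 winSize=(21, 21), maxLevel=3)

# Keep only points that were successfully tracked
good_prev = prev_pts[status.ravel() == 1]
good_next = next_pts[status.ravel() == 1]

# The flow vector for each point is just the coordinate difference
flow = good_next - good_prev
print(flow[:5])  # per-point (dx, dy) in pixels
```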

Rather than asking about the implementation first, it’s usually safe to assume that an implementation follows the original paper, and that any deviations are documented.

Have a look at these computer vision courses:

The reference papers:

And of course:

For the LK algo in OpenCV, I think this paper can help:
