Sparse optical flow with moving camera

Hi guys,

I am trying to estimate the motion of a drone from its camera footage; my goal is to use the output flow vectors to track the drone’s movement. For this I am using the Lucas-Kanade method, but I am having trouble with the resulting vectors: when analyzing the video, the vectors only appear on the trees and not on the ground. I tried changing the model’s parameters and denoising the video, but I still can’t fix the issue. This is my first time working with optical flow, and I am still learning about it. Do you have any idea why this is happening, and whether there is a solution for it? If you have any other ideas on how to approach this, I would also appreciate your input. Thank you.


PS: The image was taken from the video to illustrate the issue I am having.

using goodFeaturesToTrack or something else?

it might be sensitive to absolute local contrast, hence staying away from the dark areas. if that’s the case, there should be a few ways to deal with this

Yes! I forgot to mention it, but I am using goodFeaturesToTrack. I also tried cv2.warpPolar to compensate for the drone’s rotation, but it made things worse.
Now that you mention it, it may indeed be because the ground is so dark, since in other samples it works relatively well.
Are there any solutions you would suggest for this?
Thank you so much!


  • use something other than GFTT()
  • manipulate image to suit GFTT’s filtering/ranking

you just need keypoints, not descriptors. most feature descriptors come with an appropriate keypoint detector; try one of those.

options for manipulating the image:

  • simple value map (logarithm or sqrt or…)
  • local histogram equalization

Thank you so much! I will explore those options!