# Visualize dense optical flow

I need to visualize a dense optical flow computed by Farnebäck's algorithm.

My idea was to convert the algorithm's output (2D displacement vectors) into an HSV color representation by mapping each vector's angle to hue and its magnitude to value (V), with saturation fixed at 255 (just like in OpenCV's example).

As I'm a beginner in OpenCV, I'm very uncertain how to do this in C++.
The example is in Python, and the Python methods and classes (`MatVector`?) don't all exist in C++.
Moreover, the documentation contains confusing hints about the cost of these operations.
So my question: what is a good way to convert the output of the flow algorithm to HSV in C++?

You actually found the JavaScript version above; the Python one is here.

Please show us what you've got so far, so we can try to fill in the blanks.

Also, there's no 'canonical' way to visualize a vector field;
you could also just draw little arrows or the like.

Last, visualizations like that rarely make it into production code; they're mostly a debugging aid for devs, so don't worry about the cost…

Thank you very much

Finally I was able to port it to C++.

```cpp
Mat visualizeFrame,                         // ... and its visualization
    splitArrays[2], u, v, mag, ang,
    hsvVec[3];
const double dBrighter (60);

// in the setup code, before the first call (refS holds the frame size):
hsvVec[1] = Mat(refS.height, refS.width, CV_8UC1, Scalar(255));

int VisualizeFlow (const Mat& flow)
{
    split (flow, splitArrays);
    u = splitArrays[0];
    v = splitArrays[1];
    cartToPolar (u, v, mag, ang);
    ang.convertTo (hsvVec[0], CV_8UC1, 180 / CV_PI / 2);

//  normalize (mag, hsvVec[2], 0, 255, NORM_MINMAX, CV_8UC1);   // OpenCV's sample version
    mag += dBrighter;                                           // my version:
    threshold (mag, mag, 255, 0, THRESH_TRUNC);
    threshold (mag, mag, dBrighter, 0, THRESH_TOZERO);          // 0 shall stay 0
    mag.convertTo (hsvVec[2], CV_8UC1);

    cv::merge (hsvVec, 3, visualizeFrame);

    cvtColor (visualizeFrame, visualizeFrame, COLOR_HSV2RGB);

//  imshow ("Flow", visualizeFrame);
//  waitKey (0);

    outputVideo.write (visualizeFrame);

    return 0;
}
```

One adjustment I made:

Using `normalize()` for the visualization is adequate for emphasizing
the movements qualitatively.
But it's not good when you want to assess the actual values,
because the per-frame min/max normalization makes the frames unrelated to each other.
(One effect is the extreme emphasis of small artefacts in
a mostly still picture; see the border artefacts in the example.)
So I used a constant brightness offset instead.

I agree with you that the cost of the visualization doesn't matter; I actually only need it
to assess the data, in order to decide on further algorithms.

Second (and this is not about visualization, but about computing the flow):
in the sample code (both JavaScript and Python), assigning the previous to the next frame
appears to be a deep copy.

So in C++ you would need an explicit, costly `clone()`.
But the copying isn't necessary at all, since you can avoid it
with a flip-flop index `(nIndex+1)%2` (a minimal non-trivial ring buffer):

```cpp
struct sFrameInfo {
    Mat orig;
    Mat flow;
    ...
};
sFrameInfo* pFrameInfoInterval;

Mat framePair[2];                           // pair for computing the flow

// preset the very first..
cvtColor (pFrameInfoInterval->orig, framePair[0], COLOR_RGB2GRAY);

// nIndex: 1..nNumberOfFrames-1
int ComputeFlow (const unsigned int& nIndex)
{
    sFrameInfo* pTmpFrameInfo (pFrameInfoInterval + nIndex);

    cvtColor (pTmpFrameInfo->orig, framePair[nIndex % 2], COLOR_RGB2GRAY);

    calcOpticalFlowFarneback (framePair[(nIndex + 1) % 2], framePair[nIndex % 2],
                              pTmpFrameInfo->flow,
                              0.5, 3, 15, 3, 5, 1.2, 0);

    return 0;
}
```

`calcOpticalFlowFarneback()` seems to be very costly itself:
measured in isolation, it runs at only 4.5 fps.

Any annotations, hints etc. are welcome…


In the code samples above, the channel order should be BGR, not RGB (OpenCV delivers frames as BGR, so it should be `COLOR_BGR2GRAY` and `COLOR_HSV2BGR`).

Moreover, the visualization is better, and it still keeps the frames comparable, with:

```cpp
mag *= 16;
threshold (mag, mag, 255, 0, THRESH_TRUNC);

mag.convertTo (hsvVec[2], CV_8UC1);
```