Thank you very much!
Finally I was able to port it to C++.
Mat visualizeFrame,                // ... and its visualization
    splitArrays[2], u, v, mag, ang,
    hsvVec[3];
const double dBrighter(60);        // constant brightness offset
// preset once: the saturation channel stays at full value
hsvVec[1] = Mat(refS.height, refS.width, CV_8UC1, Scalar(255));
int VisualizeFlow (const Mat& flow)
{
    split(flow, splitArrays);
    u = splitArrays[0];
    v = splitArrays[1];
    cartToPolar(u, v, mag, ang);
    ang.convertTo(hsvVec[0], CV_8UC1, 180 / CV_PI / 2);   // radians -> hue range [0,180)
    // normalize(mag, hsvVec[2], 0, 255, NORM_MINMAX, CV_8UC1); // OpenCV's sample version
    // my version: constant brightness offset instead of per-frame normalization
    mag += dBrighter;
    threshold(mag, mag, 255, 0, THRESH_TRUNC);            // clamp to 255
    threshold(mag, mag, dBrighter, 0, THRESH_TOZERO);     // 0 shall stay 0
    mag.convertTo(hsvVec[2], CV_8UC1);
    cv::merge(hsvVec, 3, visualizeFrame);
    cvtColor(visualizeFrame, visualizeFrame, COLOR_HSV2RGB);
    // imshow("Flow", visualizeFrame);
    // waitKey(0);
    outputVideo.write(visualizeFrame);
    return 0;
}
One adjustment I made: using normalize() for the visualization is adequate for emphasizing the movements qualitatively, but it is not good when you want to assess the actual values, because the per-frame normalization makes the frames unrelated to each other. (One effect is the extreme emphasis of small artefacts in a mostly still picture; see the border artefacts in the example.) So I used a constant brightness offset instead.
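To make the difference concrete, here is a small self-contained sketch of the two mappings (plain C++ without OpenCV; the magnitude values are made up for illustration):

```cpp
#include <algorithm>
#include <vector>

// Per-frame min-max normalization (what normalize(..., NORM_MINMAX) does):
// each frame is rescaled by its own min/max, so identical physical
// magnitudes can map to different brightness values in different frames.
std::vector<double> normalizePerFrame(const std::vector<double>& mag)
{
    const double lo = *std::min_element(mag.begin(), mag.end());
    const double hi = *std::max_element(mag.begin(), mag.end());
    std::vector<double> out;
    for (double m : mag)
        out.push_back((m - lo) * 255.0 / (hi - lo));
    return out;
}

// Constant brightness offset (clamped to 255): the mapping is the same
// for every frame, so brightness stays comparable across frames.
std::vector<double> constantOffset(const std::vector<double>& mag, double dBrighter)
{
    std::vector<double> out;
    for (double m : mag)
        out.push_back(std::min(m + dBrighter, 255.0));
    return out;
}
```

With dBrighter = 60, a magnitude of 10 maps to 70 in every frame; min-max normalization maps the same 10 to 255 in a mostly still frame (maximum 10) but only to 25.5 in a frame whose maximum is 100 - which is exactly the artefact-emphasizing effect described above.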
I agree with you that the cost of the visualization doesn't matter; I actually need it only to assess how much motion the data contains, to decide on further algorithms.
Second (and this is not about the visualization, but about computing the flow): in the sample code (both the JavaScript and the Python version) assigning the previous to the next frame appears to be a deep copy. In C++ that would require an explicit, costly clone(). But the copying isn't necessary at all: you can avoid it with a flip-flop index, (nIndex+1)%2 (a minimal non-trivial ring buffer):
struct sFrameInfo {
    Mat orig;
    Mat flow;
    // ...
};

sFrameInfo* pFrameInfoInterval;
Mat framePair[2];   // pair for computing the flow

// preset the very first..
cvtColor(pFrameInfoInterval->orig, framePair[0], COLOR_RGB2GRAY);

// nIndex: 1..nNumberOfFrames-1
int ComputeFlow (const unsigned int& nIndex)
{
    sFrameInfo* pTmpFrameInfo(pFrameInfoInterval + nIndex);
    cvtColor(pTmpFrameInfo->orig, framePair[nIndex % 2], COLOR_RGB2GRAY);
    calcOpticalFlowFarneback(framePair[(nIndex + 1) % 2], framePair[nIndex % 2],
                             pTmpFrameInfo->flow, 0.5, 3, 15, 3, 5, 1.2, 0);
    return 0;
}
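For clarity, the index arithmetic on its own (a standalone sketch, no OpenCV needed): each new frame overwrites the slot of the frame before the previous one, so the previous gray image is always still in the buffer when the flow is computed.

```cpp
// Verifies the flip-flop invariant for the first nNumberOfFrames frames:
// prev and cur always differ, and prev always points at the slot the
// previous frame was written into (frame 0 is preset into framePair[0]).
bool flipFlopIsSafe(unsigned int nNumberOfFrames)
{
    int lastWritten = 0;  // slot holding frame 0
    for (unsigned int nIndex = 1; nIndex < nNumberOfFrames; ++nIndex) {
        const int prev = (nIndex + 1) % 2;  // slot of frame nIndex-1
        const int cur  = nIndex % 2;        // slot frame nIndex goes into
        if (prev == cur || prev != lastWritten)
            return false;
        lastWritten = cur;
    }
    return true;
}
```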
calcOpticalFlowFarneback() seems to be very costly itself: measured in isolation, it runs at only about 4.5 fps.
Any annotations/hints etc. are welcome…