In OpenCV 2.4.9, I use the SIFT feature matching method.
There is an attribute in the class “KeyPoint” named “octave”, which represents the DoG layer where the feature point was extracted.
The value of “octave” should be 0, 1, 2, etc., but when I debug the project I get a value like 11469311.
So how can I convert this value back to the right value (0, 1, 2, …)?
That field might be unused (ignored) for SIFT, but used by other descriptor algorithms. `size` looks like the next best thing, and its value looks sensible.
I took a peek at the source. The octave field seems to mean something after all!
```
>>> hex(11_469_311)
'0xaf01ff'
```
and it’s building that from three byte-sized values: 0xFF (octave), 0x01 (layer?), 0xAF (xi??)
```cpp
                     img.at<sift_wt>(r-1, c+1) + img.at<sift_wt>(r-1, c-1)) * cross_deriv_scale;

        float tr = dxx + dyy;
        float det = dxx * dyy - dxy * dxy;

        if( det <= 0 || tr*tr*edgeThreshold >= (edgeThreshold + 1)*(edgeThreshold + 1)*det )
            return false;
    }

    kpt.pt.x = (c + xc) * (1 << octv);
    kpt.pt.y = (r + xr) * (1 << octv);
    kpt.octave = octv + (layer << 8) + (cvRound((xi + 0.5)*255) << 16);
    kpt.size = sigma*powf(2.f, (layer + xi) / nOctaveLayers)*(1 << octv)*2;
    kpt.response = std::abs(contr);

    return true;
}

namespace {

class findScaleSpaceExtremaT
{
```
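So `kpt.octave` packs three fields: the octave in the low byte (stored like a signed byte, so 0xFF means -1, i.e. the doubled base image), the layer in the second byte, and the sub-layer offset xi, rescaled to 0…255, in the third. A quick sanity check, reversing that packing for the observed value (my own sketch, not library code):

```cpp
#include <cstdio>

int main()
{
    int packed = 0xaf01ff;                             // the value seen in the debugger

    int octave = packed & 255;                         // low byte: 0xFF
    int layer  = (packed >> 8) & 255;                  // second byte: 0x01
    octave = octave < 128 ? octave : (-128 | octave);  // sign-extend: 0xFF -> -1
    float xi = ((packed >> 16) & 255) / 255.f - 0.5f;  // undo cvRound((xi + 0.5)*255)

    std::printf("octave=%d layer=%d xi=%.3f\n", octave, layer, xi);
    return 0;                                          // prints: octave=-1 layer=1 xi=0.186
}
```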
```cpp
}

String SIFT::getDefaultName() const
{
    return (Feature2D::getDefaultName() + ".SIFT");
}

static inline void unpackOctave(const KeyPoint& kpt, int& octave, int& layer, float& scale)
{
    octave = kpt.octave & 255;
    layer = (kpt.octave >> 8) & 255;
    octave = octave < 128 ? octave : (-128 | octave);
    scale = octave >= 0 ? 1.f/(1 << octave) : (float)(1 << -octave);
}

static Mat createInitialImage( const Mat& img, bool doubleImageSize, float sigma )
{
    CV_TRACE_FUNCTION();
    Mat gray, gray_fpt;
```
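Note that `unpackOctave` is a file-static helper in modules/features2d/src/sift.cpp, so it isn't part of the public API; you'd have to copy those few lines into your own code. Something like this should work (the body is verbatim from the source above; the include and usage comment are mine, and the header path differs in 2.4.x):

```cpp
#include <opencv2/core.hpp>   // cv::KeyPoint (in 2.4.x: <opencv2/core/core.hpp>)

// Local copy of the file-static helper from modules/features2d/src/sift.cpp:
static inline void unpackOctave(const cv::KeyPoint& kpt, int& octave, int& layer, float& scale)
{
    octave = kpt.octave & 255;
    layer = (kpt.octave >> 8) & 255;
    octave = octave < 128 ? octave : (-128 | octave);
    scale = octave >= 0 ? 1.f/(1 << octave) : (float)(1 << -octave);
}

// usage:
//   int octave, layer; float scale;
//   unpackOctave(keypoints[i], octave, layer, scale);
```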
Thanks for your reply. I tried the `unpackOctave` function that crackwitz pointed to, and it seems to work. The data I get looks right: the DoG layer, computed as (octave + 1) * 3 + layer, matches what I expect.
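For anyone who lands here later, the two steps combined might look like the sketch below. The helper name `dogLayerIndex` is mine, and it assumes OpenCV's default nOctaveLayers = 3 and a doubled base image (so octaves start at -1):

```cpp
#include <opencv2/core.hpp>

// Hypothetical helper: flat DoG-layer index for a SIFT keypoint, assuming
// nOctaveLayers = 3 (OpenCV's default) and octaves starting at -1 because
// the base image is doubled.
int dogLayerIndex(const cv::KeyPoint& kpt, int nOctaveLayers = 3)
{
    int octave = kpt.octave & 255;
    int layer  = (kpt.octave >> 8) & 255;
    octave = octave < 128 ? octave : (-128 | octave);  // sign-extend the low byte
    return (octave + 1) * nOctaveLayers + layer;       // e.g. octave=-1, layer=1 -> 1
}
```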