Single vs double precision floating point

I’m using OpenCV C++ functions like findChessboardCorners and cornerSubPix. It seems the corner-finding functions only accept a std::vector<cv::Point2f> for the corners, i.e. they only support single-precision floating point corner locations. Is there any way to implement double precision calculations for these calibration-related functions?
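For reference, this is the call sequence in question, as a minimal sketch (the image path and the 9×6 pattern size are placeholders for my setup); both functions fix the corner type at cv::Point2f:

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

int main() {
    // placeholder image path and pattern size (inner corners per row/column)
    cv::Mat gray = cv::imread("board.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    std::vector<cv::Point2f> corners;  // the API fixes this at single precision
    bool found = cv::findChessboardCorners(gray, cv::Size(9, 6), corners);
    if (found) {
        // refine to sub-pixel accuracy; window size and termination criteria
        // below are the commonly used tutorial values
        cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT,
                                          30, 0.001));
    }
    return 0;
}
```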

you are asking to specify pixel locations at greater than the 23/24 bits of mantissa precision that a single-precision float offers.

how does this make practical sense in your application?

Maybe it’s my mistake, but the cornerSubPix function talks about iterating to find sub-pixel accurate locations. It just seems short-sighted to enforce lower-precision estimates early on when, later in the pipeline, we may reasonably want double precision, forcing a cast down the line.

What you are asking is similar to suggesting that we use micrograms to weigh people at the doctor because milligrams isn’t precise enough when the scales we use are only repeatable to the gram. A float already offers far more precision than is necessary to faithfully record the result of the algorithm / input data, and any apparent precision loss (double compared to float) is so far in the noise that there isn’t any actual loss at all. You get maybe 2 decimal digits of signal, the rest can be regarded as noise.

I take your meaning. However, the documentation for the cornerSubPix function doesn’t address the precision we should expect of the ‘sub-pixel accurate location’ it finds. It might be nice if there were a line or two on why more than single precision isn’t necessary. (I know there’s a reference cited, but a few sentences from the author of the algorithm might more quickly allay fears of precision loss.)

And, yes, following your analogy, if all I wanted was the weight of a person, floating-point precision would be sufficient. In my application, however, these initial pixel locations feed into camera calibrations, where matrix operations are used to back out the calibration constants. In such cases, rounding errors from floating-point approximations can accumulate if intermediate values aren’t carried at higher precision.

But, again, it’s clear now that floating point precision is appropriate given the values calculated from the functions in question. Thank you both for your perspectives.

that is not specific to that algorithm, but general to all of image processing, all of signal processing, all of engineering, all of physics.

that would be silly to document with every single function because most of them would say the same thing.

you are operating in a field of engineering where some things are assumed to be understood. it’s background knowledge, foundational knowledge. you can find that in a book or course on the matter, and maybe in supplemental articles of a library’s documentation, but not in the API docs of some function in a library.

please be aware of the subtle differences in meaning of “resolution”, “precision”, “accuracy”. yes, the terms are often used interchangeably, especially when it comes to discussing floating point numbers, where “precision” and “resolution” are strongly related or proportional.

the pixel is the fundamental unit of position in an image. whole-pixel coordinates are natural. that is the “anchor” for any scale you might consider. you can guesstimate some more resolution from considering a neighborhood of pixels, and the pixels’ grayscale values (examples: marching squares/cubes, any kind of block matching, cornerSubPix). to consider a neighborhood is “lowpassing”/averaging. it’s a tradeoff that can be made under specific assumptions. it buys you precision under the assumption that there is noise, and that you can get better information from the larger structure (neighborhood), rather than fine (per-pixel) structure. there are always limits, laws of diminishing returns.

you are not ever gonna get arbitrary precision out of anything that is measured. nature does not allow it. out of a raster image, considering positions in pixel space, an order of magnitude more (than integer-pixel) is already great, two orders would be stellar.

so there you have 1-2 digits of fractional resolution. now out of a single float, you have about 5 digits left, which is plenty to allocate to the integer part of any pixel position in any image of a common size. of course you need to be aware of this, if you ever deal with images that are extremely large, like more than 100k pixels along a side (that’s gigapixel or more).
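to put numbers on this, here’s a small standalone snippet (nothing OpenCV-specific) that prints the gap between adjacent representable floats at typical pixel coordinates. even at 100000 px the gap is below 0.01 px, i.e. finer than the 1-2 fractional digits of actual signal:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // one "ULP": the gap between a float and the next representable float,
    // i.e. the best positional resolution a float offers at that magnitude
    for (float x : {100.0f, 4000.0f, 100000.0f}) {
        std::printf("at x = %8.0f px, float resolution = %.9f px\n",
                    x, std::nextafterf(x, INFINITY) - x);
    }
    return 0;
}
```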

since infinite/arbitrary precision cannot be had, your “asking for more” (double over single precision floats) is pointless in itself, because you could always repeat that request, no matter what precision was initially offered.

you really need to obtain this understanding from a book or course on physics. physics covers this.

now you’re talking about numerics/numerical analysis. once again, the algorithm determines if errors accumulate or average out. you can’t just assert such things. in the case of camera calibration, it is definitely not the case that errors accumulate. it is “averaging”, because those algorithms either do least squares minimization or something more complex.
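a toy illustration of the difference (nothing to do with OpenCV): average N noisy measurements and the error shrinks like 1/sqrt(N) instead of growing with N:

```cpp
#include <cstdio>
#include <random>

int main() {
    // simulate repeated measurements of a true value with 0.1 px gaussian
    // noise; the error of the mean shrinks as N grows, it does not accumulate
    std::mt19937 rng(42);
    std::normal_distribution<double> noise(0.0, 0.1);
    const double truth = 123.456;
    for (int n : {10, 100, 10000}) {
        double sum = 0.0;
        for (int i = 0; i < n; ++i) sum += truth + noise(rng);
        std::printf("N = %5d  error of mean = %+.5f px\n", n, sum / n - truth);
    }
    return 0;
}
```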

intermediate calculations will very likely be done at double precision anyway, which gives plenty of headroom if the inputs are single precision.
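for what it’s worth, OpenCV itself follows that pattern: calibrateCamera takes the single-precision corner lists but hands back the camera matrix, distortion coefficients, and poses as double-precision (CV_64F) data. a minimal sketch:

```cpp
#include <opencv2/calib3d.hpp>
#include <vector>

// float image points go in; calibration results come back as CV_64F (double)
double calibrate(const std::vector<std::vector<cv::Point3f>>& objectPoints,
                 const std::vector<std::vector<cv::Point2f>>& imagePoints,
                 cv::Size imageSize) {
    cv::Mat cameraMatrix, distCoeffs;   // filled as CV_64F by calibrateCamera
    std::vector<cv::Mat> rvecs, tvecs;  // per-view poses, also CV_64F
    // the return value is the RMS reprojection error in pixels, as a double
    return cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                               cameraMatrix, distCoeffs, rvecs, tvecs);
}
```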

Please don’t lecture me about ‘all of engineering’ and ‘all of physics’. Your point is taken but the condescending nature of your tone is, frankly, stunning.

I have read many posts by @crackwitz and I would say that in no way was he trying to be condescending in the reply given. He replied in detail to you because you keep asking the same question.

I may not be trying to, and I may even be trying not to, but it can’t be avoided in some situations: acting polite and humble sometimes just doesn’t get the point across, because some people won’t accept the point if, by being polite, I erroneously suggest that there’s room for debate or that it’s a matter of opinion.

anyway, I’m glad you see the point.

besides that, I guess there’s always the option of trying to extend the code to handle several data types. OpenCV doesn’t employ templates, not that thoroughly anyway, so that’ll probably cause the code to balloon.
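if you do want doubles downstream, the cheap alternative to templating anything is to just widen the detected corners after the fact. cv::Point2f converts to cv::Point2d, so this is a one-liner:

```cpp
#include <opencv2/core.hpp>
#include <vector>

// widen float corners to double before feeding your own double-precision math;
// cv::Point2f converts to cv::Point2d, so the range constructor suffices
std::vector<cv::Point2d> toDouble(const std::vector<cv::Point2f>& corners) {
    return std::vector<cv::Point2d>(corners.begin(), corners.end());
}
```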

also, fp32 calculations usually go a lot faster than fp64 calculations.