I’d trust that if it accepts the coefficients, it’ll actually use them.
sources of error tend to come from rounding to whole pixels, and from the difficulty of determining subpixel-precise coordinates. cameras love to sharpen their output images, which destroys all kinds of information. on top of that, a black-to-white edge has a different profile in a gamma-compressed color space than in a linear one, but most algorithms don’t care about these intricacies.
then you have blurriness. whatever detects these tags probably just applies a threshold. because of the combination of blur and thresholding, the edges it finds won’t be where they’re supposed to be, but shifted off.
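a quick 1D sketch of that gamma-plus-threshold shift (the sigma, gamma exponent, and mid-gray threshold are assumptions for illustration, not taken from any particular detector): optical blur happens in linear light, the camera then gamma-encodes, and a detector that thresholds the encoded values at mid-gray finds the edge somewhere it isn’t.

```python
import numpy as np

# 1D model of a black-to-white edge, sampled at 0.01 px steps.
# Assumptions: blur acts on linear light, the camera gamma-encodes with a
# ~1/2.2 power curve, and the detector thresholds encoded values at 0.5.
x = np.linspace(-10, 10, 2001)           # positions in pixels
step = (x > 0).astype(float)             # ideal edge at x = 0, linear light

sigma = 2.0                              # assumed blur width in pixels
kernel = np.exp(-0.5 * (x / sigma) ** 2)
kernel /= kernel.sum()
blurred = np.convolve(step, kernel, mode="same")  # blur in linear light

gamma_encoded = blurred ** (1 / 2.2)     # what the camera actually outputs

def crossing(signal, level=0.5):
    """Subpixel position where the signal first crosses `level`."""
    i = np.argmax(signal >= level)
    x0, x1 = x[i - 1], x[i]
    y0, y1 = signal[i - 1], signal[i]
    return x0 + (level - y0) / (y1 - y0) * (x1 - x0)

print(crossing(blurred))        # thresholding linear values: edge near 0
print(crossing(gamma_encoded))  # same threshold on gamma-encoded values:
                                # edge lands ~1.5 px into the dark side
```

the shift scales with the blur width: the encoded value 0.5 corresponds to a linear value of 0.5**2.2 ≈ 0.22, so the threshold fires well before the true midpoint of the blurred ramp.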