Why does cvtColor in OpenCV 3.2.0 give different results to other versions?

I’ve recently been brought into an older codebase using OpenCV 3.2.0. Some of the newer features requested (and bug fixes desired) meant that we had to upgrade, so I’ve gone through the task of upgrading to 4.5.2.

Of course now some tests are failing and I can’t understand why…I’ve boiled down the issue to the following:

  • When converting a 3-ch 8x1 Mat from BGR2GRAY (1-ch), OpenCV 3.2.0 gives me a different result from other versions of OpenCV.

Here’s my test program:

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    auto test = [](int cols) {
        // Create a 3-ch Mat 1xN filled with [1.0, 1.0, 1.0]
        cv::Mat original = cv::Mat::zeros(1, cols, CV_32FC3) + cv::Scalar(1.0, 1.0, 1.0);

        // Set the element at position (0, 0) to [0.75, 0.75, 0.75]
        original.at<cv::Vec3f>(0, 0) = cv::Vec3f(0.75, 0.75, 0.75);
        std::cout << "Size: " << original.size() << "    | Type: 32FC3    | [0,0]: " << original.col(0).row(0) << std::endl;

        // Create a placeholder 1-ch Mat and call `cvtColor` to convert from BGR (3-ch) to grayscale (1-ch)
        cv::Mat image_32FC1(1, cols, CV_32FC1);
        cv::cvtColor(original, image_32FC1, cv::COLOR_BGR2GRAY, 1);
        std::cout << "Size: " << image_32FC1.size() << "    | Type: 32FC1    | [0,0]: " << image_32FC1.col(0).row(0) << std::endl;
    };

    test(8);
}


When run on OpenCV 3.2.0 I get the following result:

Size: [8 x 1]    | Type: 32FC3    | [0,0]: [0.75, 0.75, 0.75]
Size: [8 x 1]    | Type: 32FC1    | [0,0]: [0.75]

When run on any other version of OpenCV (including 3.x) I get the following result:

Size: [8 x 1]    | Type: 32FC3    | [0,0]: [0.75, 0.75, 0.75]
Size: [8 x 1]    | Type: 32FC1    | [0,0]: [0.75000006]

Notice in the second line the conversion from 3-ch to 1-ch has given us a different result (0.75000006). A lot of the tests in the system fail because they’re not expecting the additional precision.

So my questions are:

  1. Why am I getting a different result between 3.2.0 and other versions?
  2. Why do other versions have that additional precision?
  3. Oddly, if you change the dimensions of the Mat to 7x1 (test(7);), the issue disappears and the result is exactly 0.75. It seems to only happen for Nx1 Mats (where N >= 8).

Other information:

  • The build of 3.2.0 was built by me, but I haven’t enabled any options like ENABLE_FAST_MATH
  • I’ve tested against the following versions of OpenCV, and all but 3.2.0 give the second result
    • OpenCV 3.2.0 was the previous version used by the project (self built)
    • OpenCV 3.4.12 was the most recent OpenCV v3 I could easily find (conan)
    • OpenCV 4.1.2 was an early version of OpenCV v4 I could easily find (conan)
    • OpenCV 4.5.2 is the new version we’ve just upgraded to (self built)
  • I’ve uploaded a copy of getBuildInformation() from my v3.2.0 build here: OpenCV 3.2.0 Build Info - Pastebin.com

Thanks for your help.

i checked 4.1.0 (linux) and 4.5.3 (win) and could reproduce it:
test(n) gives 0.75000006 for any n > 7

there is a setUseOptimized() function, used like:

int main() {
    cv::setUseOptimized(false); // turn off the optimized code paths before running the test
    test(8);
}

i get a proper 0.75 as result

that’s a difference in the last bit of fp32 mantissa. I wonder what code path caused this and how. perhaps someone somewhere used some constants that aren’t quite “exact”… it’s floating point math. don’t expect exactness. IIRC, the coefficients for BGR2GRAY can’t be expressed exactly as binary floating point numbers.

I can’t reproduce this from python (windows), which I find odd.

import numpy as np
import cv2 as cv

a = np.full((8, 1, 3), 1.0, dtype=np.float32)
a[0,0,:] = 0.75
b = cv.cvtColor(a, cv.COLOR_BGR2GRAY)
assert b[0,0] == 0.75

run this and inspect the results:

a = np.eye(3, dtype=np.float32).reshape((3,1,3))
b = cv.cvtColor(a, cv.COLOR_BGR2GRAY)
print(b) # should give you the B,G,R factors for conversion
assert b.sum() == 1

Thanks for taking the time to reply.

I looked into setUseOptimized, and you’re right that disabling it does give the truncated result. However, when I inspect cv::useOptimized() in my program linked against v3.2.0, it returns true, which means v3.2.0 was always giving me the truncated result even with optimisations turned on.
I went back to the OpenCV source and confirmed that optimisations default to true in v3.2.0, so I don’t think this is the underlying cause.

Thanks for your reply.

I ran your python example on opencv v4.5.3 on linux, and got the same result you did - i.e. it gave 0.75 exactly…Which is quite confounding…

I ported your python test for the BGR2GRAY consts over to C++ and ran it there with the following results on both v3.2.0 and v4.5.2:

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat m(3, 3, CV_32FC3, cv::Scalar(0, 0, 0));
    m.at<cv::Vec3f>(0, 0) = cv::Vec3f(1, 0, 0);
    m.at<cv::Vec3f>(1, 1) = cv::Vec3f(0, 1, 0);
    m.at<cv::Vec3f>(2, 2) = cv::Vec3f(0, 0, 1);
    std::cout << cv::format(m, cv::Formatter::FMT_PYTHON) << std::endl;

    std::cout << "------------" << std::endl;

    cv::Mat m2(3, 3, CV_32FC1);
    cv::cvtColor(m, m2, cv::COLOR_BGR2GRAY, 1);
    std::cout << cv::format(m2, cv::Formatter::FMT_PYTHON) << std::endl;
}
[[[1, 0, 0], [0, 0, 0], [0, 0, 0]],
 [[0, 0, 0], [0, 1, 0], [0, 0, 0]],
 [[0, 0, 0], [0, 0, 0], [0, 0, 1]]]
[[0.114, 0, 0],
 [0, 0.58700001, 0],
 [0, 0, 0.29899999]]

So the underlying constants look to be the same in both versions…Not sure I can explain the behaviour difference between v3.2.0 and other versions (and now also as you’ve discovered, the fact that Python behaves differently too)

by the way, I used np.eye merely to construct a 3x1 vector of pure colors. no need for a 3x3 array.

there are many optimized paths in OpenCV. it may be hard to pin down what exactly is running. you say setUseOptimized makes a difference? that can narrow it down somewhat.

I’d suggest taking the original reproducing example code and opening an issue on OpenCV’s github. they are certainly interested in “bit exact” algorithms… but I don’t know if that applies to floating point math as well. floating point math will always have some numerical error.


I’ve done more digging into this issue, and I’ve discovered that the problem was on my side.

I walked through the OpenCV source for the cvtColor routine and noticed a codepath fork for IPP vs plain CPU. This was the main clue.

I went back to the getBuildInformation() output for the original compiled v3.2.0 libs and confirmed they were compiled with IPP, which is a dependency I’d missed when compiling v4.5.2 (and when pulling from Conan Center).

When recompiling OpenCV v4.5.2 with IPP, the results are consistent between the two versions.
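For anyone else hitting this when building from source: IPP usage is controlled by the WITH_IPP CMake option. The exact default and the download behaviour of the bundled IPPICV package vary by version, so treat this as a sketch and verify against your own configure summary:

```shell
# Enable Intel IPP when configuring OpenCV (WITH_IPP is the standard
# OpenCV CMake switch; it is usually ON by default on x86, but the build
# quietly proceeds without IPP if the IPPICV package cannot be found or
# downloaded)
cmake -D WITH_IPP=ON -D CMAKE_BUILD_TYPE=Release ..
cmake --build .

# Afterwards, confirm IPP is listed in the configure summary, or check
# the cv::getBuildInformation() output at runtime
```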

Thanks everyone for your help. Much appreciated.
