Image color reproduction with matrix


I have raw image data that needs to be demosaicked, which I figured out how to do in OpenCV using cv::cuda::demosaicing().

After that, I adjust brightness and contrast with a quick hack I found on the Internet. It creates a new Mat and then converts with alpha and beta constants:

double alpha = 17.5; /* Simple contrast control */
int beta = 100;      /* Simple brightness control */

cv::cuda::GpuMat new_image(dst.size(), dst.type());
dst.convertTo(new_image, -1, alpha, beta);

This looks reasonable after demosaicing. However, there is also a matrix of values from the C6440 sensor that I somehow need to apply to the image:

 1.32  -0.46   0.14
-0.36   1.25   0.11
 0.08  -1.96   1.88

It seems I have to convert the type (unsigned 16-bit int) to float first, then apply these color corrections. I'm not sure how to do that; does anyone know how this can be done? (And what these values are?)

Thanks for any help,

cv::transform would apply such a matrix to the channels of every pixel.

I don’t know if an equivalent exists for CUDA. there might be, I wouldn’t know. worst case you’ll have to write a kernel. if this was numpy, I’d reshape the image to be 3xN (a long row of column vectors, one per pixel) and do a matrix multiply on that. the result has the same shape and you can reshape that back into the original picture.

That was very helpful, thanks! I’ve managed to learn a fair bit about OpenCV basics so I can follow along.

I might still be doing something wrong though, given my resulting image looks too “blue”. Perhaps I still need a few steps after demosaicing and color reproduction, or I have done the color reproduction incorrectly.

When you say 3xN, I reshaped to 3 rows, keeping the same number of channels, whatever that was; I assume 3 channels, one each for R, G, and B, with no alpha channel.

I converted my 16-bit unsigned pixel data to float values (so they would be interpreted as 32-bit floats), then multiplied by the 3x3 matrix. I just wonder whether my interpretation of that matrix was correct, and whether I should have arranged the top row as a column instead, moving from this:

 1.32  -0.46   0.14
-0.36   1.25   0.11
 0.08  -1.96   1.88

float matrix_data[] = { 1.32, -0.46, 0.14, -0.36, 1.25, 0.11, 0.08, -1.96, 1.88 };

to this:

 1.32  -0.36   0.08
-0.46   1.25  -1.96
 0.14   0.11   1.88

(Scratch that, I tried it and that made the image too pink).

I don’t suppose you know what typically goes wrong when an image comes out too blue? Maybe it’s just a matter of tweaking the blue channel until it looks OK, or are there further steps after debayering/demosaicking that I’m missing?

Thanks again,

if they give a matrix, they’d better be using the usual math convention, and that means the “argument” is a column vector and it comes at the matrix from the right.

the most likely issue is you’re assuming RGB, and so might the matrix and a bunch of other things in life, while OpenCV’s native channel order is BGR.

sticking with BGR order would be a good idea if you’re gonna use OpenCV’s imshow or imwrite.

you can use cvtColor (or a cuda equivalent), or you can just permute the columns of your matrix so they match the order of values in those column vectors… if you need RGB output.

if you want to stick with BGR output, you’ll have to permute the matrix’s columns still, but then also its rows. you can do that by building a transposition matrix (like an identity, but the diagonal goes top right to bottom left) and matrix-multiplying that onto the color matrix from both sides (once to permute the input, once to permute the output). or flip the matrix horizontally and vertically. same thing. or rotate it by half a turn (180 degrees). same thing again.