I’m familiar with OpenCV, but I’ve never had to look into how the cv::Mat container works internally.
I’m currently writing a ROS node that receives images over the network and feeds them to an algorithm my colleague has developed. His library is written in C and uses a custom struct to hold the image data.
typedef struct {
int nb_row, nb_col, zoom;
float light;
float *r, *g, *b; // 0.0 to 1.0 r, g, and b
} t_image_rgb;
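For reference, this is roughly how I expect to allocate one of these structs on my side (my own sketch, not part of his library; height and width are just placeholders, and I've left zoom and light out since I don't know what his code expects there):

// rough sketch: allocate the three planes for a height x width image
t_image_rgb out;
out.nb_row = height;
out.nb_col = width;
out.r = (float*)malloc(height * width * sizeof(float));
out.g = (float*)malloc(height * width * sizeof(float));
out.b = (float*)malloc(height * width * sizeof(float));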
I’ve figured out that I’ll need to multiply the pixel values by a factor of 1./255 to convert them from 8-bit integers (0-255) to floats (0.0-1.0).
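In code, the per-value conversion I have in mind is just this (illustration only; the helper name is mine):

// scale one 8-bit channel value (0-255) to a float in [0, 1]
float to_unit(unsigned char v) { return v * (1.0f / 255.0f); }  // e.g. 255 -> 1.0f, 128 -> ~0.502f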
How can I access a single cv::Mat channel and convert it to an array? I could use two nested for loops and read each pixel with img.at(), but I’m wondering if there’s a better way.
img.at(y,x) is quite a slow operation; it is much faster to use pointer access via img.ptr(y).
Normally images are continuous (unless you are working on a sub-region of a larger image), so all the data starts at img.ptr(0) and spans rows*cols*channels*bytes-per-value.
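For example (a small sketch, assuming an 8-bit 3-channel CV_8UC3 Mat named img):

if (img.isContinuous()) {
    // all pixel data is one contiguous block of rows*cols*channels bytes
    const unsigned char *p = img.ptr<unsigned char>(0);
    size_t total = (size_t)img.rows * img.cols * img.channels();
    // p[0] .. p[total-1] can now be walked in a single loop instead of calling ptr(y) per row
}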
As the color channels are interleaved (in memory the values are arranged B G R B G R ...), you must first separate them into planes.
So there are two ways:
Split and copy
Mat channels[3];
split(image, channels);           // for a BGR image: channels[0]=B, channels[1]=G, channels[2]=R
b = (float*)malloc(nb_row*nb_col*sizeof(float));                   // allocate the blue plane
memcpy(b, channels[0].ptr<float>(0), nb_row*nb_col*sizeof(float)); // copy channels[0] into it (assumes the Mat is already CV_32FC3)
...
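Putting the split-and-copy version together with the 0-255 to 0-1 conversion, a complete sketch could look like this (the function name mat_to_rgb_split is mine, I use cv::Mat::convertTo to do the 1./255 scaling before splitting, and zoom/light are left untouched):

#include <cstdlib>
#include <cstring>
#include <opencv2/core.hpp>

// sketch: fill a t_image_rgb (struct from the question) from an 8-bit BGR cv::Mat
void mat_to_rgb_split(const cv::Mat &image, t_image_rgb *out)
{
    cv::Mat img32;
    image.convertTo(img32, CV_32FC3, 1.0 / 255.0);   // uchar 0-255 -> float 0.0-1.0

    cv::Mat channels[3];
    cv::split(img32, channels);                      // BGR order: channels[0]=B, [1]=G, [2]=R

    out->nb_row = img32.rows;
    out->nb_col = img32.cols;
    size_t plane = (size_t)out->nb_row * out->nb_col * sizeof(float);
    out->b = (float*)malloc(plane);
    out->g = (float*)malloc(plane);
    out->r = (float*)malloc(plane);
    memcpy(out->b, channels[0].ptr<float>(0), plane); // split() output Mats are continuous, so one memcpy per plane
    memcpy(out->g, channels[1].ptr<float>(0), plane);
    memcpy(out->r, channels[2].ptr<float>(0), plane);
}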
Pixel by pixel loop
float *bp, *gp, *rp;                 // write pointers into the separate planes
// allocate the r, g, b buffers first, then:
bp = b; gp = g; rp = r;
for (int y = 0; y < nb_row; y++) {
    const Vec3b *line = img.ptr<Vec3b>(y);   // one row of interleaved B,G,R values
    for (int x = 0; x < nb_col; x++) {
        *bp = (*line)[0]; *gp = (*line)[1]; *rp = (*line)[2];
        bp++; gp++; rp++; line++;
    }
}
In this case, you can even do the normalization to 0…1 in the inner loop.
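For completeness, here is the pixel-by-pixel version with the normalization folded into the inner loop (a sketch; the function name and the assumption of an 8-bit BGR input Mat with pre-allocated float planes are mine):

#include <opencv2/core.hpp>

// sketch: copy an 8-bit BGR cv::Mat into three pre-allocated float planes, scaled to 0-1
void mat_to_rgb_loop(const cv::Mat &img, float *r, float *g, float *b)
{
    const float scale = 1.0f / 255.0f;
    float *bp = b, *gp = g, *rp = r;
    for (int y = 0; y < img.rows; y++) {
        const cv::Vec3b *line = img.ptr<cv::Vec3b>(y);   // one row of interleaved B,G,R bytes
        for (int x = 0; x < img.cols; x++) {
            *bp++ = line[x][0] * scale;                  // blue
            *gp++ = line[x][1] * scale;                  // green
            *rp++ = line[x][2] * scale;                  // red
        }
    }
}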