How `cv::demosaicing` does linear interpolation

Hey everyone,

I’m trying to get a more in-depth understanding of the demosaicing algorithm OpenCV uses to convert BayerBG images to RGB.

I understand the basic concept I’ve seen described around the internet, namely summing up the neighboring pixels of the target color and averaging them, but I’m not sure OpenCV is even doing that.

Here’s a Python example to demonstrate what I’m talking about:

We create a simple image with 3 rows and 4 columns in the Bayer BG pattern:

import cv2 as cv
import numpy as np

src = np.uint8([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])

So this should follow a structure like this:

R G R G => 1  2  3  4
G B G B => 5  6  7  8
R G R G => 9 10 11 12

And then when I run this in the console:

>>> src = np.uint8([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
>>> cv.demosaicing(src, cv.COLOR_BayerBG2RGB)
array([[[6, 6, 6],
        [6, 6, 6],
        [7, 7, 7],
        [7, 7, 7]],

       [[6, 6, 6],
        [6, 6, 6],
        [7, 7, 7],
        [7, 7, 7]],

       [[6, 6, 6],
        [6, 6, 6],
        [7, 7, 7],
        [7, 7, 7]]], dtype=uint8)

This is not the output I would expect!

Namely, let’s look at pixel [0][0].

We get a pixel value of array([6, 6, 6], dtype=uint8).

From what I understand, nothing should happen to the red channel here, because that position in the Bayer pattern already holds the red sample, so I’d expect the red value to simply stay 1!

OpenCV’s documentation says this conversion uses simple bilinear interpolation, but that seems at odds with the results I’m getting.
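Just to make my mental model concrete, here is the kind of thing I would have expected for an interior red site of this layout. This is purely my own sketch (the helper name and the 4x4 test array are mine), not OpenCV’s code:

import numpy as np

def naive_bilinear_at_red_site(bayer, r, c):
    # My naive expectation at an interior *red* site of the R G / G B layout:
    # red is the raw sample, green averages the 4 edge neighbours,
    # blue averages the 4 diagonal neighbours.
    R = int(bayer[r, c])
    G = (int(bayer[r - 1, c]) + int(bayer[r + 1, c]) +
         int(bayer[r, c - 1]) + int(bayer[r, c + 1])) // 4
    B = (int(bayer[r - 1, c - 1]) + int(bayer[r - 1, c + 1]) +
         int(bayer[r + 1, c - 1]) + int(bayer[r + 1, c + 1])) // 4
    return R, G, B

bayer = np.uint8([[ 1,  2,  3,  4],
                  [ 5,  6,  7,  8],
                  [ 9, 10, 11, 12],
                  [13, 14, 15, 16]])
print(naive_bilinear_at_red_site(bayer, 2, 2))  # (11, 11, 11): red kept as-is, G/B interpolated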

Does anyone know what’s going on here?


I have no idea what’s going on, but I believe those are boundary effects. Take a larger input and you’ll see it does react to the values.

I can certainly see why you might not like its behavior at the edges of the picture. If you want to propose a different behavior, feel free to open an issue (someone might work on it) and/or work on a pull request.

some source: opencv/demosaicing.cpp at master · opencv/opencv · GitHub

>>> src = np.arange(256, dtype=np.uint8).reshape((16,16)); src
array([[  0,   1,   2,   3,   4,   5,   6,   7,   8,   9,  10,  11,  12,  13,  14,  15],
       [ 16,  17,  18,  19,  20,  21,  22,  23,  24,  25,  26,  27,  28,  29,  30,  31],
       [ 32,  33,  34,  35,  36,  37,  38,  39,  40,  41,  42,  43,  44,  45,  46,  47],
       [ 48,  49,  50,  51,  52,  53,  54,  55,  56,  57,  58,  59,  60,  61,  62,  63],
       [ 64,  65,  66,  67,  68,  69,  70,  71,  72,  73,  74,  75,  76,  77,  78,  79],
       [ 80,  81,  82,  83,  84,  85,  86,  87,  88,  89,  90,  91,  92,  93,  94,  95],
       [ 96,  97,  98,  99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111],
       [112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127],
       [128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143],
       [144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159],
       [160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175],
       [176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191],
       [192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207],
       [208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223],
       [224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239],
       [240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255]], dtype=uint8)
>>> cv.demosaicing(src, cv.COLOR_BayerBG2RGB)[:,:,1]
array([[ 17,  17,  18,  19,  20,  21,  22,  23,  24,  25,  26,  27,  28,  29,  30,  30],
       [ 17,  17,  18,  19,  20,  21,  22,  23,  24,  25,  26,  27,  28,  29,  30,  30],
       [ 33,  33,  34,  35,  36,  37,  38,  39,  40,  41,  42,  43,  44,  45,  46,  46],
       [ 49,  49,  50,  51,  52,  53,  54,  55,  56,  57,  58,  59,  60,  61,  62,  62],
       [ 65,  65,  66,  67,  68,  69,  70,  71,  72,  73,  74,  75,  76,  77,  78,  78],
       [ 81,  81,  82,  83,  84,  85,  86,  87,  88,  89,  90,  91,  92,  93,  94,  94],
       [ 97,  97,  98,  99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 110],
       [113, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 126],
       [129, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 142],
       [145, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 158],
       [161, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 174],
       [177, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 190],
       [193, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 206],
       [209, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 222],
       [225, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 238],
       [225, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 238]], dtype=uint8)
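
If you want to compare against a hand-rolled bilinear version without worrying about what happens at the frame, you could ignore the outermost ring of pixels. Something like this, a sketch of mask-and-filter bilinear interpolation for comparison only (the function name is mine, and it is not what OpenCV actually ships; filter2D has its own border handling, which is another reason to crop):

import numpy as np
import cv2 as cv

def naive_bilinear_bg(bayer):
    # Hand-rolled bilinear demosaic for the layout above:
    # R at (even, even), B at (odd, odd), G everywhere else.
    h, w = bayer.shape
    r_mask = np.zeros((h, w), np.float32); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w), np.float32); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    f = bayer.astype(np.float32)
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], np.float32) / 4  # R/B neighbours
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], np.float32) / 4  # G 4-neighbours

    R = cv.filter2D(f * r_mask, -1, k_rb)
    G = cv.filter2D(f * g_mask, -1, k_g)
    B = cv.filter2D(f * b_mask, -1, k_rb)
    return np.clip(np.dstack([R, G, B]).round(), 0, 255).astype(np.uint8)

src = np.arange(256, dtype=np.uint8).reshape(16, 16)
mine   = naive_bilinear_bg(src)
theirs = cv.demosaicing(src, cv.COLOR_BayerBG2RGB)

# Max absolute difference over the interior only, so neither side's
# border handling enters the comparison; I'd expect this to be small.
print(np.abs(mine[1:-1, 1:-1].astype(int) - theirs[1:-1, 1:-1].astype(int)).max())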

Ah, interesting.

I will say, I’ve done some reading on these algorithms, and nobody really mentions how they handle boundary conditions.

I want to try hand-rolling some of these algorithms to deepen my understanding of the material. I wanted to compare my results against what OpenCV produces and got confused by what I saw for sufficiently small images.
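
For the borders in my own implementation, my current plan is just to pad the mosaic in a way that preserves the Bayer phase, demosaic, and crop the padding back off, roughly like this (this is only my plan, not a claim about what OpenCV does at the edges):

import numpy as np
import cv2 as cv

src = np.arange(256, dtype=np.uint8).reshape(16, 16)

# BORDER_REFLECT_101 mirrors about the edge sample (index -1 maps to index +1),
# so the padded mosaic keeps the same Bayer phase; BORDER_REPLICATE would flip it.
padded = cv.copyMakeBorder(src, 2, 2, 2, 2, cv.BORDER_REFLECT_101)

# Demosaic the padded mosaic, then crop the 2-pixel border back off so every
# remaining pixel was interpolated from a full neighbourhood of samples.
rgb = cv.demosaicing(padded, cv.COLOR_BayerBG2RGB)[2:-2, 2:-2]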

Thanks for doing some digging into this!