Remove 6 bits from 16-bit depth pixels

I am a newbie to OpenCV, so I try to learn from forum posts and videos. But now I have reached a deadlock, so I decided to ask the community. I have a camera that streams raw 10-bit images via USB. I have created firmware that can send this stream as 16-bit YUY2 video (UVC does not support raw RGB). Now I have to remove the extra bits from every pixel to show the video without distortion. Is OpenCV the proper tool for the task?

It should be simple. You can directly create an image of an arbitrary type from a raw buffer. You receive a W*H*2-byte image in the buffer variable.

uchar *buffer;
int W,H;
network.receive(buffer,&W,&H); //or something like this
Mat depth_image(H,W,CV_16UC1,buffer); //create a 1 channel 16 bit Mat from the buffer

Voilà, that’s all!
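For the Python side, the same zero-copy wrap can be sketched with NumPy (a sketch under assumptions: the buffer contents and the `W`, `H` values below are placeholders, not the real stream):

```python
import numpy as np

# Placeholder for the received network payload: W*H*2 bytes (assumed values)
W, H = 4, 3
buffer = bytes(W * H * 2)

# Interpret the raw bytes as a single-channel 16-bit image (no copy is made)
depth_image = np.frombuffer(buffer, dtype=np.uint16).reshape(H, W)
```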

Thank you for your response,
Maybe I misunderstand something, but I receive 3-channel color images (distorted, but in color). If I cast them into a single channel, they will become grayscale, won't they? Also, can a 16-bit one-channel image show the true colors of a 10-bit image?

Best Regards,

I don’t really understand what the format of your image is. The above is the solution for a 16 bits/pixel YUV 4:2:2 (YUY2) encoding.
If you have a 16 bits/channel color image (some kind of HDR image) or something similar, just create a Mat with the actual format, then use cvtColor or split to convert it to grayscale, which will contain the depth data.

Mat received_image(H,W,CV_16UC3,buffer); //create a 3 channel 16 bit Mat from the buffer
Mat depth_image;
cvtColor(received_image, depth_image, COLOR_BGR2GRAY); //collapse to a single 16 bit channel

Thank you for your response, and sorry for my poor explanation; as I mentioned, I am pretty new to OpenCV and image formats. I found Python code which, after a minimal modification, resolved the image quality problem, but this code example reduces the channels, so I got a grayscale image, and if I try to recreate the colors I always get fake colors (the camera sensor's original coloring is 10-bit raw RGB with a BGGR Bayer pattern). The code is well commented, but if I try to avoid the channel reduction I always mess up the image.
Used Python code:

import cv2
import numpy as np

rows, cols = 400, 400  # Frame dimensions (rows*cols*2 = 320000 bytes per frame)

cap = cv2.VideoCapture(0)  # Open the UVC camera
# Fetch undecoded RAW video streams
cap.set(cv2.CAP_PROP_FORMAT, -1)  # Format of the Mat objects. Set value -1 to fetch undecoded RAW video streams (as Mat 8UC1)
while True:
    # Capture frame-by-frame; the frame arrives as a [1, 320000] uint8 array (h*w*2 bytes)
    ret, frame = cap.read()
    if not ret:
        break

    # Convert the frame from uint8 elements to signed int16 format.
    frame = frame.reshape(rows, cols*2)  # Reshape to 400x800 (two bytes per pixel)
    frame = frame.astype(np.uint16)  # Convert uint8 elements to uint16 elements
    frame = (frame[:, 0::2] << 8) + frame[:, 1::2]  # Combine the byte pairs (apply byte swap); the result is 400x400.
    frame = frame.view(np.int16)

    # Apply some processing for display (this part is just "cosmetics"):
    frame_roi = frame[:, 10:-10]  # Crop away the left and right columns that are not meant to be displayed.
    # frame_roi = cv2.medianBlur(frame_roi, 3)  # Clean the dead pixels (just for better viewing of the image).
    frame_roi = frame_roi << 6  # Shift the 10 data bits into the top of the 16-bit range
    normed = cv2.normalize(frame_roi, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8U)  # Convert to uint8 with normalization (just for viewing the image).
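The bit handling in the loop above comes down to where the 10 data bits sit inside a 16-bit container; a minimal sketch with synthetic values:

```python
import numpy as np

# Synthetic 10-bit samples stored in a 16-bit container (values 0..1023)
raw10 = np.array([[0, 512, 1023]], dtype=np.uint16)

# Option 1: shift left by 6 so the 10 data bits occupy the top of the
# 16-bit range (0..65472), suitable for 16-bit display paths
full16 = raw10 << 6

# Option 2: drop the 2 lowest data bits to get an 8-bit image (0..255)
as8bit = (raw10 >> 2).astype(np.uint8)
```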

I attached two images, one from before the Python script and one after:

Okay, now I think I understand. But I still don’t get why you are using RAW frames…
In this case, try to understand Bayer filters, RAW formats and color interpolation: wikipedia
Anyway, your conversion should be:

colorframe = cv2.cvtColor(frame, cv2.COLOR_BayerBG2BGR)
# send colorframe on the network...

Anyway, unless it’s absolutely needed, you can set the capture mode directly to RGB color.

Since the last post, I managed (with some help, hints and reading) to create a C++ code snippet which is able to show the image, and yes, COLOR_BayerGB2BGR creates an almost proper image, only with a yellowish overlay. As far as I know, that is a common problem with de-Bayered images. Now I am working on this.
Display COLOR_BayerGB2BGR screenshot, 28.03.2022

If the colors are off, you should check whether you are using the correct debayering algorithm. Try COLOR_BayerBG2BGR, COLOR_BayerGB2BGR, COLOR_BayerRG2BGR and COLOR_BayerGR2BGR until you get the correct colors.
I prefer to use colored pencils, a color checker board or a colorful image to check whether the colors are good.


The picture would be "good" if you subtracted roughly 0.5 and applied gamma compression for display (the debayered values are probably still linear). That foamy stuff in the corners seems to be pink bubble wrap, and the top right corner shows some (bluish) daylight.

Here’s the picture, with [128, 255] stretched to [0, 255] and given an exponent of 0.45 (the inverse of ~2.2):
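That stretch-and-gamma display fix can be sketched like this (the `bgr` sample values below are placeholder assumptions, not the actual capture):

```python
import numpy as np

# Hypothetical debayered 8-bit pixel values (placeholder data)
bgr = np.array([[[128, 192, 255]]], dtype=np.uint8)

# Stretch [128, 255] to [0, 1], clipping anything below 128
linear = np.clip((bgr.astype(np.float32) - 128.0) / 127.0, 0.0, 1.0)

# Gamma-compress for display with exponent 0.45 (inverse of ~2.2)
display = (linear ** 0.45 * 255.0).astype(np.uint8)
```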



Thank you for this hint. I tried them out and still think COLOR_BayerGB2BGR is the closest to reality. I just need to tune the sensor or the image.

Wow, so much information, thank you! I still need to learn the language of image processing, but thanks to these hints I have started to work on these parameters. I am eager to learn about this topic; I started a book about OpenCV, but now I feel like I also need one about digital photography or something similar. Share any recommendations if you have them.
Also I created a dummy colorbar image:


Since my last post I have worked with the camera and found out that my pixel processing method was wrong and I lost too much information. That was the cause of the pale, color-poor image. But with the reimplemented method I get pictures which look like they have too much gain, I guess. In low light the colors are great, but if I use any kind of light source the picture saturates immediately. I attached a picture of the phenomenon. Now I am working with the sensor gain ceiling and exposure ceiling parameters, but I am not sure about the reason for the picture error.
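One quick way to tell sensor clipping (too much gain/exposure) apart from a processing bug is to count how many pixels sit at the 10-bit maximum; a hypothetical sketch with synthetic data:

```python
import numpy as np

# Synthetic 10-bit frame with some clipped pixels (placeholder values)
raw10 = np.array([[100, 1023, 1023, 900]], dtype=np.uint16)

# Pixels stuck at 1023 are saturated; a large fraction suggests
# the gain/exposure ceilings need to come down.
clipped_fraction = np.mean(raw10 >= 1023)
print(f"clipped: {clipped_fraction:.0%}")  # 50% for this sample
```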