Hi, I am working on a setup that interfaces a QVGA sensor; it gives me pixel streaming in YUV2 format, since it is interfaced as UVC. But I am not getting any image. I am new to this. What can I dig into more to get the image? Please suggest~
do you use cv::VideoCapture?
@crackwitz Hi. I use cap = cv2.VideoCapture(0) to access the data stream through USB.
ok then everything, including pixel format conversion, should have been done for you, so that cap.read() gives you the usual BGR data.
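for reference, a minimal sketch of that default path (assuming a camera at index 0; nothing here is specific to your sensor):

    import cv2

    cap = cv2.VideoCapture(0)   # default settings: the backend converts to BGR for you
    ret, frame = cap.read()
    if ret:
        # frame is uint8 BGR with shape (height, width, 3); no manual YUV handling needed
        print(frame.shape, frame.dtype)
    cap.release()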
you could provide more detail. don’t assume that “not getting any image” is clear or debuggable.
@crackwitz Hi. I am getting this type of data. The format seems to be YUV2 before sending to USB, and this is what I am receiving at the Windows host application.
any suggestions please~
Regards.
please show your actual code
@berak @crackwitz
This is what I tried in order to get the raw grayscale data, but no image is visible. Please suggest~
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
cols, rows = 340, 240
cap.set(cv2.CAP_PROP_FRAME_WIDTH, cols)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, rows)
cap.set(cv2.CAP_PROP_FPS, 30)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter.fourcc('Y', '1', '6', ' '))
cap.set(cv2.CAP_PROP_CONVERT_RGB, 0)   # deliver raw bytes, no BGR conversion
cap.set(cv2.CAP_PROP_FORMAT, -1)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # raw byte stream -> one row of 2*cols bytes per image row
    frame = frame.reshape(rows, cols * 2)
    frame = frame.astype(np.uint16)
    # assemble 16-bit values, high byte first (big-endian order)
    frame = (frame[:, 0::2] << 8) + frame[:, 1::2]
    frame_roi = cv2.medianBlur(frame, 3)   # clean dead pixels
    frame_roi = frame_roi << 3             # shift data toward the high bits
    normed = cv2.normalize(frame_roi, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8U)
    cv2.imshow('video', normed)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
This gives me the result below: a very, very faint shadow of a hand (not clearly visible).
Thanks.
Best Regards.
that looks like one U16 value is interpreted as two U8 values beside each other.
I see that you assemble the high and low bytes by hand, with the shifting. that is possible and should have worked.
you could use .view() instead of .astype(). view does the equivalent of a “reinterpret_cast”.
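a small numpy illustration of the difference (synthetic bytes, not your frame):

    import numpy as np

    # two bytes that together encode one 16-bit value, high byte first
    raw = np.array([[0x12, 0x34]], dtype=np.uint8)

    # astype() widens each byte into its own uint16
    widened = raw.astype(np.uint16)   # values 0x0012 and 0x0034

    # view() reinterprets the same two bytes as a single uint16;
    # the result depends on byte order, so state it explicitly
    big    = raw.view('>u2')          # 0x1234
    little = raw.view('<u2')          # 0x3412

    print(widened.tolist(), hex(big[0, 0]), hex(little[0, 0]))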
could you use np.save on the raw frame from cap.read() and upload that .npy file somewhere? this forum doesn’t support attaching that file type.
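something like this, with a dummy zero array standing in for the raw frame (a real 340×240 YUV2 frame would be 1×163200 uint8):

    import numpy as np

    # dummy stand-in for the raw frame cap.read() returns with CAP_PROP_CONVERT_RGB = 0
    frame = np.zeros((1, 340 * 240 * 2), dtype=np.uint8)

    np.save('raw_frame.npy', frame)   # writes raw_frame.npy, about 160 kB

    loaded = np.load('raw_frame.npy')
    print(loaded.shape, loaded.nbytes)   # (1, 163200) 163200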
@crackwitz Thanks.
I tried and here is the raw frame Filebin | a4o0ksvxxhkyjyo6
.view() gives me this.
Please suggest~
file appears to be just a numpy file header, and then no actual data. I can tell it was trying to save uint8 of shape (1, 163200)
could you try that again? it should have written a file that’s around 160 kB in size
Hello @crackwitz . Thanks.
yes, I tried again and attached it here Filebin | a4o0ksvxxhkyjyo6
please check!
Regards.
okay I played around with the data and it’s “weird”.
I can see a 4-byte stride pattern. that agrees with your statement that this is supposed to be “YUV2”. so I don’t believe it’s exactly Y16 (grayscale). I’ve taken this data apart into four planes of 1-byte data at a 4-byte stride.
if this were Y16, at least one “plane” would show some interesting data or darkness (high bits), or wraparound artefacts (low bits). I don’t see any of that.
any byte-slice of stride 2 or 4 shows either noise, or fairly flat grayish data with some quantization/banding. I believe offsets 0 and 2 (of 4) are chroma: they show banding and they’re centered around a medium value. the luma offsets are all over the place, both of them.
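the plane separation can be done with strided slices; a sketch on synthetic data in place of the uploaded frame:

    import numpy as np

    cols, rows = 340, 240
    # synthetic 163200-byte stream standing in for the raw frame
    raw = (np.arange(rows * cols * 2) % 256).astype(np.uint8)

    # four 1-byte planes at a 4-byte stride; for packed YUY2 these would be Y0, U, Y1, V
    planes = [raw[k::4].reshape(rows, cols // 2) for k in range(4)]
    for k, p in enumerate(planes):
        print(k, p.shape)   # each plane is (240, 170)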
the frame data being 340x240… overscan. oh well.
I haven’t made the picture look like much yet. is it supposed to show anything? or just darkness/noise? what are the gain settings?
@crackwitz Thanks for your comment and for sharing. The .npy file is the result of the code (shared in the previous post) with ‘.astype’.
Host application flow

Data format
- The actual data frame I am getting is in YUV2 format (2 bytes per pixel).
- I convert it to Y16 (raw) using the OpenCV Python application (shared).
- With RGB conversion disabled, it gives me Y16 raw gray format.

Data order
- Reshape it to 680*240.
- Frame array type uint16.
- Byte shift (big-endian order) and reshape to 340*240.
- Applied medianBlur (to clean dead pixels).
- Bit shift (ROI) (in case it helps produce a clean image).
- Normalization to uint8.
It shows a grayscale pattern with a very, very faint shadow of the subject (e.g. a hand). I tried CLAHE too, to improve the contrast, but nothing much improved. I didn’t apply any gain settings for now.
Please suggest~
Best Regards.
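(for reference: if the stream really is packed YUY2, with byte order Y0 U Y1 V, a hypothetical alternative to the Y16 assembly above would be to take luma as every even byte — a sketch with a dummy buffer in place of the real frame:)

    import numpy as np

    cols, rows = 340, 240
    raw = np.zeros(rows * cols * 2, dtype=np.uint8)  # stand-in for the raw cap.read() bytes

    # in packed YUY2, even offsets are luma; odd offsets alternate U and V
    gray = raw[0::2].reshape(rows, cols)
    print(gray.shape)   # (240, 340)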
I was expecting to work with exactly the data that comes out of cap.read(), with no alterations at all. I won’t spend that time investigating again.
Hello @crackwitz
yes, the .npy data is the direct output of “ret, frame = cap.read()”.
then I don’t know what the data means. might be corrupted somehow. or you left the lens cover on the objective and the sensor raised the gain until it’s all just noise. impossible to tell.
@crackwitz Yes, there is a lens cover on top of the sensor. Could it affect the data output this much? Do you think I should test it without the lens cover on the sensor? Does that make sense?
Please suggest~