Green screen error when converting YUV to RGBA

    // convert YUV to RGBA
    if (bufferLength < height * width * 1.6f) {
        cv::cvtColor(mYuv, srcRgba, COLOR_YUV2BGRA_I420);
    } else {
        cv::cvtColor(mYuv, srcRgba, COLOR_YUV2BGRA_NV21);
    }

I have used the above method to convert, but on a few devices it shows a scattered green screen. I have attached a snapshot below.

Clearly that's the wrong conversion for the data you get. Please give all relevant information, such as which devices show this (and everything else you can think of that might be useful to know).

 //LOGI("Received image with width: %d height: %d", width, height);

    uint8_t *srcLumaPtr = reinterpret_cast<uint8_t *>(env->GetDirectBufferAddress(bufferY));

    if (srcLumaPtr == nullptr) {
        LOGE("blit NULL pointer ERROR");
        return;
    }

    cv::Mat mYuv(height + height / 2, width, CV_8UC1, srcLumaPtr);
    //LOGI("YUV : %d", mYuv.rows);

    ANativeWindow *win = ANativeWindow_fromSurface(env, surface);

    ANativeWindow_Buffer buf;

    int rotatedWidth = height; // 480
    int rotatedHeight = width; // 640

    ANativeWindow_setBuffersGeometry(win, width, height, 0);

    if (int32_t err = ANativeWindow_lock(win, &buf, NULL)) {
        LOGE("ANativeWindow_lock failed with error code %d\n", err);
        return;
    }

    //    LOGI("buf.stride: %d", buf.stride);

    uint8_t *dstPtr = reinterpret_cast<uint8_t *>(buf.bits);
    Mat dstRgba(height, buf.stride, CV_8UC4, dstPtr); // TextureView buffer, use stride as width
    Mat srcRgba(height, width, CV_8UC4);
    Mat rotatedRgba(rotatedHeight, rotatedWidth, CV_8UC4);

    // convert YUV to RGBA
    if (bufferLength < height * width * 1.6f) {
        // dcn must match the 4-channel *BGRA codes, so let it default instead of passing 3
        cv::cvtColor(mYuv, srcRgba, COLOR_YUV2BGRA_I420);
    } else {
        cv::cvtColor(mYuv, srcRgba, COLOR_YUV2BGRA_NV21);
    }

    // Rotate 90 degrees
    cv::rotate(srcRgba, rotatedRgba, cv::ROTATE_90_CLOCKWISE);

    assert(rotatedRgba.size().width == height);
    assert(rotatedRgba.size().height == width);

    //process image here
    //cv::circle(rotatedRgba, cv::Point(150,100), 5, Scalar(255,0,0,255), 2, LINE_8, 0);
    if (patternDetected) {
        if (isScanning) {
            //__android_log_print(ANDROID_LOG_VERBOSE, "patternDetector", "no image");
        }
    }

    cv::rotate(rotatedRgba, srcRgba, cv::ROTATE_90_COUNTERCLOCKWISE);

    // copy to TextureView surface
    uchar *dbuf;
    uchar *sbuf = srcRgba.data;
    int i;

    for (i = 0; i < srcRgba.rows; i++) {
        dbuf = dstPtr + i * buf.stride * 4;
        memcpy(dbuf, sbuf, srcRgba.cols * 4); //TODO: threw a SIGSEGV SEGV_ACCERR once
        sbuf += srcRgba.cols * 4;
    }

    ANativeWindow_unlockAndPost(win); // required after ANativeWindow_lock

    //LOGI("Draw image with width: %d height: %d", srcRgba.cols, srcRgba.rows);

In the log it shows the following:

Available Preview Size 176 x 144  Max =640x480  Aspect =4000x2250
Preview Size 640 x 480
onImageAvailable 640 x 480  imageBytes.length 981886  UV_rowStride 1024  Y_rowStride 1024  U_pixelStride 2  Y_pixelStride 1


I don't know what you are capturing, but clearly the pixel format and image size are messed up.

You have to go step by step and debug the application rather than trying to find the error in a large block of code (it is impossible for us to pinpoint what is wrong from here, but probably quite a few things are).

As it is much more complicated to debug on a phone with live camera data, I recommend gathering some raw data first: save the contents of srcLumaPtr to a file. Then write the conversion code on the computer using this data, and inspect pixel-level values, image sizes, and everything else.
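Dumping the buffer takes only a few lines. A minimal sketch (the path and size expression are assumptions, adjust them to your device and actual buffer length):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Write a raw camera buffer to a file for offline inspection.
bool dumpBuffer(const uint8_t *data, size_t size, const char *path) {
    FILE *f = fopen(path, "wb");
    if (!f) return false;
    size_t written = fwrite(data, 1, size, f);
    fclose(f);
    return written == size;
}

// In the JNI blit function one could call, e.g.:
//   dumpBuffer(srcLumaPtr, bufferLength, "/sdcard/frame.yuv");
// and then pull the file to the computer with `adb pull /sdcard/frame.yuv`.
```

Tools like ffplay or rawpixels.net can then render the dump with various assumed formats and strides until the picture looks right, which tells you the real layout.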

When it works well, you can reintegrate the code into the app.
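One likely fix to try once you can inspect the dump: strip the row-stride padding before conversion. The sketch below assumes a Y plane followed by one interleaved chroma plane (which the `U_pixelStride 2` in your log suggests); verify the actual plane layout against the saved dump before using anything like it:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Repack a stride-padded YUV_420_888-style buffer (Y plane, then one
// interleaved U/V plane) into the tightly packed layout cv::cvtColor
// expects. Assumed layout; confirm against a raw dump from the device.
std::vector<uint8_t> stripStride(const uint8_t *src, int width, int height,
                                 int yRowStride, int uvRowStride) {
    std::vector<uint8_t> dst(static_cast<size_t>(width) * height * 3 / 2);
    uint8_t *out = dst.data();
    // Y rows: keep the first `width` bytes of each row, drop the padding.
    for (int r = 0; r < height; ++r)
        memcpy(out + r * width, src + r * yRowStride, width);
    // Interleaved chroma rows: width bytes of payload per half-height row.
    const uint8_t *uv = src + static_cast<size_t>(yRowStride) * height;
    uint8_t *outUv = out + width * height;
    for (int r = 0; r < height / 2; ++r)
        memcpy(outUv + r * width, uv + r * uvRowStride, width);
    return dst;
}
```

With a tightly packed buffer, the NV21 conversion above should no longer read padding bytes as chroma, which is a common cause of exactly this kind of green scatter.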