Create OpenCV Matrix from rotated CVPixelBufferRef in iOS

I have a live video feed connected to an MLModel. In order to get the correct normalized coordinates from the MLModel’s detections, I set the AVCaptureConnection’s videoOrientation value to “Portrait”.

This works great and my detections are drawn correctly on the screen.

But I have a second stage where I must convert the CVPixelBufferRef into an OpenCV matrix.

The problem is that the code that normally handles this now produces a garbled image of interleaved pixels. After some investigation, I found that if I leave the output’s videoOrientation unchanged (so it stays at “Landscape Right”), converting the pixel buffer into an OpenCV matrix works as expected.

How can I modify the standard conversion method (below) so that it correctly reads the rotated CVPixelBufferRef into the OpenCV matrix?

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *baseaddress = CVPixelBufferGetBaseAddress(pixelBuffer);

mat = cv::Mat(videoRect.size.height, videoRect.size.width, CV_8UC4, baseaddress, 0);

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
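
For reference, this is a sketch of what I’m currently experimenting with for the kCVPixelFormatType_32BGRA case: instead of using videoRect, it takes the width, height and bytes-per-row from the pixel buffer itself, on the assumption that after rotation the dimensions are swapped and CoreVideo may pad each row (I haven’t confirmed this is the right approach):

// Sketch only — assumes the buffer is kCVPixelFormatType_32BGRA.
// Dimensions and stride come from the pixel buffer itself, not from videoRect.
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

void  *baseaddress = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t width       = CVPixelBufferGetWidth(pixelBuffer);
size_t height      = CVPixelBufferGetHeight(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer); // includes any row padding

// Pass bytesPerRow as the step so OpenCV skips the padding at the end of each row.
cv::Mat wrapped((int)height, (int)width, CV_8UC4, baseaddress, bytesPerRow);
mat = wrapped.clone(); // copy out before unlocking, since `wrapped` only borrows the memory

CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);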

I’ve tried swapping the width and height, and I’ve tried the only two pixel formats that seem to work with the MLModel (kCVPixelFormatType_32BGRA and kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange).
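
For the bi-planar kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange format, the equivalent I’ve been trying wraps just the luma plane (plane 0) as a single-channel matrix, again using the plane’s own stride — also only a sketch, not a confirmed fix:

// Sketch — assumes a bi-planar 4:2:0 YpCbCr buffer; plane 0 is the full-resolution luma plane.
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

void  *lumaBase   = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
size_t lumaWidth  = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);
size_t lumaHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
size_t lumaStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);

// Single-channel (grayscale) Mat over the luma plane; stride again comes from the buffer.
cv::Mat gray((int)lumaHeight, (int)lumaWidth, CV_8UC1, lumaBase, lumaStride);
cv::Mat grayCopy = gray.clone(); // detach from the CVPixelBuffer before unlocking

CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);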

Any help would be greatly appreciated! Attached is an image of the currently produced cv::Mat (small, corrupted image near top-left). If I don’t set the output orientation, the produced image is correct (although rotated).

Solved, crosspost: