Hi there,
Has anyone managed to set the capture frame width and height for a VideoCapture object on iOS via C++?
The following works great on Windows and macOS:
cv::VideoCapture vidCap {0};
vidCap.set(cv::CAP_PROP_FRAME_WIDTH, 1920);
vidCap.set(cv::CAP_PROP_FRAME_HEIGHT, 1080);
However, on iOS (iPhone 6 and Xs), although the calls above cause no error, the following:
vidCap.get(cv::CAP_PROP_FRAME_WIDTH)
vidCap.get(cv::CAP_PROP_FRAME_HEIGHT)
still return 480 and 360.
The technique of setting the width and height to a massive number like 50000 to coax the camera into its highest available resolution also does not work: get() still reports 480 x 360.
I found this interesting, too:
cv::Mat mat;
vidCap >> mat;
std::cout << mat.cols << " x " << mat.rows;
Here mat.cols x mat.rows comes back as “360 x 480”, suggesting a 90-degree rotation, which would be right for a photo taken in portrait mode on a phone. I’m just a bit surprised that the capture data was reoriented when pushed into a cv::Mat.
I’ve been googling around for days. I even started poking at Objective-C and Swift to try to work with defaultAVCaptureSessionPreset. No luck with any of that. I’ve also searched for that term in the build directory of my iOS OpenCV build and replaced values in a few files to see if anything changed. Again, sadly not.
Any advice, suggestions or even sympathy would be greatly appreciated!
With many thanks, Jeff
Tested:
- OpenCV 4.5.1 (tagged release) and commit 9d89edff2f, to check whether this had been fixed since the last tagged release
- Xcode 12.3
- iOS 14.0.1 (iPhone Xs) and 12.4.8 (iPhone 6)
- tried several other resolution options besides 1920x1080