Hey there,
I’m brand new to OpenCV, and I’ve created the following application simply to get acquainted with the library:
#include <chrono>

#include "opencv2/opencv.hpp"
#include "spdlog/spdlog.h"

int main()
{
    // Initialize the logger
    spdlog::set_pattern("[%H:%M:%S:%e] [%^%L%$] %v");
    spdlog::info("OpenCV version: {0}", CV_VERSION);
    spdlog::info("Starting video capture...");

    // Initialize stopwatch
    auto startTime = std::chrono::steady_clock::now();
    auto endTime = std::chrono::steady_clock::now();

    // Video capture parameters
    int deviceId = 0;          // 0 = open default camera
    int apiId = cv::CAP_MSMF;  // 0 = autodetect default API

    // Initialize video capture device
    cv::VideoCapture cap(deviceId, apiId);
    endTime = std::chrono::steady_clock::now();
    spdlog::info("Initialization time: {0} ms", std::chrono::duration_cast<std::chrono::milliseconds>(endTime - startTime).count());

    startTime = endTime;
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 1920);
    endTime = std::chrono::steady_clock::now();
    spdlog::info("Set frame width time: {0} ms", std::chrono::duration_cast<std::chrono::milliseconds>(endTime - startTime).count());

    startTime = endTime;
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 1080);
    endTime = std::chrono::steady_clock::now();
    spdlog::info("Set frame height time: {0} ms", std::chrono::duration_cast<std::chrono::milliseconds>(endTime - startTime).count());

    startTime = endTime;
    cap.set(cv::CAP_PROP_FPS, 60);
    endTime = std::chrono::steady_clock::now();
    spdlog::info("Set FPS time: {0} ms", std::chrono::duration_cast<std::chrono::milliseconds>(endTime - startTime).count());

    // Check if the video capture device was opened successfully
    if (!cap.isOpened())
    {
        spdlog::error("Unable to open video capture device");
        return -1;
    }
    spdlog::info("Video capture device opened successfully");
}
However, every time I access the cv::VideoCapture variable, the application hangs for ~40 s, as seen below:
[21:48:39:879] [I] OpenCV version: 4.10.0
[21:48:39:880] [I] Starting video capture...
[21:49:20:281] [I] Initialization time: 40400 ms
[21:49:59:294] [I] Set frame width time: 39013 ms
[21:50:38:267] [I] Set frame height time: 38973 ms
[21:51:17:182] [I] Set FPS time: 38915 ms
[21:51:17:182] [I] Video capture device opened successfully
Am I doing something wrong or is this the expected behavior?
try DSHOW instead of MSMF
and don’t forget to set the FOURCC to MJPG, because USB 2 doesn’t do Full HD without compression
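something like this (just a rough sketch, reusing the deviceId from your snippet — the device index and the exact order of calls are assumptions):

cv::VideoCapture cap(deviceId, cv::CAP_DSHOW);  // DirectShow backend instead of MSMF

// ask for MJPG so the USB link carries compressed frames instead of raw 1080p
cap.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('M', 'J', 'P', 'G'));
cap.set(cv::CAP_PROP_FRAME_WIDTH, 1920);
cap.set(cv::CAP_PROP_FRAME_HEIGHT, 1080);
cap.set(cv::CAP_PROP_FPS, 60);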
Hey there,
I tried DSHOW; initialization is now near-instantaneous, however the FPS now seems capped at 10?
The capture card I am using is this.
It has a USB 3.0 port and supports 1080p @ 60 FPS (which I confirmed by using OBS Studio to read the stream).
In my use case, I need that 60 FPS because I’ll be doing real-time analysis of the input. I also won’t be saving the input stream to a file, so I am not sure I need to encode anything (I could be wrong).
Is it possible to increase DSHOW’s FPS cap?
reorder the set calls such that the FOURCC call is either first or last. one of the orders works, the other doesn’t.
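for example, FOURCC first (a sketch — whether first or last is the order that works on your card is something you’ll have to try):

cv::VideoCapture cap(0, cv::CAP_DSHOW);

// FOURCC before resolution and frame rate; if that doesn't help, move it after them
cap.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('M', 'J', 'P', 'G'));
cap.set(cv::CAP_PROP_FRAME_WIDTH, 1920);
cap.set(cv::CAP_PROP_FRAME_HEIGHT, 1080);
cap.set(cv::CAP_PROP_FPS, 60);

double fps = cap.get(cv::CAP_PROP_FPS);  // check what the backend actually negotiated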
some parts of OpenCV are for prototyping, such as the Video IO and GUI stuff. when you need top performance, expect to swap out a few things in place of OpenCV’s convenience facilities. in the case of VideoCapture, you might have to use V4L2 yourself. might. IDK if you will. OpenCV might just need to be used a certain way. I can’t tell, I don’t do your debugging and coding, you do.
VideoCapture always converts the source format, which may be some YUV, to RGB (BGR). that may cost time. you can try setting CAP_PROP_CONVERT_RGB
to false/0. then you’ll get the data without interpretation. IF (!!!) that speeds it up, great, now you know. if it doesn’t, then don’t do that.
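roughly like this (a sketch; with conversion disabled, the Mat you read back is raw backend data, not BGR, so you have to interpret it yourself):

cap.set(cv::CAP_PROP_CONVERT_RGB, 0);  // skip the BGR conversion, hand back raw frame data

cv::Mat raw;
cap.read(raw);  // packed YUY2 / NV12 / MJPG bytes, depending on backend and source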
ultimately, I can’t know what’s going on with your system, or your code. I’m guessing. none of this should be construed as “OpenCV doesn’t work”.
Well yes, all this goes without saying; I thought we were just discussing. If you are under the impression that I am trying to get you to debug my code, that’s definitely not my objective here. How would I learn otherwise, haha.
Also, I’m not implying that OpenCV doesn’t work; I thought my implementation of it was dysfunctional.
In the meantime, I went ahead and made a direct MSMF implementation (unrelated to OpenCV) to capture the input stream. From there, my idea was to analyze the frames with OpenCV.
Reading what you just stated, I’m under the impression that I’m heading in the right direction, considering that Video IO is for prototyping (which I wasn’t aware of).
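For context, the hand-off I have in mind looks roughly like this (just a sketch on my side — the callback signature and the NV12 frame format are assumptions, not my actual MSMF code):

#include <cstdint>
#include "opencv2/opencv.hpp"

// hypothetical callback invoked from the MSMF sample grabber;
// 'data' points at a packed NV12 frame of width x height pixels
void analyzeFrame(const uint8_t* data, int width, int height, size_t stride)
{
    // wrap the raw buffer without copying, then convert for analysis
    cv::Mat nv12(height * 3 / 2, width, CV_8UC1, const_cast<uint8_t*>(data), stride);
    cv::Mat bgr;
    cv::cvtColor(nv12, bgr, cv::COLOR_YUV2BGR_NV12);

    // ... real-time analysis on 'bgr' goes here ...
}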
Hopefully everything works out, otherwise I’ll come back to my post.
In any case, thank you for your time, much appreciated.
yeah sorry I have had these very same discussions countless times. half the time, they go a certain way. I’m trying to head off certain lines of inquiry. perhaps that was hasty of me.
OpenCV is dysfunctional in some parts, especially the older and unloved/uncool parts. let’s say handling OpenCV is a negotiation, avoiding its potholes.
if you notice anything being (provably) broken/defective/hobbled in OpenCV, you can always start an issue on its github. maybe someone reported it already. maybe not. maybe someone would love to work on that, if they had known it was something to work on.
If you’re able to implement your own MSMF, that’s definitely immensely valuable to you and your application. you’ll know exactly how complex the task actually is, what all is involved, how to deal with it. and you have greater control over all of it, compared to having to dig around in OpenCV source and recompiling the library.
VideoCapture abstracts over a lot of it, hiding both the hassle and the potential performance of the media APIs it’s using. it strives to be fast and efficient but… in the real world, that’s not necessarily the case. OpenCV has performance tests. IDK if those are checked in continuous integration, or whether there are tests that check VideoCapture performance on high data rate video sources. actual performance is usually only investigated and proven when someone comes along and needs it themselves, and injects some energy (discussion, issues, patches) into OpenCV.