I have tried OpenCV cv::VideoCapture on Ubuntu 20.04 and found that it uses GStreamer.
At least I get error messages, and capturing does not work when two USB cameras are attached (each works fine separately, up to some FPS issues, but that might be a different problem).
So first I wanted to figure out whether my GStreamer setup is to blame, and whether it is possible to avoid GStreamer entirely and use the old method (accessing V4L directly, I guess) with cv::VideoCapture.
on linux, either gstreamer or ffmpeg can be used to read video files (and neither of them is mandatory)
(you can also use gstreamer with webcams, but normally it should use the V4L2 backend)
so, imho, you should not “roll back” anything.
you can specify which backend to use, like VideoCapture(0, CAP_V4L)
you can set the OPENCV_VIDEOIO_DEBUG environment variable to see which backends are checked / selected
“not working” means ? it’s usually a bandwidth problem. a typical USB hub can just about handle a single camera, so you might need to decrease the resolution to run more than one camera on the same hub (or use a different hub, if you have one)
your FPS issues can stem from the USB hub/controller not being fast enough. that’s particularly the case for USB 2. cameras produce a lot of data. if you attach multiple cameras to the same hub/controller, the first gets as much reserved bandwidth as it asks for, while the second has to fall back on some reduced/compressed mode. plug them into USB ports that are on different controllers.
Thanks for the quick answers. Switching to the V4L backend solves the warnings issue (I should have read the docs first :)). I still have an FPS issue with the slower camera (5 real FPS vs. the 30 reported by cv::VideoCapture::get), but I guess that is a separate problem.
I used USB 3.0 ports. Actually, my measurements were wrong. The slower camera (a “typical” 640x480 30 FPS webcam) was quite slow to deliver its very first frame, whereas the faster one (260 FPS at roughly the same resolution) initialized more quickly. On top of that, I only observed 50 frames, so it looked as if the two cameras used the bus sequentially, first the faster camera and then the slower one. I should have made longer observations. However, I still have an issue with the slower camera reporting 30 FPS through the OpenCV API but delivering about 5 FPS in reality.
post code. as always, cameras produce frames on their own time. if you don’t read them at least as quickly as they’re produced, they WILL queue up and cause all kinds of problems. you absolutely MUST read them in a timely manner. reading from a camera MUST NOT involve any delays.
#include <opencv2/videoio.hpp>
#include <chrono>
#include <vector>
#include <string>
#include <thread>
#include <functional>
#include <unistd.h>
#include <assert.h>
#include <stdio.h>
#include "log.hpp"

namespace chr = std::chrono;
using sclk = std::chrono::steady_clock;

struct Capture
{
    const int MaxCounter = 300;
    const double cols = 640;
    const double rows = 480;
    const double Fps = 30.0;
    std::string devFname;
    int counter{};

    void runCapture()
    {
        // sleep is here to give main thread more time for capture.set(...)
        sleep(1);
        LOG("Have set ", (int)capture.get(cv::CAP_PROP_FPS), "fps ",
            (int)capture.get(cv::CAP_PROP_FRAME_WIDTH), "x",
            (int)capture.get(cv::CAP_PROP_FRAME_HEIGHT),
            " for camera ", devFname);
        sclk::time_point begin = sclk::now();
        while (counter++ < MaxCounter)
        {
            if (!capture.read(mat))
            {
                LOG(devFname, ": Cannot read");
                exit(1);
            }
        }
        sclk::time_point end = sclk::now();
        const auto fps = 1000.0 * MaxCounter
            / chr::duration_cast<chr::milliseconds>(end - begin).count();
        LOG("measured frame rate: ", fps);
    }

    Capture(std::string devFname_)
        : devFname(devFname_)
        , capture(devFname, cv::CAP_V4L2)
        , thread(std::bind(&Capture::runCapture, this))
    {
        capture.set(cv::CAP_PROP_FRAME_WIDTH, cols);
        capture.set(cv::CAP_PROP_FRAME_HEIGHT, rows);
        capture.set(cv::CAP_PROP_FPS, Fps);
    }

    ~Capture()
    {
        if (capture.isOpened())
        {
            capture.release();
        }
    }

    void join()
    {
        thread.join();
    }

    cv::VideoCapture capture;
    cv::Mat mat;
    std::thread thread;
};

int main(int argc, char* argv[])
{
    assert(argc > 1);
    Capture n1(argv[1]);
    n1.join();
}
I create one Capture instance per camera (only one is created here, to test a single camera). Running this code with 300 frames now shows 9.5 FPS, which is still nowhere near 30. LOG is a function that acquires a scoped lock on a global mutex, but I got similar results when running without it.
So I guess this exposure_auto should be set to Aperture Priority Mode.
However, cv::VideoCapture::get for cv::CAP_PROP_EXPOSURE and cv::CAP_PROP_EXPOSUREPROGRAM returns -1.
OpenCV may not expose all possible properties. it should be possible to use v4l2-ctl to set those properties before or while OpenCV accesses the camera.
try disabling all auto modes (set to manual) and see if that does anything.
also point your camera at a brightly lit area or shine a flashlight into it. then cover it (darkness). if that makes a difference in frame rates, the camera probably reduces the frame rate to get a better picture.