I have tried OpenCV's cv::VideoCapture on Ubuntu 20.04 and found out that it uses GStreamer.
At least, I get error messages and capturing does not work when two USB cameras are attached (each works separately, up to some FPS issues, but that might be a different issue).
So first I wanted to figure out whether the GStreamer setup is to blame for my issue, and whether it is possible to not use GStreamer at all and use the old method (accessing V4L directly, I guess) with cv::VideoCapture?
on linux, either gstreamer or ffmpeg can be used to read video files (and neither of them is mandatory)
(you can also use gstreamer with webcams, but normally it should use the V4L2 backend)
so, imho, you should not “roll back” anything.
you can specify which backend to use (the apiPreference argument of cv::VideoCapture)
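for instance (a sketch; the "/dev/video0" path and index 0 are placeholders for your own devices):

```cpp
#include <opencv2/videoio.hpp>

// the second constructor argument is the apiPreference
cv::VideoCapture capV4L2("/dev/video0", cv::CAP_V4L2);  // force V4L2
cv::VideoCapture capGst(0, cv::CAP_GSTREAMER);          // force GStreamer
cv::VideoCapture capAny(0, cv::CAP_ANY);                // default: let OpenCV pick
```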
you can set the OPENCV_VIDEOIO_DEBUG environment variable (e.g. OPENCV_VIDEOIO_DEBUG=1) to see which backends are checked / selected
“not working” means? it’s usually a bandwidth problem. a typical USB hub can barely cope with a single camera, so you might need to decrease the resolution when running more than one camera on the same hub (or choose a different hub, if you have one)
your FPS issues can stem from the USB hub/controller not being fast enough. that’s particularly the case for USB 2. cameras produce a lot of data. if you attach multiple to the same USB hub/controller, the first gets as much reserved data rate as it wants, while the second gets to fall back on some reduced/compressed mode. plug them into USB ports that are on different controllers.
Thanks for the quick answers. Switching to the V4L backend solves the warnings issue (should have read the docs first :)). I still have an issue with FPS on the slower camera (5 real FPS vs. the 30 reported by the cv::VideoCapture::get call), but that is another issue, I guess.
I used USB 3.0 ports. Actually, I took wrong measurements. The slower camera (a “typical” 640x480 30 FPS webcam) was quite slow to deliver the very first frame, whereas the faster one (260 FPS at approximately the same resolution) initialized more quickly. Besides, the observation was only 50 frames, so it looked as if the two cameras used the bus sequentially: first the faster camera, then the slower one. I should have just made longer observations. However, I still have the issue of the slower camera reporting 30 FPS via the CV API but delivering about 5 FPS in reality.
post code. as always, cameras produce frames on their own time. if you don’t read them at least as quickly as they’re produced, they WILL queue up and cause all kinds of problems. you absolutely MUST read them in a timely manner. reading from a camera MUST NOT involve any delays.
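A common shape for that (just a sketch with made-up names, not the code from this thread): a dedicated thread that does nothing but read, always overwriting a “latest frame” slot, so reads never wait on processing:

```cpp
#include <opencv2/videoio.hpp>
#include <atomic>
#include <mutex>

// Capture thread: only reads; any processing/display happens elsewhere.
void captureLoop(cv::VideoCapture& cap, std::atomic<bool>& running,
                 cv::Mat& latest, std::mutex& m)
{
    cv::Mat frame;
    // cap.read() blocks until the next frame arrives -- no sleeps in this loop
    while (running && cap.read(frame))
    {
        std::lock_guard<std::mutex> lock(m);
        frame.copyTo(latest);   // overwrite: consumers always see the newest frame
    }
}
```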
namespace chr = std::chrono;
using sclk = std::chrono::steady_clock;

const int MaxCounter = 300;
const double cols = 640;
const double rows = 480;
const double Fps = 30.0;

// inside Capture::runCapture(); the sleep is here to give the main thread more time for capture.set(...)
LOG("Have set ", (int)capture.get(cv::CAP_PROP_FPS), "fps ", (int)capture.get(cv::CAP_PROP_FRAME_WIDTH), "x", (int)capture.get(cv::CAP_PROP_FRAME_HEIGHT), " for camera ", devFname);
sclk::time_point begin = sclk::now();
while (counter++ < MaxCounter)
    if (!capture.read(mat))
        LOG(devFname, ": Cannot read");
sclk::time_point end = sclk::now();
const auto fps = 1000.0 * MaxCounter / chr::duration_cast<chr::milliseconds>(end - begin).count();
LOG("measured frame rate: ", fps);

// from the Capture constructor's initializer list:
, capture(devFname, cv::CAP_V4L2)
, thread(std::bind(&Capture::runCapture, this))

int main(int argc, char* argv[])
I create one Capture instance per camera (only one is created here, to test a single camera). Running this code now with 300 frames actually shows 9.5 FPS, which is still not even close to 30. LOG is a function that acquires a scoped lock on a global mutex, but I got similar results when running without it.
curious. what camera is that specifically? does it have any auto-features that could increase exposure time (reduce frame rate)?
what happens when you try to capture from that camera using ffmpeg, gstreamer, OBS Studio, …?
The camera is “Trust SpotLight Webcam 1.3 Megapixel USB 2.0”.
ffmpeg -y -f v4l2 -r 25 -i /dev/video0 out.mp4
it indeed recorded 25 fps video.
Surprisingly, when I now run a capture using the posted code (but for 1000 frames), I get a different average each time, ranging from 7 to 28 FPS per trial.
As for the controls, v4l2-ctl shows the following available streaming parameters and controls:
Streaming Parameters Video Capture:
        Capabilities     : timeperframe
        Frames per second: 30.000 (30/1)
        Read buffers     : 0

brightness 0x00980900 (int)             : min=-64 max=64 step=1 default=0 value=0
contrast 0x00980901 (int)               : min=0 max=64 step=1 default=32 value=32
saturation 0x00980902 (int)             : min=1 max=128 step=1 default=64 value=64
hue 0x00980903 (int)                    : min=-40 max=40 step=1 default=0 value=0
gamma 0x00980910 (int)                  : min=72 max=500 step=1 default=100 value=100
gain 0x00980913 (int)                   : min=0 max=100 step=1 default=0 value=0
power_line_frequency 0x00980918 (menu)  : min=0 max=2 default=1 value=1
        1: 50 Hz
        2: 60 Hz
sharpness 0x0098091b (int)              : min=0 max=6 step=1 default=2 value=2
backlight_compensation 0x0098091c (int) : min=0 max=2 step=1 default=1 value=1
exposure_auto 0x009a0901 (menu)         : min=0 max=3 default=3 value=3
        1: Manual Mode
        3: Aperture Priority Mode
So I guess this exposure_auto should be set to Aperture Priority Mode.
However, cv::VideoCapture::get for cv::CAP_PROP_EXPOSURE and cv::CAP_PROP_EXPOSUREPROGRAM returns “-1”.
OpenCV may not expose all possible properties. it should be possible to use v4l2-ctl to set those properties before or while OpenCV accesses the camera.
try disabling all auto modes (set to manual) and see if that does anything.
also point your camera at a brightly lit area or shine a flashlight into it. then cover it (darkness). if that makes a difference in frame rates, the camera probably reduces the frame rate to get a better picture.
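If experimenting with v4l2-ctl confirms that exposure is the culprit, the same thing can be attempted through OpenCV; be aware that the value convention for CAP_PROP_AUTO_EXPOSURE on the V4L2 backend has changed between OpenCV versions (recent builds pass the V4L2 menu values through, older 3.x builds expected normalized 0.25/0.75), so treat this as a sketch to experiment with:

```cpp
#include <opencv2/videoio.hpp>

int main()
{
    cv::VideoCapture cap("/dev/video0", cv::CAP_V4L2);
    // 1 = Manual Mode, 3 = Aperture Priority Mode (the v4l2-ctl menu values above);
    // older OpenCV 3.x builds expected 0.25 / 0.75 instead
    cap.set(cv::CAP_PROP_AUTO_EXPOSURE, 1);
    cap.set(cv::CAP_PROP_EXPOSURE, 100);  // units are driver-specific; experiment
}
```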
Much obliged for the answers.