OpenCV - GStreamer - DeepStream on multiple cameras

  • GPU : NVIDIA GeForce RTX 3060
  • Driver : 525.105.17
  • TensorRT : 8.4.1.5
  • Cuda : 11.7
  • DeepStream : 6.1.1

Hello,

I want to use YOLOv5 or YOLOv8 on multiple cameras (about 20 or more). When I use cv2.VideoCapture() I can’t get the latest frame; frames come from the buffer. To solve this I used a GStreamer pipeline with appsink drop=true, and that works. But with this many cameras the CPU usage is too high. So I decided to use DeepStream’s nvvideoconvert plugin to move the work to the GPU. But then the GPU usage is too high, and the frames also come out corrupted.

My question is: how can I reduce CPU and GPU usage and grab frames in the most efficient way?

The pipeline that I use, and that works for the cameras, is below. When I try different decode plugins, it doesn’t work for all of them.

gst-launch-1.0 -e rtspsrc location={uri} latency=100 ! decodebin ! nvvideoconvert ! appsink drop=true
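
For context, this is roughly how I consume such a pipeline from Python with cv2.VideoCapture and the GStreamer backend (a sketch; the URI is a placeholder, and I force BGR caps at the appsink because that is what OpenCV expects):

import cv2

# Sketch: one RTSP camera read through OpenCV's GStreamer backend.
# The uri, latency and appsink settings here are illustrative.
uri = "rtsp://user:pass@192.168.1.10/stream1"
pipeline = (
    f"rtspsrc location={uri} latency=100 ! "
    "decodebin ! nvvideoconvert ! videoconvert ! "
    "video/x-raw,format=BGR ! "
    "appsink drop=true max-buffers=1 sync=false"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()   # always the newest frame, older ones are dropped
    if not ok:
        break
    # ... run YOLO on `frame` here ...
cap.release()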

The one below uses less GPU memory but doesn’t work for all cameras:

gst-launch-1.0 rtspsrc location={uri} latency=100 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! nvvideoconvert output-buffers=15 ! videoscale ! video/x-raw,width=1024,height=720,format=BGR ! appsink drop=true

the dropping merely drops already-decoded pictures when you can’t read fast enough from the appsink (e.g. because your AI processing is slower than the decoding), so it doesn’t save any decoding work.

control the decoding. don’t decode everything.

decode only intra-coded frames, no P/B-frames. that’s going to be a fixed amount of reduction but it’s an easy reduction to choose.
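
for example, assuming your GStreamer (1.8+) has the identity element’s drop-buffer-flags property, you can drop the delta frames right after the parser so only keyframes ever reach the decoder (a sketch, not tested on your setup):

gst-launch-1.0 rtspsrc location={uri} latency=100 ! rtph264depay ! h264parse ! identity drop-buffer-flags=delta-unit ! avdec_h264 ! videoconvert ! video/x-raw,width=1024,height=720,format=BGR ! appsink drop=true

how much that saves depends on the GOP length your cameras are configured with; some decoders also expose a skip-frame property, which you can check with gst-inspect-1.0.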

if neither your GPU nor your CPU can handle the load, perhaps distribute it across both? if that still doesn’t work, you’re gonna need more processing power.
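
a rough sketch of such a split, assuming the DeepStream hardware decoder on your dGPU is nvv4l2decoder and that you keep reading through cv2.VideoCapture with CAP_GSTREAMER; the element names and the 50/50 ratio are placeholders to tune against your actual headroom:

import cv2

# Hypothetical split: some cameras decode on the GPU, the rest on the CPU.
GPU_TMPL = ("rtspsrc location={uri} latency=100 ! rtph264depay ! h264parse ! "
            "nvv4l2decoder ! nvvideoconvert ! videoconvert ! "
            "video/x-raw,format=BGR ! appsink drop=true max-buffers=1 sync=false")
CPU_TMPL = ("rtspsrc location={uri} latency=100 ! rtph264depay ! h264parse ! "
            "avdec_h264 ! videoconvert ! "
            "video/x-raw,format=BGR ! appsink drop=true max-buffers=1 sync=false")

def open_cameras(uris, gpu_share=0.5):
    n_gpu = int(len(uris) * gpu_share)          # how many cameras the GPU decodes
    caps = []
    for i, uri in enumerate(uris):
        tmpl = GPU_TMPL if i < n_gpu else CPU_TMPL
        caps.append(cv2.VideoCapture(tmpl.format(uri=uri), cv2.CAP_GSTREAMER))
    return caps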

That’s surprising, I would expect this to only use the dedicated hardware decode chip. Are you sure the GPU usage isn’t just Yolo running at the increased frame rate?

I would try cudacodec::VideoReader with allowFrameDrop==true if you can’t process the frames fast enough and you want a quick solution. Then if that works you may want to design your own pipeline as suggested by @crackwitz so you can decide which frames to drop.
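
A minimal sketch of what that could look like from Python, assuming an opencv-contrib build with CUDA and the NVIDIA Video Codec SDK (the exact parameter names may differ between OpenCV versions):

import cv2

# Sketch only: needs the cudacodec module (OpenCV built with CUDA + NVCUVID).
# allowFrameDrop lets the reader drop frames when you fall behind, similar to
# appsink drop=true, but decoding stays on the GPU.
uri = "rtsp://user:pass@192.168.1.10/stream1"   # placeholder

params = cv2.cudacodec.VideoReaderInitParams()
params.allowFrameDrop = True
reader = cv2.cudacodec.createVideoReader(uri, params=params)

while True:
    ok, gpu_frame = reader.nextFrame()          # cv2.cuda.GpuMat, stays on the GPU
    if not ok:
        break
    frame = gpu_frame.download()                # only needed if YOLO wants a CPU array
    # ... run inference on `frame`, or feed gpu_frame to a CUDA-aware model ...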