- GPU : NVIDIA GeForce RTX 3060
- Driver : 525.105.17
- TensorRT : 8.4.1.5
- CUDA : 11.7
- DeepStream : 6.1.1
Hello,
I want to run YOLOv5 or YOLOv8 on multiple cameras (about 20 or more). When I use cv2.VideoCapture() I can’t get the latest frame; frames come out of the buffer with a delay. To solve this I used a GStreamer pipeline with appsink drop=true, and that works. But with this many cameras the CPU usage is too high. So I decided to use DeepStream’s nvvideoconvert plugin to move the work to the GPU, but then the GPU usage is too high and the frames also come out corrupted.
My question is: how can I reduce CPU and GPU usage and grab frames in the most efficient way?
The pipeline I use, which works for all cameras, is below. When I try different decode plugins, it does not work for all of them.
gst-launch-1.0 -e rtspsrc location={uri} latency=100 ! decodebin ! nvvideoconvert ! appsink drop=true
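For reference, this is roughly how I open each camera from Python (a sketch; the BGRx/videoconvert caps and max-buffers=1 are my additions so that OpenCV receives BGR frames):

```python
def build_pipeline(uri: str, latency: int = 100) -> str:
    """Build the GStreamer pipeline string for cv2.VideoCapture.

    appsink drop=true discards queued frames so read() always returns
    the most recent frame instead of a stale buffered one.
    """
    return (
        f"rtspsrc location={uri} latency={latency} ! "
        "decodebin ! nvvideoconvert ! "
        "video/x-raw,format=BGRx ! videoconvert ! "
        "appsink drop=true max-buffers=1"
    )

# Usage (requires an OpenCV build compiled with GStreamer support):
# cap = cv2.VideoCapture(build_pipeline("rtsp://<ip>/stream"), cv2.CAP_GSTREAMER)
```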
The one below uses less GPU memory, but it does not work for all cameras:
gst-launch-1.0 rtspsrc location={uri} latency=100 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! nvvideoconvert output-buffers=15 ! videoscale ! video/x-raw,width=1024,height=720,format=BGR ! appsink drop=true
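One of the decode variants I tried offloads decoding to the hardware decoder instead of avdec_h264 (a sketch; nvv4l2decoder is DeepStream's NVDEC-backed H.264/H.265 decoder, and the caps after nvvideoconvert are my assumption):

```python
def build_hw_pipeline(uri: str, latency: int = 100,
                      width: int = 1024, height: int = 720) -> str:
    # nvv4l2decoder decodes H.264 on NVDEC instead of the CPU;
    # nvvideoconvert then scales and converts the colorspace on the GPU.
    return (
        f"rtspsrc location={uri} latency={latency} ! "
        "queue ! rtph264depay ! h264parse ! nvv4l2decoder ! "
        "nvvideoconvert ! "
        f"video/x-raw,width={width},height={height},format=BGRx ! "
        "videoconvert ! video/x-raw,format=BGR ! "
        "appsink drop=true max-buffers=1"
    )
```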