Real-time inference on dual webcams in parallel

I have two webcams connected to my PC via USB. Taken individually they both work perfectly, but when I try to grab them together with my Python/OpenCV code I can never read from the two of them in parallel… here is the error:

[ WARN:0@9.318] global cap_msmf.cpp:471 `anonymous-namespace'::SourceReaderCB::OnReadSample videoio(MSMF): OnReadSample() is called with error status: -1072875772
[ WARN:0@9.322] global cap_msmf.cpp:483 `anonymous-namespace'::SourceReaderCB::OnReadSample videoio(MSMF): async ReadSample() call is failed with error status: -1072875772
[ WARN:1@9.327] global cap_msmf.cpp:1759 CvCapture_MSMF::grabFrame videoio(MSMF): can't grab frame. Error: -1072875772

I'm also including the code below so you can see what I'm doing, but it's very simple…
I really hope someone can help me, because I can't understand why the cameras behave like this and I can't find anything about it online. Thanks in advance, everyone!

import cv2
import numpy as np
import os
from threading import Thread
from ultralytics import YOLO
import supervision as sv

os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

def inference_webcam(webcam_id, model, cap):
    # polygon zone in pixel coordinates (image y-axis points down)
    polygon = np.array([
        [50, 450],   # bottom left
        [500, 450],  # bottom right
        [500, 10],   # top right
        [50, 10]     # top left
    ])

    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    webcam_data = (width, height)

    zone = sv.PolygonZone(polygon=polygon, frame_resolution_wh=webcam_data)
    box_annotator = sv.BoxAnnotator(thickness=4, text_thickness=4, text_scale=2)
    zone_annotator = sv.PolygonZoneAnnotator(zone=zone, color=sv.Color.green(), thickness=6, text_thickness=6,
                                             text_scale=4)

    while True:

        ret, frame = cap.read()

        if not ret:
            break

        results = model(frame)

        for result in results:
            frame = result.orig_img

            detections = sv.Detections.from_yolov8(result)

            labels = [
                f"{tracker_id} {model.names[class_id]} {confidence:0.2f}"
                for _, confidence, class_id, tracker_id
                in detections
            ]

            frame = box_annotator.annotate(
                scene=frame,
                detections=detections,
                labels=labels
            )

            zone.trigger(detections=detections)
            frame = zone_annotator.annotate(scene=frame)
            print("END webcam cycle " + str(webcam_id) + "\n")

            cv2.imshow(f"yolov8-{webcam_id}", frame)

        if cv2.waitKey(1) & 0xFF == 27:
            break

    cap.release()
    cv2.destroyAllWindows()

def main():
    model = YOLO("yoloV8m.pt")

    webcam_id_1 = 0
    webcam_id_2 = 1

    cap2 = cv2.VideoCapture(webcam_id_2)  # cv2.CAP_DSHOW
    cap = cv2.VideoCapture(webcam_id_1)   # cv2.CAP_DSHOW

    thread2 = Thread(target=inference_webcam, args=(webcam_id_2, model, cap2))
    thread1 = Thread(target=inference_webcam, args=(webcam_id_1, model, cap))

    thread2.start()
    thread1.start()

    thread1.join()
    thread2.join()

if __name__ == "__main__":
    main()

This probably happens even without YOLO or multithreading.
Please check, and reduce your code to an MRE (minimal reproducible example); see the sketch below.
My first guess here is that both cams are running on the same USB bus, so one throttles the other.

Try a reduced resolution for both cams, or put them on separate USB hubs/ports.
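A minimal check could look like this (no YOLO, no threads; the camera indices, the 640x480 resolution and the CAP_DSHOW backend are only assumptions, adjust to your setup):

import cv2

# open both cameras; try the default MSMF backend too if CAP_DSHOW does not help
cap0 = cv2.VideoCapture(0, cv2.CAP_DSHOW)
cap1 = cv2.VideoCapture(1, cv2.CAP_DSHOW)

# request a reduced resolution to lower the USB bandwidth needed per camera
for cap in (cap0, cap1):
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    ret0, frame0 = cap0.read()
    ret1, frame1 = cap1.read()
    if not ret0 or not ret1:
        print("read failed:", ret0, ret1)
        break
    cv2.imshow("cam0", frame0)
    cv2.imshow("cam1", frame1)
    if cv2.waitKey(1) & 0xFF == 27:  # ESC quits
        break

cap0.release()
cap1.release()
cv2.destroyAllWindows()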

then:

  • are you sure the ultralytics code supports multithreading (with a single / shared model)?
  • opencv's highgui functions (imshow(), waitKey(), etc.) MUST stay on the main thread; see the sketch below
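One way to keep highgui on the main thread is to let the worker threads only grab (and, if needed, process) frames and push them into queues, while the main thread does all the imshow() / waitKey() calls. A rough sketch of that pattern (camera indices and queue sizes are only illustrative):

import cv2
from threading import Thread
from queue import Queue, Empty

def grab(cam_id, q):
    cap = cv2.VideoCapture(cam_id)
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        if q.empty():        # keep only the latest frame per camera
            q.put(frame)
    cap.release()

queues = {0: Queue(maxsize=1), 1: Queue(maxsize=1)}
for cam_id, q in queues.items():
    Thread(target=grab, args=(cam_id, q), daemon=True).start()

# all highgui calls stay here, on the main thread
while True:
    for cam_id, q in queues.items():
        try:
            frame = q.get_nowait()
        except Empty:
            continue
        cv2.imshow(f"cam-{cam_id}", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # ESC quits
        break
cv2.destroyAllWindows()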

First of all, thanks for replying, you really helped me!
What I discovered is that the weights can be the same, but you have to create a separate YOLO model instance for each thread that intends to use it.
At that point you just need to plug the webcams into different USB ports (for example, I connected one webcam to the front and one to the back of my PC) and everything works perfectly without changing anything else!
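For reference, the relevant change was only in main(): load one YOLO instance per thread, built from the same weights file (this is just a sketch of my setup; the imports and inference_webcam() are the same as in my original post):

def main():
    # one model instance per thread, same weights file
    model_1 = YOLO("yoloV8m.pt")
    model_2 = YOLO("yoloV8m.pt")

    cap = cv2.VideoCapture(0)
    cap2 = cv2.VideoCapture(1)

    thread1 = Thread(target=inference_webcam, args=(0, model_1, cap))
    thread2 = Thread(target=inference_webcam, args=(1, model_2, cap2))

    thread1.start()
    thread2.start()
    thread1.join()
    thread2.join()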
Thanks again so much.