I am using cv2.VideoCapture(0) to open the camera and feed its frames into two different neural networks: object detection and semantic segmentation.
I tried passing the frames to both networks at the same time, but due to a bug in TensorFlow I am now forced to run the networks separately. Unfortunately, OpenCV doesn't allow cv2.VideoCapture(0) to be opened from two different programs at the same time. I know this isn't possible directly, but is there any alternative way to make it work? Any help is appreciated.
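One workaround that might fit here: instead of opening the device twice, keep a single process that owns cv2.VideoCapture(0) and fans each frame out to one queue per network; the two models can then consume frames from their own queues in separate threads or processes. Below is a minimal sketch of the fan-out, with a synthetic black frame standing in for cap.read() so it runs without a camera; the queue names are illustrative, not from any library.

```python
import queue

import numpy as np


def broadcast(frame, consumers):
    """Hand an independent copy of the frame to every consumer queue."""
    for q in consumers:
        q.put(frame.copy())


# One queue per network (names are illustrative).
detection_q = queue.Queue()
segmentation_q = queue.Queue()


def capture_loop(n_frames, consumers):
    # Real code would do:
    #   cap = cv2.VideoCapture(0)
    #   ok, frame = cap.read()
    # Here a synthetic frame stands in so the sketch runs without a camera.
    for _ in range(n_frames):
        frame = np.zeros((480, 640, 3), dtype=np.uint8)
        broadcast(frame, consumers)


capture_loop(3, [detection_q, segmentation_q])
print(detection_q.qsize(), segmentation_q.qsize())  # each queue holds 3 frames
```

Each network then drains its own queue at its own pace; since every consumer gets a copy, one model's in-place operations can't corrupt the other's input.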
A small update after trying pyfakewebcam: its output object is not an np.ndarray as in OpenCV, so I couldn't apply the same operations to the image. The type and traceback are below.
Type of pyfakewebcam-> <class 'pyfakewebcam.pyfakewebcam.FakeWebcam'>
Traceback (most recent call last):
  File "test_virtual_camera.py", line 15, in <module>
    gray = cv2.resize(gray, (500, 300))
TypeError: Expected Ptr<cv::UMat> for argument '%s'
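The traceback above suggests the FakeWebcam object itself was passed to cv2.resize. As I understand the pyfakewebcam API, FakeWebcam is a *writer* for a v4l2loopback device, not an image: cv2 operations should be applied to the np.ndarray frame first, and the finished frame handed to schedule_frame(). A sketch of the intended flow follows; the device path '/dev/video1' and the frame size are assumptions, and the hardware-touching part is kept inside a function so the type demonstration at the bottom runs on its own.

```python
import numpy as np


def to_rgb(bgr_frame):
    """OpenCV frames are BGR; pyfakewebcam expects RGB, so reverse channels."""
    return bgr_frame[:, :, ::-1]


def stream_to_fake_camera(device="/dev/video1", width=500, height=300):
    # Imports kept local: both calls need a real camera and a v4l2loopback
    # device, so this function is a sketch and is not executed here.
    import cv2
    import pyfakewebcam

    cap = cv2.VideoCapture(0)                          # the single real capture
    fake = pyfakewebcam.FakeWebcam(device, width, height)
    while True:
        ok, frame = cap.read()                         # frame is an np.ndarray
        if not ok:
            break
        frame = cv2.resize(frame, (width, height))     # ndarray in, ndarray out
        fake.schedule_frame(to_rgb(frame))             # the ndarray goes *in* here


# Demonstrate the ndarray handling without any hardware:
dummy = np.zeros((300, 500, 3), dtype=np.uint8)
dummy[..., 0] = 255                  # blue channel in BGR order
rgb = to_rgb(dummy)
print(rgb[0, 0].tolist())            # blue ends up in the last (B) position
```

In short, the FakeWebcam instance never replaces the frame: all resizing, color conversion, and network inference stay on the np.ndarray, and schedule_frame() only consumes the result.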