Note: apologies for the spelled-out [dot] links; since I created this account recently, I can't include more than 2 links in one post.
Note 2: this question was originally asked on the OpenVINO GitHub.
Using OpenVINO 2022.2 with OpenCV 4.7.0 (both built from source) and Python 3.9.16 on Ubuntu 22.04.
I have a Movidius MyriadX VPU and an IP camera, and I want to run object detection inference on the live stream using the MyriadX. I have downloaded and converted the model ssd_mobilenet_v1_coco (docs[dot]openvino[dot]ai/2022.2/omz_models_model_ssd_mobilenet_v1_coco.html) with
omz_downloader --name ssd_mobilenet_v1_coco --precisions FP16 --output_dir /home/user0/Downloads/ssd_mobilenet_v1_coco_downloaded
and
omz_converter --download_dir /home/user0/Downloads/ssd_mobilenet_v1_coco_downloaded --output_dir /home/user0/Downloads/ssd_mobilenet_v1_coco_converted --name ssd_mobilenet_v1_coco --precisions FP16
Then I try to run the code attached in openvino_error.zip (the zip also includes the model files; some details in the code, such as the IP camera stream URL, are redacted). It contains the following relevant lines:
net = cv2.dnn.readNetFromModelOptimizer(bin = model_bin, xml = config_xml)
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)
...
video_stream = cv2.VideoCapture(filename = stream_url)
...
ret, frame = video_stream.read()
...
frame = cv2.resize(frame, (300,300), interpolation = cv2.INTER_AREA)
params = cv2.dnn.Image2BlobParams(scalefactor = 1.0/127.5, size = (300, 300), mean = 127.5, swapRB = True, ddepth = cv2.CV_32F, datalayout = cv2.dnn.DNN_LAYOUT_NHWC)
blob = cv2.dnn.blobFromImageWithParams(frame, params)
net.setInput(blob)
detections = net.forward() # ERRORS HERE
...
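For context on the preprocessing step: the scalefactor/mean pair above is meant to map 8-bit pixel values from [0, 255] into roughly [-1, 1] before the data reaches the network. A pure-Python sketch of that arithmetic (my reading of the documented order, mean subtraction first and then scaling; not actual OpenCV code):

```python
# Sketch of the normalization I expect cv2.dnn.blobFromImageWithParams to
# perform with scalefactor=1.0/127.5 and mean=127.5:
#   out = (pixel - mean) * scalefactor
# Pure Python, only to sanity-check the resulting value range.

def normalize(pixel, mean=127.5, scale=1.0 / 127.5):
    """Map an 8-bit pixel value into the [-1, 1] range."""
    return (pixel - mean) * scale

# The extremes of the U8 range land exactly on -1 and +1:
print(normalize(0))      # -1.0
print(normalize(255))    # 1.0
print(normalize(127.5))  # 0.0 (mid-gray)
```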
I invoke the program as
/home/user0/build-opencv/setup_vars.sh python3 opencv_app.py
but run into the following error:
...
[ERROR]: OpenCV(4.7.0-dev) /home/user0/opencv/modules/dnn/src/ie_ngraph.cpp:865: error: (-2:Unspecified error) in function 'initPlugin'
> Failed to initialize Inference Engine backend (device = MYRIAD): Convert_5316 of type Convert: [ GENERAL_ERROR ]
> /home/jenkins/agent/workspace/private-ci/ie/build-linux-ubuntu20/b/repos/openvino/src/plugins/intel_myriad/graph_transformer/src/stages/convert.cpp:67 [Internal Error]: Final check for stage Convert_5316 with type Convert has failed: Conversion from FP16 to U8 is unsupported
As I understand it, the VPU supports only FP16 precision (intel[dot]com/content/www/us/en/developer/articles/technical/should-i-choose-fp16-or-fp32-for-my-deep-learning-model.html), which is why I specified that precision when converting. I am not sure what exactly is attempting to convert FP16 to U8, or why that conversion happens at all.
I have also converted the same model to FP32 (simply switching to --precisions FP32 in the omz_converter command), but the error persists. Using a different model - ssdlite_mobilenet_v2 (docs[dot]openvino[dot]ai/2022.2/omz_models_model_ssdlite_mobilenet_v2.html) - in FP16 did not resolve it either.
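For my own understanding of what the --precisions flag changes: FP16 stores each weight in 16 bits (10-bit mantissa), so values are rounded compared to FP32. A small stdlib sketch of that rounding via struct's half-precision format (just an illustration of the precision difference; this is separate from whatever the failing Convert_5316 stage does):

```python
import struct

def to_fp16_and_back(x):
    """Round-trip a Python float through IEEE 754 half precision ('e' format)."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Values with few significant bits survive exactly; others get rounded.
print(to_fp16_and_back(127.5))  # 127.5 (exactly representable)
print(to_fp16_and_back(0.1))    # 0.0999755859375 (rounded)
```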
Any pointers appreciated!