OpenCV => 4.5.4
Operating System / Platform => Linux aarch64-gnu, Ubuntu 20.04
Compiler => Visual Studio 2019
Framework / Hardware => Darknet, CUDA, YOLOv3, Jetson Orin

– I am trying to add Mask R-CNN-style masking to my custom object detection model (YOLOv3 / Darknet) with OpenCV 4.5.4 on a Jetson Orin.

---- I have already succeeded with plain object detection using the CUDA backend.

------ For the masking implementation, I followed this C++ tutorial:
https://github.com/spmallick/learnopencv/blob/master/Mask-RCNN/mask_rcnn.cpp

```cpp
String textGraph = "./mask_rcnn_inception_v2_coco_2018_01_28.pbtxt";
String modelWeights = "./mask_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb";

// Load the network
Net net = readNetFromTensorflow(modelWeights, textGraph);

if (device == "cpu")
{
    cout << "Using CPU device" << endl;
    net.setPreferableBackend(DNN_BACKEND_OPENCV);
    net.setPreferableTarget(DNN_TARGET_CPU);
}
else if (device == "gpu")
{
    cout << "Using GPU device" << endl;
    net.setPreferableBackend(DNN_BACKEND_CUDA);
    net.setPreferableTarget(DNN_TARGET_CUDA);
}
```

I replaced the corresponding lines of that code with the following:

```cpp
String modelConfig = "/home/.../darknet/cfg/yolo-obj-3classes.cfg";
String modelWts = "/home/.../darknet/yolo-obj_final_3classes.weights";

// Load the network
Net net = readNetFromDarknet(modelConfig, modelWts);
net.setPreferableBackend(DNN_BACKEND_OPENCV);
net.setPreferableTarget(DNN_TARGET_CPU);
```



---- P.S.: the exact same lines worked perfectly with plain object detection.

------ When I run this in Visual Studio 2019, the error is as follows:

`0x0000ffffee8cb194 in cv::dnn::dnn4_v20211004::Net::Impl::forwardLayer(cv::dnn::dnn4_v20211004::LayerData&) () from /usr/local/lib/libopencv_world.so.4.5`
Any help or suggestions would be appreciated.

Kind regards,
Karishma