System information (version)
- OpenCV => 4.5.2 / 4.1.1
- Operating System / Platform => Windows 10 64 Bit / Ubuntu 16.04 64 Bit
- Python => 3.7.9
Detailed description
Hello, I am trying to run my trained model with OpenCV but I get errors.
First I trained my model in Keras (the summary is shown below).
After training I converted the Keras model to a TensorFlow model and got a .pb file.
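For context, such a Keras-to-frozen-.pb conversion is typically done with TF 1.x graph freezing along these lines (a minimal sketch; the exact conversion script used here is not part of this report, so the calls below are assumptions):

import tensorflow as tf
from tensorflow.python.framework.graph_util import convert_variables_to_constants
from keras import backend as K
from keras.models import load_model

# Load the trained Keras model and freeze its variables into constants
model = load_model('pb_people_20/sim_mdl_func_v2_pow2_china_people_20.h5')
sess = K.get_session()
output_names = [out.op.name for out in model.outputs]
frozen_graph = convert_variables_to_constants(sess, sess.graph.as_graph_def(), output_names)

# Serialize the frozen graph to a binary .pb file
tf.train.write_graph(frozen_graph, 'pb_people_20',
                     'sim_mdl_func_v2_pow2_china_people_20.pb', as_text=False)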
When I try to run the model I get this error:
error: OpenCV(4.5.2) C:\Users\runneradmin\AppData\Local\Temp\pip-req-build-dn5w5exm\opencv\modules\dnn\src\dnn.cpp:3127: error: (-215:Assertion failed) inp.total() in function 'cv::dnn::dnn4_v20210301::Net::Impl::allocateLayers'
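The call that hits this assertion is just a plain load of the .pb followed by a forward pass, e.g. (a minimal sketch with dummy 128-d inputs, mirroring the reproduction code below):

import numpy as np
import cv2

# Load only the frozen graph, without a .pbtxt config
net = cv2.dnn.readNetFromTensorflow('pb_people_20/sim_mdl_func_v2_pow2_china_people_20.pb')

# Two dummy 128-d embeddings packed into one blob, as in the repro code below
blob = cv2.dnn.blobFromImages([np.zeros((1, 128), np.float32),
                               np.zeros((1, 128), np.float32)])
net.setInput(blob)
out = net.forward()  # fails with the (-215) assertion from allocateLayers shown above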
If I try to load the .pb model together with the .pbtxt configuration, I get this error:
[ERROR:0] global C:\Users\runneradmin\AppData\Local\Temp\pip-req-build-dn5w5exm\opencv\modules\dnn\src\tensorflow\tf_importer.cpp (748) cv::dnn::dnn4_v20210301::`anonymous-namespace'::addConstNodes DNN/TF: Can't handle node='strided_slice/stack_2'. Exception: OpenCV(4.5.2) C:\Users\runneradmin\AppData\Local\Temp\pip-req-build-dn5w5exm\opencv\modules\dnn\src\tensorflow\tf_importer.cpp:742: error: (-215:Assertion failed) const_layers.insert(std::make_pair(name, li)).second in function 'cv::dnn::dnn4_v20210301::`anonymous-namespace'::addConstNodes'
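The *_constant_graph.pbtxt is a text dump of the frozen graph; a sketch of how such a file is commonly generated (an assumption based on the file name, the actual generation script is not part of this report):

import tensorflow as tf
from tensorflow.core.framework import graph_pb2

# Parse the frozen binary graph and re-serialize it as a text .pbtxt
graph_def = graph_pb2.GraphDef()
with open('pb_people_20/sim_mdl_func_v2_pow2_china_people_20.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
tf.train.write_graph(graph_def, 'pb_people_20',
                     'sim_mdl_func_v2_pow2_china_people_20_constant_graph.pbtxt',
                     as_text=True)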
I first tried with OpenCV version 4.1.1 and then with version 4.5.2, on different operating systems (Windows 10 and Ubuntu 16.04).
Maybe someone can help me load my trained model with OpenCV.
Steps to reproduce
# Python code example
import numpy as np
import cv2
from keras.models import load_model
keras_mdodel = load_model('pb_people_20/sim_mdl_func_v2_pow2_china_people_20.h5')
keras_mdodel.summary()
matcher_path_opencv='pb_people_20/sim_mdl_func_v2_pow2_china_people_20.pb'
matcher_path_opencv_pbtxt='pb_people_20/sim_mdl_func_v2_pow2_china_people_20_constant_graph.pbtxt'
matcher_opencv = cv2.dnn.readNetFromTensorflow(matcher_path_opencv,matcher_path_opencv_pbtxt)
def get_match_opencv(embs_0, embs_1):
    blobs = cv2.dnn.blobFromImages([embs_0, embs_1])
    # Set blob as input to the matcher network
    matcher_opencv.setInput(blobs)  # ,'input_1')
    # Runs a forward pass to compute the net output
    return matcher_opencv.forward()

def match_opencv(embs_0, embs_1):
    res = 0
    for first_emb in embs_0:
        for second_emb in embs_1:
            match_res = np.squeeze(get_match_opencv(first_emb, second_emb))
            res = max(res, match_res)
    return res
emb_comp = np.array([[-0.01565263, -0.03656595, -0.02106497, -0.01172114, -0.03183658,
0.04423258, 0.07379042, -0.03448544, 0.00917874, 0.10619327,
0.067057 , -0.04213215, -0.07464422, -0.06878724, -0.09982515,
0.01137134, 0.04709429, 0.02330536, -0.00764309, -0.14052634,
-0.17314363, -0.04568923, 0.01383415, -0.00578358, -0.08864444,
0.07767007, -0.06337555, -0.09800968, -0.04953913, 0.01720826,
-0.00948498, -0.09592741, -0.10465072, -0.06163599, 0.22899778,
0.10588204, -0.03106888, 0.06826289, 0.19125976, 0.09685656,
0.10258153, -0.04890387, 0.07771394, 0.10267048, 0.12820566,
-0.17984524, 0.10338812, 0.11158907, -0.06663922, 0.05078162,
-0.0144849 , 0.03813416, -0.12748823, 0.06191524, -0.09051557,
0.01777557, 0.14020114, -0.00481044, -0.04369346, -0.06189118,
0.02411589, -0.2553893 , 0.04399932, -0.11846916, 0.08772267,
0.14251669, -0.11032436, 0.09020332, -0.04854635, -0.1081277 ,
-0.11582536, -0.03723017, 0.03233388, 0.00288666, 0.13660526,
-0.03048528, -0.10125471, 0.0071717 , 0.00672208, -0.02564991,
0.02921752, 0.03457039, 0.01814651, -0.04918699, -0.08088212,
-0.00733991, -0.0040632 , 0.12189984, -0.02091988, 0.27729505,
-0.09878255, 0.01982491, 0.10892831, -0.05085582, -0.08475107,
0.13422482, -0.04512339, 0.01896015, -0.10464227, 0.04971102,
-0.07694119, -0.06038413, -0.0517869 , 0.00917315, 0.00206291,
-0.02974024, -0.01651836, 0.16789761, 0.07368388, -0.10541685,
0.14497165, 0.06329738, 0.1521214 , -0.01266835, -0.00671655,
-0.083036 , 0.07795148, -0.02421858, 0.17278863, -0.05978476,
-0.06421243, -0.14550938, 0.00752902, 0.01600735, -0.07240639,
0.02841103, -0.05694156, -0.11571351]], dtype=np.float32)
img_embs = np.array([[ 0.11008514, 0.01455243, 0.01908959, -0.05834481, -0.14181934,
0.1882611 , 0.08232455, -0.11491345, 0.05500843, 0.04011501,
-0.04828305, -0.03326504, -0.05615507, -0.02345234, -0.0491137 ,
0.12314314, 0.03639589, -0.02172398, 0.12210201, -0.09503838,
-0.08331381, 0.01604286, 0.00464164, -0.01787202, -0.0006151 ,
0.13468699, 0.01733225, -0.10102501, -0.02729505, -0.10542642,
-0.07366027, 0.02062424, -0.08315776, -0.01222375, 0.18682827,
0.11723035, -0.16790824, 0.0639649 , 0.17808685, 0.05183998,
0.07382489, -0.0351243 , 0.00582562, 0.01956308, 0.04352709,
-0.10843045, 0.02012463, -0.05169405, -0.05626178, 0.07084419,
-0.03077255, -0.02904841, -0.0701267 , -0.04788103, 0.08546986,
0.05289406, 0.05466829, 0.05425635, -0.0813577 , -0.03403144,
0.00544576, -0.2616964 , 0.06646861, -0.05636386, 0.0138996 ,
0.01612337, -0.00722741, -0.06695902, 0.05770075, -0.05611902,
-0.00302837, -0.05027693, 0.03430333, 0.02994055, 0.16164383,
-0.02481532, -0.20324335, 0.0160754 , 0.06737015, -0.01307072,
0.14364758, 0.0039455 , 0.09170939, -0.03038747, 0.02139785,
-0.0776043 , 0.07581864, 0.00768354, -0.03795378, 0.11794008,
-0.10925949, -0.07035936, 0.03584298, -0.05980027, -0.05402324,
0.11599824, -0.2232277 , 0.05435819, -0.06722381, 0.04105445,
-0.12166104, 0.02219271, -0.10503792, 0.04090016, 0.01218935,
-0.1845343 , -0.00819734, 0.2274523 , -0.0579717 , -0.19095403,
0.14852127, 0.10555372, 0.11048518, 0.05386249, -0.07497197,
-0.01151269, 0.15070955, 0.0480066 , 0.06213563, -0.01823237,
0.09147692, -0.14006777, 0.072235 , 0.03428208, -0.08408232,
0.03943622, -0.07408921, -0.08130765]], dtype=np.float32)
res2 = match_opencv([emb_comp],[img_embs])
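For reference, the same pair can also be scored directly with the Keras model loaded above (a continuation of the snippet, using the two-input functional model from the summary):

# Reference score from the original Keras model (two 128-d inputs per the summary)
keras_res = keras_mdodel.predict([emb_comp, img_embs])  # shape (1, 1) similarity score
print('keras:', keras_res, 'opencv:', res2)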
You can download this code together with the model from here:
https://github.com/opencv/opencv/files/6538904/Test_load_with_open_CV.zip
The summary of the model:
Model: "Similarity_Model"
=================================================================================
ImageA_Input (InputLayer) (None, 128) 0
__________________________________________________________________________________________________
ImageB_Input (InputLayer) (None, 128) 0
__________________________________________________________________________________________________
lambda_1 (Lambda) (None, 128) 0 ImageA_Input[0][0]
ImageB_Input[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 32) 4128 lambda_1[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 32) 0 dense_1[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 16) 528 dropout_1[0][0]
__________________________________________________________________________________________________
dropout_2 (Dropout) (None, 16) 0 dense_2[0][0]
__________________________________________________________________________________________________
dense_3 (Dense) (None, 1) 17 dropout_2[0][0]
============================================================================
Total params: 4,673
Trainable params: 4,673
Non-trainable params: 0
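For completeness, the summary corresponds roughly to the following functional model (a reconstruction sketch; the Lambda body, activations, and dropout rates are assumptions, e.g. a squared element-wise difference suggested by "pow2" in the model name):

from keras.layers import Input, Lambda, Dense, Dropout
from keras.models import Model
import keras.backend as K

img_a = Input(shape=(128,), name='ImageA_Input')
img_b = Input(shape=(128,), name='ImageB_Input')
# NOTE: the Lambda body is an assumption; the real layer may differ
x = Lambda(lambda t: K.square(t[0] - t[1]), name='lambda_1')([img_a, img_b])
x = Dense(32, activation='relu', name='dense_1')(x)       # 128*32 + 32 = 4128 params
x = Dropout(0.5, name='dropout_1')(x)
x = Dense(16, activation='relu', name='dense_2')(x)       # 32*16 + 16 = 528 params
x = Dropout(0.5, name='dropout_2')(x)
out = Dense(1, activation='sigmoid', name='dense_3')(x)   # 16*1 + 1 = 17 params
model = Model([img_a, img_b], out, name='Similarity_Model')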