DNN module layer gradients

I would like to implement and use things like Grad-CAM in my C++ OpenCV applications.
Am I right that there is currently no DNN-layer-gradient (or back-propagation) functionality implemented in OpenCV, because OpenCV isn’t used for training?

Are the inference engine and the model representation suitable for implementing gradient techniques, or are there limitations that make it harder (e.g. because the inference engine is strongly optimized for forward passes)?


yes, unfortunately you are right about that: there are no gradients at all.
(so imho, the Grad-CAM idea won’t work here out of the box)

but is it technically possible to access all the intermediate activation maps, weights, etc. at runtime, in order to implement the gradients manually?

well, you can iterate over the layers and retrieve their weights & biases,
and you can also “tap” the forward() pass by specifying a list of output layer names
(those would be the “activations”, right ?)
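to make that concrete, a minimal sketch of both ideas, iterating the layers for their parameter blobs and requesting several intermediate outputs from one forward() call. the model path, input size, and tapped layer names are all placeholders you would replace with your own:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/dnn.hpp>
#include <iostream>
#include <vector>

int main()
{
    // "model.onnx" is a placeholder; any format cv::dnn::readNet() supports works
    cv::dnn::Net net = cv::dnn::readNet("model.onnx");

    // 1) iterate the layers and retrieve their trained parameters
    std::vector<cv::String> names = net.getLayerNames();
    for (const cv::String& name : names)
    {
        cv::Ptr<cv::dnn::Layer> layer = net.getLayer(name);
        std::cout << layer->name << " (" << layer->type << ")\n";
        // layer->blobs holds the learned parameters, e.g. for a
        // Convolution layer: blobs[0] = kernels, blobs[1] = bias (if any)
        for (const cv::Mat& blob : layer->blobs)
            std::cout << "  param blob: dims=" << blob.dims
                      << " total=" << blob.total() << "\n";
    }

    // 2) "tap" the forward pass: one dummy input, several outputs at once
    cv::Mat input = cv::dnn::blobFromImage(
        cv::Mat::zeros(224, 224, CV_8UC3),   // placeholder image & size
        1.0 / 255.0);
    net.setInput(input);

    // layers whose activations you want (placeholders: first and last here)
    std::vector<cv::String> taps = { names.front(), names.back() };
    std::vector<cv::Mat> activations;
    net.forward(activations, taps);          // one output Mat per requested layer

    for (size_t i = 0; i < taps.size(); ++i)
        std::cout << taps[i] << " activation: total=" << activations[i].total() << "\n";
    return 0;
}
```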

but how would you get a proper gradient from that ?
wouldn’t that require a backward pass (the derivatives) through those activations ?

btw, GitHub - Pandinosaurus/Cpp-VizDnnBlobsOCV: A tutorial to visualize the deep learning blobs generated by the OpenCV dnn module.
