I’d like to execute an ANN_MLP on the GPU.
I think that’s not possible, is it?
So instead I’d like to automatically create a neural network based on the dnn module, from the model file that contains the trained model and its architecture.
Is that possible?
The cv::ml::ANN_MLP class doesn’t have GPU support. However, as MLPs are neither computationally complex nor massively parallelizable, I don’t think you would gain much by running them on a GPU.
What you can do is create and train your MLP network in another framework (Caffe, Torch…) as a DNN consisting of a few fully connected layers, then import the exported network using the cv::dnn module. As a matter of fact, an MLP is just a basic DNN made up only of fully connected layers (and possibly dropout layers).
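For instance, here is a minimal sketch of that import step. The file name mlp.onnx is hypothetical (a small fully connected network exported from another framework), and the CUDA lines assume an OpenCV 4.2+ build with the CUDA dnn backend:

```cpp
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/dnn.hpp>

int main()
{
    // "mlp.onnx" is a hypothetical export of a small fully connected
    // network trained in another framework (e.g. PyTorch -> ONNX).
    cv::dnn::Net net = cv::dnn::readNetFromONNX("mlp.onnx");

    // Ask for the CUDA backend (needs an OpenCV 4.2+ build with CUDA);
    // if it's unavailable, dnn warns and falls back to the CPU path.
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);

    // One sample with 5 input features (an arbitrary size for this sketch).
    cv::Mat sample = (cv::Mat_<float>(1, 5) << 0.1f, 0.2f, 0.3f, 0.4f, 0.5f);
    net.setInput(sample);

    cv::Mat scores = net.forward();
    std::cout << "scores: " << scores << std::endl;
    return 0;
}
```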
A piece of advice: if you want to use an MLP for image processing, use a DNN instead. Even the most basic DNNs outperform MLPs.
If you want to classify other kinds of data, then you can use a lighter library than OpenCV.
I understand that I cannot automatically migrate the trained ANN_MLP to a dnn network, so I have to train it again, right?
Yes. But I still don’t understand why you want to run it on the GPU. MLPs are fast on the CPU, and they are not massively parallelizable like the convolutional layers in a DNN.
In fact, I’m predicting (classifying pixels) with an ANN_MLP at the pixel level, with 4 or 5 inputs per pixel (the values of that pixel in each channel of a 4- or 5-channel image).
I predict only for pixels that pass some filters, for instance 1,000 pixels extracted from a 1024 × 512 image.
Wouldn’t it be faster to execute that on a GPU using a dnn network?
Since that’s a so-called “embarrassingly parallel” situation (every computation touches just one pixel), a GPU could handle it very well… but see below.
The distinction between ANN, DNN, and MLP is a subtle one. They’re all artificial neural networks (ANNs). DNNs are just very deep. MLPs are usually somewhat shallow (a few layers) and consist exclusively of fully connected layers, whereas the typical neural network these days is deep and consists of a variety of layer types. The convolutional layer is a very popular type, but also a computationally very expensive one; GPUs ace convolutional layers.
Since you’re only planning to touch 1,000 pixels, that calculation will probably be cheaper on the CPU. Why? Because moving data to the GPU and issuing compute requests to it cost time, i.e. latency. When your data is in main memory, the CPU can just access it; a GPU can’t.
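To make the CPU path concrete, here is a minimal sketch that batches all the filtered pixels into a single ANN_MLP::predict() call. The model file name and the random sample data are placeholders:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>

int main()
{
    // Hypothetical file: an ANN_MLP trained earlier and saved with
    // mlp->save("mlp.yml"); it expects 5 inputs per pixel.
    cv::Ptr<cv::ml::ANN_MLP> mlp = cv::ml::ANN_MLP::load("mlp.yml");

    // Gather the ~1,000 filtered pixels into one sample matrix:
    // one row per pixel, one column per channel value.
    const int nPixels = 1000;
    cv::Mat samples(nPixels, 5, CV_32F);
    cv::randu(samples, 0.0, 1.0); // placeholder data for this sketch

    // A single batched predict() call: no per-pixel call overhead
    // and no host-to-GPU transfer latency.
    cv::Mat responses;
    mlp->predict(samples, responses);
    return 0;
}
```

The batching is the point here: one predict() over a 1000 × 5 matrix is far cheaper than 1,000 separate calls, and for a net this small it should finish before a GPU round trip would even complete.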