Hi berak, I'm not clear on what the Python side should look like either.
We have two interfaces here.
In the first function, we allocate several GPU buffers with cudaMalloc() and store the resulting device pointers in workspace.
In the second function, imencode(), we use that workspace to hold intermediate tensors and run some GPU-accelerated optimizations.
I tried replacing it with vector<vector<unsigned char>>, but that doesn't work with cudaMalloc().
Hi sturkmen, I'm not familiar with InputArray.
Inside the C++ function I need multiple pointers to device memory on the GPU,
and I don't know whether InputArray works in that case.
Also, vector<intptr_t> and vector<long> give the same error as above, which is a little odd.
If all you need in Python is to "pass around" your CUDA workspace vectors
(as in: A() produces it, B() consumes it, and no one tries to "peek into it" from Python in between),
you could wrap it opaquely in a struct:
// the struct is exposed to Python, but its contents are not !
struct CV_EXPORTS_W Workspace {
    std::vector<uchar*> workspace;
};
CV_EXPORTS_W Ptr<Workspace> init_workspace() {
    Ptr<Workspace> ws = makePtr<Workspace>();
    // ... fill ws->workspace
    return ws;
}
CV_EXPORTS_W bool imencode( const String& ext, InputArray img,
                            CV_OUT std::vector<uchar>& buf,
                            Ptr<Workspace> workspace,
                            const std::vector<int>& params = std::vector<int>()) {
    // do something with workspace->workspace;
}