How to do a 3D affine transform where the shape of the output is different from the input

Hello, for a project I need to do fast affine transforms to resample 3D images in Python, where the shape of the second image is not the same as the first one.
For example, the shape of the first image is (112, 112, 70) and the second one is (174, 174, 50). Normally I do the resampling with scipy.ndimage.affine_transform (scipy.ndimage.affine_transform — SciPy v1.10.1 Manual).
However, this is too slow for me (about 0.15 seconds per resampling, and there are 200 of them).
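For reference, my current approach looks roughly like this (a minimal sketch with a placeholder volume and a pure scaling matrix; order=1 is just an example):

```python
import numpy as np
from scipy.ndimage import affine_transform

vol = np.random.rand(112, 112, 70).astype(np.float32)  # placeholder for the first image
M = np.diag([112 / 174, 112 / 174, 70 / 50])            # maps output coords -> input coords

# output_shape lets the result have a different shape than the input
resampled = affine_transform(vol, M, offset=0.0,
                             output_shape=(174, 174, 50), order=1)
```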
Is it possible to use OpenCV for this problem, or does it only work for 2D images?
I saw that it wasn’t possible in the past (a post from 2020), but I don’t know if that is still the case.
Thanks in advance,
Aurelien

opencv isn’t made for voxel data. one of the fundamental assumptions is that stuff is flat (a picture) and the third dimension is “attributes” (color).

the cv::Mat type, in C++, can carry a limited number of channels (512), but that still doesn’t mean the APIs (warpAffine) can just interpret the data as multi-dimensional.

if you’re looking for speed, either write your kernel in python and slap numba (or something like it) on it, as in the sketch below,
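a minimal sketch of what that could look like, assuming a float32 volume and the same output→input matrix/offset convention as scipy.ndimage.affine_transform (the function name and boundary handling are just illustrative):

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True, fastmath=True)
def affine_resample_3d(vol, matrix, offset, out_shape):
    # for each output voxel, map it into input coordinates and
    # trilinearly interpolate the 8 surrounding input voxels
    out = np.zeros(out_shape, dtype=np.float32)
    nx, ny, nz = vol.shape
    for i in prange(out_shape[0]):
        for j in range(out_shape[1]):
            for k in range(out_shape[2]):
                x = matrix[0, 0] * i + matrix[0, 1] * j + matrix[0, 2] * k + offset[0]
                y = matrix[1, 0] * i + matrix[1, 1] * j + matrix[1, 2] * k + offset[1]
                z = matrix[2, 0] * i + matrix[2, 1] * j + matrix[2, 2] * k + offset[2]
                x0 = int(np.floor(x)); y0 = int(np.floor(y)); z0 = int(np.floor(z))
                if x0 < 0 or y0 < 0 or z0 < 0 or x0 >= nx - 1 or y0 >= ny - 1 or z0 >= nz - 1:
                    continue  # outside the input volume: leave the voxel at 0
                dx = x - x0; dy = y - y0; dz = z - z0
                c00 = vol[x0, y0, z0] * (1 - dx) + vol[x0 + 1, y0, z0] * dx
                c10 = vol[x0, y0 + 1, z0] * (1 - dx) + vol[x0 + 1, y0 + 1, z0] * dx
                c01 = vol[x0, y0, z0 + 1] * (1 - dx) + vol[x0 + 1, y0, z0 + 1] * dx
                c11 = vol[x0, y0 + 1, z0 + 1] * (1 - dx) + vol[x0 + 1, y0 + 1, z0 + 1] * dx
                c0 = c00 * (1 - dy) + c10 * dy
                c1 = c01 * (1 - dy) + c11 * dy
                out[i, j, k] = c0 * (1 - dz) + c1 * dz
    return out

# example call with the shapes from the question; matrix/offset are placeholders
vol = np.random.rand(112, 112, 70).astype(np.float32)
M = np.diag(np.array([112 / 174, 112 / 174, 70 / 50]))
resampled = affine_resample_3d(vol, M, np.zeros(3), (174, 174, 50))
```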

or write your kernel in opencl or some other GPU-type thing.

GPUs are good at sampling in voxel data. trilinear interpolation is commonly available.
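one GPU route (my suggestion, not an OpenCV feature) is CuPy, which mirrors the SciPy call in cupyx.scipy.ndimage.affine_transform, so the resampling code can stay almost identical while the interpolation runs on the GPU. rough sketch, assuming a CUDA GPU and CuPy installed:

```python
import numpy as np
import cupy as cp
from cupyx.scipy.ndimage import affine_transform as gpu_affine_transform

vol = np.random.rand(112, 112, 70).astype(np.float32)  # placeholder volume
M = np.diag([112 / 174, 112 / 174, 70 / 50])            # output coords -> input coords

vol_gpu = cp.asarray(vol)                               # upload the volume once
out_gpu = gpu_affine_transform(vol_gpu, cp.asarray(M), offset=0.0,
                               output_shape=(174, 174, 50), order=1)
resampled = cp.asnumpy(out_gpu)                         # copy back only when needed
```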

if you’re trying to visualize voxel data volumetrically, take a class on the subject. there are approaches that run well even on GPUs that are over a decade old.

the upcoming OpenCV 5.0 will have a 3d module

(still not exactly what you’re looking for, i guess …)

Thanks for your answers, I will try to implement that in my software.