Is it possible to reduce the size and run time when building OpenCV from source?

I’m building OpenCV 4.5.1 from source with the following CMake configuration (excerpt):

  -D CUDA_ARCH_BIN=7.0 \
  -D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-11.0 \
  -D OPENCV_EXTRA_MODULES_PATH=/opencv_compile/opencv_contrib-4.5.1/modules \
  -D BUILD_opencv_python3=ON \
  -D PYTHON3_EXECUTABLE=/usr/bin/python3 \
  -D PYTHON3_NUMPY_INCLUDE_DIR=/usr/lib64/python3.6/site-packages/numpy/core/include \
  -D PYTHON3_PACKAGES_PATH=/opt/opencv_python \
  -D CMAKE_INSTALL_PREFIX=/opt/opencv .. && \
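
The configuration is followed by the usual build step, along these lines (the `-j` value here is illustrative; the parallel job count is the main lever on build time):

```shell
make -j"$(nproc)" && \
make install
```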

I’m building this in a Docker image; the build takes approx. 1 hr 45 min and the resulting image is 8 GB.
I need to ship this image to another user.

Is there any way the build can be slimmed down?

which opencv modules do you actually need? (all of them?)

if you’re only interested in the python bindings, you should add

  -D BUILD_SHARED_LIBS=OFF

to the cmake cmdline, to have cv2 statically linked against the opencv libs
(you then need neither the opencv libs nor the corresponding .so’s at runtime)

it’s also unclear what exactly you’re shipping
(a statically linked cv2 would be ~10 MB, a dynamically linked one ~2 MB plus ~100 MB of libs)
(let’s hope you don’t ship all the build artefacts, which indeed would be some GB :wink: )
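
on the shipping side, a multi-stage Dockerfile keeps the compiler toolchain and intermediate build artefacts out of the final image: build in one stage, copy only the install prefixes into a slim runtime stage. a rough sketch (the base-image tags are assumptions; match them to your CUDA 11.0 setup):

```dockerfile
# --- build stage: compile OpenCV as shown above ---
FROM nvidia/cuda:11.0-devel AS build
# ... install deps, run cmake / make / make install into /opt/opencv ...

# --- runtime stage: only the installed files are copied over ---
FROM nvidia/cuda:11.0-runtime
COPY --from=build /opt/opencv /opt/opencv
COPY --from=build /opt/opencv_python /opt/opencv_python
ENV PYTHONPATH=/opt/opencv_python
```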


ah, wait this is all without CUDA …

@berak thanks for your reply

I need the following:

Image file reading and writing (imread, imwrite)
Drawing Functions (rectangle, putText, FONT_HERSHEY_SIMPLEX)
Color Space Conversions (cvtColor, COLOR_BGR2RGB)
Deep Neural Network (dnn)
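
(As an aside: with a list that small, OpenCV’s `BUILD_LIST` cmake option can prune everything else. The module set below is my guess at the minimum for the functions above; cmake pulls in hard dependencies such as `core` automatically.)

```shell
# ...plus the rest of the flags from the original configuration
cmake -D BUILD_LIST=imgcodecs,imgproc,dnn,python3 ..
```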

what about cuda ???

As @berak said, if you are processing on the CPU and only need CUDA for the DNN stuff, you should be able to disable the remaining CUDA modules:

-D BUILD_opencv_cudaarithm=OFF \
-D BUILD_opencv_cudabgsegm=OFF \
-D BUILD_opencv_cudafeatures2d=OFF \
-D BUILD_opencv_cudafilters=OFF \
-D BUILD_opencv_cudaimgproc=OFF \
-D BUILD_opencv_cudalegacy=OFF \
-D BUILD_opencv_cudaobjdetect=OFF \
-D BUILD_opencv_cudaoptflow=OFF \
-D BUILD_opencv_cudastereo=OFF \
-D BUILD_opencv_cudawarping=OFF \
-D BUILD_opencv_cudacodec=OFF
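
The flags that enable the DNN CUDA backend itself have to stay on, and if I remember right the `cudev` module is needed as a dependency of the CUDA DNN code; roughly:

```shell
-D WITH_CUDA=ON \
-D WITH_CUDNN=ON \
-D OPENCV_DNN_CUDA=ON \
-D BUILD_opencv_cudev=ON
```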

@berak I need CUDA as I’m using the DNN module for object detection inference on an Nvidia GPU card (V100).

I anticipated your response; that’s why I clarified what @berak meant above.

@cudawarped I am doing some of the processing on the CPU but need to use the GPU for the DNN module.

Cool, you should be able to disable all the other CUDA modules then.

@berak @cudawarped thanks both for your guidance.

Build time has dropped from 105 min to 20 min, and my Docker image has shrunk from 8 GB to 4 GB. This is much more manageable now, and my code still seems to work.