Help manipulating color levels

Hi, I’m trying to filter an image that has a lot of overexposed reflective surfaces. Lowering the color levels after equalizing the histogram helps bring out the color of the surface underneath the reflection. Here is some code I found on Stack Exchange that lowers the color levels and gives the effect I want, but it is way too slow on my Raspberry Pi. Is there a quicker way to get a similar effect using OpenCV? Thanks in advance.

Adjust Color Levels lower

import cv2
import numpy as np

# levels parameters: input black/white points, per-channel gamma, output range
inBlack = np.array([0, 0, 0], dtype=np.float32)
inWhite = np.array([255, 255, 255], dtype=np.float32)
inGamma = np.array([0.5, 0.5, 0.5], dtype=np.float32)
outBlack = np.array([0, 0, 0], dtype=np.float32)
outWhite = np.array([255, 255, 255], dtype=np.float32)

# "frame" is the BGR frame from the camera; the float32 parameters promote
# the arithmetic below to float32
img = np.uint8(frame)

# normalise to [0, 1] between the input black and white points
img = np.clip((img - inBlack) / (inWhite - inBlack), 0, 1)
# apply gamma, then map back to the output range
img = (img ** (1 / inGamma)) * (outWhite - outBlack) + outBlack
img = np.clip(img, 0, 255).astype(np.uint8)

cv2.imshow("test_1", img)
cv2.waitKey(1)  # needed for imshow to actually refresh

never mind, you are asking about optimization…

well, this is numpy, it’ll run fast enough as it is.

all operations have equivalents in pure OpenCV.
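
for example, since the whole levels adjustment is a per-pixel curve, you can precompute it once into a 256-entry table and apply it per frame with cv2.LUT. a rough sketch, assuming the parameter values from your post ("frame" being your uint8 BGR image):

import cv2
import numpy as np

# parameter values copied from the question (same value for every channel;
# with per-channel values you'd build one table per channel and cv2.merge)
inBlack, inWhite = 0.0, 255.0
inGamma = 0.5
outBlack, outWhite = 0.0, 255.0

# build the levels curve once, for all 256 possible uint8 values
x = np.arange(256, dtype=np.float32)
x = np.clip((x - inBlack) / (inWhite - inBlack), 0, 1)
lut = np.clip((x ** (1 / inGamma)) * (outWhite - outBlack) + outBlack,
              0, 255).astype(np.uint8)

# per frame: one 8-bit table lookup per pixel instead of per-pixel float math
out = cv2.LUT(frame, lut)
cv2.imshow("test_1", out)

you only rebuild the table when the parameters change, so the per-frame cost is about as small as it gets.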

if you need optimization, consider getting into how the raspi’s GPU works. you might have to write some kind of shader (OpenGL) or kernel (OpenCL, no idea whether that works on raspis these days).

Yep, optimisation is a tricky thing in Python, especially on a small computer like the Raspberry Pi.

First, consider using integer operations where possible instead of float32. They are much faster.

An important drawback of Python is that it’s effectively single-threaded (because of the GIL), so it will use only one of the Pi’s four CPU cores. C++ is much faster and easier to parallelize (e.g. using the TBB library).

These solutions are probably simpler to implement than an OpenGL shader. The Raspberry Pi has no OpenCL support.

nobody in their right mind implements numerical algorithms in python (except as a proof of concept). you’re supposed to use libraries like numpy because they implement the stuff in C/C++.

numpy also uses OpenBLAS or Intel MKL for a lot of the heavy lifting. and those ARE multithreaded (yes, “even” when used from python).

raspberry pi 4 doesn’t (yet?) have OpenCL support. the 3B does have some support, and a bunch of caveats (“memory-access speed”)

I’m pretty sure that Numpy is single-threaded on the Raspberry Pi. I’m developing a project on the latest Raspbian (using the official Numpy packages), and it definitely runs on a single core. If somebody knows how to make it use multiple cores, I’m really interested.

About OpenCL: I wouldn’t recommend relying on an experimental implementation of it on the Pi. OpenCL code is quite hard to debug.


numpy uses OpenBLAS/MKL for some of its functions, but not all. you can check if it can use multi-threading:

import numpy as np

N = 2**13
A = np.random.random((N, N))
B = np.random.random((N, N))
C = np.dot(A, B)  # a large matmul goes through BLAS, which may run multithreaded

that keeps my 2012-era quadcore busy on all cores for a handful of seconds (Intel MKL I think). use 2**14 to keep a Ryzen 3900X busy for a handful of seconds (OpenBLAS).

if it does that, but the functions you actually need still run single-threaded, that might be a limitation of numpy, or the functions simply don’t run long enough to engage the multithreading.
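
one more thing worth checking is which BLAS your numpy build is actually linked against, and whether an environment variable is capping the thread pool. a quick sketch:

import os
import numpy as np

# prints the build configuration, including which BLAS/LAPACK numpy is linked
# against (OpenBLAS, MKL, or a plain reference BLAS)
np.show_config()

# common (not exhaustive) environment variables that limit the BLAS thread pool;
# if one of these is set to 1, the matmul test above stays on a single core
for var in ("OMP_NUM_THREADS", "OPENBLAS_NUM_THREADS", "MKL_NUM_THREADS"):
    print(var, "=", os.environ.get(var))

if the raspbian package turns out to be built against a plain reference BLAS, that alone would explain the single-core behaviour.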

I absolutely see that numpy has downsides in a lot of situations (lots of quick calls) but it’s not generally incapable.


Thanks for the tip! Unfortunately it just confirms my previous post. Numpy runs multithreaded on my laptop, but it’s single-threaded on a Raspberry Pi with Raspbian (I didn’t check with other OSes).

So I maintain my previous advice:

  • Use integer matrices (images) instead of float32
  • Implement your code in C++ with TBB parallelization (see this answer on how to do it)

Both methods will give a ~2-4 times speedup.
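
If rewriting everything in C++ feels like too big a step, a Python-only way to at least use all four cores is to split each frame into horizontal strips and process them in a multiprocessing pool. This is only a sketch of the idea, not the TBB approach from the linked answer; the file name and the adjust_strip helper are made up, and the inter-process copying can eat part of the gain on small frames:

import multiprocessing as mp

import cv2
import numpy as np

# precomputed levels curve (same idea as the lookup table suggested earlier)
inGamma = 0.5
x = np.arange(256, dtype=np.float32) / 255.0
LEVELS_LUT = np.clip((x ** (1 / inGamma)) * 255.0, 0, 255).astype(np.uint8)

def adjust_strip(strip):
    # runs in a worker process; LEVELS_LUT is inherited via fork on Raspbian/Linux
    return cv2.LUT(strip, LEVELS_LUT)

if __name__ == "__main__":
    frame = cv2.imread("input.png")            # stand-in for the camera frame
    strips = np.array_split(frame, 4, axis=0)  # one horizontal strip per core
    with mp.Pool(processes=4) as pool:
        out = np.vstack(pool.map(adjust_strip, strips))
    cv2.imshow("test_1", out)
    cv2.waitKey(0)

With a lookup-table operation the per-frame work is already small, so measure before and after: the parallel version only pays off once the per-frame processing is heavy enough to outweigh the copying.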