Normalizing CV_32F images with negative pixel values gives output with some black regions of the original image turning white

I have a signed float32 image that displays fine with imshow() but comes out black when I save it with imwrite(). I suspect this is because the float32 array has values roughly between -6 and 6, so the saved output ends up with pixel values of at most about 6 on the 0-255 scale.
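For context, here is a minimal reproduction of what I think is happening (the array below is a synthetic stand-in for one flow channel, not my actual data): casting such small float values straight to uint8 leaves everything near zero, which looks black.

import numpy as np
import cv2

# synthetic stand-in for one flow channel: float32 values roughly in [-6, 6]
flow_x = np.random.uniform(-6, 6, (240, 320)).astype(np.float32)

# what effectively seems to happen on save: a saturating cast to uint8
as_uint8 = np.clip(flow_x, 0, 255).astype(np.uint8)
print(as_uint8.min(), as_uint8.max())  # everything lands in 0..6, i.e. near-black
cv2.imwrite('flow_x_black.jpg', as_uint8)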

I tried min-max normalization, but this results in the black parts of the image flashing white as I iterate through the frames.

The float32 array I’m getting is the Brox optical flow output, which I’d like to save exactly as it is displayed by imshow().

How do I normalize these optical flow arrays so that I can save them as .jpg or another format that is small in size?
I’ve tried TIFF to get accurate output at the cost of size, but the results are still distorted.

[attached frames showing black regions of the original turning white after normalization]

Code:

# flow computation (the result has two channels: x and y displacement)
gpu_flow = brox_of.calc(gpu_prev, gpu_frame, None)

# splitting flow into its x and y components
gpu_flow_x = cv2.cuda_GpuMat(gpu_flow.size(), cv2.CV_32FC1)
gpu_flow_y = cv2.cuda_GpuMat(gpu_flow.size(), cv2.CV_32FC1)
temp = cv2.cuda_GpuMat(gpu_flow.size(), cv2.CV_8U)
cv2.cuda.split(gpu_flow, [gpu_flow_x, gpu_flow_y], cv2.cuda.Stream_Null())

# attempts to normalize each component to the 0-255 range on the GPU
#gpu_flow_x = cv2.cuda.normalize(gpu_flow_x, 0, 1, cv2.NORM_MINMAX, -1)
#print(gpu_flow_x.type())
#gpu_flow_y = cv2.cuda.normalize(gpu_flow_y, 0, 255, cv2.NORM_MINMAX, -1)

# attempt to convert the 32-bit flow to 8-bit on the GPU
#help(cv2.cuda_GpuMat.convertTo)
gpu_flow_x.convertTo(cv2.CV_8U, temp)

# download the components to host memory and display
x = gpu_flow_x.download()
y = gpu_flow_y.download()
#x3d = np.expand_dims(x, axis=2)
#x = x.astype(np.uint8)
cv2.imshow('temp', x)

The code looks bad now, but this is to show what functions I’ve tried using.

hmm, simple test case (on CPU):

Mat m(5,5,CV_32F);
randu(m,-6,6);
cout << m << endl;
normalize(m,m,0,255,NORM_MINMAX);
cout << m << endl;

[0.36339352, -3.6088896, -1.1872867, 3.7726212, -0.75440431;
 -3.0145235, 3.2772608, 3.1451247, -2.3064661, 2.4291804;
 -0.25863352, 3.5062802, -4.9698825, -5.099277, -4.0389194;
 -2.4026494, 4.867847, 2.5162301, -4.2034502, 3.1852791;
 -4.5086231, -5.9552569, 0.61816305, 5.9783916, -4.0920944]
[135.01788, 50.137535, 101.88271, 207.86676, 111.13261;
 62.838036, 197.28183, 194.45833, 77.967911, 179.15993;
 121.72631, 202.17555, 21.055634, 18.29071, 40.948593;
 75.912659, 231.26971, 181.02002, 37.432877, 195.31635;
 30.911896, 0, 140.46184, 255, 39.81234]

(this looks fairly ok to me !)

so, can you try to repeat your problem on CPU ?
(might be a cuda-specific problem)
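(roughly the same check in Python, my own quick translation of the snippet above, in case that is easier to run:)

import numpy as np
import cv2

# 5x5 float32 matrix with values in [-6, 6], like the C++ randu example
m = np.random.uniform(-6, 6, (5, 5)).astype(np.float32)
print(m)

# min-max normalize to [0, 255]; dst is passed as None on the CPU path
out = cv2.normalize(m, None, 0, 255, cv2.NORM_MINMAX)
print(out)  # smallest input maps to 0, largest to 255, the rest in between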


Ah, I’d like to add that cv2.normalize and cv2.cuda.normalize were giving me different outputs, and the number of arguments they expect is a bit different too.
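If I read the bindings right, the argument order differs roughly like this (a small sketch with a synthetic array, not my actual pipeline):

import numpy as np
import cv2

x = np.random.uniform(-6, 6, (5, 5)).astype(np.float32)

# CPU version: dst comes second (pass None), then alpha, beta, norm_type
cpu_out = cv2.normalize(x, None, 0, 255, cv2.NORM_MINMAX)

# CUDA version: no dst up front; alpha, beta, norm_type, then the output dtype
gpu_in = cv2.cuda_GpuMat()
gpu_in.upload(x)
gpu_out = cv2.cuda.normalize(gpu_in, 0, 255, cv2.NORM_MINMAX, cv2.CV_32F)

print(cpu_out)
print(gpu_out.download())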

I have tried it on my CPU and the output still doesn’t match the original, though I’ll try it again.
I’ll share the results soon.

I remember the CPU implementation not giving those black-to-white problems, but there were a lot of artifacts.


To add, my final goal is to be able to write the images to disk just as they are being shown.

Since there is a disparity between the output of imshow and imwrite, I’m trying to normalize what imshow takes as input (a single-channel float32 array, CV_32FC1) so that the saved uint8 output is as close as possible to what is displayed.
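As far as I understand the docs, imshow treats floating-point images as if they were in the [0, 1] range and multiplies by 255 before display, while imwrite does not, which would explain the disparity. A minimal sketch of replicating that mapping before saving (x here is a synthetic stand-in for one downloaded flow channel):

import numpy as np
import cv2

# x stands in for one downloaded flow channel
x = np.random.uniform(-6, 6, (240, 320)).astype(np.float32)

# replicate what imshow appears to do with float images: treat values as [0, 1],
# multiply by 255, and saturate everything outside that range
shown_like = np.clip(x * 255.0, 0, 255).astype(np.uint8)
cv2.imwrite('flow_x_like_imshow.jpg', shown_like)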

I have tried using the convertTo function to achieve this but I am getting the following error.

---------------------------------------------------------------------------
error                                     Traceback (most recent call last)
<ipython-input-10-7d55d48999ef> in <module>
     92 
     93     #help(cv2.cuda_GpuMat.convertTo)
---> 94     temp = gpu_flow_x.convertTo(cv2.CV_8U)
     95     x = gpu_flow_x.download()
     96     y = gpu_flow_y.download()

error: OpenCV(4.5.2) C:\OpenCV_Build\opencv-4.5.2\modules\core\src\matrix_wrap.cpp:342: error: (-213:The function/feature is not implemented) getGpuMat is available only for cuda::GpuMat and cuda::HostMem in function 'cv::_InputArray::getGpuMat'

I have already defined temp as a GpuMat object.

#splitting flow into 2
gpu_flow_x = cv2.cuda_GpuMat(gpu_flow.size(), cv2.CV_32FC1)
gpu_flow_y = cv2.cuda_GpuMat(gpu_flow.size(), cv2.CV_32FC1)
temp = cv2.cuda_GpuMat(gpu_flow.size(), cv2.CV_8U)
cv2.cuda.split(gpu_flow, [gpu_flow_x, gpu_flow_y], cv2.cuda.Stream_Null())

This is the difference between the original float32 image (which has negative pixel values) and the CPU normalize function’s output.

[screenshots comparing the original float32 frames with the output of the CPU normalize function]

And here is my block of code:

# splitting flow into its x and y components
gpu_flow_x = cv2.cuda_GpuMat(gpu_flow.size(), cv2.CV_32FC1)
gpu_flow_y = cv2.cuda_GpuMat(gpu_flow.size(), cv2.CV_32FC1)

cv2.cuda.split(gpu_flow, [gpu_flow_x, gpu_flow_y], cv2.cuda.Stream_Null())

# download both components to host memory
x = gpu_flow_x.download()
y = gpu_flow_y.download()

# clip negative values to zero (lossy, see below)
xclip = np.clip(x, 0, x.max())
yclip = np.clip(y, 0, y.max())

#cv2.imshow('temp1', xclip)

# the raw float32 component as displayed by imshow
cv2.imshow('temp2', x)

#xnorm = cv2.cuda.normalize(xclip, 0, 1.0, cv2.NORM_MINMAX, -1)

# CPU normalization attempts
xnorm = cv2.normalize(x, 0, 255.0, cv2.NORM_MINMAX)
ynorm = cv2.normalize(yclip, 0, 255.0, cv2.NORM_MINMAX)
cv2.imshow('temp3', xnorm)

As you can see, I’ve also added lines to clip the negative values to 0, but that’s straight-up data loss, and even that method fails for a considerable number of images (black turning into white; I suspect negative values are wrapping around to 255).
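For illustration, a fixed-range mapping (rather than per-frame min/max) might look like the sketch below; BOUND is an assumed bound on the flow magnitude based on the roughly -6 to 6 values I’m seeing, and flow_to_uint8 is just a name for the helper. Because the mapping is the same for every frame, the black regions cannot jump around the way they do with per-frame min-max normalization.

import numpy as np
import cv2

BOUND = 6.0  # assumed upper limit on |flow|; would need tuning for the data

def flow_to_uint8(chan, bound=BOUND):
    # fixed linear map from [-bound, bound] to [0, 255]; values outside saturate,
    # and the mapping is identical for every frame, so it cannot flicker
    scaled = (chan + bound) * (255.0 / (2.0 * bound))
    return np.clip(scaled, 0, 255).astype(np.uint8)

# x and y would be the downloaded flow channels; synthetic stand-ins here
x = np.random.uniform(-6, 6, (240, 320)).astype(np.float32)
y = np.random.uniform(-6, 6, (240, 320)).astype(np.float32)

cv2.imwrite('flow_x.jpg', flow_to_uint8(x))
cv2.imwrite('flow_y.jpg', flow_to_uint8(y))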

I’d be grateful if you could tell me a suitable method to save the flow output in a storage-efficient manner. I ultimately plan to use the flows for training a model, but saving them as NumPy arrays takes up too much space.
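To make the question concrete, this is the kind of thing I mean (a rough sketch only; the helper names and the PNG-plus-min/max scheme are just one idea, not something I have settled on): quantize each float32 channel to uint8, store the per-frame min/max alongside, and write lossless PNGs so the values can be approximately reconstructed later.

import numpy as np
import cv2

def save_flow_compact(flow_x, flow_y, prefix):
    # quantize each channel to uint8 and remember the min/max for reconstruction
    bounds = []
    for name, chan in (('x', flow_x), ('y', flow_y)):
        lo, hi = float(chan.min()), float(chan.max())
        if hi > lo:
            q = np.round((chan - lo) * (255.0 / (hi - lo))).astype(np.uint8)
        else:
            q = np.zeros(chan.shape, dtype=np.uint8)
        cv2.imwrite(f'{prefix}_{name}.png', q)  # PNG keeps the uint8 values losslessly
        bounds.append((lo, hi))
    np.save(f'{prefix}_bounds.npy', np.array(bounds, dtype=np.float32))

def load_flow_compact(prefix):
    (xlo, xhi), (ylo, yhi) = np.load(f'{prefix}_bounds.npy')
    fx = cv2.imread(f'{prefix}_x.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
    fy = cv2.imread(f'{prefix}_y.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
    return fx * (xhi - xlo) / 255.0 + xlo, fy * (yhi - ylo) / 255.0 + ylo

# usage with synthetic stand-ins for the two flow channels
fx = np.random.uniform(-6, 6, (240, 320)).astype(np.float32)
fy = np.random.uniform(-6, 6, (240, 320)).astype(np.float32)
save_flow_compact(fx, fy, 'frame_0001')
rx, ry = load_flow_compact('frame_0001')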

Thanks for reading my post.