Help - understanding how to generate a point cloud from SinusoidalPattern::unwrapPhaseMap()

Hello,

Season’s Greetings :slight_smile:

First, I realize that I don't understand this subject as well as I should, so I apologize for any obvious faux pas. I am quite new to 3D scanning using structured light and am asking for assistance in learning this subject.

I have previously been able to use the GrayCodePattern class with a stereo camera and a projector to generate a point cloud. However, I am struggling to do the same using phase-shifted sinusoidal patterns. I know I am tripping up because the SinusoidalPattern class does not implement a decode() to create a disparity map, and the root cause of the problem is that I don't understand the theory as well as I should.

I am playing with the data at opencv_extra/testdata/cv/structured_light/data at 4.10.0 · opencv/opencv_extra · GitHub (specifically the files capture_sin_0.jpg, capture_sin_1.jpg and capture_sin_2.jpg).

I am using Python and am able to call the computePhaseMap and unwrapPhaseMap functions to create the respective phase maps.

When I was using the GrayCodePattern class for stereo 3D scanning, I used the decode() function to create a disparity map and then generated a point cloud from it. The SinusoidalPattern class, however, does not implement decode().

I have referred to the following papers:

Cong, Pengyu, et al. “Accurate dynamic 3d sensing with fourier-assisted phase shifting.” IEEE Journal of Selected Topics in Signal Processing 9.3 (2014): 396-408.

Gaur, Pranav Kant, Dinesh Madhukar Sarode, and Surojit Kumar Bose. “Development and accuracy evaluation of Coded Phase-shift 3D scanner.” arXiv preprint arXiv:2110.10520 (2021).

But I know I don't understand the theory and the maths as well as I should, and I would be grateful if someone could help simplify this for me and/or point me to the right resources.

To get a disparity map, I attempted the following formula:

import numpy as np

number_patterns = 16
disparity = (unwrapped_phase_map * number_patterns) / (2 * np.pi)

This gives me a disparity map that seems reasonable to me (though I am not sure); however, the resultant point cloud has some issues:

a) the background (the area where the fringe pattern isn't projected) also shows up in the point cloud (see the sketch after this list for my current guess at fixing this).
b) the region where the pattern is projected appears in a different plane and at an angle; I am not sure whether that is to be expected. I am attaching an image of what I see.
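My current guess for (a) is that I am not masking the background out before reprojecting. What I have in mind is roughly this (untested sketch; shadow_mask would be the mask that computePhaseMap can output, and Q the reprojection matrix from my earlier stereo calibration):

import cv2
import numpy as np

def mask_and_reproject(disparity, shadow_mask, Q):
    # zero out pixels the shadow mask marks as unlit, so the
    # background does not end up in the point cloud
    valid = shadow_mask > 0
    masked = np.where(valid, disparity, 0).astype(np.float32)
    # reproject to 3D and keep only the valid points
    points = cv2.reprojectImageTo3D(masked, Q)
    return points[valid]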

Would sincerely appreciate your help.

Thank you for your time.

Regards,
Jeetu

unfortunately, i don't recall trying the resp. opencv code, but
from an older project i remember that i had to 'unwrap' the phase maps - subtract the 1st phase map from a 'base phase value' - to get to some kind of disparity.
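roughly, the idea was something like this (just a sketch from memory, untested; the scale k is something you'd have to calibrate for your own setup):

import numpy as np

def phase_to_height(base_unwrapped, obj_unwrapped, k=1.0):
    # base_unwrapped: unwrapped phase of a flat reference plane,
    #                 captured once with the same patterns
    # obj_unwrapped:  unwrapped phase of the scanned object
    # the phase difference is proportional to height / 'disparity'
    diff = base_unwrapped.astype(np.float32) - obj_unwrapped.astype(np.float32)
    return k * diff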

then, imo, the test data is not fit for human experiments.
you'll need phase offsets at least as large as in the repo above to make a nice 3d pc

Hello @berak, thank you so much for your response. Appreciate the pointer to your earlier project. I shall go through it in detail :slight_smile:

i remember that i had to 'unwrap' the phase maps - subtract the 1st phase map
from a 'base phase value' - to get to some kind of disparity.

Yes, I have come across this approach a few times, but sometimes with the caveat that it is the simplest approach. This makes me wonder: is there a better way to compute disparity than subtracting our measured phase from a reference phase?
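From what I have read (and this is only my understanding, so the sketch below may well be naive), with two rectified cameras one can instead match unwrapped phase values along each image row and take the difference in x as a true disparity. All the names here are mine:

import numpy as np

def phase_match_disparity(phase_l, phase_r, mask_l, mask_r, max_disp=128):
    # crude per-row matching of two rectified unwrapped phase maps:
    # for each valid left pixel, find the right pixel in the same row
    # with the closest phase; disparity = x_left - x_right
    h, w = phase_l.shape
    disp = np.zeros((h, w), np.float32)
    for y in range(h):
        xs_r = np.flatnonzero(mask_r[y])
        if xs_r.size == 0:
            continue
        order = np.argsort(phase_r[y, xs_r])
        phases_r = phase_r[y, xs_r][order]
        xs_r = xs_r[order]
        for x in np.flatnonzero(mask_l[y]):
            p = phase_l[y, x]
            i = np.searchsorted(phases_r, p)
            # pick the nearer of the two neighbouring candidates
            cand = [c for c in (i - 1, i) if 0 <= c < xs_r.size]
            best = min(cand, key=lambda c: abs(phases_r[c] - p))
            d = x - xs_r[best]
            if 0 < d < max_disp:
                disp[y, x] = d
    return disp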

The FAPS phase-shifting algorithm seems very interesting.

When using the OpenCV GrayCodePattern class you can call decode(), which returns a disparity map. Unfortunately this function doesn't exist for the SinusoidalPattern class. I don't know why it hasn't been implemented, though I am sure there must be a good reason. I would be very willing to put in the work, though at the moment I don't have the expertise - but if someone were to explain what needs to be done, I would like to give it a go.

For now, I would appreciate any information on how to compute a disparity map robustly; what I ultimately need is a robust means of getting a point cloud.

Thank you once again for all your help. Appreciate your time :slight_smile:

Hello,

I have followed @berak's suggestion of not using the test data in opencv_extra and have instead collected images with sinusoidal patterns. I want to use the FAPS algorithm and therefore have the markers in my pattern - I still need to validate that these markers are actually being projected, since I can't see them easily in my image. I have attached the output I am receiving up to the unwrapping step.

I expected a smoother gradient, so this doesn't look right to me, but I don't know enough to say whether it is what I should expect. Is it? If not, any idea what's going wrong, and how can I improve on it?

The snippet of code I am using is:

# imports needed by the snippet
import os
import cv2
import numpy as np

# create object of SinusoidalPattern class, selecting the FAPS method
params = cv2.structured_light_SinusoidalPattern_Params()
params.methodId = cv2.structured_light.FAPS
sinusFaps = cv2.structured_light_SinusoidalPattern.create(params)

# load the three captured images with sinusoidal patterns
# (img_dir is the folder holding my captures)
captures = []
captures.append(cv2.imread(os.path.join(img_dir, "0.png"), cv2.IMREAD_GRAYSCALE))
captures.append(cv2.imread(os.path.join(img_dir, "1.png"), cv2.IMREAD_GRAYSCALE))
captures.append(cv2.imread(os.path.join(img_dir, "2.png"), cv2.IMREAD_GRAYSCALE))

# camera image size as (width, height), needed by unwrapPhaseMap
img_size = (captures[0].shape[1], captures[0].shape[0])

# rough shadow mask plus pre-allocated float32 output maps
_, shadow_mask = cv2.threshold(captures[0], 50, 255, cv2.THRESH_BINARY)
wrapped_phase_map = np.zeros_like(captures[0], dtype=np.float32)
unwrapped_phase_map = np.zeros_like(captures[0], dtype=np.float32)

sinusFaps.computePhaseMap(captures, wrapped_phase_map, shadow_mask)
sinusFaps.unwrapPhaseMap(wrapped_phase_map, img_size, unwrapped_phase_map, shadow_mask)
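For what it's worth, I am inspecting the maps by normalising them to 8-bit, roughly like this, in case the visualisation itself is part of the problem:

# normalise the float32 phase maps to 0..255 for display
vis_wrapped = cv2.normalize(wrapped_phase_map, None, 0, 255,
                            cv2.NORM_MINMAX, cv2.CV_8U)
vis_unwrapped = cv2.normalize(unwrapped_phase_map, None, 0, 255,
                              cv2.NORM_MINMAX, cv2.CV_8U)
cv2.imwrite("wrapped.png", vis_wrapped)
cv2.imwrite("unwrapped.png", vis_unwrapped)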

I also want to ask whether the sinusoidal phase-shift algorithms, and FAPS in particular, are what one would use in production for generating real-time point clouds.

My research application is an intraoral dental scanner - a wand that goes into the mouth; the clinician moves it around and, ideally, point clouds are generated in real time while 3D stitching / reconstruction also happens in real time. I have tried Gray code patterns and the result is good, but I am now attempting to reduce the scan time by reducing the number of patterns. Any thoughts would be appreciated.

I am horribly out of my depth here and would seriously appreciate help, both on the code above and on whether I am using the correct algorithm / approach (phase shifting with FAPS) for this purpose, or whether there is a better one.