I’m working on a light estimation algorithm. My program produces spherical harmonics coefficients, which can be used to approximate the environmental light from all directions.
With this estimation I want to determine the main light: its direction and color. Since I have already implemented a renderer that gives me fast renderings of my results, I thought I could use a simple blob detector and the keypoint properties (center point, size, etc.) to calculate the values I need. However, there are some issues…
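For context, the detection step I have in mind is just OpenCV’s `SimpleBlobDetector` run on a grayscale rendering, reading the keypoint properties afterwards. A minimal sketch (file name and parameter values are placeholders):

```python
import cv2

# Configure the detector for bright (light) blobs; values are placeholders.
params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 255        # detect bright blobs (use 0 for dark ones)
params.filterByArea = True
params.minArea = 20           # ignore tiny speckles
detector = cv2.SimpleBlobDetector_create(params)

# 8-bit grayscale rendering of the estimated lighting (placeholder file).
render = cv2.imread("render.png", cv2.IMREAD_GRAYSCALE)

keypoints = detector.detect(render)
for kp in keypoints:
    cx, cy = kp.pt            # blob center in pixel coordinates
    print(f"center=({cx:.1f}, {cy:.1f}), diameter={kp.size:.1f}")
```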
Renderings
For example, if I use a spherical rendering (top image):
The backside of the sphere is rendered in such a way that the resulting light area is stretched into a circle at the edges, which makes it hard for the blob detector to recover an accurate position and size.
I can get around this by rendering the same values with the spherical angles (phi and theta) mapped to the rows and columns of an image, which gives the bottom image:
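Conceptually, this rendering evaluates the real SH basis for each pixel’s direction, with rows mapped to theta and columns to phi. A simplified sketch (assuming 9 real SH coefficients per RGB channel in the usual band-0..2 order, with the standard real-SH constants; your coefficient layout may differ):

```python
import numpy as np

def render_equirect(coeffs, height=64, width=128):
    """Render SH coefficients into an equirectangular image.

    coeffs: array of shape (9, 3), one row per real SH basis function
    (bands 0..2), one column per RGB channel (assumed layout).
    """
    theta = np.linspace(0, np.pi, height)       # rows: polar angle
    phi = np.linspace(0, 2 * np.pi, width)      # columns: azimuth
    phi, theta = np.meshgrid(phi, theta)
    # Unit direction vector for each pixel
    x = np.sin(theta) * np.cos(phi)
    y = np.sin(theta) * np.sin(phi)
    z = np.cos(theta)
    # Real spherical harmonics basis, bands 0..2 (standard constants)
    basis = np.stack([
        0.282095 * np.ones_like(x),
        0.488603 * y,
        0.488603 * z,
        0.488603 * x,
        1.092548 * x * y,
        1.092548 * y * z,
        0.315392 * (3 * z ** 2 - 1),
        1.092548 * x * z,
        0.546274 * (x ** 2 - y ** 2),
    ], axis=-1)                                 # shape (H, W, 9)
    img = basis @ coeffs                        # shape (H, W, 3)
    return np.clip(img, 0, None)
```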
This gives me more precise areas. However, as we can see, rendered onto a sphere the above would be only a single light area and a smaller dark area, because the image wraps around horizontally. What I want is to get a single blob for this dark area. Is there a built-in option in OpenCV for that, or do I have to do it manually, for example by repeating the texture at the edges or similar?
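In case it does have to be done manually, this is roughly what I mean by repeating the texture: pad the image horizontally with wrapped copies of itself (`cv2.copyMakeBorder` with `BORDER_WRAP`), detect blobs in the padded image, then fold the detected centers back into the original width. A sketch (the pad width is an assumption and should exceed the widest expected blob):

```python
import cv2

def detect_blobs_wrapped(gray, pad=None):
    """Detect blobs in an 8-bit equirectangular image whose columns wrap at 360 degrees."""
    w = gray.shape[1]
    if pad is None:
        pad = w // 4              # assumption: no blob spans more than 90 degrees
    # BORDER_WRAP fills the left pad from the right edge and vice versa,
    # so a blob crossing the seam becomes one connected region.
    padded = cv2.copyMakeBorder(gray, 0, 0, pad, pad, cv2.BORDER_WRAP)

    params = cv2.SimpleBlobDetector_Params()
    params.filterByColor = True
    params.blobColor = 0          # the dark area; use 255 for light areas
    detector = cv2.SimpleBlobDetector_create(params)
    keypoints = detector.detect(padded)

    blobs = []
    for kp in keypoints:
        x, y = kp.pt
        x -= pad                  # undo the padding offset
        if 0 <= x < w:            # keep one copy per blob, drop pad duplicates
            blobs.append(((x, y), kp.size))
    return blobs
```

With the pads in place, the two dark regions at the left and right edges merge into one blob, and the offset step puts its center back into the original column range.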