Feel free to split hairs, I’m new to CV and projections etc.
After looking further into this, this is what they mention:
The output of these filters formed the input to the photoreceptors that were equally spaced at 2° along the elevation and azimuth of the eye. The array of photoreceptors formed a rectangular grid in the cylindrical projection with 91 rows and 181 columns.
So I'm not clear whether it is an equirectangular projection (correct me if I'm wrong), as doesn't that project onto a cylinder? It seems the best thing would be a gnomonic projection. Here, I would do something like turn pixel coordinates into spherical coordinates (azimuth and elevation), treating the image plane as sitting at the focal length from the viewpoint, then just map each degree to a new image (roughly like the sketch below). This would be rectangular, though neither this method nor the paper perfectly imitates insect eyes: as far as I know they tessellate a sphere fairly evenly with their ommatidia, whereas with the paper's method and mine the density of resolution increases greatly near the poles.
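To make that pixel-to-spherical step concrete, here is a minimal sketch assuming a simple pinhole model with focal length `f` in pixels and principal point `(cx, cy)`. The function name and parameters are my own placeholders, not anything from the paper:

```python
import numpy as np

def pixel_to_az_el(u, v, cx, cy, f):
    """Convert pixel coordinates (u, v) to (azimuth, elevation) in degrees,
    assuming the image plane sits at distance f (focal length, in pixels)
    from the viewpoint, with (cx, cy) the principal point."""
    dx = u - cx                      # horizontal offset from the optical axis
    dy = cy - v                      # vertical offset (image rows grow downward)
    azimuth = np.degrees(np.arctan2(dx, f))
    # elevation measured from the horizontal plane through the optical axis
    elevation = np.degrees(np.arctan2(dy, np.hypot(dx, f)))
    return azimuth, elevation
```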
There is a paper from Brian Stonelake, available here, that explains this projection and presents the formula:
With a bit of tweaking, I can easily use this to sample from my input image. If the mapping is precomputed, the main bottleneck would just be memory access.
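For what it's worth, this is roughly how I picture the precomputed version, using the standard inverse gnomonic relations (u = cx + f·tan(az), v = cy − f·tan(el)/cos(az)) and OpenCV's `remap` to do the per-frame lookups. The field-of-view limits, step size, focal length, and function name are placeholders I made up for the sketch, not values from the paper (which uses 2° spacing over the whole eye):

```python
import numpy as np
import cv2

def build_gnomonic_maps(h, w, f, az_max=60.0, el_max=60.0, step=1.0):
    """Precompute, for every (azimuth, elevation) cell, the source pixel
    in the perspective input image. Returns float32 maps for cv2.remap."""
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    az = np.radians(np.arange(-az_max, az_max + step, step))
    el = np.radians(np.arange(-el_max, el_max + step, step))
    az_grid, el_grid = np.meshgrid(az, el)   # one row per elevation value
    # inverse gnomonic: where does each viewing direction hit the image plane?
    map_x = (cx + f * np.tan(az_grid)).astype(np.float32)
    map_y = (cy - f * np.tan(el_grid) / np.cos(az_grid)).astype(np.float32)
    return map_x, map_y

# usage: build the maps once, then each frame is just a memory-bound lookup
# img = cv2.imread("frame.png")              # hypothetical input image
# map_x, map_y = build_gnomonic_maps(img.shape[0], img.shape[1], f=400.0)
# eye_view = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
```

Directions that fall outside the input image just come back black with the default border handling, which seems fine for this purpose.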