I have built a custom 3D stereo cam out of two Canon Ixus 860 mounted side by side.
Now I want to prepare 3D stereo images from this custom stereo cam so that I can view them with a stereo headset like the Valve Index.
Currently I do not own a Valve Index, but I will get one at the end of this year.
Until then I want to prepare in advance with the theory of 3D stereo images.
My main concern is the lenses of my stereo cam versus those inside the Valve Index.
On the Ixus 860 I can set different focal lengths, e.g. 27, 37, 46, 55, 65, 77, 89 and 105, using the zoom lever. The Valve Index has its own lenses, some kind of fisheye I think.
The problem now is how to convert the left and right image from my stereo cam so that the result looks correct in the Valve Index.
The only zoom level on my stereo cam that comes closest to fisheye is 27.
Let's say I take a pair of images at zoom level 27.
What are the steps in OpenCV to get (more or less) accurate images for viewing with the Valve Index?
I think I have to undistort the left and right image from my stereo cam and then re-distort them to match the lenses of the Valve Index - am I right? Maybe I also need some image rectification (or not?).
Can someone give me some more background on how to do that?
What are the parameters of the Valve Index lenses?
Thanks for any help in this regard!
those VR headsets always come with SDKs that handle the warping.
you give it a 3D scene and it does the rest.
so what you do is, you pick any focal length (which implies a field of view), and then you prepare a 3D scene that puts that picture in front of the 3D camera/viewport at the right size/distance/field of view.
basically… find the SDK, learn to use it.
Thanks for the reply. But what about the preparation of the images that come from my 3D stereo cam?
I think I have to calibrate it and do stereo rectification before I can use the images in the SDK - right?
And are there functions directly in OpenCV to do the warping?
If yes, what are the names of those functions?
The background is that I may try to build a simple stereo image viewer like Google Cardboard first. And for that viewer and its lenses I need the image warping as well.
Thanks for further help.
oh right, I forgot all about the binocular aspect when I answered.
hm… anyway, an SDK should have the means, or at least explain how, to warp a view so it looks correct when wearing the headset.
don’t expect OpenCV to do something that an SDK is supposed to do. OpenCV isn’t a magic genie for every problem, and showing pictures correctly on a headset is computer graphics, not computer vision. a VR headset has unusual lenses and needs a specific warp so the picture looks straight to the wearer. OpenCV doesn’t know how the lenses of your headset work; the manufacturer’s SDK (or its documentation) will. you can maybe use OpenCV to perform the warping, but you have to find out from the manufacturer what warping the headset needs.
you don’t necessarily have to calibrate anything for your cameras. you can calculate the camera projection matrices from resolution and FoV, and you know the “pupillary distance”. that’s good enough to get started. you can do stereo calibration later. that’s a separate problem from using a VR headset’s SDK to show pictures. focus on one problem at a time.
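for example, assuming the “27” zoom setting is a 35mm-equivalent focal length and the Ixus 860 shoots at its full 3264×2448 resolution (both assumptions, check your EXIF data), the camera matrix falls out of a few lines of math:

```python
import math

def camera_matrix(width, height, focal_35mm_equiv):
    # horizontal FoV implied by a 35mm-equivalent focal length
    # (a full-frame sensor is 36 mm wide)
    fov_h = 2.0 * math.atan(36.0 / (2.0 * focal_35mm_equiv))
    fx = (width / 2.0) / math.tan(fov_h / 2.0)  # focal length in pixels
    fy = fx                                     # square pixels assumed
    cx, cy = width / 2.0, height / 2.0          # principal point at center
    return [[fx, 0.0, cx],
            [0.0, fy, cy],
            [0.0, 0.0, 1.0]]

# assumed: Ixus 860 at the "27" setting, full 8 MP resolution
K = camera_matrix(3264, 2448, 27.0)
# fx = fy ≈ 2448 px, principal point (1632, 1224)
```

that matrix plus the 72 mm baseline is enough to place the two image planes in a 3D scene.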
are your cameras around 63mm/2.5" apart, same as a human’s eyes, or do you plan to make the view feel weird, or will you do 3d reconstruction on the stereo pictures?
have you looked for an SDK and documentation on your headset? you need to put some work into this. you aren’t done until you know what functions to call in the VR headset SDK or you have found an exact description of the exact mathematical function to apply for this particular VR headset. “warp” can mean much and imply arbitrary complexity.
thanks a lot for further help.
Since I will get my Valve Index (hopefully) at the end of this year, I want to start with a simpler approach, e.g. Google Cardboard. This is a simple 3D stereo viewer, but it already comes with lenses that need a warped left and right image source.
My 3D stereo cam has a lens distance of 72 mm and provides left and right images which are distorted, as is typical for real lenses and real cams. The distortion is mostly barrel distortion, and the two images do not show the same content because one cam is mounted upside down.
This distortion I can get rid of using OpenCV (am I right?). And I believe I can also crop both images so that they show the same content.
Then I have two images, one for the left and one for the right eye, more or less undistorted and showing the same content.
To view these images with Google Cardboard I have to apply the correct warping.
To me it looks like a barrel distortion that I have to add to the left and right image, but I am not sure.
So what I want to try is to use OpenCV to add some barrel distortion to my left and right image and view the result through Google Cardboard.
If it looks right, I am happy; if not, I will try different values for the barrel distortion.
Is this the correct way to make my left and right images work with Google Cardboard?
Or is barrel distortion not the correct ‘warping’ operation?
to remove lens distortion from images taken with a camera, OpenCV is the right tool.
OpenCV has APIs to calibrate the lens distortion (measure it).
to re-apply lens distortion… it can do that too. however, I don’t know how you’ll calibrate that.
lens distortion isn’t one value. it’s multiple. the simplest model is a polynomial. varying the first parameter has some effect but you’ll need the other parameters as well to model real-world lenses well.
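to make that concrete, here is a minimal numpy sketch of such a polynomial warp that re-applies a barrel-style distortion to an already-rectilinear image (nearest-neighbour sampling for brevity; the coefficient is made up and would need tuning by eye, it is not what cardboard lenses actually require):

```python
import numpy as np

def apply_radial(img, k1, k2=0.0):
    """Sample the source at r * (1 + k1*r^2 + k2*r^4) for every output pixel.

    With this inverse-mapping formulation, positive k1 squeezes content
    inward (barrel look) and negative k1 stretches it outward (pincushion).
    """
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    xn, yn = (xx - cx) / cx, (yy - cy) / cy   # normalized to [-1, 1]
    r2 = xn ** 2 + yn ** 2
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2
    sx = np.clip(xn * scale * cx + cx, 0, w - 1).round().astype(int)
    sy = np.clip(yn * scale * cy + cy, 0, h - 1).round().astype(int)
    return img[sy, sx]

# made-up coefficient; vary it and judge through the cardboard lenses
warped = apply_radial(np.zeros((480, 640, 3), np.uint8), k1=0.2)
```

in practice you would precompute the sx/sy maps once and feed them to cv2.remap with bilinear interpolation for speed and quality.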
the correct way to do “google cardboard” is to look at their SDK. google cardboard implies specific lenses in a specific configuration (distance), and you need to know the phone’s screen size and resolution. the google cardboard SDK knows that.
if you don’t want to do it “correctly” but rather do it yourself, without using the SDKs of “google cardboard” or the VR headset, you’ll have work ahead of you.