I am experimenting with undistorting images. I have the image below, and using the undistort function I get the result shown. The fisheye module doesn’t work. Is this because my image isn’t from a fisheye lens but a wide-angle one instead? And in either case, how do I reduce the perspective distortion?
FYI I have lost the specs on this lens but got its intrinsics from a calibration routine.
The images are too dark, but what I see is an undistorted image as output. What is it you say is not working? I believe you already have fisheye distortion parameters for your lens and used them with fisheye::undistort to get the output.
There is radial distortion and there is tangential distortion; I’m a bit confused about what you mean by perspective distortion. If you want to get a frontal view of what appears to be a rectangle, after undistortion you can apply a homography (easy to say, not so easy to do).
Can you be a little more specific? That is exactly my goal. I have a homography set up, but after I select the corners their reprojection has a high RMS error.
Also, here are some better-illuminated images. They are dark because they were shot through an IR bandpass filter (835 to …)
I am using the “vanilla” undistort routine, not fisheye::undistort, unless of course they are the same thing. Is this lens both a fisheye and a wide-angle lens? Should I be using fisheye::undistort rather than the regular undistort?
@rg0001
Yes, you should use fisheye undistortion, but to do so you need to obtain distortion parameters for the fisheye model, with a fisheye calibration routine.
The simple undistortion model applies to pinhole cameras, those that are meant to give you a collinear image but aren’t perfect (collinear means real straight lines appear as straight lines in the image). Wide-angle lenses don’t even try to give a collinear image, and the fisheye model is more advanced in general and especially suitable for wide-angle lenses.
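To make the difference concrete, here is a rough numpy sketch of the two distortion models OpenCV uses, applied to a normalized image point. The coefficient values are made up for illustration; the real ones come from your calibration:

```python
import numpy as np

def distort_pinhole(x, y, k1, k2, k3=0.0):
    """Radial part of the Brown-Conrady model used by the standard
    calibrate/undistort routines (tangential terms omitted here)."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    return x * scale, y * scale

def distort_fisheye(x, y, k1, k2, k3=0.0, k4=0.0):
    """Equidistant model used by the cv::fisheye module: the distortion
    is a polynomial in the ray angle theta, not in r^2."""
    r = np.hypot(x, y)
    if r < 1e-12:
        return x, y
    theta = np.arctan(r)  # angle of the incoming ray
    theta_d = theta * (1.0 + k1 * theta**2 + k2 * theta**4
                       + k3 * theta**6 + k4 * theta**8)
    return x * theta_d / r, y * theta_d / r

# The two models diverge strongly away from the image center,
# which is why parameters from one cannot be fed to the other.
x, y = 1.0, 0.5  # a normalized point well off-axis
print(distort_pinhole(x, y, k1=-0.2, k2=0.05))
print(distort_fisheye(x, y, k1=-0.02, k2=0.001))
```

This is also why calibrating with cv::calibrateCamera and then feeding those coefficients to fisheye::undistort (or vice versa) gives garbage: the coefficient vectors parameterize different functions.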
If you are working with keypoints, the standard procedure is to detect keypoints in the original image and then undistort their coordinates, avoiding undistorting the whole image.
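For illustration, here is a minimal numpy sketch of the idea behind undistorting point coordinates: a radial-only model inverted by fixed-point iteration. This is roughly what cv::undistortPoints does internally, except the real routine also handles tangential terms and folds the camera matrix in and out; the coefficients below are made up:

```python
import numpy as np

def distort_point(x, y, k1, k2):
    """Forward radial model on normalized coordinates."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2**2
    return x * scale, y * scale

def undistort_point(xd, yd, k1, k2, iters=20):
    """Invert the forward model by fixed-point iteration:
    start from the distorted coords and repeatedly divide out
    the radial scale evaluated at the current estimate."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2**2
        x, y = xd / scale, yd / scale
    return x, y

# Round-trip check: distort, then undistort, recovers the point.
k1, k2 = -0.2, 0.05
xd, yd = distort_point(0.3, -0.1, k1, k2)
xu, yu = undistort_point(xd, yd, k1, k2)
print(xu, yu)  # close to (0.3, -0.1)
```

Undistorting a handful of keypoint coordinates this way is far cheaper than remapping every pixel of the image, which is the point of the standard procedure above.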
I actually had one more simple question based on your responses so far. I understand about finding the homography (H) between two planes and then warping the perspective, now that you have explained it. You mentioned doing this with points, which makes sense for using as few host resources as possible. My last question: I’ve got H between the corner coordinates of the original and the corner coordinates of the destination. What about a point in the original whose position in the destination needs to be calculated using the existing homography?
I think you are saying you have an H that maps points from source to destination, and you want to be able to map points from destination to source. For that you will want to use the inverse of H, so H.inv() (or compute the inverse homography by swapping source and destination when you estimate it).
Note that your 2D coordinates have to be represented as a 3-vector to multiply by the H matrix, so use (x, y, 1). The result of the multiplication will also be a 3-vector (X, Y, W); to get the 2D coordinates in the destination plane, divide X and Y by W: (x’, y’) = (X/W, Y/W).
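Putting that together, a small numpy sketch of mapping a point through a homography and back. The H values below are an arbitrary example, not from a real calibration:

```python
import numpy as np

# An example 3x3 homography (in practice this comes from findHomography).
H = np.array([[1.2,  0.1,  30.0],
              [0.0,  1.1,  15.0],
              [1e-4, 2e-4,  1.0]])

def apply_h(H, x, y):
    """Map a 2D point through H: lift to homogeneous (x, y, 1),
    multiply, then divide by W to get back to 2D."""
    X, Y, W = H @ np.array([x, y, 1.0])
    return X / W, Y / W

# Source -> destination:
xp, yp = apply_h(H, 100.0, 50.0)

# Destination -> source: use the inverse homography (H.inv() in C++).
x_back, y_back = apply_h(np.linalg.inv(H), xp, yp)
print((xp, yp), (x_back, y_back))  # x_back, y_back == (100.0, 50.0) up to rounding
```

OpenCV’s perspectiveTransform does the same lift-multiply-divide for you on arrays of points, so you don’t have to hand-roll the homogeneous math.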