I am a newbie using a wide-angle video camera, and I have calibrated it for undistortion using a checkerboard.
Is it possible on a Jetson, or on a more powerful CPU, to correct the distortion in real time? Maybe by remapping pixel by pixel, or some other way?
Thanks in advance for any help!
It depends on what you mean by real time, I suppose, and on the size of the image and how many frames per second you want to undistort, but it’s definitely possible for a range of real-world use cases.
In OpenCV, the cv::undistort() function will undistort an image based on the camera matrix and distortion coefficients. You can speed this up by splitting the work into two calls:
cv::initUndistortRectifyMap() – called once to generate the undistortion maps
cv::remap() – called on each image you wish to undistort, using the maps generated above.
The interpolation type you use will affect runtime performance, too.
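Here’s a minimal sketch of that two-call approach, assuming you already have the camera matrix and distortion coefficients from your checkerboard calibration. The calibration file name, its field names, and the camera index are placeholders for whatever you actually use:

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    // Load the calibration you computed from the checkerboard images.
    // "calib.yml" and the field names are placeholders for your own file.
    cv::Mat cameraMatrix, distCoeffs;
    cv::FileStorage fs("calib.yml", cv::FileStorage::READ);
    fs["camera_matrix"] >> cameraMatrix;
    fs["distortion_coefficients"] >> distCoeffs;

    cv::VideoCapture cap(0);            // your wide-angle camera
    cv::Size imageSize(1920, 1080);     // 1080p frames

    // Step 1: build the undistortion maps once.
    cv::Mat map1, map2;
    cv::initUndistortRectifyMap(cameraMatrix, distCoeffs, cv::Mat(),
                                cameraMatrix, imageSize, CV_16SC2, map1, map2);

    // Step 2: remap every frame using the precomputed maps.
    // INTER_LINEAR is a reasonable speed/quality trade-off; INTER_NEAREST is faster.
    cv::Mat frame, undistorted;
    while (cap.read(frame))
    {
        cv::remap(frame, undistorted, map1, map2, cv::INTER_LINEAR);
        cv::imshow("undistorted", undistorted);
        if (cv::waitKey(1) == 27) break;  // Esc to quit
    }
    return 0;
}
```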
If you have a GPU to work with, you could do a mesh-based texture-mapped de-warp. I suspect this would be blisteringly fast on modern hardware.
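If you’d rather not write a mesh/texture warp by hand and your OpenCV build includes the CUDA modules, cv::cuda::remap can apply the same precomputed maps on the GPU. A rough sketch (not the mesh-based approach, just a GPU-accelerated remap; it assumes the calibration is loaded as in the previous example):

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/cudawarping.hpp>  // requires an OpenCV build with CUDA support

// Sketch: GPU-side remap using precomputed undistortion maps.
void undistortOnGpu(cv::VideoCapture& cap,
                    const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs)
{
    cv::Size imageSize(1920, 1080);

    // cuda::remap expects floating-point maps, so request CV_32FC1 here.
    cv::Mat mapX, mapY;
    cv::initUndistortRectifyMap(cameraMatrix, distCoeffs, cv::Mat(),
                                cameraMatrix, imageSize, CV_32FC1, mapX, mapY);

    // Upload the maps to the GPU once.
    cv::cuda::GpuMat d_mapX(mapX), d_mapY(mapY);
    cv::cuda::GpuMat d_frame, d_undistorted;

    cv::Mat frame, undistorted;
    while (cap.read(frame))
    {
        d_frame.upload(frame);                       // host -> device
        cv::cuda::remap(d_frame, d_undistorted,
                        d_mapX, d_mapY, cv::INTER_LINEAR);
        d_undistorted.download(undistorted);         // device -> host
        // ... hand `undistorted` to your encoder/recorder here ...
    }
}
```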
Thanks for the quick reply. I have a project that will use a variety of wide-angle cameras, and I want to record 1080p video without distortion. A hardware correction is not practical (and would be expensive).
I have done post-processing undistortion routines for images. However, I am trying to find a way to undistort videos in real time and, if needed, am willing to use a separate processor just for that.
I can use a checkerboard, take pictures, and run a Python script to determine the correction parameters for each camera/location, which could then be the input to the process.
Could the two-call approach you mentioned, running on a separate CPU/GPU, do this in real time (i.e., record an undistorted 1080p video)?
I would think that with a decent GPU you should have no problem at all doing this in terms of compute power. I don’t have any recent experience, but 15 years ago I was doing 1080p mesh-based warps in sub-millisecond times. In your case you also have to get the image data onto the GPU, but I don’t imagine that will be a performance bottleneck at all. So yes, I think it’s totally doable, but you’ve got to be able to write the code to do it.
I am not very advanced in writing this stuff and would like to get it done as soon as I can. Can you or anyone you know write that code? Of course, I would pay for the work.
Thanks.
I keep pretty busy at work and don’t have time to take on additional projects at this point.
I understand totally. Thanks for the help!