# Laser rangefinder and monocular camera extrinsic calibration

I have a 1D laser giving out range data and a monocular camera mounted on top of it, which is used for detecting and tracking objects in the image. I have the intrinsic calibration parameters of the camera. I want to establish a correspondence between the camera data and the laser data. Is there any known method to get the extrinsic calibration matrix? The end goal is to use the x, y of the detected object from the camera and the z (depth) of the detected object from the laser.
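To make the end goal concrete, here is a minimal sketch (my own illustration, not from the question) of what that fusion looks like once the extrinsics are solved: with the laser range expressed as a depth along the camera's Z-axis, a pixel (u, v) plus that depth gives a full 3D point through the pinhole model. The intrinsics fx, fy, cx, cy are the parameters you say you already have; the numeric values below are made up.

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Recover a 3D point in camera coordinates from a pixel and its depth.

    Assumes an undistorted image and the usual pinhole model:
    u = fx * X / Z + cx,  v = fy * Y / Z + cy.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Example with made-up intrinsics: a pixel at the principal point
# maps straight down the optical axis.
print(backproject(320.0, 240.0, 2.5, fx=600.0, fy=600.0, cx=320.0, cy=240.0))
# → (0.0, 0.0, 2.5)
```

The hard part, as the answer below discusses, is getting the transform that turns the raw laser range into that camera-frame depth in the first place.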

Quite a few years ago, I used a similar setup to calibrate the camera on a robot, with the camera about 5 ft high and angled down. There was a 2D rotating laser rangefinder (LRF) at a fixed position relative to the camera (4 inches off the ground, as I recall). It reported distances in polar coordinates. I asked our shop manager to fabricate a small contraption that secured a visible laser pointer immediately above the LRF. The beam was maybe 1 inch above the horizontal plane scanned out by the LRF. We mounted a cheap plastic protractor to the device to keep track of the angle.

I showed a stationary scene to the robot and took a scan from the LRF. I picked a point from the scan and rotated the visible laser to that polar angle. I turned the laser on and captured an image with the camera. That gave me a visible laser spot 1 inch higher than the point whose distance the LRF had reported. I only used points on objects perpendicular to the ground, so the 1 inch vertical offset didn't change the measured distance.

I manually identified the coordinates of all the laser spots in the images and combined them with the readings from the LRF. The LRF readings had to be converted from polar to XYZ, and the Y coordinate had to be filled in by measuring the height of the LRF from the ground. I believe I took the world origin to be the LRF origin in X and Z to keep things simple. With all that data, I could feed a camera pose estimation function, because all you really need is a collection of image points and their corresponding world coordinates. I had previously calibrated the camera's intrinsic and distortion parameters using a chessboard, so I undistorted the images before locating the laser spots.
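The data preparation above can be sketched roughly like this (variable names and numbers are mine, not from the original setup). Each LRF reading is a (range, angle) pair in the scan plane; because the scan plane is horizontal, Y is the same constant height for every world point. The resulting (world point, image point) pairs are what you would hand to a pose estimator such as OpenCV's `cv2.solvePnP`.

```python
import math

# Assumed height of the LRF scan plane above the ground, in metres
# (the post used roughly 4 inches; this value is just an example).
LRF_HEIGHT = 0.10

def lrf_polar_to_world(rng, angle, lrf_height=LRF_HEIGHT):
    """Convert one polar LRF reading (range in metres, angle in radians)
    to world XYZ, taking the LRF as the world origin in X and Z,
    as described in the post. Y is the measured height of the scan plane."""
    x = rng * math.sin(angle)
    z = rng * math.cos(angle)
    return (x, lrf_height, z)

# A few example readings: (range_m, angle_rad).
scan = [(2.0, 0.0), (2.5, math.radians(30)), (3.0, math.radians(-20))]
world_points = [lrf_polar_to_world(r, a) for r, a in scan]

# Pair each world point with the manually clicked laser-spot pixel from
# the undistorted image, then solve for the camera pose, e.g. with
# cv2.solvePnP(object_points, image_points, camera_matrix, None).
print(world_points[0])
# → (0.0, 0.1, 2.0)
```

Which trig function goes with X versus Z depends on how your LRF defines its zero angle, so check that convention on your own hardware.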

With your setup, I imagine you could do something similar, especially if your laser is visible in the image. I think you will need to have some means of measuring the angle if the LRF is only 1D, so the protractor trick might come in handy. I was lucky enough to have a shop at my disposal, but I’m sure you could rig something up or cannibalize something.

If your LRF is mounted directly to the camera and pointed parallel to the Z-axis of the camera coordinate system, I think you may run into problems. My LRF scanned a horizontal plane while the camera was elevated and pointing down, so I got data from all over the camera frame. I don't think you can do pose estimation if all your points are collinear, as they would be if the LRF only measured along the camera's Z-axis.
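A quick way to see (and to test for) that degeneracy: if every world point lies on a single line, the cross product of the spanning vectors is zero everywhere, and a PnP solver has no unique solution. This is my own illustration, not part of the original answer.

```python
def sub(a, b):
    """Component-wise difference of two 3D points."""
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    """Cross product of two 3D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def points_are_collinear(pts, tol=1e-9):
    """True if all 3D points lie on one line (a degenerate input for PnP)."""
    if len(pts) < 3:
        return True
    v = sub(pts[1], pts[0])
    for p in pts[2:]:
        c = cross(v, sub(p, pts[0]))
        if max(abs(c[0]), abs(c[1]), abs(c[2])) > tol:
            return False
    return True

# Points a fixed 1D laser along the camera Z-axis would produce: collinear.
axis_points = [(0.0, 0.0, z) for z in (1.0, 2.0, 3.0)]
print(points_are_collinear(axis_points))   # → True

# Points from a sweeping horizontal scan: spread out, not collinear.
spread = [(0.0, 0.0, 2.0), (1.0, 0.0, 1.7), (-0.7, 0.0, 1.9)]
print(points_are_collinear(spread))        # → False
```

So whatever rig you build, make sure the calibration points cover a spread of directions, not just a single ray.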

Hope this was helpful. Good luck.

Bill

Sorry for the bump and probably silly question: does it matter which monocular camera is used in this situation, or does the problem not depend on it?