I'm pretty new to this and spent an hour searching this forum for an example of what I'm trying to do, but I found nothing similar, so…
My plan is to project the point-cloud output (x, y, z) of a mmWave radar system onto my camera output (pixel coordinates), in real time.
I read about OpenCV's “Camera Calibration and 3D Reconstruction” module and thought it might be the perfect fit for me.
The biggest challenge will be calibrating the camera and the radar so that the projected point clouds of an object actually lie over the corresponding bounding boxes.
Are there already forum entries on this problem that I have overlooked, or is OpenCV perhaps not the best solution for me after all?
Hello, this question is from a few months ago, so I don't know if you already found a solution.
I'm quite new to OpenCV, so feel free to correct me if there are weird things in my answer.
I know a technique that does roughly the opposite of what you want: it takes a pixel position in an image and returns a vector d_world that, combined with C, the center position of the camera, gives you the light ray that produced that pixel.
All the matrices used are invertible, so the opposite direction should work as well.
Here is the technique (see “camera resectioning” for some background).
We assume that from camera calibration you have the intrinsic and extrinsic parameters of the camera: K the camera matrix, dvec the distortion coefficients, R the rotation matrix, and tvec the translation vector.
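In case it helps, here is a minimal sketch of getting those parameters with OpenCV, assuming you already have the intrinsics and a few known 3D-2D correspondences; all numbers below are made-up placeholders, not values from your setup:

```python
import cv2
import numpy as np

# Placeholder intrinsics -- in practice these come from cv2.calibrateCamera
K = np.array([[800.0,   0.0, 320.0],    # fx,  0, cx
              [  0.0, 800.0, 240.0],    #  0, fy, cy
              [  0.0,   0.0,   1.0]])
dvec = np.zeros(5)                      # distortion coefficients (k1, k2, p1, p2, k3)

# Four coplanar world points (e.g. corners of a calibration target)
# and the pixels where they appear in the image -- made-up values:
world_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=np.float64)
pixel_pts = np.array([[300, 220], [420, 225], [415, 330], [305, 335]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(world_pts, pixel_pts, K, dvec)
R, _ = cv2.Rodrigues(rvec)              # 3x3 rotation matrix from the Rodrigues vector
```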
You take a vector pt = [u, v, 1], the homogenized pixel position.
You can calculate the direction of the ray forming that pixel in the camera coordinate system: d_cam = K^(-1) · pt, with K^(-1) the inverse of K.
Normalise: d_cam = d_cam / ||d_cam||.
You can use the rotation matrix to express this vector in real-world coordinates: d_world = R^T · d_cam, with ^T the transpose.
Normalise: d_world = d_world / ||d_world||.
And you have your vector d_world giving the direction of the ray.
Using this direction and the center position of the camera, C = -R^T · tvec, you have the full ray.
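In code, reusing the placeholder K, R and tvec from the sketch above, the back-projection would look something like this:

```python
# Pixel -> ray, following the steps above (placeholder pixel coordinates):
u, v = 350.0, 260.0                     # a pixel on the undistorted image
pt = np.array([u, v, 1.0])              # homogenized pixel position

d_cam = np.linalg.inv(K) @ pt           # ray direction in camera coordinates
d_cam /= np.linalg.norm(d_cam)          # normalise

d_world = R.T @ d_cam                   # same direction in world coordinates
d_world /= np.linalg.norm(d_world)      # normalise

C = (-R.T @ tvec).ravel()               # camera center in world coordinates
# The ray is X(s) = C + s * d_world for s >= 0.
```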
Now for your case: say the point you want to project is A = (x, y, z) in real-world coordinates.
Having A and C, you can find d_world = A - C = (x - x_C, y - y_C, z - z_C).
Now that we have d_world, we can get d_cam by doing d_cam = R · d_world, because R is a rotation matrix and therefore R^T R = R R^T = I.
Then we can get pt with pt = K · d_cam.
pt = [u', v', p_z], so you obtain the (u, v) coordinates of point A in your image by doing u = u'/p_z and v = v'/p_z.
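Again with the placeholder calibration from above, the forward projection by hand would be something like:

```python
# World point -> pixel, following the steps above:
A = np.array([0.5, 0.5, 0.0])           # some radar point in world coordinates

# R @ (A - C) simplifies to R @ A + tvec, since C = -R^T @ tvec:
d_cam = R @ A + tvec.ravel()            # A expressed in camera coordinates
u_p, v_p, p_z = K @ d_cam               # pt = [u', v', p_z]
u, v = u_p / p_z, v_p / p_z             # pixel coords on the undistorted image
```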
I think these (u, v) coordinates live on the undistorted image from your camera, so you should undistort your image and use that one.
I think the cv2.projectPoints function does something similar, but it includes the distortion in the calculation, so it directly returns (u, v) in image coordinates (not undistorted-image coordinates).
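For completeness, the same projection via cv2.projectPoints (which applies the distortion model from dvec) would be:

```python
# The projectPoints route -- returns coordinates on the distorted image:
pts, _ = cv2.projectPoints(A.reshape(1, 1, 3), rvec, tvec, K, dvec)
u, v = pts[0, 0]
```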