Merging point clouds obtained from stereo vision cameras

Hello all,

I am working on a project to track the movement of a human head with a robot, and I am using active stereo vision cameras to perform the tracking. I would like to use two cameras, since a single camera's view could be obscured. My question is: is it possible to merge the two point clouds into one?

Sure. It's just a rigid transformation in 3D applied to one of the clouds.

You will need to know the rigid transformation between the two camera frames (in either direction), or the pose of each camera in a shared world frame, typically obtained from extrinsic calibration. Then you transform one cloud into the other's frame (or both into the world frame) and simply concatenate the points.
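Here's a minimal sketch in plain NumPy, assuming you already have a 4x4 homogeneous transform `T_a_from_b` that maps points from camera B's frame into camera A's frame (the function name and arguments are just illustrative):

```python
import numpy as np

def merge_point_clouds(cloud_a, cloud_b, T_a_from_b):
    """Merge two point clouds, expressing everything in camera A's frame.

    cloud_a     : (N, 3) array of points already in camera A's frame
    cloud_b     : (M, 3) array of points in camera B's frame
    T_a_from_b  : (4, 4) homogeneous transform taking points from
                  camera B's frame into camera A's frame
    """
    # Lift B's points to homogeneous coordinates: (M, 4)
    ones = np.ones((cloud_b.shape[0], 1))
    cloud_b_h = np.hstack([cloud_b, ones])

    # Apply the rigid transform and drop the homogeneous coordinate
    cloud_b_in_a = (T_a_from_b @ cloud_b_h.T).T[:, :3]

    # Simple merge: concatenate the two point sets
    return np.vstack([cloud_a, cloud_b_in_a])
```

In practice the calibrated extrinsics are rarely perfect, so people often refine the alignment with a registration step such as ICP (libraries like Open3D or PCL provide implementations) before or instead of trusting the calibration alone.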