AutoCalibration of single RGB camera using human pose features (2D points) sequences

Hey all,
I am working on a problem where I need to calculate the world coordinates (X, Y, Z) of each human present in the scene using only image sequences. I.e., only their pose keypoints (shoulder, nose tip, ankle point) in the 2D image plane are given. The camera is uncalibrated. I have worked on pose estimation problems before, using a chessboard to calibrate both intrinsics and extrinsics and a 3D model of the objects, in a robotics application (pick and place). However, using a chessboard is not a feasible solution here. I need help finding the right direction. I have tried focalsFromHomography() and other approaches, but I get focal lengths of 0.
Thanks in advance!

It sounds like you want 3D coordinates based on 2D image points from images captured with a single uncalibrated camera. I don’t think this is possible.

You might be able to get somewhere if the people are at a constant distance from the camera, but even that would require some sort of calibration process (maybe a homography relating some real-world plane to image coordinates, again assuming the people being measured are constrained to this 3D plane). If your camera optics produce distorted images, you will have to deal with that somehow, too.
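To make the homography idea concrete, here is a minimal numpy sketch of mapping ankle pixels to (X, Y) on a known ground plane. The function names (`fit_homography`, `image_to_plane`) and the four correspondences are illustrative assumptions; in practice you would measure at least four known points on the floor and find their pixel locations (OpenCV's `cv2.findHomography` does the same estimation more robustly):

```python
import numpy as np

def fit_homography(world_pts, img_pts):
    """Estimate the 3x3 homography H mapping image pixels to world-plane
    coordinates via the direct linear transform (needs >= 4 non-collinear
    point correspondences)."""
    A = []
    for (X, Y), (u, v) in zip(world_pts, img_pts):
        # Each correspondence contributes two linear constraints on H.
        A.append([u, v, 1, 0, 0, 0, -X * u, -X * v, -X])
        A.append([0, 0, 0, u, v, 1, -Y * u, -Y * v, -Y])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def image_to_plane(H, u, v):
    """Map a pixel (e.g. a detected ankle point) to (X, Y) on the plane."""
    x = H @ np.array([u, v, 1.0])
    return x[0] / x[2], x[1] / x[2]
```

This only gives positions on that one plane; it says nothing about people who jump, sit, or stand on steps, and lens distortion would have to be removed from the pixel coordinates first.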

You need to constrain the problem somehow, and even then I’m concerned that your results won’t be very accurate.

Maybe add some more detail about the conditions you are operating in and what you are trying to do with the data.