Can't understand triangulatePoints 3D point results

Hi everyone,
I’m using OpenCV's triangulatePoints to estimate the 3D positions of matched keypoints between two images, using the Essential matrix estimated from those matches. However, when I convert the resulting points from homogeneous coordinates and visualise them, the 3D points are very far from their real-world positions (or I can't make sense of the results).
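
For context, this is roughly my pipeline, written as a minimal self-contained sketch: the intrinsics `K`, the poses, and the synthetic correspondences below are just placeholders standing in for my real camera and feature matches.

```python
import cv2
import numpy as np

# Placeholder intrinsics (stand-in for my real camera matrix)
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0,   0.0,   1.0]])

# Synthetic correspondences in place of my real keypoint matches:
# random 3D points in front of both cameras, second camera shifted and slightly rotated
X = np.random.uniform([-1, -1, 4], [1, 1, 8], (100, 3))
R_gt = cv2.Rodrigues(np.array([0.0, 0.1, 0.0]))[0]
t_gt = np.array([[-1.0], [0.0], [0.0]])
pts1 = cv2.projectPoints(X, np.zeros(3), np.zeros(3), K, None)[0].reshape(-1, 2)
pts2 = cv2.projectPoints(X, cv2.Rodrigues(R_gt)[0], t_gt, K, None)[0].reshape(-1, 2)

# Estimate the Essential matrix from the matches and recover the relative pose
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Projection matrices: camera 1 at the origin, camera 2 at [R|t]
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])

# triangulatePoints takes 2xN image points and returns 4xN homogeneous points
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T
print(pts3d[:5])
```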
To rule out calibration issues with the camera parameters, I set up a scene in Blender, placed some objects, captured two images, and exported the camera's intrinsic and extrinsic parameters, but I still get the same results.
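
This is how I rebuild the OpenCV intrinsic matrix from the Blender camera, assuming a horizontal sensor fit and square pixels (the numbers below are placeholders for my actual render settings):

```python
import numpy as np

# Values read from the Blender camera and render settings (placeholders)
focal_mm, sensor_w_mm = 50.0, 36.0   # camera.lens, camera.sensor_width
width_px, height_px = 1920, 1080     # render resolution

fx = focal_mm * width_px / sensor_w_mm
fy = fx                              # square pixels
cx, cy = width_px / 2.0, height_px / 2.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])
```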
I also checked whether the difference in coordinate-axis conventions between OpenCV and Blender was causing the errors.
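
Concretely, this is the axis flip I tried when converting the Blender extrinsics (assuming `R_bl` and `t_bl` are the world-to-camera rotation and translation exported from Blender; identity/zero placeholders here):

```python
import numpy as np

# Blender cameras look down -Z with +Y up; OpenCV cameras look down +Z with +Y down,
# so flipping the Y and Z camera axes should map one convention to the other.
R_bl = np.eye(3)          # placeholder world-to-camera rotation from Blender
t_bl = np.zeros((3, 1))   # placeholder world-to-camera translation from Blender

flip = np.diag([1.0, -1.0, -1.0])
R_cv = flip @ R_bl
t_cv = flip @ t_bl
```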
My question is: are two images enough to infer the 3D positions?