Estimate intrinsic parameters for a synthetic fisheye image (FOV 190°)

Context:
I have created a synthetic fisheye image using this method. I can project world points into the synthetic fisheye image, and I can also project points from the camera image back to the world when depth information is available.
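For illustration, an ideal equidistant fisheye projection (r = f·θ) looks roughly like the sketch below; this is only an assumed model for context and may not match the linked method exactly.

import numpy as np

def project_equidistant(points_cam, f, cx, cy):
    # Project 3D points given in the camera frame (N x 3) with an ideal
    # equidistant fisheye model r = f * theta (illustrative assumption only).
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    theta = np.arctan2(np.hypot(x, y), z)  # angle from the optical axis, still defined beyond 90 deg
    phi = np.arctan2(y, x)                 # azimuth around the optical axis
    r = f * theta                          # equidistant mapping
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], axis=1)
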
Goal:
I want to apply SLAM to this synthetic image sequence; let's say I go with ORB-SLAM3. The SLAM module requires the camera intrinsic parameters, which I do not have.
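To be concrete, ORB-SLAM3's fisheye ("KannalaBrandt8") camera model wants focal lengths, the principal point, and four Kannala-Brandt distortion coefficients. Sketched below as a Python dict for reference (field names are from memory and may differ between ORB-SLAM3 versions); the values are exactly the unknowns I am missing.

# Placeholder sketch of the camera parameters ORB-SLAM3 expects for a fisheye
# ("KannalaBrandt8") camera; None marks the values I still need to estimate.
orbslam3_camera_block = {
    "Camera.type": "KannalaBrandt8",
    "Camera.fx": None, "Camera.fy": None,   # focal lengths [px]
    "Camera.cx": None, "Camera.cy": None,   # principal point [px]
    "Camera.k1": None, "Camera.k2": None,   # Kannala-Brandt distortion
    "Camera.k3": None, "Camera.k4": None,
    "Camera.width": 1536, "Camera.height": 1536,
}
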
Question:
How do I estimate or handle the intrinsic parameters in this case? FYI, I tried to estimate the camera intrinsics from LiDAR points and their corresponding image points using cv2.fisheye.calibrate, but the reprojection error comes out in three digits, so it does not seem to work unless I am doing something wrong.

import cv2
import numpy as np

# get image points and corresponding LiDAR points
img_pt, world_pt = get_image_point(transformed_valid_points, img)

# prepare the data in the format required by cv2.fisheye.calibrate
image_points = np.asarray(img_pt, dtype=np.float32)    # shape (N, 2)
lidar_points = np.asarray(world_pt, dtype=np.float32)  # shape (N, 3)
image_points = image_points.reshape(-1, 1, 2)  # (N, 1, 2)
lidar_points = lidar_points.reshape(-1, 1, 3)  # (N, 1, 3)
# image size
img_size = (1536, 1536)
# Intrinsics initialization
intrinsics_in = np.eye(3, dtype=np.float64)  # Camera intrinsic matrix
D_in = np.zeros((4, 1), dtype=np.float64)  # Distortion coefficients

# Calibration flags
calibration_flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC + cv2.fisheye.CALIB_FIX_SKEW
reproj_err, intrinsics, distortion, rvecs, tvecs = cv2.fisheye.calibrate(
    [lidar_points],   # one "view" of 3D object points
    [image_points],   # corresponding 2D image points
    img_size,
    intrinsics_in,
    D_in,
    flags=calibration_flags,  # use the flags defined above
    criteria=(cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-6),
)
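
As a sanity check, I am thinking of reprojecting the LiDAR points with the estimated parameters and looking at the per-point error; a sketch (assuming the rvec/tvec returned for the single view are the ones to use):

# Reproject the LiDAR points with the estimated model and compare them
# against the measured image points (per-point error in pixels).
projected, _ = cv2.fisheye.projectPoints(
    lidar_points, rvecs[0], tvecs[0], intrinsics, distortion
)
per_point_err = np.linalg.norm(
    projected.reshape(-1, 2) - image_points.reshape(-1, 2), axis=1
)
print("mean reprojection error [px]:", per_point_err.mean())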