Why are the point coordinates output by calling the undistortPoints method negative?


// The camera parameters
let w = 3840,
  h = 2160,
  fx = 2385.724365,
  fy = 2385.724365,
  cx = 1883.785767,
  cy = 1094.341431,
  height_left = 1260,
  height_right = 2160;
//Distortion parameters
let fc1 = 7.797425,
  fc2 = 5.739021,
  cc1 = 0.000563,
  cc2 = 0.000260,
  kc1 = 0.482698,
  kc2 = 8.237219,
  kc3 = 10.391369,
  kc4 = 2.691715;
let aisle = cv.CV_32F;

// matFromArray is a factory function, not a constructor, so no `new`
let camera = cv.matFromArray(3, 3, aisle, [fx, 0, cx, 0, fy, cy, 0, 0, 1]);
let coeff = cv.matFromArray(1, 8, aisle, [fc1, fc2, cc1, cc2, kc1, kc2, kc3, kc4]);

const args = {
  inputArray: cv.matFromArray(2, 1, cv.CV_32FC2, pt_2d),
  outputArray: new cv.Mat(),
  cameraMatrix: camera,
  distCoeffs: coeff,
};

cv.undistortPoints(args.inputArray, args.outputArray, args.cameraMatrix, args.distCoeffs);

A few comments:

  1. Negative image coordinates are not wrong or unexpected generally. Neither are image coordinates that are larger than the original image width or height. Depending on the nature of the distortion, an undistorted point can be further from or closer to the image center compared to the distorted point. With the lenses I work with (wide FOV & significant distortion, but not really fisheye distortion) it is common for a significant portion of my undistorted points to be outside the original image bounds.
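To illustrate, here is a plain bounds test (the helper name and sample points are mine, not from your code) showing that an out-of-bounds result is just a point that mapped outside the original frame:

```javascript
// Illustrative helper: does an undistorted pixel coordinate still lie
// inside the original image?
function isInsideImage(pt, width, height) {
  return pt.x >= 0 && pt.x < width && pt.y >= 0 && pt.y < height;
}

const width = 3840, height = 2160; // image size from the question

// A point near the principal point stays inside...
console.log(isInsideImage({ x: 1900.0, y: 1100.0 }, width, height)); // true

// ...while a point near the edge of a heavily distorted lens can
// legitimately map outside the original bounds (negative, or > width/height).
console.log(isInsideImage({ x: -15.3, y: 2200.7 }, width, height)); // false
```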

  2. It looks like you are using the 8-parameter model (rational model). When I look at your distortion coefficients, the variable names you are using confuse me, and the values seem pretty large compared to what I get.

The variables you have labeled as fc1 and fc2 are what I would call kappa1, kappa2. Your cc1,cc2 are what I would call p1,p2 (tangential distortion coefficients) and your kc1-kc4 are what I would call kappa3-kappa6.
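In other words, relabeling your variables into the coefficient order OpenCV expects might look like this (a sketch based on my reading of your names, so double-check it against your calibration output):

```javascript
// The question's values, relabeled under the ASSUMPTION that
// fc1/fc2 = k1/k2, cc1/cc2 = p1/p2, and kc1..kc4 = k3..k6.
const fc1 = 7.797425, fc2 = 5.739021;
const cc1 = 0.000563, cc2 = 0.000260;
const kc1 = 0.482698, kc2 = 8.237219, kc3 = 10.391369, kc4 = 2.691715;

// OpenCV's 8-coefficient order for the rational model:
// (k1, k2, p1, p2, k3, k4, k5, k6)
const distCoeffs = [fc1, fc2, cc1, cc2, kc1, kc2, kc3, kc4];

console.log(distCoeffs.length); // 8
```

If your calibration tool emitted the coefficients in a different order, the array above would need to be permuted accordingly before handing it to undistortPoints.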

The kappa values you have all look pretty large compared to what I am used to getting. I don’t have great intuition on what is “right” because it’s a rational model so there is a division that happens…but I will just say that I’m used to much smaller values. For example, here is an example of a recently calibrated lens:

distCoeffs13269709: !!opencv-matrix
rows: 1
cols: 8
dt: d
data: [ 3.3254828087991828e-01, 1.6132497402520388e-02,
7.0160668848183323e-06, 1.4828951697919293e-05,
6.7314646376585041e-05, 6.0952318529228633e-01,
6.5702675968763377e-02, 1.0402604027629409e-03 ]

where the data is in the expected order (k1, k2, p1, p2, k3, k4, k5, k6)

Notice that they are all smaller than 1, and decrease in magnitude from k1 to k3 (numerator) and k4 to k6 (denominator).
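For reference, the "division that happens" is the rational radial factor. A minimal sketch (the function name is mine) of how it behaves with the calibrated coefficients above:

```javascript
// Rational radial distortion factor:
//   f(r^2) = (1 + k1 r^2 + k2 r^4 + k3 r^6) / (1 + k4 r^2 + k5 r^4 + k6 r^6)
function radialFactor(r2, [k1, k2, p1, p2, k3, k4, k5, k6]) {
  const num = 1 + r2 * (k1 + r2 * (k2 + r2 * k3));
  const den = 1 + r2 * (k4 + r2 * (k5 + r2 * k6));
  return num / den;
}

// Coefficients from the calibration example above, in
// (k1, k2, p1, p2, k3, k4, k5, k6) order.
const d = [3.3254828087991828e-01, 1.6132497402520388e-02,
           7.0160668848183323e-06, 1.4828951697919293e-05,
           6.7314646376585041e-05, 6.0952318529228633e-01,
           6.5702675968763377e-02, 1.0402604027629409e-03];

console.log(radialFactor(0.0, d)); // exactly 1 at the image center
console.log(radialFactor(1.0, d)); // stays moderate even at r = 1
```

With small, decreasing coefficients the factor varies smoothly around 1; very large coefficients make the numerator and denominator blow up quickly, which is part of why the values in the question look suspicious to me.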

Again, I’m not saying your values are wrong, but they don’t look like anything I remember seeing.

How did you get the distortion parameters for your lens?

  3. It’s not clear from your code what you are doing. You have a float32 array of size 4 with what appears to be image coordinates in the first 2 elements and 0 in the 3rd and 4th elements. It’s not clear how you go from the input array to the output array. Is it with undistortPoints? (Show your code, please.)

The distortion parameters work fine when passed to the undistortPoints method in the Python version. The code above is what I use to remove distortion in the JS version.