How do I calculate the normalized camera matrix of a cropped image? I have the original camera matrix with focal length f_x etc. Do I divide, e.g., f_x / width_crop (width of the cropped image), or rather f_x / width_orig (width of the original image)? And what about the principal point c_x? I would be glad for any advice.
Apparently some people divide their matrix by the resolution (e.g. fx_normalized = fx / width) to make the matrix independent of image resolution. My question is how this process changes for a cropped image.
The normalization of the camera matrix depends on what you will do with it. OpenCV functions assume the usual plain and simple camera matrix we all know, without normalization.
The term normalized camera matrix usually refers to normalizing by the focal length: you divide the matrix by the focal length to get the normalized camera matrix. See Camera matrix - Wikipedia.
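As a quick sketch of that focal-length normalization (the intrinsic values below are illustrative, not from your camera):

```python
import numpy as np

# Example intrinsics: fx = fy = 1000 px, principal point at (640, 360).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# Normalized camera matrix in the Wikipedia sense:
# divide the whole matrix by the focal length, so fx becomes 1.
K_norm = K / K[0, 0]
```

After this, K_norm[0, 0] is 1 and the principal point is expressed in focal-length units.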
3D rendering APIs (like OpenGL and such) tend to use a normalized viewport with x and y coordinates in the range (-1, 1), although this viewport normalization is not common in computer vision. Augmented reality is a realm where these two worlds meet, and at some point it becomes necessary to transform the camera matrix from one form to the other.
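For intuition, the viewport mapping mentioned above can be sketched like this (a minimal hypothetical helper, not part of any library):

```python
# Map a pixel coordinate (u, v) in a width x height image to the
# OpenGL-style normalized range (-1, 1) on both axes.
def pixel_to_ndc(u, v, width, height):
    x = 2.0 * u / width - 1.0
    y = 1.0 - 2.0 * v / height  # flip y: image rows grow downward
    return x, y
```

The top-left pixel (0, 0) maps to (-1, 1) and the image center maps to (0, 0).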
With OpenCV you may get the camera matrix and pass it along to another function without understanding what's in it. But when normalizing, it is imperative to understand the fundamentals and what the consumer function expects from the matrix.
So, in your case, you can get the camera matrix for the cropped image as @crackwitz said, then apply your own normalization depending on what you will do with that matrix.
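Putting it together, a sketch of the whole pipeline (all intrinsic values and crop offsets below are made-up examples): cropping only shifts the principal point by the crop offset and leaves the focal length in pixels unchanged; the resolution normalization then uses the cropped image size, since that is the image the matrix now describes.

```python
import numpy as np

# Hypothetical original intrinsics for a 1280x720 image.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# Crop: keep the region starting at pixel (x0, y0), size w_crop x h_crop.
x0, y0 = 100, 50
w_crop, h_crop = 800, 600

# Cropping shifts the principal point; fx and fy stay the same.
K_crop = K.copy()
K_crop[0, 2] -= x0   # cx' = cx - x0
K_crop[1, 2] -= y0   # cy' = cy - y0

# Resolution-independent form: divide by the CROPPED image size,
# because the matrix now belongs to the cropped image.
K_norm = K_crop.copy()
K_norm[0, :] /= w_crop
K_norm[1, :] /= h_crop
```

So the answer to the original question: after adjusting c_x and c_y for the crop offset, divide by width_crop and height_crop, not by the original resolution.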