How to normalize the camera matrix depends on what you will do with it. OpenCV functions assume the usual plain and simple camera matrix we all know, without any normalization.

The term *normalized camera matrix* usually refers to normalizing out the focal length: you divide the matrix by the focal length to get the normalized camera matrix.
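A minimal sketch of that normalization, assuming square pixels (fx == fy) and made-up intrinsics for illustration:

```python
import numpy as np

def normalize_by_focal_length(K):
    """Divide the focal-length rows of K by f so the focal length becomes 1.

    Pixel coordinates produced by the result are in units of focal lengths;
    the bottom row stays [0, 0, 1] so K remains a valid homogeneous mapping.
    Assumes fx == fy (square pixels).
    """
    Kn = K.astype(float).copy()
    f = Kn[0, 0]
    Kn[0, :] /= f
    Kn[1, :] /= f
    return Kn

# Hypothetical intrinsics: f = 800 px, principal point at (320, 240)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
K_norm = normalize_by_focal_length(K)
```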

See *Camera matrix* on Wikipedia.

3D rendering APIs (OpenGL and such) tend to have a viewport normalized so that the x and y coordinates lie in the range (-1, 1), although this viewport normalization is not common in computer vision. Augmented reality is a realm where these two worlds meet, and at some point you need to transform the camera matrix from one form to the other.
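One way to bridge the two conventions is to compose the pixel-space camera matrix with an affine map that sends pixel coordinates into the (-1, 1) viewport. This is a sketch, not OpenGL's full projection matrix (it ignores depth/clip planes); the flipped y row accounts for OpenGL's y axis pointing up while image rows grow downward:

```python
import numpy as np

def pixels_to_ndc(K, width, height):
    """Compose K with a pixel -> normalized-device-coordinates map.

    S maps a pixel (u, v) to x = 2u/W - 1, y = 1 - 2v/H, so points that
    project inside the image land in the (-1, 1) x (-1, 1) viewport.
    """
    S = np.array([[2.0 / width,  0.0,          -1.0],
                  [0.0,         -2.0 / height,  1.0],
                  [0.0,          0.0,           1.0]])
    return S @ K
```

With the hypothetical intrinsics from before (f = 800, principal point at the image center of a 640x480 image), a point on the optical axis projects to NDC (0, 0), and the top-left pixel corner maps to (-1, 1).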

With OpenCV you can get the camera matrix and pass it along to another function without understanding what's in it. But when normalizing, it is imperative to understand the fundamentals and what the consumer function expects from it.

So, in your case, you can get the camera matrix for the cropped image as @crackwitz said, then apply your own normalization depending on what you will do with that matrix.
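For reference, adjusting intrinsics for a crop only shifts the principal point; the focal length is unchanged. A sketch, with the crop offset (x0, y0) as a hypothetical example value:

```python
import numpy as np

def crop_camera_matrix(K, x0, y0):
    """Intrinsics for an image cropped starting at pixel offset (x0, y0).

    Cropping re-origins the pixel grid, so only the principal point
    (cx, cy) moves; fx and fy stay the same.
    """
    Kc = K.astype(float).copy()
    Kc[0, 2] -= x0
    Kc[1, 2] -= y0
    return Kc

# Hypothetical: crop the example image starting at pixel (100, 50)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
K_cropped = crop_camera_matrix(K, 100, 50)
```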