For the map type (to improve remap() performance) you can just look at the documentation for either initUndistortRectifyMap or convertMaps. I found that in my case this was “free” performance - I compared the remapping results against the float-based maps and there was essentially zero image quality difference between the two. The benefit is faster remap() times and a smaller memory footprint for the maps.
I don’t know if I have a link that describes the second part - I’m sure I found something somewhere once upon a time, but this might help:
cv::Mat map1, map2;
cv::initUndistortRectifyMap(m_camMat, m_distCoeffs, cv::Mat(),
                            perspective * m_camMat,
                            cv::Size(destWidth, destHeight), CV_16SC2,
                            map1, map2);
The key parts of getting it to map to a different output image size are the 4th argument (perspective * m_camMat) and the 5th argument (the size of the output image). In my case I am actually using the full perspective transform, since I’m remapping a world plane (which has perspective distortion that I want to correct) into my image plane. For your case I think you only care about scale and maybe cropping, but you can still use this method to achieve what you want.
To get the perspective transform, set up two lists of point correspondences - the first in your source image coordinates, the second in your destination image coordinates.
cv::Mat perspective = cv::getPerspectiveTransform(sourcePoints, destPoints);
These source/dest points can be wherever you want them to be in the source/dest images (well, they should probably form a convex quadrilateral in image space). The point is that you can crop the source image by adjusting the source points, and you can adjust where it lands in the dest image by adjusting the destination points. The magic gets encoded in the warp maps, so all you have to do is call remap with the source/dest images.
I hope that helps.