// Resize image to target size
cv::Mat resized_img;
cv::resize(color_img, resized_img, cv::Size(800, 1100));
std::cout << resized_img.rows << " " << resized_img.cols << " " << resized_img.channels() << std::endl;
I have the resize code above. I expected the output to be 800 1100 3, but it is 1100 800 3.
I asked GPT and it said:
In OpenCV, always remember that
cv::Size(width, height)
specifies the dimensions in width (columns) first and height (rows) second. This convention is different from some other libraries or frameworks where height comes before width.
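So if I understand that correctly, to get the 800 1100 3 output I was expecting, I would have to swap the two arguments. Here is a minimal sketch of what I think the corrected call looks like, using the same color_img as above (resized_img2 is just a new variable for this example):

// Width (columns) goes first, height (rows) second in cv::Size
cv::Mat resized_img2;
cv::resize(color_img, resized_img2, cv::Size(1100, 800));
std::cout << resized_img2.rows << " " << resized_img2.cols << " " << resized_img2.channels() << std::endl;
// should print: 800 1100 3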
This feels backwards to me. Why would cv::resize take width before height when Mat reports rows (height) before cols (width)?
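For now the only way I can keep this straight is to name the intended dimensions before building the cv::Size, something like this sketch (target_rows and target_cols are just names I made up):

// Name the intended dimensions so the argument order is explicit
int target_rows = 800;   // height
int target_cols = 1100;  // width
cv::Mat resized_named;
cv::resize(color_img, resized_named, cv::Size(target_cols, target_rows));
std::cout << resized_named.rows << " " << resized_named.cols << std::endl;  // 800 1100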