findHomography inaccurate as it moves to left side of image

Steve,

A follow-up and some follow-up questions.

  1. I got the flags to work. I was leaving out the TermCriteria parameter. My original call was
    calibrateCamera(object_points, image_points, image.size(), cameraMatric, distCoeffs, rvecs, tvecs);
    with the flags and TermCriteria the call was this
    calibrateCamera(object_points, image_points, image.size(), cameraMatric, distCoeffs, rvecs, tvecs, CALIB_FIX_ASPECT_RATIO | CALIB_FIX_FOCAL_LENGTH, TermCriteria(TermCriteria::COUNT, 10, 1.0));

Regardless of the flag or flags used, I got much worse results, judging both by the average error and by the undistorted image returned. My avg error call looks like this

double dAvgError = std::sqrt(totalErr / totalPoints); // test = 0.085775

  2. As for my originally not being able to get camera captures in this thread, which forced me to work with only a single image: I fixed that, but the end result was that using chessboards from various perspectives ended up with far, far worse results than sticking to the single chessboard.

  3. I don't know if this is related, but my current call to findHomography looks like this

mHomoToScreen = findHomography(vImage, vObject, CV_RANSAC);

I see that there are some other possible parameters, including iterations, but few examples that I have found on the Internet ever use them. Do you think it would be worth my while to consider the mask and iterations parameters?

The follow-up questions are these:

  1. In a couple of your replies you said

you can probably get away with the 5 parameter model. Getting measurements as far into the corner of your image during calibration will be helpful. Otherwise you might want to try the rational (8 parameter) model.

later

Fortunately if you use the rational model and the standard calibration you will be totally fine with that lens.

and again

It's fine to start with the 5 parameter model, but if you find that accuracy is lacking you might want to use the rational model at some point.

What function are you referring to when you talk about the 5 parameter model and what is the rational 8 parameter model?

What I have so far, in this first test anyway with the monitor and this camera, is close, but it bugs me to leave it at that, always thinking that I am missing something that might be just out of sight. You mentioned the ChAruco pattern more than once, but the docs here in OpenCV are meaningless to me. If you believe it might help, I will keep digging to find out even what a ChAruco pattern is. The docs don't show an example, but there must be one out there.

Thanks again for all of your help.

Ed

The focal length is the same concept that you are used to. A few differences:

  1. Most webcam (or similar) lenses will have significantly smaller focal lengths than what you are used to with SLR cameras simply because the sensors are much smaller, so to achieve the same FOV a shorter focal length is required. I typically work with focal lengths from about 1.5mm-6mm.
  2. Due to the way the intrinsic parameters are modeled, the focal length is represented in units of pixels, not mm. So for example a 2mm focal length lens will have a value of about 900 pixels on the cameras I use (AR0330 sensor, 2.2um pixel size; see the sketch after this list).
  3. There are two values for the focal length - I believe this is because historically cameras didn't have square pixels, so there is one focal length for the X direction and one for the Y. CALIB_FIX_ASPECT_RATIO forces Fx and Fy to be the same value, which is probably what you want (essentially all modern cameras will have square pixels - I use this parameter for all of my calibrations.)
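
To make the conversion in item 2 concrete, it is just focal length divided by pixel size. A tiny sketch, using the same hypothetical 2mm lens / 2.2um pixel values from above:

double focalLengthMm = 2.0;
double pixelSizeMm = 2.2e-3;                        // 2.2um pixel pitch, in mm
double focalLengthPx = focalLengthMm / pixelSizeMm; // ~909 pixels, i.e. "about 900"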

It's a bit surprising it would hang - I would think even if it was having trouble converging to a reasonable solution it would "time out" after some number of iterations. I'm not sure what is causing it to hang, but a few thoughts:

  1. If you are using CALIB_USE_INTRINSIC_GUESS, your camera matrix should have sensible values. I would start with your CX, CY parameters at the numerical center of the image unless you have an unusual camera and specific knowledge about where it actually is. For the FX, FY parameters I'd probably use something like half the width of the image - that would correspond to an HFOV of 90 degrees and is probably a decent starting point. If you are using 1 for your FX parameter (I think you mentioned that before), that might be a bad enough starting point that the algorithm wouldn't converge. Not sure.

Also if you are manually populating the camera matrix, make sure you are indexing into it correctly - otherwise you might be inadvertently using the transpose of the matrix, which would be nonsensical (wouldn't work for calibrateCamera). Print it out; it should look like:

Fx 0 CX
0 Fy CY
0 0 1.0

(And Fx/Fy should be the same value)
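
As a concrete sketch of populating it (assuming a hypothetical 1280x720 image; CV_64F also works with calibrateCamera):

cv::Mat cameraMatrix = cv::Mat::eye(3, 3, CV_64F); // identity: off-diagonals zero, bottom-right 1.0
cameraMatrix.at<double>(0, 0) = 1280.0 / 2.0; // Fx - half the width (~90 degree HFOV guess)
cameraMatrix.at<double>(1, 1) = 1280.0 / 2.0; // Fy - same as Fx (square pixels)
cameraMatrix.at<double>(0, 2) = 1280.0 / 2.0; // Cx - image center
cameraMatrix.at<double>(1, 2) = 720.0 / 2.0;  // Cy - image center
std::cout << cameraMatrix << std::endl;       // verify the layout shown above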

First, calibrateCamera returns an error score, so you don't need to compute your own. The error score returned by calibrateCamera is an RMS value, so depending on how your totalErr is calculated you might have something different. All the error values I refer to are RMS.
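
For reference, the RMS value is computed roughly like this - a sketch reusing the names from your calibrateCamera call (calibrateCamera does this internally, so you don't need it yourself):

double totalSqErr = 0.0;
size_t totalPts = 0;
for (size_t i = 0; i < object_points.size(); i++) {
    std::vector<cv::Point2f> projected;
    cv::projectPoints(object_points[i], rvecs[i], tvecs[i], cameraMatric, distCoeffs, projected);
    double err = cv::norm(image_points[i], projected, cv::NORM_L2); // sqrt of summed squared distances
    totalSqErr += err * err;
    totalPts += projected.size();
}
double rmsError = std::sqrt(totalSqErr / totalPts);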

Your 0.085 error value is exceptionally small in my experience. We calibrate about 10,000 cameras per year and it's not very often I see one with an error that low - and that's with a very high precision calibration target and a controlled / automated calibration process. What I'm suggesting is that your 0.085 number isn't accurate. If that number comes from a single image calibration that could make sense (not as many samples, easier to fit a solution that matches closely). It wouldn't surprise me if your error went up significantly when using multiple images, and I think the higher number would be more representative of reality. The only way I would trust any error value is with a large number of images (at least 6, probably 10 or more) from a range of scales/angles, and captured from a moving camera or moving target. (I'm suspicious of any calibration derived from a fixed camera / fixed monitor with images rendered from different perspectives.)

What do you mean by "far far worse" calibration results? Like 1.0? 1000?

mHomoToScreen = findHomography(vImage, vObject, CV_RANSAC);

For this function call your vImage contains undistorted points? As far as parameters for computing the homography, I don't have a lot of experience with findHomography and CV_RANSAC (I mostly use calibrateCamera) - I would expect the default parameters for findHomography to be reasonable.
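
If vImage still holds raw (distorted) pixel coordinates, a sketch of undistorting them first might look like this - passing the camera matrix as the last argument keeps the output in pixel coordinates rather than normalized ones:

std::vector<cv::Point2f> vImageUndistorted;
cv::undistortPoints(vImage, vImageUndistorted, cameraMatric, distCoeffs,
                    cv::noArray(), cameraMatric); // last arg: stay in pixel coords
mHomoToScreen = findHomography(vImageUndistorted, vObject, CV_RANSAC);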

As far as the 5 parameter vs 8 parameter model, you can control this with the flags. The 5 parameter model is the default, and the 8 parameter model is accessed by passing in CALIB_RATIONAL_MODEL. I have found this to be significantly better at extrapolation for lenses with high distortion.
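
For example, reusing your earlier call (distCoeffs will come back with 8 distortion coefficients instead of 5):

double rms = calibrateCamera(object_points, image_points, image.size(),
                             cameraMatric, distCoeffs, rvecs, tvecs,
                             CALIB_RATIONAL_MODEL);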

I think the Charuco calibration process could be helpful, particularly if you are having problems getting good image coverage with the calibration input points. Last I checked the Aruco functionality was still part of OpenCV_contrib (not part of OpenCV proper) so that might be why you are having trouble finding it.

https://docs.opencv.org/3.4/da/d13/tutorial_aruco_calibration.html

Having said that, Charuco isn't some magic bullet that improves calibration - the primary benefit is that it enables you to get calibration points further into the corners of your camera images. If you are using regions of the camera image that fall outside of where you collected the calibration points, then using the Charuco target might help - but only because you don't have to see the full pattern in each image. I would start with CV_CALIB_RATIONAL_MODEL if you are having trouble modeling the lens distortion, and only switch to Charuco if you think more points (further into the corners) would help.


Steve,

This helps a lot for the fx and fy values. You had written earlier that

"1/2 the width of the image would be a reasonable number to use. "

So I had assumed fx would be 1/2 the width and that fy would be 1/2 the height. I see now that they should both be the same and that CALIB_FIX_ASPECT_RATIO is apparently fixing this ratio and not the ratio of the Mat of the chessboard being used. That clarifies a lot.

Thanks again.

Ed

Oh, yes. Sorry I wasn't more clear on that - if you used fx = width/2 and fy = height/2 (and CALIB_FIX_ASPECT_RATIO) you will get terrible results.

You are getting close, it seems.

Steve,

I appear to be stuck. In the code below, Debug works but with different results on each run through. In Release mode it will always get hung up and stuck while trying to run calibrateCameraCharuco. Here is the code, with just about all of it from the sample cpp in OCV 4.5.3.

Can you spot anything that jumps out at you other than the fact that I am only using one iteration of the ChAruco board?

In creating the board I use DICT_6X6_250 only because I saw it once in an example and assumed that maybe the more markers the better, but I have found nowhere that discusses why one dictionary should be selected over another.

// Public to be available in calibrateCameraChAruco
cv::Ptr< cv::aruco::CharucoBoard > pCharucoBoard;
cv::Ptr< cv::aruco::Dictionary > dict;
Mat cameraMatric = Mat::eye(3, 3, CV_32FC1); // identity, so untouched entries are zero rather than uninitialized
Mat distCoeffs;

////////////////////////////

// Create the board based on 2nd screen size
cv::Mat mChAruco; // filled in by CharucoBoard::draw below
dict = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
int iAcross = 0, iDown = 0;
iAcross = rSecondary.Width() / 100;
iDown = rSecondary.Height() / 100;
pCharucoBoard = cv::aruco::CharucoBoard::create(iAcross, iDown, 0.08, 0.04, dict);
// Render the board at the 2nd screen's resolution before saving it
pCharucoBoard->draw(cv::Size(rSecondary.Width(), rSecondary.Height()), mChAruco);
cv::imwrite("board.png", mChAruco);
// Get this ChAruco board onto the GameBoard window
mToScreenChessBoard = imread("board.png");

////// separate function CalibrateCameraChAruco() /////////////

// collect data from each frame  -- from sample 
// ...OpenCV_453\opencv\sources\opencv_contrib-4.x\modules\aruco\samples
std::vector< std::vector< std::vector< Point2f > > > vvvp2AllCorners;
std::vector< std::vector< int > > vviAllIds;
std::vector< Mat > vmAllImgs;
cv::Size imgSize;

// Initialize the detector parameters using default values -- From Sample code
cv::Ptr<cv::aruco::DetectorParameters> detectorParams = cv::aruco::DetectorParameters::create();

int iTotalFrameGrabs = 1;
int iFrameGrabs = 0;
// run 5 times on same image as a test
// run 1 time as a test -- No difference either way
while (iFrameGrabs < iTotalFrameGrabs){
	std::vector< int > viIds;
	std::vector< std::vector< Point2f > > vvp2Corners, vvp2Rejected;

	// detect markers -- From Sample code
	aruco::detectMarkers(mCameraImage, dict, vvp2Corners, viIds, detectorParams, vvp2Rejected);

	// interpolate charuco vvp2Corners -- from Sample code
	Mat mCurrentCharucoCorners, mCurrentCharucoIds;
	if (viIds.size() > 0)
		aruco::interpolateCornersCharuco(vvp2Corners, viIds, mCameraImage, pCharucoBoard, mCurrentCharucoCorners, mCurrentCharucoIds);

	if (viIds.size() > 0) {
		vvvp2AllCorners.push_back(vvp2Corners);
		vviAllIds.push_back(viIds); // Sample V viIds goes into VV allIds 	
		vmAllImgs.push_back(mCameraImage);
		imgSize = mCameraImage.size();
	}
	iFrameGrabs++;
}

if (vviAllIds.size() < 1) {
	Beep(555, 555);
	//return false;
}

std::vector<Mat> rvecs;
std::vector<Mat> tvecs;
int iCalibrationFlags = 0;
// iCalibrationFlags = CALIB_FIX_ASPECT_RATIO | CALIB_USE_INTRINSIC_GUESS;

// prepare data for charuco calibration -- from Sample code
int nFrames = (int)vvvp2AllCorners.size();
std::vector< Mat > vmAllCharucoCorners;
std::vector< Mat > vmAllCharucoIds;
std::vector< Mat > vmFilteredImages;
vmAllCharucoCorners.reserve(nFrames);
vmAllCharucoIds.reserve(nFrames);

for (int i = 0; i < nFrames; i++) {
	// interpolate using camera parameters -- from Sample code
	Mat mCurrentCharucoCorners, mCurrentCharucoIds;
	aruco::interpolateCornersCharuco(vvvp2AllCorners[i], vviAllIds[i], vmAllImgs[i], pCharucoBoard,
		mCurrentCharucoCorners, mCurrentCharucoIds, cameraMatric,
		distCoeffs);

	vmAllCharucoCorners.push_back(mCurrentCharucoCorners);
	vmAllCharucoIds.push_back(mCurrentCharucoIds);
	vmFilteredImages.push_back(vmAllImgs[i]);
}

// This calibrateCameraCharuco with no flags and no TermCriteria works in Debug mode
// with repError of around 0.09xxx and an obviously squared off undistorted image
// HOWEVER, in Release mode it will hang along with the attempts below.
double repError =
aruco::calibrateCameraCharuco(vmAllCharucoCorners, vmAllCharucoIds, pCharucoBoard, imgSize,
cameraMatric, distCoeffs, rvecs, tvecs);

cv::Mat mImageUndistorted;
undistort(mCameraImage, mImageUndistorted, cameraMatric, distCoeffs);
imshow("winUndistorted", mImageUndistorted);  // Unsidtorted cameraWindow
waitKey(1);

// The above fails in Release mode as well as these attempts
// double repError =
// aruco::calibrateCameraCharuco(allCharucoCorners, allCharucoIds, pCharucoBoard, imgSize,
// cameraMatric, distCoeffs, rvecs, tvecs, iCalibrationFlags, TermCriteria(TermCriteria::EPS | TermCriteria::COUNT, 10, 1));
// I have tried with iCalibrationFlags as 0, CALIB_FIX_ASPECT_RATIO, with and without  | CALIB_USE_INTRINSIC_GUESS
// and Release mode will just hang while running the calibrateCameraCharuco

// In both cases, Debug and Release, the distCoeff is 0,0,0 NULL, NULL, NULL,...
// doesn't matter in Debug but I have a feeling that it matters in Release
// Should this have been filled in in interpolateCornersCharuco above?

// In debug I hit a repError of 4.5 and in that case the distCoeffs was filled in
// but, of course, the undistorted board was still fish eyed.
// next time in debug the error was .5 and the undistorted board was still fisheyed.
//

PS. I also tried the rational model for the flag but that did not help either.

PPS. I just realized that I had not transferred the cameraMatric (sp) from my previous calibrateCamera to this calibrateCameraCharuco. I added this before the call to interpolateCornersCharuco above


cameraMatric.ptr<float>(0)[0] = mCameraImage.cols / 2; // Fx = half the width
cameraMatric.ptr<float>(1)[1] = mCameraImage.cols / 2; // Fy = same as Fx (square pixels)
cameraMatric.ptr<float>(0)[2] = mCameraImage.cols / 2; // Cx = Half the width
cameraMatric.ptr<float>(1)[2] = mCameraImage.rows / 2; // Cy = Half the height
cameraMatric.ptr<float>(2)[2] = 1.0; // bottom-right element of the intrinsic matrix is always 1.0

That seems to have fixed all of my problems... Jeeeeze, it is always something.

Ed

Ed,

I don't see anything that would cause it to hang. A few comments:

  1. What are you passing in for the camera matrix? It looks like the code that is currently active is not using the intrinsic guess / fix aspect ratio.
  2. Can you post the images you are using as input to the detectMarkers call?
  3. The choice of Aruco dictionary depends on how many unique markers you need - a 3x3 marker has only 9 bits of information, and due to rotational symmetry (and some degenerate cases, like all black / all white, maybe?) you may end up with only 100 uniquely identifiable markers... so if you need more than that, you need a larger marker size. There are also supposed false-positive rejection benefits when using a small dictionary with larger marker sizes. To be clear, by "marker size" I'm referring to the 3x3, 4x4, 5x5, or 6x6 part... not the physical size of the displayed marker.
  4. Are you drawing the detected markers to an image and checking it? If not, I would - it's the best way to make sure you are getting good detection results. You don't need 6x6 markers, so if your detection is struggling you might try a 4x4 marker size, since each individual marker bit will occupy more space in the camera image (and therefore be easier to detect/identify).
  5. For your parameters I'd use more iterations for the calibration. I'm not sure how fast it converges, but you might want to use a larger number (and a smaller epsilon). I use 50 and 0.0001 for mine (see the sketch after this list). My understanding is that the calibration process will stop trying to improve the result either after 50 iterations (in my case) or when the reprojection error fails to improve by at least 0.0001.
  6. If you are able to get good undistorted images without passing in an intrinsic guess, keep doing that. Your actual intrinsics (focal length, in particular) won't be valid, but if all you need is a way to undistort the image, it sounds like you have that. (Notwithstanding the debug/release problem.)
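
A sketch of the criteria I use, in the same call shape as your commented-out attempt:

// 50 iterations max, epsilon of 0.0001
double repError = aruco::calibrateCameraCharuco(
    vmAllCharucoCorners, vmAllCharucoIds, pCharucoBoard, imgSize,
    cameraMatric, distCoeffs, rvecs, tvecs, iCalibrationFlags,
    TermCriteria(TermCriteria::EPS | TermCriteria::COUNT, 50, 0.0001));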

Again, nothing obvious for why it should lock up. I'd maybe try to debug it (I know, it's not a debug build) and see if you can figure out where it is getting stuck.

You could look at the code for calibrateCameraCharuco and see if there are any obvious code path differences between debug and release builds. Or add some logging to see how far it's getting (and hopefully determine where it is getting stuck) in release mode. Locking up / not returning smells like a bug to me.

Steve,

First of all, in case you didn't notice my PS and PPS edits in the previous post: I found the problem that was causing all of my problems with Release mode. A mistake while transporting code from calibrateCamera to calibrateCameraCharuco.

But to your current post: This is great. These are the details that I was having a hard time finding.
I was using DICT_6X6_250 to create the ChAruco board (image: DICT_6X6_250 ChAruco board).
I will now use the DICT_4X4_50 board (image: DICT_4X4_50 ChAruco board).
I know that they have predefined dictionaries of 4X4_50, _100, and more, but I will start with the _50.

For calibrateCameraCharuco I was using only the CALIB_RATIONAL_MODEL flag. I have always had problems when I tried to use the CALIB_USE_INTRINSIC_GUESS flag but I will add the CALIB_FIX_ASPECT_RATIO flag to my current rational model flag.

I knew that in the TermCriteria::COUNT, 10, 1) the 10 in this case was iterations, but I never understood what they meant by epsilon. I knew it is used in math as an error, but I never connected the dots to understand what exactly it meant in this case. I will change my TermCriteria to ::COUNT, 50, 0.0001) and see how that goes.

One thing I noticed, and maybe the changes mentioned above will help, was that, when running one iTotalFrameGrabs loop, my return error from calibrateCameraCharuco would bounce around even with nothing changing. As in from 0.7, to 0.9, to 1.5, etc. I changed the iTotalFrameGrabs to 10 loops and that seemed to even out the return error values. I have a feeling that changing the TermCriteria will go a long way towards that as well.

I think I am getting there, if not actually there. A little more testing and tweaking and I think this may be it. Of course I then need to test with different cameras and then different big screen TV and projector/screen setups, but I am feeling that the worst is behind me thanks to all of your help.

Ed

I wanted to correct this. I looked at the code and the epsilon value is not compared to the improvement in reprojection error, but rather used as a minimum change in the parameters being optimized.

I think I followed the code to the right place; this is from calib3d/src/compat_ptsetreg.cpp:471 (version 3.4.0 source):

    if( ++iters >= criteria.max_iter ||
        cvNorm(param, prevParam, CV_RELATIVE_L2) < criteria.epsilon )
    {
        _param = param;
        state = DONE;
        return true;
    }

So it is checking whether the relative L2 norm (Euclidean distance) between the parameter vectors of the current step and the previous step is below epsilon. The parameter vectors include Fx, Fy, Cx, Cy and all of the distortion coefficients (and not the reprojection error).
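
In other words, roughly this (param and prevParam here are hypothetical std::vector<double> stand-ins for the optimizer's internal parameter vectors):

// CV_RELATIVE_L2: ||param - prevParam|| / ||prevParam||
double diffSq = 0.0, prevSq = 0.0;
for (size_t i = 0; i < param.size(); i++) {
    double d = param[i] - prevParam[i];
    diffSq += d * d;
    prevSq += prevParam[i] * prevParam[i];
}
bool done = std::sqrt(diffSq) / std::sqrt(prevSq) < criteria.epsilon;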

-Steve

Steve,

Thanks... whatever the epsilon is doing I am getting good consistent error returns from calibrateCameraCharuco of 0.1 down to 0.08 and excellent accuracy with undistortPoints. I've tested with a different low distortion camera and all seems OK there as well. Now the UI... Ughhh.

Ed

Steve,

One final note about something you mentioned about ChAruco that my creating a ChAruco board programmatically did not fully address.

You had mentioned how the calibrateCameraCharuco approach is much better at getting the calibration into the corners than the chessboard-based method.

What I had originally done before creating the ChAruco board was take the width and height of the 2nd monitor where the ChAruco board was to be displayed and divide by 100 to decide how many squares would go across and how many down

iAcross = rSecondary.Width() / 100;

But with a test screen width of 1280, for instance, the ChAruco board was being created with a blank area on either side to make up the missing 80 pixels. I was still getting some very slight inaccuracy at the far left, so I decided to use rounding to see what happened with the accuracy.

iAcross = (int)((rSecondary.Width() / 100.0) + 0.5); // round to nearest instead of truncating

I did the same with the height. Now the ChAruco board goes all the way to the edges, and the final accuracy on the extreme left now matches the accuracy that I was getting in the middle.

I'm probably not alone in this: someone can tell me something, but it takes me a while to understand what they told me and what it all means. Little by little...

Thanks again for all of your help.

Ed

I think I understand what you are saying, but I think we are talking about two different things (maybe?)

I can't see how you are calibrating from over here, but from what I have gathered the camera and the monitor (your calibration target) are fixed and don't move during calibration. You have constraints that prevent you (or your customers) from moving the camera or monitor, and since you are only interested in calibrating the distortion you can get away with this. Furthermore you only need to have accurate distortion calibration in the area where your single calibration target (monitor) is visible. (Please correct me if I'm wrong on any of this.)

I wanted to bring that up because what you are doing is a special case situation, and I wouldn't want anyone who comes along later to think that you can get a full camera calibration from a single view.

Back to your comment on getting the Aruco pattern into the corners...

Since you are using the monitor as a calibration target, and you need to be able to undistort anything you (later) see on the monitor, it is important to get the calibration target to fill as much of the monitor as possible. This way you get corner points as close to the edges / corners of the monitor as possible. I think your trick would have applied to the standard chessboard target too, or at least it seems like it should.

So what's the big deal about the Charuco calibration target? What do I mean by "it lets you get points closer to the edges/corners"? Well, in a typical camera calibration process you will capture a large number of calibration targets from different angles, filling different parts of the image, etc. The idea is to get points in every part of the camera's image so that the resulting calibration will be valid / accurate for any pixel. Getting data points in the central part of the image is easy... typically you have way more than you need. Getting points near the corners / edges is much more difficult. Why? Because with the standard chessboard calibration pattern you have to be able to see the entire pattern for any of the points to be used. If a single corner isn't visible, none of them get used. This can make it very difficult to get points that cover the corners. The magic of the Charuco calibration target is that you don't have to see the full calibration pattern, so you are free to move the calibration target however you want, moving it so that you get something in the corners of the camera image.

Sorry for the confusion on that.

Steve,

Right. I create the ChAruco board and then imshow it to a full screen named window that is on the 2nd monitor. The camera is pointing at that 2nd monitor and that camera capture is in a window called cameraWindow. Here is a screen capture of the current ChAruco board as the camera sees it (image: ChAruco board close to edges).
When creating the board there is a parameter for what size border you want, and I selected 10. So that is the very small border that you see on the left and right. I could make the border 1 and that would reduce it almost all the way.

It is this cameraWindow that I am calibrating. As I said, for me, with a single image to send to the camera, this seems to be working.

I then switch to the Chessboard to do the homography. It is the chessboard for doing homography (and also calibrateCamera?) that the docs say should have a white border around the edges at least as wide as the chessboard squares.

Anyway, I get the camera matrix and distCoeffs and then get the homography. When a laser strikes the 2nd monitor, that is picked up by the camera. That point in the cameraWindow is then run through undistortPoints, and that undistorted point is then processed by the homography matrix to get the corresponding point on the 2nd monitor.

Getting the ChAruco board pattern closer to the edge helped with the accuracy near the edge. I will probably change the border size from 10 pixels to 1 and test that next.

Thanks again

Ed

For the charuco markers themselves, I think you need a 1 "pixel" border, pixel in this context meaning the size of the black/white squares within the Aruco marker. So you can probably get a lot closer to the corners if you want... by generating the Charuco target at higher resolution than your monitor can display, then cropping it to take off, say, 50 pixels left/right/top/bottom. As long as there is enough white around the Aruco markers, the rest should work. (I think.) Also you can probably get away with slightly larger Aruco markers if you are having trouble locating them.
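
A sketch of that oversize-and-crop idea (the 100 pixel oversize and 50 pixel crop are arbitrary numbers, and the monitor size is hypothetical):

int monW = 1280, monH = 768; // hypothetical monitor resolution
cv::Mat big;
pCharucoBoard->draw(cv::Size(monW + 100, monH + 100), big, 1, 1); // draw oversized
cv::Mat cropped = big(cv::Rect(50, 50, monW, monH)).clone(); // take off 50 px per side
cv::imshow("GameBoard", cropped);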

Another thing... if it hasn't come up already. You want to be displaying your calibration target without any scaling / resampling if at all possible. You want each image pixel in the calibration target to correspond to exactly one pixel on the monitor for best calibration results.

Steve,

I had the Aruco boxes at 1/2 the size of the Chessboard boxes only because that was the way it was in one of the examples. I have something like 0.8 for the chessboard and 0.4 for the aruco. I didn't know whether getting the aruco too close to the chessboard sides would throw off the calculation of where the chessboard squares intersect. I'll make it .8 and .6 and give it a try.

Just tried the .08 and .06 and I actually think that made it even better... Thanks.
Ed

Do note that it can cause problems if the aruco corners are too close to the chessboard corners, because the corner detector can accidentally latch on to the wrong corner. I'm speaking generally - I actually don't use the standard Charuco calibration (I use a heavily modified version specific to my needs), so I'm not sure.

I'd just watch your scores, and draw the detected corners to your images and inspect manually. It can be super helpful in understanding if there are problems. If you are planning on shipping this feature to customers and want to have any hope of debugging it remotely, I suggest saving out debug images of the calibration process. You seem to be getting pretty close to having a prototype, but speaking from experience I think you have a lot of work left to make it robust in many different situations.

Some things to be aware of as you move forward:

  1. For projection screens there might be a "hot spot" - an area (view angle dependent) of the screen that is significantly brighter than the other parts. This might not be very apparent/objectionable to the human observer, but cameras (lower dynamic range) can get overwhelmed by this. The auto-exposure setting in the camera might not give you an image where all of the points are clearly visible. You might have to put the camera in a manual exposure setting and take a series of pictures to be able to "see" all of the points. (You might be able to combine the separate exposures into an HDR image, or you can find some points in dark images and other points in brighter images, then create a global set of points from that.)
  2. Projection screens are often not flat even if they are supposed to be. Your lens distortion calibration method might struggle under those conditions.
  3. The variability of cameras and monitors is going to create some unanticipated problems. Try to test on as many different configurations as you can. Try to be conservative on the parameters you use. Maybe 0.8/0.6 for the charuco size gives you better results when it works, but it could fail to work for some cameras/monitors. Maybe backing off a bit on those parameters (and sacrificing some accuracy) is the right choice for a more robust solution.
  4. The image you shared of the monitor - the right side looks kind of noisy / blurry. If it is working for you, that's good, but you might benefit from taking multiple images (5 is often sufficient) in a row and then averaging the images (see the sketch after this list). It can really help with image noise, especially in lower light situations.
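
A sketch of the averaging idea from item 4, assuming a cv::VideoCapture named cap:

cv::Mat frame, acc;
const int nFrames = 5;
for (int i = 0; i < nFrames; i++) {
    cap >> frame; // grab consecutive frames
    if (acc.empty())
        acc = cv::Mat::zeros(frame.size(), CV_32FC3);
    cv::accumulate(frame, acc); // running sum in floating point
}
cv::Mat averaged;
acc.convertTo(averaged, CV_8UC3, 1.0 / nFrames); // divide by frame count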

You have made great progress so far - just remember it's a marathon, not a sprint. :slight_smile:

Steve,

A final observation that I was wondering if you might be able to explain. It has to do with creating and drawing a ChAruco board. I attempt to create a ChAruco board the same size as my 2nd screen, using that size to calculate some of the parameters. When the 2nd screen is 1280X768, for instance, I would have this

dict = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);

int iAcross, iDown = 0;
iAcross = (int)((rSecondary.Width() / 100.0) + 0.5);  // ie 13, rounded up
iDown = (int)((rSecondary.Height() / 100.0) + 0.5);   // ie 8
pCharucoBoard = cv::aruco::CharucoBoard::create(iAcross, iDown, 0.08, 0.05, dict);			
pCharucoBoard->draw(cv::Size(rSecondary.Width(), rSecondary.Height()), mChAruco, 1, 1);

This works fine. However, if the 2nd screen is, say, 1600X900, the pCharucoBoard->draw crashes and closes the whole app. It crashes in charuco.cpp::draw line 96

// draw markers
Mat markersImg;
aruco::_drawPlanarBoardImpl(this, chessboardZoneImg.size(), markersImg,
                            diffSquareMarkerLengthPixels, borderBits);

Using SWAG I changed the dictionary from DICT_4X4_50 to DICT_4X4_100 and that solved the problem. What gets me is that I don't understand why, plus I don't completely understand what the dictionary parameters are implying. I know the docs say

Each dictionary indicates the number of bits and the number of markers contained.

But I don't understand what the number of markers has to do with the size of the charuco board. I would like to program around this unknown. Currently I have left it as DICT_4X4_100, but what if a user has a screen resolution above 1600X900 in the future? Is it really the 1600X900, or the fact that there is zero modulus after the divide by 100? I just hate these unknowns and wanted to at least try to understand what could be going on.

Ed

PS. It is not the fact that there is a zero modulus. I tested with 800X600 and it did not crash, so it must be the higher resolution that causes the crash when the _50 is used.

Ed

PPS. In the line above where I said that it crashes with the

aruco::_drawPlanarBoardImpl(this, chessboardZoneImg.size(), markersImg,
                            diffSquareMarkerLengthPixels, borderBits);

I followed that into aruco.cpp. What I don't understand is the "this" being passed. Whatever "this" is, it has a size of 72. In _drawPlanarBoardImpl, on line #1661
for(unsigned int m = 0; m < _board->objPoints.size(); m++) {
it goes to m == 50 and then crashes. So, obviously, the 50 is the _50 in the dictionary, so when I change it to _100 that is more than enough to get past the objPoints.size of 72. What I don't see is where "this" got the size of 72.

Ed

PPPS. Apparently the 72 is from the pCharucoBoard originally created with iAcross of 16 and iDown of 9. 16 times 9 is 144, and half of that is 72, so I assume that is where the size came from.

So apparently I just need to make sure my dictionary has enough markers to cover the "size".
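
Something like this check should catch it ahead of time (the markers sit on half of the chessboard squares, and I believe bytesList.rows holds the dictionary's marker count):

int nMarkersNeeded = (iAcross * iDown) / 2; // e.g. 16 * 9 / 2 = 72
if (nMarkersNeeded > dict->bytesList.rows) {
    // dictionary too small - DICT_4X4_50 only has 50 markers
}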

Sometimes I just need to talk to myself to finally arrive at the answer. :wink:

Ed

You got to the same answer I got to. Sometimes rubber duck debugging is the best way to figure things out, eh?

You might consider using variable marker sizes so you don't end up needing 500 markers for a 4K projector. You might have trouble detecting the markers in your camera if they get too small.

Ahhh, that's right, I can create a dictionary and not have to rely on predefined dictionaries. Perfect. I currently have it as _100, which would take me to a resolution of 1680 X 1050, but I would have to jump it to _250 if the resolution went to something like 1680 X 1080. I hadn't understood that more markers necessarily meant smaller markers and thus maybe harder to detect. That makes sense now that you mention it. With that in mind, doing a custom dictionary seems the best course. Thanks again as always.
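
Here is roughly what I have in mind (the screen-derived board size is hypothetical):

// Generate exactly as many 4x4 markers as the board needs, instead of
// relying on a predefined dictionary that may be too small.
int iAcross = 17, iDown = 11; // e.g. a 1680 X 1050 screen divided by 100, rounded
int nMarkers = (iAcross * iDown) / 2; // markers fill half the squares
cv::Ptr<cv::aruco::Dictionary> customDict = cv::aruco::Dictionary::create(nMarkers, 4);
pCharucoBoard = cv::aruco::CharucoBoard::create(iAcross, iDown, 0.08, 0.05, customDict);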

Ed