Origin not in center of Aruco marker

Hi,

I followed the basic tutorial for detecting an ArUco marker and estimating its pose, but in my simulation the axes (and therefore the origin) are not centered on my marker.

Here is how the axes are placed and where the origin ends up (top image), and what I would like to get instead (bottom image, from the internet):

Here is my code:

vector<cv::Vec3d> rvecs;  // rotation vectors filled by estimatePoseSingleMarkers
cv::Ptr<cv::aruco::DetectorParameters> parameters = cv::aruco::DetectorParameters::create();
cv::Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
cv::aruco::detectMarkers(frame, dictionary, corners, ids, parameters);

float mtx[3][3] = {
{229.41206666, 0.00000000e+00, 292.21276158 } ,
{0.00000000e+00, 229.41206666, 163.36008624} ,
{0.00000000e+00, 0.00000000e+00, 1.00000000e+00}
};

float dist[14][1] = {
{0.15749949} ,
{0.71279781} ,
{ -0.00673416} ,
{-0.04085565} ,
{3.81203065} ,
{0.12853096} ,
{0.76355458} ,
{ 3.70903916} ,
{0.} ,
{0.} ,
{0.} ,
{0.}
};

cv::Mat cameraMatrix = cv::Mat(3, 3, CV_32F, mtx);
cv::Mat distCoeffs = cv::Mat(14, 1, CV_32F, dist);

if(ids.size()>0)
{
  cv::aruco::estimatePoseSingleMarkers(corners, 1, cameraMatrix, distCoeffs, rvecs, translation_vectors);
  cv::aruco::drawDetectedMarkers(frame, corners, ids);
  cv::drawFrameAxes(frame, cameraMatrix, distCoeffs, rvecs, translation_vectors, 1);
}

corners, translation_vectors, ids, etc. are parameters of my function.

I think the problem happens when I call estimatePoseSingleMarkers: it seems to put the marker's origin at the first corner stored in my corners vector.
As a result, my translation vectors are relative to that corner and not to the center of the marker, and I don't understand how to solve it.
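If that really is what is happening, the only workaround I can think of (just a rough sketch, not verified; it assumes the origin sits on that first corner with the x and y axes running along the marker edges, so the half-length offsets and their signs may need flipping) would be to shift the translation to the marker center myself:

// Hypothetical fix-up: move a corner-anchored pose to the marker center.
// markerLength is the marker side length passed to estimatePoseSingleMarkers (1 in my snippet).
cv::Mat R;
cv::Rodrigues(rvecs[0], R);                              // marker-to-camera rotation
cv::Mat offset = (cv::Mat_<double>(3, 1) << markerLength / 2.0, markerLength / 2.0, 0.0);
cv::Mat tCenter = cv::Mat(translation_vectors[0]) + R * offset;  // translation of the marker center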

Any idea?

Thanks!

again, ditch cv::drawFrameAxes and draw them yourself. that function doesn’t put the triad at the marker’s origin but at its corners. your data is very likely good but you’re getting spooked by this function’s behavior.

Thanks for the answer. I have placed a point at the center of the marker. Unfortunately, I don't think the marker's origin is at its center, since my translation values along the right and forward axes are very close to zero when the camera is right above the first corner (the corner with the red dot); see the picture.

By definition, though, the translation vector is expressed relative to the marker's origin, so I don't understand why the origin ends up on a corner.
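To check this numerically (a small sketch; translation_vectors and ids are the variables from my first snippet, assumed to be std::vector<cv::Vec3d> and std::vector<int>), I print the translation and watch which physical point gives x ≈ 0 and y ≈ 0 when the camera is straight above it:

// Print the pose translations: the point of the marker sitting on the camera's
// optical axis should show x and y close to zero.
for (size_t i = 0; i < translation_vectors.size(); i++)
{
  std::cout << "marker " << ids[i] << " tvec = " << translation_vectors[i] << std::endl;
}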

how do you place the marker? are you sure your placement of the marker works with its center, not a corner of it?

everything you’ve shown so far is unlikely to have detected the marker in the picture because your marker lacks the quiet zone that is required to detect its outline, and the background in that one picture is dark gray, i.e. doesn’t give any contrast for detecting it.
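for comparison, here's a small sketch (not your marker; the id and pixel sizes are made up) of generating a marker with cv::aruco::drawMarker and padding it with a white border so it has the quiet zone the detector needs. dictionary is the DICT_6X6_250 object from your snippet:

// Sketch: generate a DICT_6X6_250 marker and add a white quiet zone around it.
cv::Mat marker, markerWithBorder;
cv::aruco::drawMarker(dictionary, 23, 200, marker);            // id 23, 200 px side
cv::copyMakeBorder(marker, markerWithBorder, 25, 25, 25, 25,   // ~1 module of white margin
                   cv::BORDER_CONSTANT, cv::Scalar(255));
cv::imwrite("marker_23.png", markerWithBorder);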

there’s a lot going on here that I can’t verify, namely the whole half of your pipeline that creates the image.

My marker was created and placed in a virtual world on Gazebo.
When I do the same thing in the real world with a webcam, I get the same results, and it is always the same corner that gets treated as the origin.

I think the detection works fine because I can draw the 4 corners of the marker and its center, and I also get its id.

very strange. if any of the parameters here were to blame, the coordinate triad would be away from the center of the marker, but it would not always be exactly at the corner of the marker.

this must be a change of semantics somewhere in the code.
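one way to take the aruco pose function out of the equation entirely (a rough sketch, not your code; it assumes the usual detectMarkers corner order top-left, top-right, bottom-right, bottom-left, and markerLength is your marker side length): build explicitly center-based object points and call solvePnP yourself. if this puts the triad at the center while estimatePoseSingleMarkers doesn’t, the object-point convention inside the aruco module is what changed.

// Sketch: pose from solvePnP with object points centered on the marker.
const float half = 0.5f * markerLength;
std::vector<cv::Point3f> objPoints = {
  {-half,  half, 0.f},   // top-left
  { half,  half, 0.f},   // top-right
  { half, -half, 0.f},   // bottom-right
  {-half, -half, 0.f}    // bottom-left
};
cv::Vec3d rvec, tvec;
cv::solvePnP(objPoints, corners[0], cameraMatrix, distCoeffs, rvec, tvec,
             false, cv::SOLVEPNP_IPPE_SQUARE);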

what version of OpenCV do you use?

I’m gonna need a Minimal Reproducible Example to reproduce your findings. provide pictures to reproduce the issue. installing Gazebo should not be required for an MRE.


Hi,
Thanks for everything!

I use the following version of OpenCV: 4.5.5.

I tried to draw the axes manually and the result is the same in Gazebo simulation and even in real world with a webcam.

[result image]

Here is the code:

float mtx[3][3] = {
  {229.41206666, 0.00000000e+00, 292.21276158/2 } ,
  {0.00000000e+00, 229.41206666, 163.36008624} ,
  {0.00000000e+00, 0.00000000e+00, 1.00000000e+00}
};

float dist[14][1] = {
  {0.15749949} ,
  {0.71279781} ,
  {-0.00673416} ,
  {-0.04085565} ,
  {3.81203065} ,
  {0.12853096} ,
  {0.76355458} ,
  {3.70903916} ,
  {0.} ,
  {0.} ,
  {0.} ,
  {0.}
};

cv::Mat camMatrix = cv::Mat(3, 3, CV_32F, mtx);
cv::Mat distCoeffs = cv::Mat(14, 1, CV_32F, dist);

cv::Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);

while (video_stream.grab())
{
  cv::Mat image, imageCopy;
  video_stream.retrieve(image);
  image.copyTo(imageCopy);
  std::vector<std::vector<cv::Point2f>> mycorners, rejected;
  std::vector<cv::Vec3d> rvecs, tvecs;
  std::vector<int> myids;
  cv::aruco::detectMarkers(image, dictionary, mycorners, myids, detectorParams, rejected);

  if (myids.size() > 0)
  {
    cv::aruco::estimatePoseSingleMarkers(mycorners, MARKER_LENGTH, camMatrix, distCoeffs, rvecs, tvecs);
    cv::aruco::drawDetectedMarkers(imageCopy, mycorners, myids);

    vector< cv::Point3f > axisPoints;
    axisPoints.push_back(cv::Point3f(0, 0, 0));
    axisPoints.push_back(cv::Point3f(MARKER_LENGTH, 0, 0));
    axisPoints.push_back(cv::Point3f(0, MARKER_LENGTH, 0));
    axisPoints.push_back(cv::Point3f(0, 0, MARKER_LENGTH));
    vector< cv::Point2f > imagePoints;
    cv::projectPoints(axisPoints, rvecs, tvecs, camMatrix, distCoeffs, imagePoints);
    cv::line(imageCopy, imagePoints[0], imagePoints[1], cv::Scalar(0, 0, 255), 3);
    cv::line(imageCopy, imagePoints[0], imagePoints[2], cv::Scalar(0, 255, 0), 3);
    cv::line(imageCopy, imagePoints[0], imagePoints[3], cv::Scalar(255, 0, 0), 3);
  }

  imshow("receiver", imageCopy);
  char key = (char) cv::waitKey(10);
  if (key == 27)
    break;
}

And here is the marker used:
[marker image]

Does anyone have an idea what the problem is, or has anyone encountered the same thing?
Thanks

that matrix’s values… look weird, and it’s got a /2 in there. what picture resolution do you use this on? going by this matrix, your picture resolution would be 585x327 or something like that

also try setting all distortion to 0.
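for instance (a quick sketch, not your exact code; camMatrix and image are the variables from your snippet), zero the distortion and compare the principal point against the actual frame size, since cx and cy normally sit near width/2 and height/2:

// Sanity checks: use zero distortion and verify the calibration matches the frame size.
cv::Mat zeroDist = cv::Mat::zeros(14, 1, CV_32F);   // pass this instead of distCoeffs
double cx = camMatrix.at<float>(0, 2);
double cy = camMatrix.at<float>(1, 2);
std::cout << "calibration suggests ~" << 2 * cx << " x " << 2 * cy
          << ", actual frame is " << image.cols << " x " << image.rows << std::endl;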

I haven’t actually run your code but the python equivalent, on v4.5.5, gives me the triad in the middle of the marker.

I suspect there’s something going on I haven’t seen yet.

try to replicate this with a webcam looking at a marker. throw everything out that isn’t opencv (or start a new program)

I’ve checked the documentation. It contradicts itself. I’ve opened an issue about that:

Further, there were some fixes happening recently in the aruco module… maybe that fixed, or introduced, the issue you see:

Hello,

Yes, the /2 was a small mistake on my part; I removed it but it does not change anything. I also tried setting the distortion to 0, but I still get the same result.

The camera simulated in Gazebo has a resolution of 640x360.

I did the test with a webcam and the result in the real world is the same. I removed all the non-OpenCV code and here is the result:

The resolution of the camera is 1280x720.

And here is the code used for the webcam test:

// opencv modules
#include <opencv2/opencv.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/aruco.hpp>
#include <opencv2/core/types.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/gapi.hpp>

#include <iostream>

using namespace std;
using namespace cv;

int main(int argc, char* argv[])
{
  //settings camera C270
  float mtx[3][3] = {
    {794.71614391, 0.00000000e+00, 347.55631962 } ,
    {0.00000000e+00, 794.71614391, 293.50160806} ,
    {0.00000000e+00, 0.00000000e+00, 1.00000000e+00}
  };
  float dist[14][1] = {
    {-2.45415937e-01} ,
    {-6.48440697e+00} ,
    {3.54169640e-02} ,
    {9.11031500e-03} ,
    {-1.09181519e+02} ,
    {-1.23188350e-01} ,
    {-7.76776901e+00} ,
    {-1.05816513e+02} ,
    {0.00000000e+00} ,
    {0.00000000e+00} ,
    {0.00000000e+00} ,
    {0.00000000e+00} ,
    {0.00000000e+00} ,
    {0.00000000e+00}
  };

  cv::Mat camMatrix = cv::Mat(3, 3, CV_32F, mtx);
  cv::Mat distCoeffs = cv::Mat(14, 1, CV_32F, dist);

  cv::VideoCapture inputVideo;
  inputVideo.open(0);
  cv::Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
  while (inputVideo.grab())
  {
      cv::Mat image, imageCopy;
      inputVideo.retrieve(image);
      image.copyTo(imageCopy);
      std::vector<int> ids;
      std::vector<std::vector<cv::Point2f> > corners;
      std::vector<cv::Vec3d> rvecs, tvecs;
      cv::aruco::detectMarkers(image, dictionary, corners, ids);

      if (ids.size() > 0)
      {
          cv::aruco::drawDetectedMarkers(imageCopy, corners, ids);
          cv::aruco::estimatePoseSingleMarkers(corners, 0.12, camMatrix, distCoeffs, rvecs, tvecs);

          // project axis points
          vector< Point3f > axisPoints;
          axisPoints.push_back(Point3f(0, 0, 0));
          axisPoints.push_back(Point3f(0.12, 0, 0));
          axisPoints.push_back(Point3f(0, 0.12, 0));
          axisPoints.push_back(Point3f(0, 0, 0.12));
          vector< Point2f > imagePoints;
          projectPoints(axisPoints, rvecs, tvecs, camMatrix, distCoeffs, imagePoints);
          line(imageCopy, imagePoints[0], imagePoints[1], Scalar(0, 0, 255), 3);
          line(imageCopy, imagePoints[0], imagePoints[2], Scalar(0, 255, 0), 3);
          line(imageCopy, imagePoints[0], imagePoints[3], Scalar(255, 0, 0), 3);
        }

    cv::imshow("out", imageCopy);
    char key = (char) cv::waitKey(10);
    if (key == 27)
        break;
  }
 return 0;
}

Maybe I should try with Python-only code.

Thanks for opening an issue about this. The result is quite annoying on my side.

Thanks for the help. Really appreciate it.

still can’t reproduce. did you modify your version of OpenCV? where did you get it from? how did you install it? how do you determine the version of OpenCV you use?
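for reference, a trivial check (nothing specific to your code) is to print the compile-time version string and the build information, which includes the git revision the library was built from; a git master build typically reports a -dev suffix:

// Sketch: print the OpenCV version and build info of the library actually linked.
#include <iostream>
#include <opencv2/core.hpp>

int main()
{
    std::cout << CV_VERSION << std::endl;                  // e.g. "4.5.5" for a release, "4.5.5-dev" for master
    std::cout << cv::getBuildInformation() << std::endl;   // includes the "Version control" (git) line
    return 0;
}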

I’ve compiled your code against OpenCV v4.5.2 and get this:
[result image]

and this is on v4.5.5 with equivalent Python code:
[result image]

#!/usr/bin/env python3

import numpy as np
import cv2 as cv

# settings camera C270
mtx = np.float32([
	[794.71614391, 0.00000000e+00, 347.55631962 ],
	[0.00000000e+00, 794.71614391, 293.50160806],
	[0.00000000e+00, 0.00000000e+00, 1.00000000e+00],
])

dist = np.float32([
	[-2.45415937e-01],
	[-6.48440697e+00],
	[3.54169640e-02],
	[9.11031500e-03],
	[-1.09181519e+02],
	[-1.23188350e-01],
	[-7.76776901e+00],
	[-1.05816513e+02],
	[0.00000000e+00],
	[0.00000000e+00],
	[0.00000000e+00],
	[0.00000000e+00],
	[0.00000000e+00],
	[0.00000000e+00],
])

camMatrix = mtx
distCoeffs = dist

inputVideo = cv.VideoCapture(0)

dictionary = cv.aruco.getPredefinedDictionary(cv.aruco.DICT_6X6_250)

while True:
	inputVideo.grab()
	(rv, image) = inputVideo.retrieve()
	if not rv: break
	imageCopy = image.copy()

	(corners, ids, impts) = cv.aruco.detectMarkers(image, dictionary)

	if ids is not None:

		cv.aruco.drawDetectedMarkers(imageCopy, corners, ids)

		(rvecs, tvecs, objpts) = cv.aruco.estimatePoseSingleMarkers(corners, 0.12, camMatrix, distCoeffs)

		print(tvecs.shape)

		for rvec,tvec in zip(rvecs, tvecs):
			# project axis points
			axisPoints = np.float32([
				[0, 0, 0],
				[0.12, 0, 0],
				[0, 0.12, 0],
				[0, 0, 0.12],
			]).reshape((-1, 1, 3))

			(imagePoints, jacobian) = cv.projectPoints(axisPoints, rvec, tvec, camMatrix, distCoeffs)
			imagePoints = imagePoints.astype(int)
			cv.line(imageCopy, imagePoints[0,0], imagePoints[1,0], (0, 0, 255), 3)
			cv.line(imageCopy, imagePoints[0,0], imagePoints[2,0], (0, 255, 0), 3)
			cv.line(imageCopy, imagePoints[0,0], imagePoints[3,0], (255, 0, 0), 3)

	cv.imshow("out", imageCopy)
	key = cv.waitKey(10)
	if key == 27:
		break

I downloaded it from the OpenCV Git repositories using the following links:

Then I compiled, built and installed it.

When I do a git log I get this:

And I downloaded OpenCV just a few weeks ago, so I think I’m on the latest release, 4.5.5.

I didn’t modify my OpenCV version…

Maybe I should try to uninstall and reinstall it?

if you’re using the git master branch, instead of a specific release, they might have added code that breaks stuff.

please check out a specific tag like v4.5.5 and build that (for both the main and contrib repos). that should then look like the results I got.

if someone broke aruco in the current master branch, that definitely warrants a bug report!

It works! I checked out the stable 4.5.5 version, which is the latest release, and everything works normally now!

Thank you very much for the help.


ok great, so that means they’re introducing bugs in the current master branch…
