Aligning CAD edges to a real image

Hi everybody!

I’m looking for a robust way to register a CAD view to a real image.

So far, the most robust approach has been the ECC optimization, findTransformECC().
I followed the workflow described here: Image Alignment (ECC) in OpenCV (C++/Python) | LearnOpenCV

I chose this algorithm because ORB feature matching was very unstable: the features are not easily matched (see Feature Based Image Alignment using OpenCV (C++/Python)).
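For reference, the feature-based attempt was roughly along these lines (a minimal sketch of the standard ORB + BFMatcher + findHomography pipeline from that tutorial; the function name and parameters here are only illustrative, not my exact code):

import cv2
import numpy as np

def orb_alignment_sketch(img_cad, img_real, max_features=500, keep_ratio=0.15):
    # Detect ORB keypoints and descriptors on both images
    orb = cv2.ORB_create(max_features)
    kp1, des1 = orb.detectAndCompute(img_cad, None)
    kp2, des2 = orb.detectAndCompute(img_real, None)

    # Brute-force Hamming matching, keep only the best matches
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    matches = matches[:int(len(matches) * keep_ratio)]

    # Build point correspondences and estimate a homography with RANSAC
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 5.0)

    # Warp the CAD view onto the real image
    h, w = img_real.shape[:2]
    return cv2.warpPerspective(img_cad, H, (w, h))

On the CAD line drawing vs. photo pair, the matches from this were too few and too noisy to give a stable homography.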

Right now the optimization gets stuck in a local minimum and I don’t know how to improve it.

The idea is to preprocess the real image to extract the edges and clean up the noise, then align the CAD image to this preprocessed image.
The resulting transformation matrix is then used to align the CAD view to the original real image.

Code:

import cv2
import numpy as np


def blob_removal(img_real_preprocess, area):
    # Remove small blobs: invert so the blobs become white, erase small contours, then invert back
    img_real_preprocess = (255 - img_real_preprocess).astype(np.uint8)
    cnts = cv2.findContours(img_real_preprocess, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = cnts[0] if len(cnts) == 2 else cnts[1]
    for c in cnts:
        if cv2.contourArea(c) < area:
            cv2.drawContours(img_real_preprocess, [c], -1, (0, 0, 0), -1)

    img_real_preprocess = 255 - img_real_preprocess  # invert back to the original polarity
    return img_real_preprocess.astype(np.uint8)

def bilateral_filtering(img, filter_size, sigma_r, sigma_s):
    img = cv2.bilateralFilter(img, filter_size, sigma_r, sigma_s)
    return img

def median_filtering(img, filter_size = 3, it = 1):
    for i in range(it):
        img = cv2.medianBlur(img, filter_size)
    return img

def crop_img(img, top: int, bottom: int, left: int, right: int):
    # Crop the given margin from each side (all margins must be > 0 for the negative indexing to work)
    img = img[top:-bottom, left:-right]
    return img

def gaussian_thresholding(img):
    # Adaptive Gaussian threshold (block size 11, constant 6)
    img_th = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 6)
    return img_th

def corners_defect_removal(img):
    # Image dimensions; I remove pixels near the corners
    h, w = img.shape[:2]
    l = 140  # side length of the square masked out at each corner
    # Draw a white square over each corner of the image
    cv2.rectangle(img, (0, 0), (l, l), (255,255,255), -1)
    cv2.rectangle(img, (w-l, 0), (w, l), (255,255,255), -1)
    cv2.rectangle(img, (0, h-l), (l, h), (255,255,255), -1)
    cv2.rectangle(img, (w-l, h-l), (w, h), (255,255,255), -1)
    return img

def trim_white_borders(img_real, img_preprocess):  # remove the white borders
    gray = 255 * (img_preprocess < 128).astype(np.uint8)  # invert: dark content becomes white
    coords = cv2.findNonZero(gray)  # all non-zero points
    x, y, w, h = cv2.boundingRect(coords)  # bounding box of the content
    img_preprocess = img_preprocess[y:y + h, x:x + w]  # crop the preprocessed image
    img_real_sized = img_real[y:y + h, x:x + w]  # crop the real image to the same box
    return img_real_sized, img_preprocess


def image_alignment(img_cad, img_real):

    img_real_cropped = crop_img(img_real, top=40, bottom=20, left=20, right=15)

    img_real_preprocess = bilateral_filtering(img_real_cropped, filter_size=6, sigma_r=10, sigma_s=24)

    img_real_preprocess = gaussian_thresholding(img_real_preprocess)

    img_real_preprocess = blob_removal(img_real_preprocess, 3)

    img_real_preprocess = corners_defect_removal(img_real_preprocess)

    # Alignment
    sz = img_real_preprocess.shape

    # Define the motion model
    warp_mode = cv2.MOTION_AFFINE

    # Define 2x3 or 3x3 matrices and initialize the matrix to identity
    if warp_mode == cv2.MOTION_HOMOGRAPHY:
        warp_matrix = np.eye(3, 3, dtype=np.float32)
    else:
        warp_matrix = np.eye(2, 3, dtype=np.float32)

    # Specify the number of iterations.
    number_of_iterations = 50

    # Specify the threshold of the increment
    # in the correlation coefficient between two iterations
    termination_eps = 1e-10

    # Define termination criteria
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, number_of_iterations, termination_eps)

    # Run the ECC algorithm. The results are stored in warp_matrix.
    (cc, warp_matrix) = cv2.findTransformECC(img_real_preprocess, img_cad, warp_matrix, warp_mode, criteria)

    if warp_mode == cv2.MOTION_HOMOGRAPHY:
        # Use warpPerspective for Homography
        im2_aligned = cv2.warpPerspective(img_cad, warp_matrix, (sz[1], sz[0]), flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
    else:
        # Use warpAffine for Translation, Euclidean and Affine
        im2_aligned = cv2.warpAffine(img_cad, warp_matrix, (sz[1], sz[0]), flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)


    return im2_aligned, img_real_cropped

if __name__ == '__main__':

    # load_img, path_img and path_cad are defined elsewhere (not shown)
    img_real = load_img(path_img)
    img_cad = load_img(path_cad)

    img_cad, img_real_sized = image_alignment(img_cad, img_real)

    result = cv2.addWeighted(img_real_sized, 0.5, img_cad, 0.5, 0)

The Result:

How can I improve from here?

Thanks to all

Get the CAD drawing to not be a line drawing but an imitation of the camera view, i.e. filled shapes.

ECC is an optimization working on per-pixel differences; it needs something to work with, and line drawings are terrible for that.
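For example, something along these lines to turn the CAD view into filled shapes (a rough sketch, assuming OpenCV 4 and dark, mostly closed outlines on a white background; fill_cad_outlines is just an illustrative name):

import cv2
import numpy as np

def fill_cad_outlines(img_cad_gray):
    # Binarize: lines become white (foreground) on black
    _, edges = cv2.threshold(img_cad_gray, 128, 255, cv2.THRESH_BINARY_INV)

    # Close small gaps so the outlines form closed regions
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

    # Fill every external contour to get solid shapes
    filled = np.zeros_like(edges)
    cnts, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(filled, cnts, -1, 255, thickness=cv2.FILLED)
    return filled

Running ECC between that filled rendering and your thresholded camera image gives the optimization actual area to correlate instead of thin lines.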

Also, if you’re trying to imitate machine-vision operations like locating instances of a 2D/3D CAD model in 2D/3D images, as available in commercial MV packages… I believe none of that has found its way into OpenCV yet.

There’s feature matching, which requires texture, and there’s some contour-based stuff (“shape matching”), but that’s decades-old tech.
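The shape-matching route is roughly this (a sketch using cv2.matchShapes on the largest contour of each binarized image; it only scores similarity, it does not give you a transform):

import cv2

def largest_contour(binary_img):
    # Assumes a binary image with white foreground on black
    cnts, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(cnts, key=cv2.contourArea)

def shape_similarity(bin_cad, bin_real):
    c1 = largest_contour(bin_cad)
    c2 = largest_contour(bin_real)
    # Hu-moment based distance: 0 means identical shapes, larger means more different
    return cv2.matchShapes(c1, c2, cv2.CONTOURS_MATCH_I1, 0.0)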

And please don’t apply arbitrary tags to a thread; the “build” tag makes no sense here.

IMO, the main problem here is that you’re comparing apples (lines) to pears (textured surfaces).

features2d won’t work properly with lines only.

But maybe you can get contours from both and use findHomography() with those?
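A naive sketch of that idea, assuming each image reduces to one dominant contour: take the corners of each contour’s minimum-area rectangle as four correspondences and hand them to findHomography(). The corner ordering has to be consistent between the two images, so treat this only as a rough initial estimate:

import cv2
import numpy as np

def contour_corners(binary_img):
    # Largest external contour and the corners of its minimum-area rectangle
    cnts, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(cnts, key=cv2.contourArea)
    box = cv2.boxPoints(cv2.minAreaRect(c))  # 4 corners, float32
    # Order the corners consistently (by angle around the centroid)
    center = box.mean(axis=0)
    angles = np.arctan2(box[:, 1] - center[1], box[:, 0] - center[0])
    return box[np.argsort(angles)]

def rough_homography(bin_cad, bin_real):
    src = contour_corners(bin_cad)
    dst = contour_corners(bin_real)
    H, _ = cv2.findHomography(src, dst)  # 4 correspondences, exact fit
    return H

If that rough estimate is reasonable, it could also be used to initialize warp_matrix before running ECC.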

Hi,
yes, I tested many approaches and the best was a preprocessing step where I extract the contours and match them.
On the other hand, the results are very unstable.

Thanks all for the advice!