How to use Thin Plate Spline Shape Transformer correctly?

I’ve got a grid of 9 landmarks, shown in the picture below

My goal is to obtain a warped image of the palm region, like this:

[image: desired warped palm region]

Here, L1 on the source image is mapped to the point (0, 0) in the resulting image, and L9 is mapped to the bottom-right corner of the resulting image.

Here is my code.

import numpy as np
import cv2

source_points = np.array(
# L1 - L3, collected from image
      [[[207.,  39.],
        [302.,  35.],
        [402.,  28.],
# L4 - L6
        [176., 144.],
        [300., 124.],
        [425., 100.],
# L7 - L9
        [217., 250.],
        [335., 211.],
[447., 174.]]], dtype=np.float32)  # OpenCV's shape module expects float32 points

w, h = 330, 330

target_points = np.array([[[0,   0],
                           [h/2, 0], 
                           [h,   0], 
                                  
                           [0,   w/2],
                           [h/2, w/2],
                           [h,   w/2], 
                                  
                           [0,   w], 
                           [h/2, w], 
[h,   w]]], dtype=np.float32)
       
tps_transformer = cv2.createThinPlateSplineShapeTransformer()

matches = [cv2.DMatch(i, i, 0) for i in range(source_points.shape[1])]
tps_transformer.estimateTransformation(source_points, target_points, matches)

# applyTransformation returns a (cost, points) tuple in the Python bindings
_, trans_points = tps_transformer.applyTransformation(source_points)

warped_palm = tps_transformer.warpImage(source_crop)  # source_crop: the cropped source palm image (not shown)

When I check the trans_points returned by tps_transformer.applyTransformation, the values are correct.

>>> print(trans_points.astype(int))

array([[[  0,   0],
        [165,   0],
        [330,   0],
        [  0, 165],
        [165, 165],
        [330, 165],
        [  0, 329],
        [165, 330],
        [330, 330]]])

However, the warping result is not the desired one.

It might be transforming the image backwards instead of forwards. That's a natural thing for images because of how sampling/interpolation works: pixels are pulled into the destination from the source, using the inverse transform to calculate the place to look up.

Other "warp" functions in OpenCV implicitly invert the given transform, e.g. an affine or homography matrix, so that the result is the expected forward transform.

You might have to fit a transformer for the opposite transform, i.e. swap source and destination points.
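
For illustration, a minimal sketch of that implicit inversion with cv2.warpAffine on a toy image (the matrix values are arbitrary):

import numpy as np
import cv2

img = np.zeros((100, 100), np.uint8)
img[40:60, 40:60] = 255  # a white square

# a forward transform: translate by (+20, +10)
M = np.float32([[1, 0, 20],
                [0, 1, 10]])

# warpAffine inverts M internally, so the square moves right/down,
# i.e. the result matches the *forward* transform you specified
fwd = cv2.warpAffine(img, M, (100, 100))

# with WARP_INVERSE_MAP the matrix is used as-is for the pixel lookup,
# which moves the content the opposite way
inv = cv2.warpAffine(img, M, (100, 100),
                     flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)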


I’ve seen this workaround in the GitHub issue “ThinPlateSplineShapeTransformer::wrapImage transformed X and Y values are located in the oposite side” (opencv/opencv#7084) and tried it.

If I switch source_points and target_points, i.e. call tps_transformer.estimateTransformation(target_points, source_points, matches),

the result is even worse.

I have obtained the desired result with ThinPlateSplineTransform from skimage.

However, I’m still interested in how to do this with OpenCV.
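
For reference, a minimal sketch of the skimage route (assuming scikit-image ≥ 0.23, which is where ThinPlateSplineTransform was added; point arrays are in (x, y) order and reuse the names from the snippets above):

import numpy as np
from skimage.transform import ThinPlateSplineTransform, warp

src = source_points.reshape(-1, 2).astype(np.float64)  # landmarks in the source image
dst = target_points.reshape(-1, 2).astype(np.float64)  # where they should land

tps = ThinPlateSplineTransform()
# warp() pulls pixels from the source, so it needs the mapping from
# output (target) coordinates back to source coordinates
tps.estimate(dst, src)
warped = warp(source_crop, tps, output_shape=(h, w))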

Are your points accurate? State them in X,Y order, not Y,X.


Here is what happens if I swap X and Y:

w, h = 330, 330
source_points = np.array([[[ 39., 207.],
                           [ 35., 302.],
                           [ 28., 402.],
                           [144., 176.],
                           [124., 300.],
                           [100., 425.],
                           [250., 217.],
                           [211., 335.],
                           [174., 447.]]], dtype=np.float32)

target_points = np.array([[[0, 0],
                           [0, h/2],
                           [0, h],

                           [w/2, 0],
                           [w/2, h/2],
                           [w/2, h],

                           [w, 0],
                           [w, h/2],
                           [w, h]]], dtype=np.float32)

tps_transformer = cv2.createThinPlateSplineShapeTransformer(2500)

matches = [cv2.DMatch(i, i, 0) for i in range(source_points.shape[1])]
tps_transformer.estimateTransformation(source_points, target_points, matches)

warped_palm = tps_transformer.warpImage(source_crop)

Wow! And finally, there it is!
The solution is to swap sources and targets in estimateTransformation.

Final variant

import numpy as np
import cv2

source_points = np.array([[
#  X, Y, collected from image
# L1 - L3
  [ 39., 207.], [ 35., 302.], [ 28., 402.],
# L4 - L6
  [144., 176.], [124., 300.], [100., 425.],
# L7 - L9
  [250., 217.], [211., 335.], [174., 447.]]], dtype=np.float32)

w, h = 330, 330 # desired image size

target_points = np.array([[
  [0, 0], [0, h/2], [0, h],
  [w/2, 0], [w/2, h/2], [w/2, h],
  [w, 0], [w, h/2], [w, h]]], dtype=np.float32)

matches = [cv2.DMatch(i, i, 0) for i in range(source_points.shape[1])]

tps_transformer = cv2.createThinPlateSplineShapeTransformer()

tps_transformer.estimateTransformation(
    target_points,   # << final bit: estimate the transform *from target to source*
    source_points,
    matches)

warped_palm = tps_transformer.warpImage(source_crop)

result = warped_palm[:h, :w]
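
As a quick sanity check (my own addition, not from the thread): the transformer now holds the target → source mapping, so applying it to the target grid should reproduce the source landmarks.

# applyTransformation returns a (cost, points) tuple in the Python bindings
_, back_projected = tps_transformer.applyTransformation(target_points)
print(np.allclose(back_projected, source_points, atol=1.0))  # expect True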

Result: [images: source, output]
