Shifting of begin/end point of contour between images

Hi all!

I am working on a project where I am tracking the evolution of ice crystals over time, and specifically the local curvature at certain points along their contours.

My raw data consists of snapshots of the sample I am looking at, taken at different timestamps. In short, my current approach is to store all the attributes of each contour in the initial frame (area, curvature, center of mass, etc.) as an object, and to automatically compare these attributes with all the contours in the next frame in order to track individual crystals across frames. If there is a contour match, I append the attributes to the matching crystal object.

Doing this, I managed to track all the contour points, their local curvature, and the area and length of the contour each point belongs to, over time. I store these in a dataframe and export them to a CSV table.

The problem I am currently facing is that across frames, OpenCV sometimes picks the begin/end point of a contour with a slight offset. Because of this, if I for example try to profile the local curvature of a point over time, there is a jump in the curvature because I start tracking a different point halfway through.

My question now is: has anyone experienced something similar, or has a bright idea for making sure the begin and end points are always chosen at the “same” point? My current plan is to loop through all the data, trace these jumps, and correct them with a shift, but that would be quite a lot of code I would love to avoid.

Thanks for reading and thinking with me!!

PS: For illustration, have a look at the data in two consecutive time frames.

Edit: I am only able to post one image

you’re allowed another post (with another image), i say.

can you show how you do this? (code)

The code for creating the crystal objects is the following:

    def tresholding_img(self, img_denoised):
        # note: adaptiveMethod is a required argument of cv2.adaptiveThreshold;
        # ADAPTIVE_THRESH_GAUSSIAN_C is assumed here
        return cv2.adaptiveThreshold(src=img_denoised, maxValue=255,
                adaptiveMethod=cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                thresholdType=cv2.THRESH_BINARY, blockSize=151, C=1)

    def get_img_contours(self, img_treshold):
        self.contours, self.hierarchy = cv2.findContours(img_treshold,
            cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)

    def process_contours(self):
        self.crystalobjects = []
        self.otherobjects = []
        num_contours = len(self.contours)
        print(f'Number of contours found: {num_contours}')
        for i, contour in enumerate(self.contours):
            if len(contour) > 30:  # only process contours with more than 30 coordinate points
                print(f'Processing img contours {i}/{num_contours}', end='\r')

                if self.hierarchy[0][i][3] == -1:  # only create a crystal when it is not a hole contour
                    # if the contour is a parent, also pass the child contour so the hole can be created
                    if self.hierarchy[0][i][2] != -1:
                        child_contour = self.contours[i + 1]
                        obj = CrystalObject(contour, self.hierarchy[0][i], child_contour, True, contour_num=i, frame_num=self.frame_num)
                    else:
                        obj = CrystalObject(contour, self.hierarchy[0][i], None, False, contour_num=i, frame_num=self.frame_num)

                    if obj.x_center == 0 and obj.y_center == 0:

I do this for every frame and store the crystal objects per frame. The remainder of the tracking is about another 500 lines of code, but the logic is essentially the following:

- create tracking objects for each crystal object in frame 1 and store them in a list
- for each of these target objects, loop through the crystal objects of the next frame
- if the area and center-of-mass coordinates of a crystal object are within set constraints, append all attributes of that crystal to the current target crystal
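The matching step described above could be sketched roughly like this (the field names `area`, `cx`, `cy` and the threshold values are placeholders, not the actual attribute names used in the project):

```python
import numpy as np

def match_crystal(target, candidates, max_area_frac=0.1, max_dist=20.0):
    """Return the candidate whose area and centroid best match the target.

    target, candidates: dicts with keys 'area', 'cx', 'cy' (hypothetical names).
    A candidate qualifies if its area is within max_area_frac of the target's
    area and its centroid lies within max_dist pixels; the closest one wins.
    """
    best, best_dist = None, max_dist
    for cand in candidates:
        # area constraint: relative difference must be small
        if abs(cand['area'] - target['area']) > max_area_frac * target['area']:
            continue
        # centroid constraint: Euclidean distance must be small
        dist = np.hypot(cand['cx'] - target['cx'], cand['cy'] - target['cy'])
        if dist < best_dist:
            best, best_dist = cand, dist
    return best
```

If a match is found, the crystal's attributes would then be appended to the tracking object, as described above.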

I hope this gives a bit of an idea of what I am doing, but I can post more code if needed.


A long time ago I did something like this

and the abstract is

I think you can use Fourier descriptors


Thanks for your reply.

I am just not sure how this would help with the shift in the data. I figured that using Fourier descriptors, I would have to start all over again with the tracking, right? Or am I missing something here?

using Fourier descriptors you can fit two shapes to find the translation, rotation, and starting-point offset between them
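A minimal sketch of why this works: if you write the contour points as complex numbers, shifting the starting point only changes the phase of the FFT coefficients, so the coefficient magnitudes form a descriptor that is invariant to where the contour starts (this is an illustration of the idea, not the poster's code):

```python
import numpy as np

def fourier_descriptor(contour):
    """Start-point-invariant shape descriptor from a closed contour.

    contour: (N, 2) array of (x, y) points.
    A circular shift of the points multiplies coefficient k by
    exp(-2*pi*1j*k*s/N), so taking magnitudes removes the start-point
    dependence. Dropping F[0] removes translation; dividing by |F[1]|
    removes scale.
    """
    z = contour[:, 0] + 1j * contour[:, 1]   # encode points as complex numbers
    mag = np.abs(np.fft.fft(z))              # magnitudes: shift-invariant
    return mag[1:] / mag[1]
```

Comparing these descriptors between frames would tell you two contours are the same shape regardless of where OpenCV started tracing them; recovering the actual offset takes a phase comparison on top of this.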

what’s the issue?

you have a list of points: [1,2,3,4,5,6,7]

you want a specific point to be first: 4

so you have to “rotate” the points in the list: [4,5,6,7] + [1,2,3]

you want to know what point in the contour is closest to some given point? check the distance of each point in the contour to your given point.
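Putting those two steps together, a small sketch of re-anchoring a contour so it starts at the point closest to a reference (for instance, the start point tracked in the previous frame), using NumPy's `roll`:

```python
import numpy as np

def rotate_contour_to_anchor(contour, anchor):
    """Circularly shift a contour so its first point is the one closest to anchor.

    contour: (N, 2) array of (x, y) points, as from cv2.findContours (squeezed).
    anchor:  (2,) reference point, e.g. the start point from the previous frame.
    """
    # distance of every contour point to the anchor
    dists = np.linalg.norm(contour - anchor, axis=1)
    start = int(np.argmin(dists))
    # "rotate" the list so the closest point comes first
    return np.roll(contour, -start, axis=0)
```

Applied per frame with the previous frame's start point as the anchor, this keeps the begin/end point consistent without re-doing the tracking.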