Adjusting pixel colour in a segment after blending with original image

I am working on a wall painting app based on a segment returned from an AI model. The user selects an x,y coordinate of a wall, which is sent to the model.

When I get back the mask that I want for the wall, I color the segment the color selected by the user and then blend the mask with the original image.

    def set_color(self):
        # Create a copy of the original image
        self.result_image = self.original_image.copy()

        # Replace pixels inside the contour with the new color
        self.result_image[np.where(self.mask)] = (self.color[2], self.color[1], self.color[0])

        # Blend the modified region with the original image to preserve shadows
        self.result_image = cv2.addWeighted(self.original_image, 0.5, self.result_image, 0.5, 0)

        # Adjust the color
        max_value = 50
        for y, x in np.argwhere(self.mask):
            pixel = self.result_image[y, x]

            # Adjust each color channel individually
            for i in range(3):  # Loop over the three color channels (BGR)
                if pixel[i] < max_value:
                    self.result_image[y, x, i] = 0
                else:
                    self.result_image[y, x, i] -= max_value

        return self

The problem I have is that once the blend is done, the resulting colour is obviously influenced by the original wall colour. As you can see in the code above, I apply an adjustment to all the BGR values after the images are blended.

This static value is far from perfect, but it works better than most other approaches I have tried for this problem.
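
As an aside, the per-pixel loop above can be written as a single vectorized step, which is much faster on large masks. This is just a sketch of the same static subtraction, assuming an 8-bit BGR image and a mask that can be treated as boolean:

    # Vectorized equivalent of the loop above: cv2.subtract saturates at 0,
    # which replaces the explicit "if pixel[i] < max_value" check
    max_value = 50
    m = self.mask.astype(bool)                            # treat any non-zero mask value as inside
    offset = np.full_like(self.result_image, max_value)   # constant image to subtract
    darkened = cv2.subtract(self.result_image, offset)    # saturating subtract, clamps at 0
    self.result_image[m] = darkened[m]                    # only change pixels inside the mask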

I thought that taking the mean of each of the BGR values individually from the blended image, and then subtracting that from the original colour value, would work, but it actually made things far worse.

    def set_color(self):
        # Create a copy of the original image
        self.result_image = self.original_image.copy()

        # Replace pixels inside the contour with the new color
        self.result_image[np.where(self.mask)] = (self.color[2], self.color[1], self.color[0])

        # Average color difference between the chosen color and the wall
        # Indices of the pixels inside the mask
        mask_indices = np.where(self.mask)
        roi = self.original_image[mask_indices]

        # Calculate the mean of each channel
        mean_b = np.mean(roi[:, 0])
        mean_g = np.mean(roi[:, 1])
        mean_r = np.mean(roi[:, 2])

        print(int(self.color[2]) - mean_b, int(self.color[1]) - mean_g, int(self.color[0]) - mean_r)

        # Blend the modified region with the original image to preserve shadows
        self.result_image = cv2.addWeighted(self.original_image, 0.5, self.result_image, 0.5, 0)

        max_value = 50
        for y, x in np.argwhere(self.mask):
            pixel = self.result_image[y, x]

            # Subtract the per-channel difference from each color channel
            for i, average in enumerate([int(self.color[2]) - mean_b,
                                         int(self.color[1]) - mean_g,
                                         int(self.color[0]) - mean_r]):  # BGR order
                if pixel[i] < abs(average):
                    self.result_image[y, x, i] = 0
                else:
                    self.result_image[y, x, i] -= abs(average)

        return self

Before I continue with more examples, I am wondering if I am going about this the wrong way. Is there a better way to combine the images and adjust the colour using a different technique? Do you think OpenCV is the right tool for this?

Unfortunately it's only letting me upload one image because I'm a new user. I can provide more images if necessary.

If you want it to look good, the original surfaces must be white, not black or any other color.

Hi Crackwitz,

Thanks for the reply. I do understand that it will look better on white walls than on other colours, but I still have a question about the colour adjustment after the image blend. Is there a better way to adjust the colour after the blend?

Also, the reason I have chosen to blend the two images is that it keeps the shadows, which is key to making the result look authentic.
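
To make the "different technique" part of my question concrete, this is the kind of alternative I have in mind (just a sketch, not something I have tested): recolour in LAB space so the original lightness, and therefore the shadows, is kept, and only the colour channels are replaced inside the mask. It assumes a BGR uint8 image, self.color in RGB order, and a mask that can be treated as boolean:

    # Sketch (untested): keep the original L channel so the shading is preserved,
    # and replace only the a/b channels inside the mask with the chosen color
    lab = cv2.cvtColor(self.original_image, cv2.COLOR_BGR2LAB)

    # LAB value of the target color, obtained via a 1x1 BGR image
    target_bgr = np.uint8([[[self.color[2], self.color[1], self.color[0]]]])
    target_lab = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2LAB)[0, 0]

    m = self.mask.astype(bool)
    lab[m, 1] = target_lab[1]   # a channel from the target color
    lab[m, 2] = target_lab[2]   # b channel from the target color
    # lab[..., 0] (lightness) is left untouched, so the shading stays

    self.result_image = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

The idea is that the shadows live mostly in the lightness channel, so keeping L and swapping a/b should preserve them without the 50/50 blend diluting the chosen colour, but I don't know yet how well that holds up on strongly coloured walls.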