Image dimension measurement - mm per pixel calibration

I am trying to measure the dimensions of an object in real time using a video feed from a camera. For now, I am working off a single frame/image of the video.

In order to measure the real (physical) dimensions, I want to first calibrate the camera measurements, i.e. obtain a mm-per-pixel value. To do this, I have replaced the original object to be measured with a ruler. Since the camera is mounted at an angle, I have corrected the image to a top-down (bird's eye) view using the 4-point perspective transform method.

To obtain the perspective-corrected image above, I have written the following code:

# To open matplotlib in interactive mode
%matplotlib qt5

import cv2
import numpy as np
import matplotlib.pyplot as plt

# Load the image
img = cv2.imread('Extracted Images/Color/colorframe50.jpg')

print('Width: {0}'.format(img.shape[1]))
print('Height: {0}'.format(img.shape[0]))
print('Channel: {0}'.format(img.shape[2]))
 
# Create a copy of the image
img_copy = np.copy(img)
 
# Convert to RGB so as to display via matplotlib.
# The interactive matplotlib window makes it easy to read off the coordinates
# of the 4 points needed to find the transformation matrix.
img = cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
img_rgb = cv2.cvtColor(img_copy,cv2.COLOR_BGR2RGB)
 
#plt.imshow(img_copy)

scale_width = round(11.7348*100) # Ruler width measured using calipers, x100 to set the output width in pixels
scale_height = round(6.35*100)   # Height spanned by the selected 4 points, x100 to set the output height in pixels

# Corresponding point pairs used to calculate the transformation matrix
input_pts = np.float32([[192.30,343.00],[1079.0,379.80],[153.50,571.90],[1107.10,611.70]])
output_pts = np.float32([[0,0],[scale_width-1,0],[0,scale_height-1],[scale_width-1,scale_height-1]])

# Compute the perspective transform M
M = cv2.getPerspectiveTransform(input_pts,output_pts)
print(M.shape)
print(M)
 
# Apply the perspective transformation to the image
imgPersp = cv2.warpPerspective(img_rgb, M, (scale_width, scale_height)) #,flags=cv2.INTER_LINEAR)
imgGrayPersp = cv2.cvtColor(imgPersp, cv2.COLOR_RGB2GRAY) # warp is RGB, so use RGB2GRAY

# save image 
#cv2.imwrite('scalePerspCorrrected.jpg',imgGrayPersp)

# Visualize the corners using cv2 circles (cv2.circle expects integer coordinates)
for x in range(0, 4):
    cv2.circle(img_rgb, (int(input_pts[x][0]), int(input_pts[x][1])), 5, (255,0,0), cv2.FILLED)

# save image 
#cv2.imwrite('wonz4P.jpg',img_rgb)

# Plot results
plt.figure()    

titles = ['Original Image','4-point Selection','Perspective Warp Correction','Grayscale Perspective Warp Correction']
images = [img, img_rgb, imgPersp, imgGrayPersp]

for i in range(4):
    plt.subplot(2,2,i+1),plt.imshow(images[i],'gray') # 'gray' cmap correctly shows grayscale images; doesn't affect RGB images
    plt.title(titles[i])
    plt.xticks([]),plt.yticks([])

plt.show()

I want to measure the number of pixels between the graduations on the ruler (for example, in the picture above, the number of pixels between mark 8 and any adjacent graduation to its right). Since I already know the actual distance between graduations on the ruler, I can then get the mm-per-pixel value. What is the best sequence of operations to obtain the distance between any two graduations? I am thinking Gaussian blur → thresholding → edge detection, but I am not sure how to measure the number of pixels between two graduations.
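To make the idea concrete, here is a minimal sketch of the spacing measurement I have in mind, on a synthetic binary image standing in for the thresholded warp (in practice the input would come from `cv2.threshold` on the blurred grayscale of `imgGrayPersp`). Since the graduations are vertical lines after perspective correction, summing each column gives a profile with peaks at the tick marks; the median gap between peak centres is the pixels-per-graduation value. Everything here is an assumption about my own pipeline, not working code from it:

```python
import numpy as np

def tick_centers(binary, frac=0.5):
    """Find x-centres of vertical tick marks in a 0/1 binary image.

    Sums each column, keeps columns whose sum exceeds `frac` of the
    maximum, and averages each consecutive run of columns into one centre.
    """
    profile = binary.sum(axis=0)
    cols = np.where(profile > frac * profile.max())[0]
    runs, current = [], [cols[0]]
    for c in cols[1:]:
        if c == current[-1] + 1:
            current.append(c)          # still inside the same tick
        else:
            runs.append(current)       # tick finished, start a new one
            current = [c]
    runs.append(current)
    return np.array([np.mean(r) for r in runs])

# Synthetic "thresholded ruler": 2-px-wide ticks every 10 px
img = np.zeros((50, 100), dtype=np.uint8)
for x in range(5, 96, 10):
    img[:, x:x + 2] = 1

centers = tick_centers(img)
spacing_px = float(np.median(np.diff(centers)))  # pixels per graduation
mm_per_px = 1.0 / spacing_px                     # assuming 1 mm graduations
print(spacing_px, mm_per_px)  # → 10.0 0.1
```

Using the median of the gaps rather than a single gap should average out small thresholding errors across all graduations.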

How do I go about this? Any suggestions?

EDIT: When selecting the 4 points for the perspective transform, I eyeballed the x, y coordinates using the mouse pointer in the matplotlib plot. Is there an accurate way of obtaining the 4 points using thresholding and edge detection? I guess this will affect the mm/pixel value.
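Whatever detector ends up supplying the corner candidates (I am assuming something like `cv2.approxPolyDP` on the ruler's thresholded contour, or `cv2.goodFeaturesToTrack` on the grayscale image), they come back in arbitrary order, and `getPerspectiveTransform` needs them in the same order as `output_pts`. A small detector-independent sketch of ordering them consistently with the sum/difference trick (top-left has the smallest x+y, bottom-right the largest; top-right has the smallest y−x, bottom-left the largest):

```python
import numpy as np

def order_corners(pts):
    """Order 4 (x, y) points as [top-left, top-right, bottom-left,
    bottom-right], matching the order of output_pts.
    """
    pts = np.asarray(pts, dtype=np.float32)
    s = pts.sum(axis=1)                  # x + y per point
    d = np.diff(pts, axis=1).ravel()     # y - x per point
    tl = pts[np.argmin(s)]               # smallest x + y
    br = pts[np.argmax(s)]               # largest x + y
    tr = pts[np.argmin(d)]               # smallest y - x
    bl = pts[np.argmax(d)]               # largest y - x
    return np.float32([tl, tr, bl, br])

# Corner candidates in arbitrary order (the same 4 points I eyeballed,
# shuffled, as a stand-in for detector output)
candidates = [[1107.1, 611.7], [192.3, 343.0], [1079.0, 379.8], [153.5, 571.9]]
input_pts = order_corners(candidates)
print(input_pts)
```

With this in place, the detected corners can be dropped straight into `cv2.getPerspectiveTransform(input_pts, output_pts)` regardless of the order the detector returns them in.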