Confusing Epipolar Lines


I’m fairly new to OpenCV so please excuse my very limited understanding.

I am working through the OpenCV tutorials and am currently calculating the epipolar lines from two images. However, even though I'm using the exact code from the tutorial and the same test images, I'm getting very strange results.

If anybody could point me in the right direction for a solution it would be greatly appreciated :slight_smile:

My Result:


import cv2 as cv
import numpy as np
from matplotlib import pyplot as plt

def drawlines(img1, img2, lines, pts1, pts2):
    """img1 - image on which we draw the epilines for the points in img2
       lines - corresponding epilines"""
    r, c = img1.shape
    img1 = cv.cvtColor(img1, cv.COLOR_GRAY2BGR)
    img2 = cv.cvtColor(img2, cv.COLOR_GRAY2BGR)
    for r,pt1,pt2 in zip(lines, pts1, pts2):
        color = tuple(np.random.randint(0, 255, 3).tolist())
        # epiline a*x + b*y + c = 0 with r = (a, b, c): solve for y at x = 0 and x = c (image width)
        x0, y0 = map(int, [0, -r[2]/r[1]])
        x1, y1 = map(int, [c, -(r[2] + r[0]*c)/r[1]])
        img1 = cv.line(img1, (x0, y0), (x1, y1), color, 1)
        img1 = cv.circle(img1, tuple(pt1), 3, color, -1)
        img2 = cv.circle(img2, tuple(pt2), 3, color, -1)
    return img1, img2

if __name__ == '__main__':

    img1 = cv.imread('left.jpg', 0)         # queryImage # left image
    img2 = cv.imread('right.jpg', 0)        # trainImage # right image

    sift = cv.xfeatures2d.SIFT_create()

    # find the keypoints and descriptors with SIFT
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # FLANN parameters
    FLANN_INDEX_KDTREE = 1
    index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
    search_params = dict(checks = 50)

    flann = cv.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(des1, des2, k=2)

    good = []
    pts1 = []
    pts2 = []

    # ratio test as per Lowe's paper
    for i, (m, n) in enumerate(matches):
        if m.distance < 0.8*n.distance:
            good.append(m)
            pts2.append(kp2[m.trainIdx].pt)
            pts1.append(kp1[m.queryIdx].pt)

    pts1 = np.int32(pts1)
    pts2 = np.int32(pts2)
    F, mask = cv.findFundamentalMat(pts1, pts2, cv.FM_LMEDS) #F = fundamental matrix

    # We select only inlier points
    pts1 = pts1[mask.ravel() == 1]
    pts2 = pts2[mask.ravel() == 1]

    # Find epilines corresponding to points in right image (second image) and
    # drawing its lines on left image
    lines1 = cv.computeCorrespondEpilines(pts2.reshape(-1, 1, 2), 2, F)
    lines1 = lines1.reshape(-1, 3)
    img5, img6 = drawlines(img1, img2, lines1, pts1, pts2)

    # Find epilines corresponding to points in left image (first image) and
    # drawing its lines on right image
    lines2 = cv.computeCorrespondEpilines(pts1.reshape(-1, 1, 2), 1, F)
    lines2 = lines2.reshape(-1, 3)
    img3, img4 = drawlines(img2, img1, lines2, pts2, pts1)

    plt.subplot(121), plt.imshow(img5)
    plt.subplot(122), plt.imshow(img3)
    plt.show()
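As a side note, the endpoint arithmetic inside drawlines can be checked in isolation: an epiline is a*x + b*y + c = 0, so the y value at each image border comes from solving for y at x = 0 and x = width. A minimal sketch of just that step (the helper name epiline_endpoints is mine, and it assumes b != 0, i.e. the line is not vertical):

```python
def epiline_endpoints(line, width):
    """Endpoints of the epiline a*x + b*y + c = 0 at the left and right
    image borders (x = 0 and x = width). Assumes b != 0."""
    a, b, c = line
    x0, y0 = 0, int(-c / b)                      # y at the left border
    x1, y1 = int(width), int(-(c + a * width) / b)  # y at the right border
    return (x0, y0), (x1, y1)
```

For example, the horizontal line (0, 1, -20) (i.e. y = 20) should give the same y at both borders, which is an easy way to spot a misplaced parenthesis in the formula.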

Results I should get (I can only embed one media item per post):

which tutorial do you mean, exactly? a link please


The epipolar geometry tutorial

yeah, something's broken.

the F matrix looks completely ruined. I don't know exactly what I'd expect, but those values can't be right.

>>> F
array([[ 0.00001,  0.00002, -0.00441],
       [-0.00001,  0.     ,  0.0002 ],
       [ 0.00039, -0.00518,  1.     ]])

nearly 1000 keypoints per image, but only 165 “inliers”, also sounds broken.
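one way to quantify "ruined": a valid fundamental matrix must have rank 2, and the epipolar constraint x2ᵀ F x1 ≈ 0 must hold for every inlier pair. a quick NumPy sketch (the helper name check_fundamental is mine):

```python
import numpy as np

def check_fundamental(F, pts1, pts2):
    """Necessary conditions for a fundamental matrix: rank(F) == 2 and
    x2^T F x1 ~= 0 for each correspondence (x1 in image 1, x2 in image 2,
    given as (N, 2) pixel coordinates)."""
    rank = np.linalg.matrix_rank(F)
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([np.asarray(pts1, float), ones])  # (N, 3) homogeneous
    x2 = np.hstack([np.asarray(pts2, float), ones])
    residuals = np.einsum('ij,jk,ik->i', x2, F, x1)  # x2_i^T F x1_i per match
    return rank, residuals
```

if the residuals for your inliers are far from zero (relative to the point coordinates), the matrix or the matches are bad, regardless of what the epilines look like.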

I would expect those match points to have the same color for each match. that's what the code says… so something up to and including the matching must have gone wrong.

I don’t know how to help. I hope someone else can.

if you don’t get a resolution to this in the forum, you could open a bug about it on OpenCV’s github. it’s certainly something that needs fixing.


Hi, thanks for the help! :smiley:
I also thought that it could be an issue with the matches, so I changed the feature detector to ORB, which gives 160 good matches resulting in 264 inliers, but the epipolar lines are still as broken as before.

Good matches: