FastSLAM using a single calibrated camera

Hi. I’ve just started experimenting with FastSLAM. So far I’ve calibrated my camera and am currently able to detect and match ORB features from the camera feed.
This is my feature extractor:

def extract(self, img):
    # detect corners on a grayscale copy, then wrap them as keypoints so ORB can describe them
    feats = cv2.goodFeaturesToTrack(np.mean(img, axis=2).astype(np.uint8), 3000, qualityLevel=0.01, minDistance=3)
    kps = [cv2.KeyPoint(x=f[0][0], y=f[0][1], size=20) for f in feats]
    kps, des = self.orb.compute(img, kps)

    # match against the previous frame, keeping only matches that pass Lowe's ratio test
    ret = []
    if self.last is not None:
        matches = self.bf.knnMatch(des, self.last['des'], k=2)
        for m, n in matches:
            if m.distance < 0.75 * n.distance:
                kp1 = kps[m.queryIdx].pt
                kp2 = self.last['kps'][m.trainIdx].pt
                ret.append((kp1, kp2))

    # store this frame's features so the next call has something to match against,
    # then return the (current, previous) point pairs
    self.last = {'kps': kps, 'des': des}
    if len(ret) > 0:
        ret = np.array(ret)
    return ret
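
For context, self.orb, self.bf and self.last are set up in the extractor's constructor, along these lines (the class name and the ORB/matcher parameters here are only illustrative):

import cv2
import numpy as np

class FeatureExtractor:
    def __init__(self):
        # ORB gives binary descriptors for the goodFeaturesToTrack corners,
        # so they are matched with a brute-force Hamming-distance matcher
        self.orb = cv2.ORB_create()
        self.bf = cv2.BFMatcher(cv2.NORM_HAMMING)
        self.last = None  # keypoints/descriptors of the previous frame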

I am, however, not entirely sure how to proceed. As I understand it, the next step is to implement a particle filter and initialize N particles well distributed across the frame? But after that, I really struggle to understand what to do with the particles and the features/landmarks.
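
For what it's worth, my rough sketch of a particle so far is below. It's based on my (possibly wrong) understanding that each particle holds one camera-pose hypothesis, an importance weight, and its own small map of landmark EKFs; all the names here are placeholders.

import numpy as np

class Particle:
    """One FastSLAM particle: a camera-pose hypothesis plus its own landmark map."""
    def __init__(self):
        self.pose = np.eye(4)   # pose hypothesis as a 4x4 homogeneous transform
        self.weight = 1.0       # importance weight, updated from measurement likelihoods
        self.landmarks = {}     # landmark id -> (3D mean, 3x3 covariance), i.e. one small EKF per landmark

def init_particles(n):
    # every particle starts at the same (identity) pose; they would spread out
    # once motion noise is sampled in the prediction step of the filter
    particles = [Particle() for _ in range(n)]
    for p in particles:
        p.weight = 1.0 / n
    return particles

Is this roughly the right structure, or am I off track already at this point?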