OpenCV and React Native

Hello OpenCVees!

I’ve been working with OpenCV on the backend server, using Python, and now I’d like to use OpenCV as a component in React Native to implement a face-based sign-in/sign-up feature.

The goal isn’t to do heavy image/stream processing directly on the mobile device, but only to scan/map the face. Once the image is ready (i.e. face detected, centered, adjusted, etc…), the portrait is sent as a frame to the backend server, where it is processed and compared against a database of descriptors, etc.
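For the comparison step on the server, here is a minimal sketch of what I have in mind, assuming the descriptors are fixed-length embeddings that have already been extracted (the 128-d size, the names, and the 0.6 threshold are just placeholders):

import numpy as np

# Hypothetical database: user id -> stored face descriptor (e.g. a 128-d embedding)
database = {'alice': np.random.rand(128), 'bob': np.random.rand(128)}

def match_descriptor(query, database, threshold=0.6):
    # Return the closest user if the Euclidean distance is under the threshold
    best_user, best_dist = None, float('inf')
    for user, descriptor in database.items():
        dist = np.linalg.norm(query - descriptor)
        if dist < best_dist:
            best_user, best_dist = user, dist
    return best_user if best_dist < threshold else None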

Thus, the OpenCV component would be responsible for mapping/scanning the image, in order to guide the user and show that face detection has been applied.

It would also show a colored frame around the user’s face while the user positions the device properly so the frame can be captured.

Then I’m going to provide a button that triggers a POST file request to the backend server with the portrait image/frame.
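On the server side, the receiving endpoint could be as simple as this sketch (assuming Flask; the /signin route and the 'image' field name are placeholders):

from flask import Flask, request

app = Flask(__name__)

@app.route('/signin', methods=['POST'])
def signin():
    # The mobile app posts the portrait as a multipart file field named 'image'
    portrait = request.files['image']
    portrait.save('received_portrait.jpg')
    # ...extract the descriptor and compare it against the database here...
    return {'status': 'received'}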

Doing my homework (i.e. research), I found the following thread in the old forum:
https://answers.opencv.org/question/103368/react-native-and-opencv/?answer=189246

However, it seems too superficial.

Furthermore, I found a tutorial.

However, it’s still missing pieces. I followed it through and even fixed a few details, but it didn’t work in the end, on either platform (iOS or Android).

Does anyone have a similar project?
Is there any documentation/source available? (If not, I’d be willing to share what I have done so far and proceed further with this project.)

Best wishes,
I

So far, I have found “react-native-opencv3”, a React Native package that seems to take a modular approach for Android and iOS.

https://github.com/adamgf/react-native-opencv3

However, it still lacks documentation on building features and on integrating client and server.

Best wishes,
I

Of course it lacks server integration, as a server apparently has no role there. Sending data to a server would require a server application (such as something written in Node.js), and once that exists, it is just a general programming problem of sending data via some API.

Hi Matti,
Of course! I agree that further processing requires a backend server to handle the streams/frames; in most cases I’d recommend a Python backend.

Nevertheless, I’m referring to the client side only: the events that precede the backend server’s processing.

For example: pressing a button, enabling the camera, scanning, identifying, cropping, and only after that sending a POST request with the image to the server. Frames are sent only if a face has previously been captured by the OpenCV libraries installed in the mobile app.
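The actual upload would be done from React Native, but to illustrate the request with the same stack as my desktop prototype, it amounts to this (the URL and the 'image' field name are placeholders matching the server sketch above):

import requests

# Post the cropped portrait to the backend as a multipart upload
with open('portrait.jpg', 'rb') as f:
    response = requests.post('http://my-backend.example/signin', files={'image': f})
print(response.status_code, response.text)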

Best wishes,
I

To clarify the scenario with a concrete example, I’ve written some quick and simple code (which I call the client’s interface).
The sample starts a camera view to map and identify faces, using the default cascade files. Once it finds a face (with eyes detected), it writes the cropped portrait as a JPEG image to the filesystem.

So, during the streaming capture, I’ve added colored frames (i.e. cv2.rectangle), arrows, and messages, in order to guide the user to position the camera correctly and facilitate the image capture.

This simple code was written in Python on my desktop, just to illustrate the scenario and give a rough idea. Now I’d like to write a similar snippet in React Native:

import random
import string

import cv2

# Load the default Haar cascades that ship with the opencv-python package
faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
eyeCascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye_tree_eyeglasses.xml')

video_capture = cv2.VideoCapture(0)

cv2.namedWindow('Window', cv2.WINDOW_NORMAL)
cv2.resizeWindow('Window', 400, 400)

while True:
    ret, img = video_capture.read()
    if not ret:
        break

    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=3, minSize=(30, 30))

    for (x, y, w, h) in faces:
        # Blue rectangle around the detected face
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi_gray = gray[y:y + h, x:x + w]
        roi_color = img[y:y + h, x:x + w]

        # Green rectangles around the eyes inside the face region
        eyes = eyeCascade.detectMultiScale(roi_gray)
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)

        # Only save a portrait when eyes were also found (a cheap proxy
        # for a frontal, usable face)
        if len(eyes) != 0:
            print('EYES', eyes)
            # Crop with a margin, clamping so the indices stay inside the image
            crop_img = img[max(0, y - 99):y + h + 99, max(0, x - 36):x + w + 36]
            if crop_img.size != 0:
                # Save under a random 12-character filename
                letters = string.ascii_lowercase
                result_str = ''.join(random.choice(letters) for _ in range(12))
                status = cv2.imwrite(result_str + '.jpg', crop_img)
                print('[INFO] Image written to filesystem:', status)

    cv2.imshow('Window', img)

    # Quit on the 'q' key
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()
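The snippet above only draws the rectangles; the arrows and messages I mentioned are drawn the same way inside the loop, for example (the text and positions are just placeholders):

# Inside the while loop, after detection: nudge the user towards the centre
cv2.putText(img, 'Move closer', (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255), 2)
cv2.arrowedLine(img, (200, 350), (200, 300), (0, 255, 255), 2)

To run the prototype you only need the opencv-python package (pip install opencv-python); the Haar cascade XML files ship with it under cv2.data.haarcascades, which is why the code above loads them from there.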

There was an effort to port (some of?) OpenCV to JavaScript (OpenCV.js).

There are a couple of repositories that can be used as samples for further development.
Many thanks to Adam Freeman.

Best wishes,
I