I want to perform selfie segmentation on Android, similar to what Zoom does. However, I would preferably like to do it without deep learning models, relying solely on conventional image processing and computer vision techniques — using only OpenCV or just the Android SDK. Google ML Kit and MediaPipe offer this functionality, but I don't want to include a large library for it. My goal is fast selfie segmentation; it doesn't have to be very accurate.
How can I do that? I assume apps like Zoom probably used image processing techniques before switching to deep learning models, so there should be a way to achieve this.
no, they weren't — because that doesn't work.
you aren't going to require users to stand in front of a green screen, right?
you can do this using OpenCV alone; however, this task needs a DNN, so forget 'conventional image processing'
#include <opencv2/opencv.hpp>

cv::Size inputSizeNewBarracuda = cv::Size(256, 256);
std::string imagefilename = "img/pers.png";
std::string newBarracuda = "C:/data/dnn/selfie/model_float32.pb";
cv::dnn::Net net = cv::dnn::readNet(newBarracuda);
cv::Mat img = cv::imread(imagefilename);
// the model expects RGB in [0,1]; imread returns BGR, so swap R and B
cv::Mat blob = cv::dnn::blobFromImage(img, 1.0/255, inputSizeNewBarracuda, cv::Scalar(), true, false, CV_32F);
net.setInput(blob); // this was missing: the input must be set before forward()
cv::Mat output = net.forward();
// the output is a 1x256x256x1 probability map; flatten to 256x256 and threshold
cv::Mat mask = output.reshape(1, 256) > 0.5;
cv::resize(mask, mask, img.size()); // scale the mask back up to the input resolution