Prospective GSoC 2026 Student - Interest in Quantized Models & Slack Invitation

Hi everyone,

My name is Oscar, and I am a 2nd-year AI Engineering student from Spain. I am writing to express my strong interest in the GSoC 2026 project: “Quantized models for OpenCV Model Zoo”.

I have already started my technical preparation by setting up a dedicated development environment on Linux and exploring the opencv_zoo repository. Using Python and the ONNX library, I have been inspecting the internal structure of models like YuNet. I have analyzed the weight tensors and confirmed that they are currently stored as float32 (ONNX data type 1), and I am eager to work on transitioning them to int8 to optimize performance for edge devices.
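For context, this is roughly the kind of inspection I mean (the file name below is just a placeholder for the YuNet ONNX file from opencv_zoo):

```python
# Minimal sketch: list the data type of every weight tensor in an ONNX model.
# The file name is a placeholder for the YuNet model from opencv_zoo.
import onnx
from onnx import TensorProto

model = onnx.load("yunet.onnx")

# Each initializer is a weight tensor; data_type is a TensorProto enum value,
# where FLOAT (float32) == 1 and FLOAT16 == 10.
for tensor in model.graph.initializer:
    print(tensor.name, TensorProto.DataType.Name(tensor.data_type), tensor.data_type)
```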

As part of my current university studies, I am working with search algorithms and data structures (such as k-nearest neighbours), which gives me a solid mathematical background for understanding how weight distributions and quantization error metrics work.
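To make that concrete, here is a tiny, library-agnostic sketch (with purely made-up random weights) of symmetric int8 quantization and its reconstruction error:

```python
# Toy example: quantize a random float32 "weight tensor" to int8 and measure
# the round-trip error. The data here is random and only for illustration.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=(64, 64)).astype(np.float32)

# Symmetric per-tensor quantization: map [-max|w|, +max|w|] onto [-127, 127].
scale = np.abs(w).max() / 127.0
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_dequant = w_int8.astype(np.float32) * scale

mse = np.mean((w - w_dequant) ** 2)
print(f"scale = {scale:.6f}, MSE = {mse:.3e}")
```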

I have two quick questions for the mentors:

1. Beyond ONNX Runtime, is there a preferred quantization tool or framework (e.g., OpenVINO or a specific OpenCV internal tool) that you would like to see used in this project?
2. I tried joining the Slack channel to follow the discussions, but the invitation link seems to have expired. Could someone provide a new one?

I am looking forward to contributing to the OpenCV community!

Best regards, Oscar

Quick update: I have successfully converted the YuNet model from the OpenCV Zoo to float16 using onnxconverter-common. I verified the conversion by inspecting the tensor types, confirming they changed from ONNX data type 1 (float32) to data type 10 (float16). This resulted in a 50% reduction in model size. Now I am starting to explore the tools for the next step: int8 quantization.
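For reference, the conversion and the check were roughly as follows (file names are placeholders; the exact options I used may have differed slightly):

```python
# Sketch of the float32 -> float16 conversion with onnxconverter-common.
# File names are placeholders.
from collections import Counter

import onnx
from onnxconverter_common import float16

model_fp32 = onnx.load("yunet.onnx")
model_fp16 = float16.convert_float_to_float16(model_fp32)
onnx.save(model_fp16, "yunet_fp16.onnx")

# Verify: count weight tensors per ONNX data type (1 == float32, 10 == float16).
print(Counter(t.data_type for t in model_fp16.graph.initializer))
```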

Hi again! I have a quick update (and a bit of a technical wall).

I managed to create an INT8 version of the model, which is much smaller (around 118 KB compared to the original ~400 KB). However, when I tried to load it with cv2.dnn, I got a ‘parse error’ saying that the DynamicQuantizeLinear node is not supported.
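In case it helps anyone reproduce this, the steps were roughly the following (paths are placeholders, and I am assuming here that ONNX Runtime's dynamic quantization is what inserted the DynamicQuantizeLinear nodes, since that is the tool I was experimenting with):

```python
# Rough reproduction sketch; file names are placeholders and the exact
# arguments I used may have differed.
import cv2
from onnxruntime.quantization import QuantType, quantize_dynamic

# Dynamic quantization stores the weights as int8 and inserts
# DynamicQuantizeLinear nodes to quantize activations at runtime.
quantize_dynamic(
    "yunet.onnx",
    "yunet_int8_dynamic.onnx",
    weight_type=QuantType.QInt8,
)

# Loading the result with OpenCV's DNN module is where the parse error appears.
net = cv2.dnn.readNetFromONNX("yunet_int8_dynamic.onnx")
```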

I’m still a 2nd-year student and learning how these internal nodes work, but I guess this is exactly why this GSoC project exists: the current OpenCV engine needs better support for these quantized formats.

It’s a bit frustrating not being able to run the benchmark yet, but it’s a great lesson on why model optimization is so tricky! I’ll keep digging into how to produce an INT8 model that OpenCV’s DNN module can actually parse.
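One direction I plan to try next (mentors, please correct me if this is off-track): static quantization with a calibration data reader, which as far as I understand produces QuantizeLinear/DequantizeLinear (QDQ) nodes instead of DynamicQuantizeLinear. A rough sketch with a dummy calibration reader is below; the input name "input" and the 640x640 size are my assumptions about YuNet, and a real run would feed representative face images rather than random noise:

```python
# Rough sketch of static (QDQ) int8 quantization with ONNX Runtime.
# File names, the input name "input", and the 640x640 input size are assumptions.
import numpy as np
from onnxruntime.quantization import (
    CalibrationDataReader, QuantFormat, QuantType, quantize_static,
)

class RandomCalibrationReader(CalibrationDataReader):
    """Feeds a few random tensors; a real reader would yield calibration images."""
    def __init__(self, n=8):
        self._batches = iter(
            {"input": np.random.rand(1, 3, 640, 640).astype(np.float32)}
            for _ in range(n)
        )

    def get_next(self):
        return next(self._batches, None)

quantize_static(
    "yunet.onnx",
    "yunet_int8_static.onnx",
    RandomCalibrationReader(),
    quant_format=QuantFormat.QDQ,
    activation_type=QuantType.QInt8,
    weight_type=QuantType.QInt8,
)
```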