I have a Docker image that simply decodes every 10th frame from one short video, using OpenCV with Rust bindings. The video is included in the Docker image.
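A stripped-down sketch of the loop, roughly what I'm doing (file name and output handling simplified; this uses the opencv crate):

```rust
use opencv::{core::Vector, imgcodecs, prelude::*, videoio};

fn main() -> opencv::Result<()> {
    // Open with the FFMPEG backend explicitly, matching what getBackendName() reports.
    let mut cap = videoio::VideoCapture::from_file("input.mp4", videoio::CAP_FFMPEG)?;
    let mut frame = Mat::default();
    let mut n = 0;
    while cap.read(&mut frame)? {
        if n % 10 == 0 {
            // Write the kept frames out so they can be diffed across environments.
            imgcodecs::imwrite(&format!("frame_{n:05}.png"), &frame, &Vector::new())?;
        }
        n += 1;
    }
    Ok(())
}
```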
When I run the image on an EC2 instance, I get a set of 17 frames. When I run the same image on AWS Lambda, I get a slightly different set of 17 frames. Some frames are identical, but some are a tiny bit different: sometimes there are green blocks in the EC2 frame that aren't there in the Lambda frame, and there are sections of frames where the decoding worked on Lambda but the color is smeared on the EC2 frame.
The video is badly corrupted. I have observed this effect with other videos, always badly corrupted ones. Non-corrupted video seems unaffected.
I have checked every setting of the VideoCapture I can think of (CAP_PROP_FORMAT, CAP_PROP_CODEC_PIXEL_FORMAT), and they’re the same when running on EC2 as they are on Lambda. getBackendName() returns “FFMPEG” in both cases.
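For reference, this is roughly how I'm reading the properties on both machines (a sketch; the list of properties is abbreviated):

```rust
use opencv::{prelude::*, videoio};

// Dump the capture properties I compared between EC2 and Lambda -- all identical.
fn dump_capture_info(cap: &videoio::VideoCapture) -> opencv::Result<()> {
    println!("backend:                     {}", cap.get_backend_name()?);
    println!("CAP_PROP_FORMAT:             {}", cap.get(videoio::CAP_PROP_FORMAT)?);
    println!("CAP_PROP_CODEC_PIXEL_FORMAT: {}", cap.get(videoio::CAP_PROP_CODEC_PIXEL_FORMAT)?);
    Ok(())
}
```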
For my use case, these decoding differences matter, and I want to get to the bottom of it. My best guess is that the EC2 instance has a different backend in some way. It doesn’t have any GPU as far as I know, but I’m not 100% certain of that. Can anyone think of any way of finding out more about the backend that OpenCV is using?
So, your question is: why does ffmpeg behave differently on those machines?
I don't think people can answer that from here.
You really don't need OpenCV in that stack.
Try to find more direct ffmpeg bindings for Rust.
However, you could still check the output of cv::getBuildInformation() (the VIDEO section) to see if there are any differences in the hardware flags or in the local ffmpeg install your OpenCV libs were linked against.
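In the Rust bindings that's exposed as core::get_build_information(). A quick sketch:

```rust
use opencv::core;

fn main() -> opencv::Result<()> {
    // Dump this on both machines and diff the output; the "Video I/O"
    // section lists the FFmpeg libs, versions, and hardware-acceleration
    // flags OpenCV was built against.
    println!("{}", core::get_build_information()?);
    Ok(())
}
```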
Did the Docker image get corrupted on transfer?
Docker uses the host system's kernel; could that make a difference somehow?
Please clarify: did you feed the same known-corrupted file into this pipeline on both machines, and the decodes differ? Or did you feed the same intact file into the pipeline, but one decode comes out corrupted?
I don't see how any of that is an OpenCV problem. Reproduce this with ffmpeg alone; debugging means figuring out which parts cannot have caused the issue.
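For example, something like this takes OpenCV out of the picture entirely (a sketch; it assumes ffmpeg is on the PATH and the file is named video.mp4 — every 10th frame goes to a PNG, which you can then hash and compare across machines):

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Extract every 10th frame with plain ffmpeg, bypassing OpenCV.
    // If the PNGs still differ between EC2 and Lambda, OpenCV is ruled out.
    let status = Command::new("ffmpeg")
        .args([
            "-i", "video.mp4",
            "-vf", "select=not(mod(n\\,10))", // keep frames where n % 10 == 0
            "-vsync", "vfr",                  // don't duplicate dropped frames
            "out_%03d.png",
        ])
        .status()?;
    assert!(status.success());
    Ok(())
}
```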
I don’t think it is an OpenCV issue. It’s most likely a difference in the system kernel, as you say. I tried berak’s suggestion of cv::getBuildInformation(), but that returns the same info on Lambda and EC2.
/proc/cpuinfo returns different info, though: EC2 has AVX512 support, Lambda does not. Could that be the difference?
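To compare the two environments systematically, something like this (a sketch using only the standard library) prints the kernel-reported flags in a diff-friendly form:

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    // Print the CPU flags sorted, one per line, so the EC2 and Lambda
    // outputs can be diffed directly (e.g. avx512f present on one only).
    // FFmpeg selects SIMD codepaths at runtime based on these flags.
    let cpuinfo = fs::read_to_string("/proc/cpuinfo")?;
    if let Some(line) = cpuinfo.lines().find(|l| l.starts_with("flags")) {
        let mut flags: Vec<&str> = line.split_whitespace().skip(2).collect();
        flags.sort_unstable();
        println!("{}", flags.join("\n"));
    }
    Ok(())
}
```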
I think I’ll have to build everything from source and insert instrumentation. If I figure it out I’ll post here.