How to get faster/lighter video decoding

Hello and thank you in advance for your help!

I’m building a simple app on Windows, using IntelliJ and Gradle, whose purpose is to show a couple of videos (mp4), or maybe more, simultaneously. The code is pretty simple: I use a thread to capture a frame and then the GUI thread to show it in a panel.

My issue is that decoding is using so much CPU that the result is far from optimal. I believe the decoding isn’t using any hardware decoders. I’ve looked for a couple of days but I haven’t found a way to include FFmpeg in my OpenCV library, which I believe would solve the issue. Is there a simple guide describing the procedure?

I’ll mention again that I’m using Gradle to download the OpenCV jar, but I’m willing to try another approach to make it work.

in general, your hardware has to be capable of decoding that many streams (at that specific resolution and fps).

there must be libraries for java that expose the operating system’s video decode acceleration APIs.

there might even be java wrappers for ffmpeg’s API. ffmpeg can use OS APIs for video decode acceleration.

you shouldn’t use OpenCV for that, but you can. OpenCV’s videoio has gained HW accelerated decoding through ffmpeg but it’s very new. OpenCV will layer its own abstractions on top, which costs performance.

here’s a wiki page on OpenCV’s videoio HW acceleration

here’s specifically how I got this to work in python (OpenCV 4.5.2+):

import cv2 as cv

vid = cv.VideoCapture(
    path,
    apiPreference=cv.CAP_ANY, # or CAP_FFMPEG
    params=[
        cv.CAP_PROP_HW_ACCELERATION, cv.VIDEO_ACCELERATION_ANY,
    ]
)
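
once it’s open, you can (as far as I know) read the same property back to see what the backend actually negotiated; 0 means VIDEO_ACCELERATION_NONE, i.e. it silently fell back to software decoding. a rough sketch, continuing from the vid above:

# 0.0 == VIDEO_ACCELERATION_NONE -> pure software decoding
print("negotiated acceleration:", vid.get(cv.CAP_PROP_HW_ACCELERATION))

while True:
    ok, frame = vid.read()   # frames still come back as ordinary BGR Mats on the CPU
    if not ok:
        break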

Thank you for the answer, you helped me realize a few things.

Video I/O:
    DC1394:                      NO
    FFMPEG:                      YES (prebuilt binaries)
      avcodec:                   YES (58.134.100)
      avformat:                  YES (58.76.100)
      avutil:                    YES (56.70.100)
      swscale:                   YES (5.9.100)
      avresample:                YES (4.0.0)
    GStreamer:                   NO
    DirectShow:                  YES
    Media Foundation:            YES
      DXVA:                      NO

This is my build information print. The only way I can successfully capture video is by setting CAP_MSMF as the backend, and none of my hardware acceleration options seem to work.

Should I configure/install something extra?

make sure it’s opencv 4.5.2 or newer. that should be sufficient.
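
a quick way to check (python shown here; the java bindings expose the same things as Core.VERSION and Core.getBuildInformation()):

import cv2 as cv

print(cv.__version__)             # should print 4.5.2 or newer
print(cv.getBuildInformation())   # the "Video I/O" section is the part you pasted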

you must differentiate between “video capture” from a video device (webcam), and “video capture” meaning to read a video file. OpenCV sees both as “sources” of video.

capture requires system APIs, here dshow/MSMF. “accelerating” that is meaningless. it’s raw data from a device.
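
concretely, the same VideoCapture class covers both, and only the argument decides which path you’re on (the index and filename below are just placeholders):

import cv2 as cv

webcam = cv.VideoCapture(0)            # device capture: goes through DShow/MSMF on Windows
movie  = cv.VideoCapture("clip.mp4")   # file "capture": actually decoding, goes through ffmpeg if it's built in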

file decoding requires ffmpeg. ffmpeg usually runs entirely on the CPU. it can use OS APIs for hardware-accelerated video decoding.
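
so on Windows I’d explicitly force the ffmpeg backend instead of MSMF and ask for D3D11 (or ANY) acceleration. a rough sketch, untested on your build, with a placeholder path:

import cv2 as cv

vid = cv.VideoCapture(
    "clip.mp4",                       # placeholder path
    apiPreference=cv.CAP_FFMPEG,      # force ffmpeg, don't let it pick MSMF
    params=[
        cv.CAP_PROP_HW_ACCELERATION, cv.VIDEO_ACCELERATION_D3D11,
    ],
)
print(vid.isOpened(), vid.get(cv.CAP_PROP_HW_ACCELERATION))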

I’m not sure what OpenCV does with MSMF + DXVA. I know that ffmpeg uses DXVA to hardware-accelerate decoding.
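
(your build info also says DXVA: NO under Media Foundation, which may be why MSMF won’t accelerate anything.) if you want to see which combination your build actually accepts, here’s a small experiment you could run — try_backend is just a throwaway helper and the path is a placeholder:

import cv2 as cv

def try_backend(api, name, path="clip.mp4"):     # throwaway helper, placeholder path
    vid = cv.VideoCapture(path, apiPreference=api,
                          params=[cv.CAP_PROP_HW_ACCELERATION, cv.VIDEO_ACCELERATION_ANY])
    print(name, "opened:", vid.isOpened(),
          "| acceleration:", vid.get(cv.CAP_PROP_HW_ACCELERATION))
    vid.release()

try_backend(cv.CAP_FFMPEG, "ffmpeg")
try_backend(cv.CAP_MSMF, "msmf")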
