Combining multiple frames with cv2.hconcat / cv2.vconcat

Hi,
I have a program where I'm getting multiple camera feeds in (it's dynamic, so it can vary between 1 and xx).

At this stage I'm only able to get all frames displayed individually.

How would I go about adding them all into the same frame as well?

So I would still like to have them shown individually, but also show them together in one frame.

# inside the receive loop; `queues`, `devices`, `fps`, and cv2 are set up earlier in the program
if check_sync(queues, new_msg.getTimestamp()):
    fps.next_iter()
    # print("FPS:", fps.fps())
    # print('frames synced')
    print("devices", len(devices))
    for device in devices:
        frames = {}
        for stream in device['queue']:
            # take the oldest synced packet for this stream and keep its encoded bytes
            frames[stream['name']] = stream['msgs'].pop(0).getData()
            # decode and show each stream in its own window
            cv2.imshow(f"{stream['name']} - {device['mx']}",
                       cv2.imdecode(frames[stream['name']], cv2.IMREAD_UNCHANGED))
        # print('For mx', device['mx'], 'frames')
        # print('frames', frames)
        # hand the synced frames off to this device's queue for further processing
        device['frame_q'].put(frames)

options:

  • use multiple imshow windows
  • use a real GUI toolkit, use multiple widgets in a window, display each camera frame in its own widget
  • make a big numpy array, use slicing to copy the data in there, and display that thing instead (see the sketch right after this list)
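
For the last option, here is a rough sketch (not from the original post) of tiling every decoded frame into one big numpy array with slicing and showing the result in a single window. It assumes all frames have the same height/width and are 3-channel BGR; the helper name tile_frames is made up for illustration.

import math
import numpy as np

def tile_frames(frames, cols=2):
    """frames: dict of name -> decoded BGR image, all the same size."""
    imgs = list(frames.values())
    h, w = imgs[0].shape[:2]
    rows = math.ceil(len(imgs) / cols)
    # one big canvas with a slot per frame; unused slots stay black
    canvas = np.zeros((rows * h, cols * w, 3), dtype=np.uint8)
    for i, img in enumerate(imgs):
        r, c = divmod(i, cols)
        # copy each frame into its slot via slicing
        canvas[r * h:(r + 1) * h, c * w:(c + 1) * w] = img
    return canvas

# usage: build a dict of decoded images, then show the combined view once per loop
# cv2.imshow("all cameras", tile_frames(decoded_frames))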

Thank you for responding so quickly.

What GUI toolkit would you recommend for this type of application?

I can’t recommend any of them because they all require lots of ritual.

People commonly use Tkinter (Python), Qt, GTK, … the C#/Windows crowd uses whatever's popular there (also not straight OpenCV but some wrapper that's hopefully not relying on OpenCV's dead C API), which is probably WinForms, WPF, and whatever new thing Microsoft made up.

Do not "just go into" GUI programming. It requires good tutorials that explain things you can't anticipate (the event loop, how to deal with concurrency/multithreading, how to properly do delays, …). It's not obvious to someone who hasn't done any GUI programming before.

Perhaps start with Tkinter. It seems the most popular and least complex, and it already comes with Python.
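
If you do try Tkinter, a minimal sketch of "one widget per camera" might look like the following. It assumes Pillow is installed and that you already have decoded BGR numpy frames; show_frames is a made-up helper that only displays a static snapshot (a live feed would need periodic updates via root.after()).

import cv2
import tkinter as tk
from PIL import Image, ImageTk

def show_frames(frames):
    """frames: dict mapping a stream name to a decoded BGR image."""
    root = tk.Tk()
    root.title("all cameras")
    for col, (name, frame) in enumerate(frames.items()):
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)    # Tk widgets expect RGB
        photo = ImageTk.PhotoImage(Image.fromarray(rgb))
        label = tk.Label(root, image=photo, text=name, compound="top")
        label.image = photo        # keep a reference so the image isn't garbage-collected
        label.grid(row=0, column=col)   # one widget per camera, side by side
    root.mainloop()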