Announcing CvComposer 1.3

Hi,

This post is just to announce the release of CvComposer 1.3, a graphical tool to easily experiment with OpenCV functions.
The new version introduces undo/redo and many new functions, which makes it much more user-friendly.

Link to project: GitHub - wawanbreton/cvcomposer: Advanced GUI for OpenCV, to compose various filters and quickly see the result

For Windows users, a setup is provided. If you would like another packaging, feel free to open an issue.

Any comment on how to make this software more usable and user-friendly is welcome. I use it myself at work, and it has allowed me to test many algorithms in a short time. I hope it can be useful to someone else!


for quick n dirty experiments I was usually stuck with “ImagePlay”. when I know that’s not cutting it, I start vscode and a jupyter notebook. that’s not as interactive but very reliable.

all the other alternatives that were known to me until now are buggier/idiotic to set up or have fewer features (and that’s saying something, ImagePlay lacks a ton of stuff) or are commercial or have insanely weird “game mechanics” (MV software loves to be idiotic because it’s made by engineers, not programmers or computer scientists).

I welcome this option. I like the robust UI. undo even works. nice :smiley:

since you’re asking, I’m happy to throw lots of ideas at you:

  • drag-drop of an image file from outside (explorer) should cause an image widget to be created
  • drag-drop of image file onto “image from file” box should set that image path
  • tooltips on everything
    • image viewer: those textbox-looking things in the bottom right (ah, one of them is a color swatch, and I have to click)
  • ports
    • there appear to be a green type, green with asterisk in it, an orange type, and a 4-colored type. ports, explain yourselves!
    • hitbox for a port seems smaller than the drawn circle.
  • image viewer:
    • scroll-zoom into mouse position instead of center
    • drag-to-pan? nevermind, middle button. good, so primary and secondary buttons are still unassigned :wink:
    • what’s the difference between data viewer and image viewer, besides the extra window?
  • matlab/simulink: click on background, get a cursor, start typing, for a search/autocomplete to create widgets
  • give widgets various keywords they can be found by. I wanted to divide one image by another, but “arithmetic”, “div…” won’t turn up anything. I’d have to know I need the “operator” widget.
  • some way to click on a port and get a filtered list of other widgets to instantiate that are type-compatible
    • say, I have a median blur. the size port… how would I know what to connect to that? browse the whole list and try things?
  • “operator” block
    • s/substract/subtract/
    • how do I multiply by a constant? (I divided an image by its median blur, getting values around 1, but the image viewer seems to like 0…255 as a value range)
  • input widgets
    • “constant” scalar
    • numeric spinbox, slider, … that I can attach to, say, the kernel size of a median blur
    • image/data viewer that can display a list of ROIs, and that lets me add/edit/remove ROIs (or one at least) graphically, and feed that elsewhere
  • flow control, like scratch or machine vision programs?
  • didn’t see any warps
  • a bunch of “machine vision” blocks
    • 1D sampling along lines that are normal to a linear or circular path (that’s how they do circle fitting in MV)
  • kernel widget
    • flipflops crazily when I change the size, either width or height. that also affects the other dimension, strangely. apparently entering something in one box actually applies it to the other box… or there’s some kind of state happening.
  • a “scope/probe/inspector” perhaps, that shows the data of whatever port/connection you hover/click on. same idea to instantiate a viewer on any port/connection.
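the 1D-sampling idea above could be sketched like this (a hypothetical helper, not anything CvComposer ships; nearest-neighbour sampling in plain NumPy):

```python
import numpy as np

def sample_normals(img, cx, cy, r, n_angles=64, half_len=5):
    """Sample gray values along normals to a circle of radius r at (cx, cy).

    Returns an (n_angles, 2*half_len + 1) array: one 1D profile per normal,
    running from inside the circle (negative offsets) to outside.
    """
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    offsets = np.arange(-half_len, half_len + 1)
    profiles = np.empty((n_angles, offsets.size), dtype=img.dtype)
    for i, t in enumerate(thetas):
        # Points along the normal direction (radially in/out of the circle)
        xs = cx + (r + offsets) * np.cos(t)
        ys = cy + (r + offsets) * np.sin(t)
        xs = np.clip(np.rint(xs).astype(int), 0, img.shape[1] - 1)
        ys = np.clip(np.rint(ys).astype(int), 0, img.shape[0] - 1)
        profiles[i] = img[ys, xs]
    return profiles
```

finding the edge position in each profile and least-squares fitting a circle through those points is then the usual MV circle-fitting recipe.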

Well, thank you so much for this complete feedback!
I am so happy you took the time to test the application, and that you actually liked it :slight_smile:
I didn't know ImagePlay, but its general philosophy seems to be a bit different.
Now, about your (many) ideas:

Definitely a good idea!

I was looking for a way to help new users understand the GUI, but I indeed forgot the tooltips…

Actually, the menu "Help > Plug types" has detailed help on them, but it is so well hidden that I rediscovered it myself recently :smiley: I should find a better place for it… Also, you call them "ports" instead of "plugs"; maybe your word is better (I'm French, so I can't tell which one is).

Can you explain?

Always difficult to implement, but I have some code somewhere…

"Data viewer" can display any type of data (currently only images and numbers…) directly inside the graph. "Image viewer" is dedicated to images and opens a separate window. Maybe the names are just not right.

That would be nice indeed!

Also a very nice idea; it crossed my mind at some point. The hardest part is finding the proper words people would type when looking for a processor!

Well, I think in real conditions you are trying to achieve a goal, so you don't just drop in processors randomly. But I will keep that in mind.

You could use the "Add weighted" processor and set Beta to 0.0, but that's definitely a workaround :slight_smile:

I assumed data like numbers could be edited directly in processors, but I guess you would like external inputs, for example to share one constant value among many processors?

Having ROIs of images is possible but not very convenient if you have many. Can you explain your use case in detail?

Can you explain?

I suppose you mean blocks that are not just a single OpenCV function, but a combination of them. I have a few ideas for that, for example loading a processor from another CvComposer file, or from a C++/Python plugin.

This one is a bit buggy indeed, I should spend some time on it :slight_smile:

That’s the data viewer :slight_smile:


you got me there. didn’t look for that at all. now I did. interesting. so it supports a whole lot of types. I always wanted that in ImagePlay but it can’t.

ports, plugs, eh, it’s a visual thing, terms aren’t too important. I didn’t even know what to call the boxes/blocks/widgets. wasn’t sure if there was established vocabulary. as for the word “plug” itself, that literally is something that plugs a hole (or the complement to a jack/socket, …)

watch the cursor:

ah, good to know :slight_smile: I didn’t actually try to generate other types of data.

you don’t have to come up with all the synonyms. the project is on github. people can always create pull requests.

ah, right, for such input types, instead of a connection from elsewhere, a block also provides manual entry directly “inside” of the block, that’s perfectly fine. I didn’t think about that enough.

can't really. annotation tasks perhaps. my latest use case for this type of thing was not ROIs but a set of points (for a perspective transform).

mostly I'm disappointed in OpenCV's selectROI and selectROIs functions (they showed up in recent years). neither allows passing an ROI into the function, so they only support creating data, not modifying it. also, the plural form doesn't show the ROIs you already made… so, fairly useless for any kind of ad-hoc annotation task.

that is perhaps way out of scope for something based on data flow graphs.

the usual solutions in Machine Vision are somewhat like data flow graphs. the programmer stacks up function blocks (strictly vertically) and wires them up. execution order is then determined by the vertical order, like a program, and the wiring translates into variables. a more constrained (re)presentation of the data flow graph. removing some degrees of freedom for the position of functional blocks can make the graph look tidier, but also hide its true shape in something that merely looks linear (list/vertical arrangement), but isn’t.

here’s one example: https://www.adaptive-vision.com/en/software/studio/

edit: oh, I just figured out the image viewer tool windows can be docked

and one other thing: scroll wheel on the area is inconsistent. while hovering over a block, it scrolls. while hovering over background, it zooms.

Ok, now I see! That's indeed a bit confusing…

Hmm, I think annotation is very specific to the application, so making something generic may be a bad idea. By the way, you can use the "Sub image" processor to extract an ROI. However, it doesn't support multiple rectangles; that would be an improvement.
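Under the hood, a "Sub image" extraction is essentially an array slice; a minimal sketch (hypothetical helper, assuming an (x, y, width, height) ROI convention):

```python
import numpy as np

def sub_image(img, roi):
    """Extract a rectangular ROI given as (x, y, width, height)."""
    x, y, w, h = roi
    return img[y:y + h, x:x + w]
```

Supporting multiple rectangles would then just mean mapping this over a list of ROIs.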

Ok, from what I understand, the purpose of these applications is not the same: they are intended for non-developers who want to set up a full detection stack. It looks more like a dashboard builder. But this is interesting.

Very nice algorithm, I see you figured out how to use the lists! But it seems you are used to vertical workflows; you should try horizontal :slight_smile:
I will see whether the tool windows can be docked by default…

Noted !

I will convert all your remarks to issues/feature requests on GitHub so that I don't forget anything. Thank you again for this detailed feedback.


I would suggest using this contact form to propose writing an article about your tool for the OpenCV news feed.
See for instance some news about different computer vision tools:


Looking quickly at the repo, I would also suggest supporting Deep Learning inference.
In my opinion, your tool would be perfect for bridging "classical computer vision operators" and DL models, in addition to being the perfect tool for quickly testing a DL model.


I like the idea of using blocks to chain and compose different computer vision operations.
Maybe you can find some ideas with these tools:

Thank you for this suggestion, I will probably do it! I have also looked at the other tools; it is always interesting to see different GUIs. My own original inspiration is Blender, but there is definitely room for improvement!

That would probably be a good feature indeed. I see many people migrating to DL detection instead of using basic functions, so having both would be great. However, I have never practiced it myself, so I don't know this part of OpenCV or how to use it. I will add another feature request.

Since I am currently learning Blender, the Compositing Nodes are definitely a good source of inspiration.


Maybe your tool could be a great demonstrator for this OpenCV functionality:


Looking at it again, your tool could be "perfect":

  • in terms of usability,
  • and from a performance point of view.

I am wondering whether your tool could be a great demonstrator for the OpenCV G-API module, and therefore whether it could be interesting to them as a showcase of that feature.

About Halide-lang and the paper "Halide: Decoupling Algorithms from Schedules for High-Performance Image Processing".

Hi Eduardo, and thank you for your constructive answer!

Actually, Blender is one of my favorite software packages and my original source of inspiration :slight_smile: And yes, there may be some GUI details I could take inspiration from.

I was not aware of the Graph API, but this is definitely close to an idea I had in mind: my ultimate purpose is to have a separate library that executes a CvComposer file directly from code, so that you can design your algorithm in the GUI and not have to re-code it in your application, which is what I currently do. I'm not 100% sure G-API can fit CvComposer, but if it does, what is the next step?

I don’t have experience with G-API.

But you could start with:

In the same spirit : chaiNNer

A flowchart/node-based image processing GUI aimed at making chaining image processing tasks (especially upscaling done by neural networks) easy, intuitive, and customizable.

No existing upscaling GUI gives you the level of customization of your image processing workflow that chaiNNer does. Not only do you have full control over your processing pipeline, you can do incredibly complex tasks just by connecting a few nodes together.

chaiNNer is also cross-platform, meaning you can run it on Windows, MacOS, and Linux.

For help, suggestions, or just to hang out, you can join the chaiNNer Discord server

Remember: chaiNNer is still a work in progress and in alpha. While it is slowly getting more to where we want it, it is going to take quite some time to have every possible feature we want to add. If you’re knowledgeable in TypeScript, React, or Python, feel free to contribute to this project and help us get closer to that goal.