Opencv_createsamples not using background

Hi,

I’m having some trouble running opencv_createsamples on Linux Mint. In particular, I don’t think the background file is being picked up.

Here is the command that I’m running:

opencv_createsamples -info fencers.info -bg bg3.dat -w 24 -h 48 -vec fencers_w_bg.vec -show -num 3

For reference

me@inspiron:~/python/opencv$ tail bg3.dat 
neg3/UMD_001.jpg
me@inspiron:~/python/opencv$ ls neg3/UMD_001.jpg -l
-rw-rw-r-- 1 me me 73579 Jul  6 10:20 neg3/UMD_001.jpg
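
(In case it’s a path problem, here’s a quick way to sanity-check that every line in bg3.dat resolves to a readable file — a minimal Python sketch:)

import os

# flag any line in the background file that doesn't resolve to a file
with open("bg3.dat") as f:
    for line in f:
        path = line.strip()
        if path and not os.path.isfile(path):
            print("missing:", path)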

Now what I expect to see is 3 of my positive images pasted onto the background image (a bit like here http://note.sonots.com/?plugin=ref&page=SciSoftware%2Fhaartraining&src=0001_0351_0227_0115_0115.jpg&fbclid=IwAR2nPQuvf6CfqkghbJRLZ1vcbJlhrenZgfy3zMcdnUVTNcsqdDAES5GxiF4)

Instead I just see 3 of my positive images with no background image.

I get the same result if I give it a background file that doesn’t exist e.g.

opencv_createsamples -info fencers.info -bg does_not_exist.dat -w 24 -h 48 -vec fencers_w_bg.vec -show -num 3

I think I’m having a problem similar to this one:
https://answers.opencv.org/question/93312/opencv_createsamples-and-training-cant-read-negativestxt/

So I installed ImageMagick and ran the following in the neg3 directory:

for i in *; do convert "$i" -colorspace gray "$i"; done

Still no joy.

Any help would be appreciated. I have tried several different background images in different formats, using absolute paths, etc.

just out of curiosity, what are you trying to detect here?
(what are you trying to train it on?)

doesn’t createsamples print out all the variables? may we see the console output?

I’m trying to detect fencers (clipped from videos of matches).

Here’s the output:

me@inspiron:~/python/opencv$ opencv_createsamples -info fencers.info -bg bg3.dat -w 24 -h 48 -vec fencers_w_bg.vec -show -num 3
Info file name: fencers.info
Img file name: (NULL)
Vec file name: fencers_w_bg.vec
BG  file name: bg3.dat
Num: 3
BG color: 0
BG threshold: 80
Invert: FALSE
Max intensity deviation: 40
Max x angle: 1.1
Max y angle: 1.1
Max z angle: 0.5
Show samples: TRUE
Scale: 4
Width: 24
Height: 48
Max Scale: -1
Create training samples from images collection...
Done. Created 3 samples

sorry to say so, but it won’t work.
i haven’t seen any of your images, but cascades only work with rigid objects (e.g. a stop sign), while fencing people are highly dynamic; you can’t assume they have a common, unique pose.

also, training “real world” cascades needs a ton of “real world” images; synthesizing positives from only a few never works here.

this means your background file is ignored (as far as i can tell, -bg only takes effect together with -img, where a single positive is distorted and pasted onto the negatives).
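
if you want to see the pasted-onto-background behaviour from that haartraining page, the command would look something like this (untested; fencer.png is just a placeholder for one positive image):

opencv_createsamples -img fencer.png -bg bg3.dat -w 24 -h 48 -vec fencers_synth.vec -show -num 3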

again my advice would be: read up on re-training dnns like YOLO or SSD, which can be done with only a few hundred (positive) images
(but no, don’t expect it to be easy …)

Thanks for the advice. I’ve experimented a bit with a dnn pre-trained for detecting people and it seems to work for detecting fencers. (Turns out fencers count as people).

This is more a learning exercise for me though, so I’d still like to get the cascade training working, even if the result isn’t useful.
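
In case it helps anyone, this is roughly what that experiment looked like (a minimal sketch; the MobileNet-SSD Caffe files and frame.jpg below are placeholders for whichever pre-trained person detector and test image you use):

import cv2

# placeholder model files -- any Caffe-format SSD trained on PASCAL VOC works the same way
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

img = cv2.imread("frame.jpg")
h, w = img.shape[:2]

# MobileNet-SSD expects a 300x300, mean-subtracted, scaled input
blob = cv2.dnn.blobFromImage(cv2.resize(img, (300, 300)),
                             0.007843, (300, 300), 127.5)
net.setInput(blob)
detections = net.forward()  # shape (1, 1, N, 7): [_, class, conf, x1, y1, x2, y2]

PERSON = 15  # "person" in the PASCAL VOC class list this model uses
for i in range(detections.shape[2]):
    if int(detections[0, 0, i, 1]) == PERSON and detections[0, 0, i, 2] > 0.5:
        # box coordinates come back normalised to [0, 1]
        x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * [w, h, w, h]).astype(int)
        cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)

cv2.imshow("fencers", img)
cv2.waitKey(0)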

If I’m reading that correctly, when you supply an infoname and no imgname, the background isn’t used. So my command is working as intended?

yes.
again, please note that this gets very brittle with pose variation; you want to restrict it to a single pose. for example, there is a side-face cascade, but only for the left side
(you have to flip the img for the other side, as in the snippet below).
if you try to train it on both at the same time, you only “smear” the inference
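
(a minimal sketch of that flip in OpenCV-Python, with placeholder filenames:)

import cv2

img = cv2.imread("left_profile.jpg")   # placeholder path
flipped = cv2.flip(img, 1)             # 1 = flip around the vertical axis
cv2.imwrite("right_profile.jpg", flipped)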

anyway, start collecting data; you’ll need a lot.

Welp, I wish I’d asked here before banging my head against this for several days.

Thanks again for the help.