I am trying to train an object detector with Haar training. I have 40000 samples: 30000 positives and 10000 negatives. My samples don't need cropping; each of them is already 48x48 pixels, and I am setting the crop size to the same 48x48, so no cropping happens on my samples. Is this approach right, or should I use bigger samples and crop them?
I have tried many training runs, but all of them crashed. I thought this might be the issue.
having cropped positive samples is ok.
you still need an annotation txt file, containing a line for each image:
/full/path/to/pos_image 1 0 0 48 48
however, cropped negatives are a problem, since the training tries to crop many (like hundreds of) neg "windows" from each larger image. if your neg image only yields a single window, 10000 might not be enough (see the sketch below for how the positive and negative lists differ)
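purely as an illustration (folder names are made up, adjust to your own layout), building the two text files could look roughly like this -- the positives get the fixed "1 0 0 48 48" annotation since the object fills the whole crop, the negatives are just one path per line to a *large* scene image:

```python
import os

# hypothetical folders -- not from your setup
POS_DIR = "positives"   # cropped 48x48 "object" images
NEG_DIR = "negatives"   # larger, uncropped background images

# annotation file for the positives: "path count x y w h" per line;
# already-cropped samples fill the whole image, hence "1 0 0 48 48"
with open("info.dat", "w") as f:
    for name in sorted(os.listdir(POS_DIR)):
        f.write(f"{os.path.abspath(os.path.join(POS_DIR, name))} 1 0 0 48 48\n")

# background file for the negatives: one image path per line;
# these should be large scene images, NOT 48x48 crops, so the trainer
# can sample many negative windows from each of them
with open("bg.txt", "w") as f:
    for name in sorted(os.listdir(NEG_DIR)):
        f.write(f"{os.path.abspath(os.path.join(NEG_DIR, name))}\n")
```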
at least show the resp. cmdline, and the resulting errors, please
p.s. the HOGDescriptor expects cropped images like yours, so you might as well try to train one of those, too.
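if you want to try the HOG route, a rough sketch could look like this (not tested against your data; the folder names and the 48x48 HOG window layout are assumptions):

```python
import os
import cv2
import numpy as np

# hypothetical folders holding 48x48 crops of the two classes
POS_DIR, NEG_DIR = "positives", "negatives_cropped"

# HOG window matched to the 48x48 sample size
# (block 16x16, stride 8x8, cell 8x8, 9 bins)
hog = cv2.HOGDescriptor((48, 48), (16, 16), (8, 8), (8, 8), 9)

def descriptors(folder):
    out = []
    for name in sorted(os.listdir(folder)):
        img = cv2.imread(os.path.join(folder, name), cv2.IMREAD_GRAYSCALE)
        if img is None:
            continue
        img = cv2.resize(img, (48, 48))          # be safe about the size
        out.append(hog.compute(img).flatten())
    return out

pos = descriptors(POS_DIR)
neg = descriptors(NEG_DIR)
samples = np.float32(pos + neg)
labels = np.int32([1] * len(pos) + [0] * len(neg))

# plain linear SVM on the HOG features
svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.train(samples, cv2.ml.ROW_SAMPLE, labels)
svm.save("hog_svm.xml")
```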
no idea what you’re doing, but please do not try to artificially generate anything.
use large, realistic images with the background you’re expecting later during detection
as long as they are the same cropped size as the positives, yes
that means your positives are bad
(i hope you did not try to generate those from only a few originals, too)
it does not generate a cascade (but a pretrained SVM), see here
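to illustrate the difference (the file paths and the 48x48 HOG layout are assumptions, not from your setup): the cascade result is loaded with CascadeClassifier and scanned over a frame, while the HOG/SVM result is a plain SVM model that you apply to a descriptor you compute yourself:

```python
import cv2
import numpy as np

# 1) opencv_traincascade output -- an XML cascade usable with detectMultiScale
cascade = cv2.CascadeClassifier("cascade/cascade.xml")

# 2) HOG + SVM training output -- a plain SVM model
svm = cv2.ml.SVM_load("hog_svm.xml")
hog = cv2.HOGDescriptor((48, 48), (16, 16), (8, 8), (8, 8), 9)

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# cascade: search the whole frame for detections
boxes = cascade.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=3)

# SVM: classify one fixed 48x48 region of interest
x, y = 100, 100                       # hypothetical location
roi = cv2.resize(frame[y:y + 48, x:x + 48], (48, 48))
desc = hog.compute(roi).reshape(1, -1).astype(np.float32)
_, label = svm.predict(desc)
print("positive" if int(label[0, 0]) == 1 else "negative")
```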
Thank you for your replies again. What I am trying to do is to detect whether an air device is plugged into its holder or not, and I am collecting my image samples from a security cam. All my samples were taken from that camera's recordings, and the image quality is the same as the target view; I mean I will use the trained data on the same-quality view as the real-time view at the end of the project. So I have 2 scenarios: plugged and not plugged. My negatives were chosen from the not-plugged view and my positives are the plugged view of the device. So what I meant by generating new samples is getting more not-plugged views of the device.