Predictions are the loss values within the log file, right?
If so, I need to look for values below 0.00, I suppose?
it’s the mask predictions you see during training. the preview window shows three images: the original box, the original mask, and the predicted mask. if the predicted masks are consistently sharp and clear, with no holes, you can try the model.
loss never goes below zero; zero is the optimal loss, which isn’t reachable in practice.
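a rough numpy sketch of why that’s the case (just an illustration, not the trainer’s actual loss code): the usual pixel-wise losses are averages of non-negative terms, so they can only approach 0.0, never cross it.

```python
import numpy as np

def mse(pred, target):
    # mean of squared differences -- every term is >= 0
    return np.mean((pred - target) ** 2)

def binary_cross_entropy(pred, target, eps=1e-7):
    # every term is -log(p) with p in (0, 1], so it is >= 0
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))

target = np.random.randint(0, 2, size=(64, 64)).astype(np.float32)   # stand-in ground-truth mask
pred = np.clip(target + np.random.normal(0, 0.1, target.shape), 0, 1)  # imperfect prediction

print(mse(pred, target))                   # small positive number
print(binary_cross_entropy(pred, target))  # small positive number
print(mse(target, target))                 # exactly 0.0 only for a perfect prediction
```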
AIUI the training process has a “learning rate”, which controls how big each update step is, and that learning rate decreases over epochs.
if you jump “into” the training late, the learning rate will already be low, and the updates won’t be able to move the network strongly enough to learn new stuff.
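As a rough illustration (made-up numbers, not this trainer’s actual schedule), a decaying learning rate means each step moves the weights less and less, which is why resuming very late only nudges the network:

```python
# hypothetical exponential decay schedule, purely for illustration
initial_lr = 1e-3
decay = 0.95  # multiplicative decay per epoch (assumed value)

for epoch in range(0, 101, 20):
    lr = initial_lr * (decay ** epoch)
    print(f"epoch {epoch:3d}: lr = {lr:.6f}")
# the printed learning rate shrinks from 0.001000 at epoch 0
# toward a few millionths by epoch 100
```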
So, in this particular example, you’re suggesting to watch the on-screen results carefully and stop the training as soon as the results look OK, otherwise the training “efficiency” could decrease?
no. that’s not what I meant to convey. you need to read what preceded the quote. that is important for context.
I don’t think I can rephrase that. just ignore my point.
About the data augmentation:
What about adding small rotations?
In real life the labels may appear at many angles.
The images have wide enough margins that a small rotation wouldn’t crop anything important.
Brightness and contrast augmentation could also work.
there are rotation, shift and scale (zoom±) augmentations, as well as warp, which in my opinion is the main reason the model doesn’t overfit.
sometimes I use color augmentations, but most of the time they aren’t necessary.
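a rough sketch of that kind of pipeline, using the third-party albumentations library purely as an example (it’s not the trainer’s own augmentation code):

```python
import albumentations as A
import numpy as np

augment = A.Compose([
    A.ShiftScaleRotate(shift_limit=0.05, scale_limit=0.10, rotate_limit=15, p=0.7),
    A.ElasticTransform(p=0.3),           # warp-style deformation
    A.RandomBrightnessContrast(p=0.3),   # optional color augmentation
])

image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)  # stand-in image
mask = np.random.randint(0, 2, (256, 256), dtype=np.uint8)        # stand-in mask

out = augment(image=image, mask=mask)    # same spatial transform applied to image and mask
aug_image, aug_mask = out["image"], out["mask"]
```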
Further reading from PyImageSearch here: