Hello, I need to extract small images from a big .tif file. Suppose I have a big .tif file gathered from a UAV depicting houses. I need to extract every house from the .tif image and store them as .jpeg files → house_1.jpeg, house_2.jpeg, etc. Is there any fast way? I will define the crop area around each house. I don’t want to use MS Paint, it is very time-consuming… Thanks!!
literally, it’s just numpy slicing.
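A minimal sketch of what slicing looks like, using a fake array in place of the real .tif (which you could load with e.g. Pillow or tifffile — the save step below assumes Pillow is installed):

```python
import numpy as np

# A UAV .tif is just a big array once loaded; here we fake one:
# 1000 x 1500 pixels, 3 channels.
big = np.zeros((1000, 1500, 3), dtype=np.uint8)

# Crop area around one house: rows y0:y1, columns x0:x1.
y0, y1, x0, x1 = 200, 330, 400, 560
house = big[y0:y1, x0:x1]
print(house.shape)  # (130, 160, 3)

# Saving as JPEG is then one line, e.g. with Pillow:
# from PIL import Image
# Image.fromarray(house).save("house_1.jpeg")
```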
As I see from here: Numpy - Arrays - Example - Extracting a portion of an image using array slicing | Automated hands-on | CloudxLab, you have to define specific regions on the initial image. Too time-consuming… Can’t I crop certain parts indicated/marked by mouse? I don’t know the coordinates of every smaller image inside the .tif image…
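You can. Assuming OpenCV is installed, `cv2.selectROIs` lets you drag boxes with the mouse and returns their `(x, y, w, h)` coordinates, so the only manual work is the dragging; the crop itself stays a slice. A sketch (the OpenCV calls are shown in comments since they need a display):

```python
import numpy as np

def crop_rois(img, rois):
    """Cut out each (x, y, w, h) box from img."""
    return [img[y:y + h, x:x + w] for (x, y, w, h) in rois]

# With OpenCV you would get the boxes interactively instead of typing them
# (press Enter/Space after each box, Esc to finish):
#   import cv2
#   img = cv2.imread("uav.tif")
#   rois = cv2.selectROIs("pick houses", img)  # rows of (x, y, w, h)
#   for i, crop in enumerate(crop_rois(img, rois), start=1):
#       cv2.imwrite(f"house_{i}.jpeg", crop)

# Demo on a fake image with two hand-picked boxes:
img = np.zeros((500, 800, 3), dtype=np.uint8)
crops = crop_rois(img, [(10, 20, 100, 50), (300, 100, 80, 80)])
print([c.shape for c in crops])  # [(50, 100, 3), (80, 80, 3)]
```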
OK, I extracted object images (houses) and irrelevant images from the main .tif image so as to train my Convolutional Neural Network. I trained it, and now I need to go back to the main .tif image, identify the objects (houses) and draw a red rectangle around each house (automatically, via the CNN I created and python3). Any idea how to achieve that?
did you train a classification network?
sounds like you want a detection network here
(which you would train on annotated, not cropped, images)
again, please start at the beginning and try to explain what you want to achieve with all of this
So, the idea is that I have a big UAV .tif image. Among other things, this image includes houses. As a first step (I do not know if I am correct), I cropped small images with houses from the main UAV .tif image, and I also cropped irrelevant images. Then I used them to train a classification CNN that I built in python. The CNN works well and classifies the images with 95% accuracy. Now I want to identify the houses (only) on the main UAV .tif image and draw a red rectangle around each house that is identified. My question is: how can I achieve that?
please read up on “object detection CNNs” (like YOLO or SSD), imo that’s what you want instead.
to use your current classification nn, you’d have to go through the image with a “sliding window” and test each candidate region, probably even at multiple scales,
– not feasible, as you’d need thousands of predictions
(while the detection nns do this in a single pass)
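To see why the sliding-window count explodes, here is a tiny sketch that just enumerates the candidate positions for one window size on a hypothetical 5000 × 5000 px UAV image:

```python
def sliding_windows(height, width, win, stride):
    """Yield (x, y) top-left corners of every win x win candidate region."""
    for y in range(0, height - win + 1, stride):
        for x in range(0, width - win + 1, stride):
            yield x, y

# One scale only, 128 px windows, 32 px stride, on a 5000 x 5000 image:
n = sum(1 for _ in sliding_windows(5000, 5000, win=128, stride=32))
print(n)  # 23409 candidate regions -> 23409 CNN predictions
```

Multiply that by several scales and each candidate costing one forward pass, and a single-pass detector (YOLO / SSD) quickly looks much more attractive.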
if you’re halfway familiar with pytorch, i’d recommend
(see the tutorials section there!)
in general, do more research, please.
detecting rooftops from a uav is such a common task nowadays …
In that case I use the CNN in .h5 format that I built, right? So I should search for a “sliding window” NN? Sorry for asking so much, but I want to get the idea, because I will need to apply the same approach not only to houses but also to fruit fields with another NN, so I need to understand the rationale…
I have trained my own model and built an .h5 file. Do I need YOLO? How can I run my .h5 model on the main .tif image and detect the houses?
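You don’t strictly need YOLO; you can reuse your .h5 classifier with a sliding window, it will just be slow. A minimal sketch of the loop, where `classify` is a stand-in stub for your real model (with Keras you would do `model = tensorflow.keras.models.load_model("model.h5")` and call `model.predict` on each window instead):

```python
import numpy as np

WIN, STRIDE = 64, 64  # window size and step, tune to your house size

def classify(window):
    # Stand-in for your CNN: here "house" just means a bright patch.
    # In your pipeline this would be model.predict(...) on the window.
    return window.mean() > 128

def draw_red_box(img, x, y, w, h, t=2):
    """Paint a red border of thickness t around the box (in-place)."""
    img[y:y + t, x:x + w] = (255, 0, 0)
    img[y + h - t:y + h, x:x + w] = (255, 0, 0)
    img[y:y + h, x:x + t] = (255, 0, 0)
    img[y:y + h, x + w - t:x + w] = (255, 0, 0)

# Fake UAV image with one bright 64 x 64 "house" at (x=128, y=64):
img = np.zeros((256, 256, 3), dtype=np.uint8)
img[64:128, 128:192] = 200

hits = []
for y in range(0, img.shape[0] - WIN + 1, STRIDE):
    for x in range(0, img.shape[1] - WIN + 1, STRIDE):
        if classify(img[y:y + WIN, x:x + WIN]):
            hits.append((x, y))
            draw_red_box(img, x, y, WIN, WIN)

print(hits)  # [(128, 64)]
```

With a real model you would also want multiple window scales, a probability threshold on the prediction, and non-maximum suppression to merge overlapping boxes — exactly the machinery a detector like YOLO or SSD packages into one pass.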