Essentially, we’re using a computer to look at pictures from space (satellite images) and figure out what kind of land use they show. But instead of having someone manually go through all those images and label them as “forest,” “water,” or “city,” we can teach the computer to do it for us!
Here’s how: first, we feed a bunch of labeled satellite images into our deep learning algorithm (under the hood, that’s a neural network: lots of simple math operations that tune themselves as the computer learns). The algorithm looks at all those pictures and tries to figure out what makes them different from each other. For example, maybe “forest” images have lots of green pixels, while “water” images have lots of blue pixels.
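If you’re curious what that actually looks like, here’s a minimal training sketch using Keras. Everything specific in it (the `data/train` folder layout, the 64×64 image size, the three class names, the layer sizes) is an assumption made up for illustration, not a recipe from a real project:

```python
# A minimal sketch of the training step, assuming a labeled dataset
# arranged as data/train/forest/..., data/train/water/..., data/train/city/...
# (all names and sizes here are illustrative).
import tensorflow as tf
from tensorflow.keras import layers

# Load labeled images from folders; each folder name becomes a label.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(64, 64), batch_size=32
)

# A small convolutional network: it learns which pixel patterns
# (colors, textures, shapes) separate the classes.
model = tf.keras.Sequential([
    layers.Rescaling(1.0 / 255),            # scale pixel values to [0, 1]
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(3, activation="softmax"),  # one probability per class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# "Learning" is just nudging the math until the labels come out right.
model.fit(train_ds, epochs=5)
```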
Once the algorithm has learned how to tell the difference between these two types of images (and any others we’re interested in), it can start making predictions on its own! We feed it a new satellite image that hasn’t been labeled yet, and it uses all the knowledge it’s gained from looking at so many other pictures to guess what kind of land use is happening below.
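And the prediction step really is just a few lines. Continuing the sketch above (the file name is made up):

```python
import numpy as np

# Load one new, unlabeled image and add a batch dimension.
img = tf.keras.utils.load_img("new_scene.png", target_size=(64, 64))
batch = np.expand_dims(tf.keras.utils.img_to_array(img), axis=0)

# The model returns one probability per class; take the biggest.
probs = model.predict(batch)[0]
class_names = ["city", "forest", "water"]  # folders are read alphabetically
print(class_names[int(np.argmax(probs))], f"{probs.max():.0%} confident")
```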
For example, let’s say we have an unlabeled satellite image that looks like this:
[insert image here]
Our deep learning algorithm might look at this picture and think “hmm, there are lots of green pixels in the middle… maybe it’s a forest?” And then it would make its prediction based on all the other images it has seen before. If we compare that to another unlabeled satellite image:
[insert second image here]
Our algorithm might look at this picture and think “hmm, there are lots of blue pixels in the middle… maybe it’s water?” Of course, sometimes our deep learning algorithm will get things wrong (just like humans do!). But over time, as we feed it more and more labeled satellite images to learn from, it gets better at making accurate predictions.
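By the way, that “count the green pixels” intuition is simple enough to write down by hand. Here’s a toy version in plain NumPy; a real trained network picks up on far subtler cues (texture, shape, context), but this is a decent mental model of the kind of rule it might discover:

```python
import numpy as np

def naive_guess(image):
    """Toy heuristic: guess the label from the dominant color channel.
    `image` is an H x W x 3 array of RGB values in [0, 255]."""
    red, green, blue = image.reshape(-1, 3).mean(axis=0)
    if green > red and green > blue:
        return "forest?"  # mostly green pixels
    if blue > red and blue > green:
        return "water?"   # mostly blue pixels
    return "not sure"

# A fake 10x10 image that is mostly green -- the heuristic says "forest?"
print(naive_guess(np.full((10, 10, 3), [40, 180, 40])))
```

The real magic of deep learning is that nobody has to write rules like this one by hand; the network finds its own (much better) rules from the labeled examples.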