Remote sensing imagery provides not just one value per pixel (for example, a color value) but many values per pixel, extra data that can improve almost any analysis or decision-making process.
To understand how this works, we must first talk about spectral imagery.
Seeing the unseen

As humans we see only a very small part of the electromagnetic spectrum (what we call “visible light”), and the truth is that for centuries we experienced the world through a tiny keyhole.
In 1800 William Herschel (the same person who discovered Uranus) separated sunlight with a prism and placed thermometers under each light color.
By placing an extra one beyond the red light, he discovered that this extra thermometer had the highest temperature of all.
This was the discovery of infrared light, and the world knew for the first time that there are types of light that we cannot see with our eyes.
Only a year later, Johann Wilhelm Ritter performed the same experiment at the other end of the visible spectrum (this time beyond violet) and discovered ultraviolet light.
The electromagnetic spectrum was born.
Modern remote sensing imagery can measure various wavelengths (far beyond our known “visible light” spectrum), many of which are invisible to our eyes.
This is called multispectral imagery, and it is produced by sensors (in satellites or drones for example) that measure reflected energy within several sections (or bands) of the electromagnetic spectrum.
Since different objects and conditions reflect energy in different ways (for example, vegetation reflects light differently than water), each spectral band captures a different attribute of the scene.
Multispectral imagery can carry around 10 to 20 band measurements in each pixel of an image.
Satellites like Sentinel-2 (which provides worldwide free imagery for land and water monitoring) have 13 spectral bands, each associated with a particular wavelength range of the electromagnetic spectrum.
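A quick way to picture this is as a data cube: a minimal sketch with synthetic numbers (the array values below are random placeholders, not real reflectance data) of how a 13-band, Sentinel-2-style scene looks in memory.

```python
import numpy as np

# Hypothetical sketch: a multispectral scene as a 3-D array.
# A Sentinel-2-style image with 13 spectral bands becomes a
# (rows, cols, bands) cube: every pixel holds 13 reflectance values.
rows, cols, bands = 100, 100, 13
scene = np.random.rand(rows, cols, bands)  # placeholder reflectance values in [0, 1]

# One pixel is a 13-element spectral signature, not a single color value.
pixel_signature = scene[50, 50, :]
print(pixel_signature.shape)  # (13,)
```

A hyperspectral scene is the same structure with the last axis grown to hundreds of bands.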
But if you think that 13 bands are not enough, then you might consider going hyperspectral.
Hyperspectral imagery can provide hundreds of band measurements (yes, hundreds per image!) across the electromagnetic spectrum, giving much higher sensitivity to subtle variations in what is being observed.
Images produced from hyperspectral sensors contain substantially more data than the ones generated by multispectral sensors and have a lot of extra potential to detect differences among features.
For example, while multispectral imagery can be used to map forested areas, hyperspectral imagery can be used to map tree species within the forest.
The company Satellogic is a leading supplier of high-resolution hyperspectral imagery.
Beyond providing businesses in agriculture, mining, construction and environmental mapping with a competitive edge, Satellogic has taken the bold step of providing free hyperspectral data for open scientific research and humanitarian causes.
[Figure: different Landsat bands exploded out like layers of onion skin, with their corresponding spectral band wavelengths and common band names. Source: NASA]

Why is Data Science a real game changer?

Since images are datasets, you might consider carrying out mathematical operations with them.
By performing algebra with spectral bands it is possible to combine two or more wavelengths to highlight particular features like types of vegetation, burned areas or spot the presence of water.
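The classic example of band algebra is the NDVI (Normalized Difference Vegetation Index), which combines the red and near-infrared bands to highlight vegetation. Here is a minimal sketch with synthetic stand-in values for the two bands:

```python
import numpy as np

# Synthetic stand-ins for a red band and a near-infrared (NIR) band.
red = np.array([[0.10, 0.30],
                [0.25, 0.05]])
nir = np.array([[0.60, 0.35],
                [0.30, 0.55]])

# NDVI = (NIR - Red) / (NIR + Red); values near +1 suggest dense vegetation.
ndvi = (nir - red) / (nir + red)
print(ndvi)
```

The same element-wise pattern extends to burned-area indices or water indices, simply by swapping in the appropriate bands.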
It is also possible to combine bands from different sources (such as different satellites) and reach deeper insights: imagine combining light-reflectance bands with soil humidity and land elevation, all stacked in the same dataset. If each band or image in the stack has the same number of rows and columns, then the pixels in one band fall in the same spatial location as the pixels in every other band.
Since each pixel is geocoded, we can add different layers of information at any specific location.
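A sketch of that stacking idea, again with fabricated values: as long as every layer is co-registered (same rows, columns and geocoding), pixel (i, j) refers to the same ground location in each one.

```python
import numpy as np

# Synthetic co-registered layers, possibly from different sources.
rows, cols = 50, 50
reflectance = np.random.rand(rows, cols)        # e.g. a NIR band
soil_moisture = np.random.rand(rows, cols)      # e.g. from another satellite
elevation = np.random.rand(rows, cols) * 1000   # metres, hypothetical

# Stack along a new axis: one (rows, cols, layers) cube.
stack = np.dstack([reflectance, soil_moisture, elevation])

# Every attribute for one ground location can now be read in a single step.
print(stack[10, 20])  # [reflectance, soil moisture, elevation] at that pixel
```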
The good news is this isn’t as hard as it sounds.
Google Earth Engine provides petabytes of satellite imagery and geospatial datasets, backed by Google's own data centers and APIs, all in a user-friendly platform that lets any user test and implement their own models and algorithms.
You can perform mathematical and geometric operations, classify pixels, or recognize objects, all running in the cloud at high speed.
Once again, Google has scaled up and simplified what some years ago was extremely complex and expensive.
Moreover, the availability of remote sensing imagery has grown so much over the years that it is now possible to add a time dimension.
The number of active satellites has increased and their revisit frequency has improved, so we can incorporate robust temporal information into almost any model.
By performing time-series analysis on images we move beyond mono-temporal information (a single glimpse in time) into a setting where properties change over time and we can observe cycles and trends.
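A toy sketch of separating a cycle from a trend: the monthly NDVI values below are fabricated (a seasonal sine plus a slow upward drift), but the pattern of averaging over full years to cancel seasonality is a standard first step.

```python
import numpy as np

# Fabricated monthly mean NDVI for one field over two years:
# a seasonal cycle (sine, 12-month period) plus a slow upward trend.
months = np.arange(24)
ndvi_series = 0.4 + 0.2 * np.sin(2 * np.pi * months / 12) + 0.005 * months

# Averaging each full year cancels the seasonal cycle and exposes the trend.
year_means = ndvi_series.reshape(2, 12).mean(axis=1)
annual_change = year_means[1] - year_means[0]
print(f"NDVI change year over year: {annual_change:+.3f}")
```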
Drawing conclusions in this big-data context is extremely challenging, since none of this data would make any sense without the ability to process it.
How do you manage and interpret billions of pixels across enormous datasets?

This is where data science is driving a real revolution, fueling deep disruption in sectors like agriculture.
Data science is a relatively recent discipline focused on extracting meaningful insights from data for strategic decision making, product development, trend analysis and forecasting.
Its concepts and methodologies are drawn from disciplines like statistics, data engineering, natural language processing, data warehousing and machine learning, which are exceptionally effective at handling the current explosion of data.
You could say data scientists are the best at making discoveries while swimming in data.
But of all the sub-disciplines within data science, I believe machine learning deserves special mention.
Machine learning describes algorithms that learn from data rather than being explicitly programmed, and it offers some of the best tools for coping with data abundance.
This is the area of data science most likely to provide direct and tangible benefits in the foreseeable future.
Companies like Planet exploit machine learning to transform imagery into actionable analytic feeds, tracking changes in infrastructure and land use, at a much better spatial and temporal resolution than free satellites.
From the huge universe of machine learning algorithms, Convolutional Neural Networks (CNNs) are the state of the art in image processing and are being used in fields like agriculture to reshape the way of doing things.
CNNs currently deliver high performance in terms of precision and accuracy, outperforming many other image-processing techniques.
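The operation at the heart of a CNN is the convolution: sliding a small kernel across an image. Real CNNs learn many such kernels from data; this minimal hand-written sketch uses one fixed edge-detecting kernel just to show the mechanic.

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2-D convolution (no padding, stride 1), for illustration only."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny synthetic "image": dark on the left, bright on the right.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)

# A kernel that responds to left-to-right brightness changes.
edge_kernel = np.array([[-1, 1]], dtype=float)

response = convolve2d(image, edge_kernel)
print(response)  # peaks exactly where the dark/bright boundary sits
```

A CNN stacks many learned kernels like this, layer after layer, which is what lets it pick out textures and shapes such as crop rows or tree crowns.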
C2-A2, billed as the world's first agrodroid, uses CNNs to protect equipment and people from collisions.
C2-A2 is an artificial brain that aims to become the universal control system for autonomous agricultural machinery, like harvesters, tractors or sprayers.
Human capabilities are being strengthened at an exceptional rate thanks to these disciplines.
The arrival of technology in fields like remote sensing imagery is completely changing the way we interact with the world, and who knows where this will end.
In this scenario, it is not so crazy to imagine systems that process extremely complex inputs and make decisions on their own, freeing up human time and resources for other endeavors.