The Airware aerial data processing engine is smarter than ever before. We were already using deep learning techniques to automatically detect ground control points in our aerial surveys, but our most recent release elevates our capabilities to a new level. Here is a sneak peek of our latest deep learning results: 100% automatic digitization for large, complex survey sites.
Machine learning provides automated tools for accurately measuring and reporting stockpile contours and volumes.
Airware provides advanced analytics to improve efficiency, performance, safety, and production tracking for its customers, turning aerial data into business insights for a wide range of industries. Which haul roads require optimization? How can fuel consumption be reduced for a machine fleet? Is a site in compliance? How can stockpile inventory be tracked automatically?
The answers to many of these questions depend on digitizing the physical world. For instance, to determine which parts of a haul road need to be optimized, the site must first be digitally segmented to find the roads. The same is true for stockpile volume computations, where the exact contour of each pile must be calculated.
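As a rough illustration of the second case, once a pile's contour has been segmented, its volume can be approximated by summing surface heights above a base plane over the pile's footprint in the digital surface model. This is a minimal sketch, not Airware's production method; the function name, the flat-base assumption, and the toy grid are ours.

```python
import numpy as np

def stockpile_volume(dsm, base_elevation, mask, cell_size):
    """Approximate pile volume as the sum of (surface - base) heights
    over the pile footprint, scaled by the ground area of one cell.

    dsm: 2D array of surface elevations (meters)
    base_elevation: assumed flat base plane under the pile (meters)
    mask: boolean 2D array, True inside the segmented pile contour
    cell_size: ground sampling distance of one cell (meters)
    """
    heights = np.where(mask, dsm - base_elevation, 0.0)
    heights = np.clip(heights, 0.0, None)  # ignore cells below the base plane
    return heights.sum() * cell_size ** 2

# Toy example: a 4x4 tile at 0.5 m resolution with a 2x2 pile
# rising 3 m above a 100 m base plane.
dsm = np.full((4, 4), 100.0)
dsm[1:3, 1:3] = 103.0
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
volume = stockpile_volume(dsm, base_elevation=100.0, mask=mask, cell_size=0.5)
# 4 cells x 3 m height x 0.25 m^2 per cell = 3.0 cubic meters
```

Real pipelines interpolate the base surface from the terrain around the pile rather than assuming a flat plane, which is why the contour itself must be exact.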
The following example illustrates large stockpile contours detected automatically, along with vegetation and water.
The following example illustrates automatic detection of quarry face, toe, and crest for safety compliance.
For Airware, digitizing the physical world started back in 2013. As with many projects, we started manually and quickly moved to more traditional morphology-based computer vision algorithms. This helped us achieve partially automatic digitization. With such methods, a human still has to review the computer's digitization to correct false positives and incomplete segments. While time-consuming, this process has proven critical in building a substantial database of aerial drone data with carefully annotated physical assets. In other words, a comprehensive training set for our digitization of the physical world.
Today, I'm sharing a major leap forward that has reduced the human quality control required to a cursory review. This is possible thanks to a new kind of algorithm: deep learning-based semantic image segmentation. Deep learning has gained significant attention for applications ranging from social network image analysis to autonomous vehicle perception.
From a technical point of view, implementing effective deep learning algorithms is challenging for multiple reasons. First, much of the recent deep learning research is based on ground imagery for tasks like facial recognition and robotic navigation. Drone imagery is taken from a unique perspective for which publicly available training data does not exist. Second, information from the aerial data-derived 3D surface model is also critical for our deep learning methods, requiring heavy modification of existing algorithms.
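One common way to feed surface-model information into a segmentation network is to stack a normalized elevation channel alongside the RGB orthophoto, producing a four-channel input tensor. The sketch below shows that idea only; the function name, normalization scheme, and tile sizes are our assumptions, not a description of Airware's actual architecture.

```python
import numpy as np

def build_input_tensor(rgb, dsm):
    """Stack an RGB orthophoto tile with a normalized elevation channel
    derived from the digital surface model, yielding an (H, W, 4) input.

    rgb: (H, W, 3) uint8 orthophoto tile
    dsm: (H, W) float array of surface elevations (meters)
    """
    rgb = rgb.astype(np.float32) / 255.0   # scale colors to [0, 1]
    elev = dsm - dsm.min()                 # height relative to tile minimum
    elev = elev / max(elev.max(), 1e-6)    # normalize to [0, 1]
    return np.dstack([rgb, elev.astype(np.float32)])

# Example: a 64x64 tile with random colors and elevations.
tile = build_input_tensor(
    rgb=np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8),
    dsm=np.random.uniform(95.0, 105.0, (64, 64)),
)
# tile.shape is (64, 64, 4), all values in [0, 1]
```

Pretrained backbones typically expect three input channels, which is one reason existing models need modification before they can consume this kind of tensor.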
We’ve gone through several iterations: different models, different input tensors, different learning strategies. With each iteration, we’ve seen consistent improvement, and our latest model outperforms benchmark algorithms at segmenting vast quantities of aerial data, including our tests of a DeepLabv3+ implementation, a top performer on the PASCAL VOC 2012 leaderboard.
I’d like to give my sincere thanks to everyone involved with the project. Foremost, our core team of data scientists, Clément Fouquet, Damien Grosgeorge, and Nicolas Goguey, for their desire to always go beyond and learn more, and our production team, Thomas Sitbon, Guillaume Durand-Lienart, and Mihaela Dragoi, for their continuous review and positive feedback. And thanks to everyone else who has contributed!
I couldn’t be prouder of our team and our company.
Stay tuned for more!