Training EcoSat Vegetation Classifications: User tips

What is EcoSat?
EcoSat delivers one-of-a-kind semi-automated cloud processing of very high resolution satellite imagery to map nearshore vegetation and coastal benthic habitats. EcoSat combines the latest multispectral imagery from reputable providers such as DigitalGlobe (WorldView-2, -3, and -4), Airbus Defence and Space (Pléiades), and ESA's Sentinel program with industry-standard image processing techniques. Amazon Web Services cloud infrastructure rapidly processes imagery, creates reports and image tiles, and delivers detailed habitat maps to the user's BioBase dashboard, where they can be analyzed and shared. Average turnaround time from imagery tasking order to delivery of results is 90 days. These rapid, standardized processing methods are allowing entities like the Florida Fish and Wildlife Conservation Commission to establish regular monitoring programs for emergent vegetation. By contrast, conventional remote sensing mapping projects are long, expensive, one-off efforts built on non-repeatable, tailored techniques, which has prevented natural resource agencies from assessing how much habitats are changing in response to environmental stressors such as invasive species and climate change.

Can EcoSat Identify Vegetation Species?
Yes! Using the industry-standard Random Forest machine learning algorithm, EcoSat can perform supervised classification using training data from users, or unsupervised classification based on automated pixel/object clustering. Supervised classifications produce actual vegetation species classifications, as seen in Figure 1. If no training data exist or species communities are mixed, unnamed objects are delineated and the user can later add names to classes or group them as they see fit (Figure 2).
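To make the supervised workflow concrete, here is a minimal sketch of Random Forest classification of pixel spectra. The band values, species labels, and class means are all invented for illustration; EcoSat's actual features and training pipeline are not public.

```python
# Hypothetical sketch: training a Random Forest on multispectral pixel
# values labeled by in situ waypoints. All numbers here are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic reflectance in 4 bands for pixels sampled inside two
# monotypic beds (labels come from user-submitted waypoints).
bulrush = rng.normal(loc=[0.05, 0.10, 0.30, 0.55], scale=0.02, size=(30, 4))
spatterdock = rng.normal(loc=[0.08, 0.15, 0.20, 0.40], scale=0.02, size=(30, 4))
X = np.vstack([bulrush, spatterdock])
y = ["bulrush"] * 30 + ["spatterdock"] * 30

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Classify an unlabeled pixel whose spectrum resembles bulrush.
print(clf.predict([[0.05, 0.11, 0.29, 0.54]])[0])  # bulrush
```

In a real workflow each training pixel would be extracted from the imagery at a waypoint location, which is why waypoints from clean, monotypic beds matter so much (see the figures below).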

Figure 1. EcoSat uses user-submitted in situ waypoints to train image classifiers in a machine learning process to create discrete beds of floating-leaf and emergent vegetation. These polygon objects are summarized in reports and available for download as vector shapefiles.
Figure 2. Where no in situ species data exist to train classifiers, EcoSat uses an unsupervised object-based clustering algorithm to identify unique “objects.”  Users can enter identification information after the fact within BioBase (e.g., mixed species communities).

How do I Know Vegetation Classifications Are Accurate?
In large part, this is up to you. If you have more than 15 in situ points (the more the better) collected from the interior of large monotypic beds, the machine learning process can reliably correlate a particular spectral response with your input information. The images below will help you create robust classifications.

Figure 3. Submitting in situ waypoints from the interior of dense monospecific stands of aquatic plant species to BioBase prior to EcoSat orders increases the confidence in supervised classifiers and accurate identification of species.
Figure 4. Avoid submitting in situ waypoints from mixed beds for training of classifiers.  In this case, a maidencane spectral signature could “contaminate” a spatterdock classifier. Instead, rely on unsupervised classification and after-the-fact classification as seen in Figure 2 (e.g., Maidencane-Spatterdock mix).


Figure 5. Avoid capturing and classifying imagery during periods of the year when brown decaying vegetation is mixed with green vegetation. In this case, bulrush could be mistaken for cattail. Collect imagery during peak vegetation growth (although in the case of wild rice, one might deliberately wait until fall, when its brown color is most distinct from other vegetation).

A second way to verify classifications is to take your map out into the field. We make the validation process easy by providing waypoint creation tools in EcoSat and by automatically generating a Lowrance or Simrad GPS chart of classifications that you can take into the field with you (Figure 6).

Figure 6. Screenshot of a Lowrance GPS chart of the EcoSat classifications seen in Figure 2. GPS chart files are created automatically with each EcoSat order. These can be used to verify classifications or facilitate on-the-ground management.

What I see from my boat doesn’t match what EcoSat tells me. Why?
The answer to this question partially lies within the concept of the Minimum Mapping Unit (MMU). The MMU is the smallest scale at which an object is mapped as a discrete entity. Any object smaller than the MMU is incorporated into the larger object within which it is nested. At the smallest scale, the image resolution (e.g., pixel size) could serve as the MMU. However, maps of large scenes quickly become overwhelmingly busy with detail (Figure 7). As such, it is common to use a larger MMU to create more generalized maps of natural features.
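The MMU rule can be sketched in a few lines: any object whose area falls below the MMU gets absorbed by a neighbor. The object IDs, areas, labels, and adjacency below are invented for illustration (real implementations work on classified polygons).

```python
# Illustrative MMU sketch: merge any classified object smaller than the
# minimum mapping unit into its largest neighbor. All values are made up.
MMU = 1000  # minimum mapping unit, square meters

areas = {"A": 13600, "B": 450, "C": 2200}   # object id -> area (m2)
labels = {"A": "maidencane", "B": "spatterdock", "C": "bulrush"}
neighbors = {"B": ["A", "C"]}               # B borders both A and C

for obj, area in list(areas.items()):
    if area < MMU:
        # Absorb the sub-MMU object into its largest neighbor
        host = max(neighbors[obj], key=lambda n: areas[n])
        areas[host] += areas.pop(obj)
        labels.pop(obj)

print(labels)  # {'A': 'maidencane', 'C': 'bulrush'}
```

Here the 450 m2 spatterdock patch disappears into the 13,600 m2 maidencane bed: exactly the kind of generalization that can surprise a field biologist standing inside that patch.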

Figure 7. EcoSat vegetation bed detection in the Lake Okeechobee NW marsh from very high resolution Airbus Pléiades satellite imagery with a minimum mapping unit of 100 m2 (left) and 1,000 m2 (right).

A problem occurs, however, when a biologist ventures into the field, navigates to a random validation waypoint, and happens to land within a vegetation bed that is smaller than the MMU, or surveys a field of view that is larger than the MMU. In both cases, what they record on their field sheet will not correspond to what EcoSat classified (Figure 8). As such, it is important that the scale of field verifications match the scale at which vegetation "objects" are classified in EcoSat, or that several verification points be collected within an aggregated bed.

Figure 8. Example of a machine learning classification grouping a collection of smaller units into one discrete vegetation bed (e.g., maidencane is the major species type in this bed). However, the field biologist may classify this area as "mixed" based on where they happened to land in the bed and the size of their field of view (in this case, 1,000 m2). Indeed, it may have been mixed at that location, but overall, maidencane constituted the dominant species in this 13,600 m2 area.

Digging deeper into supposed misclassifications – GPS error:
Figure 9 demonstrates another issue where a spreadsheet may indicate misclassifications, but a closer look in GIS reveals measurement error as the cause. In this example, the field biologist captured a verification waypoint on the edge of a maidencane bed next to a bulrush bed. Consumer GPS typically deviates by 2 m in any one direction for spot locations. A straight spatial join in GIS shows that the satellite classification wrongly classified maidencane as bulrush, but a closer look suggests the classification was actually correct. Lesson learned: ensure your GPS is calibrated, and capture verification waypoints in the middle of homogeneous beds.

Figure 9. Maidencane field sample (black text) recorded with a GPS on the edge of a maidencane bed. Because the point actually falls within a bulrush bed, a GIS spatial join will record this as a misclassification. This stresses the importance of capturing training and verification points within the middle of large (>> the GPS position error) homogeneous vegetation beds.
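One way to guard against GPS-driven false mismatches like the one above is to flag a disagreement only when the waypoint sits farther from a bed boundary than the GPS error radius. This sketch reduces the geometry to a single straight boundary at x = 0; the function name, labels, and coordinates are all hypothetical.

```python
# Hypothetical filter for GPS-driven "misclassifications". Simplified
# geometry: maidencane bed at x < 0, bulrush bed at x >= 0, with the
# shared boundary at x = 0.
GPS_ERROR_M = 2.0  # typical consumer GPS spot-location deviation

def confident_mismatch(x, field_label):
    """True only if the map label disagrees with the field label AND
    the point is farther than GPS_ERROR_M from the bed boundary."""
    map_label = "maidencane" if x < 0 else "bulrush"
    near_boundary = abs(x) <= GPS_ERROR_M
    return map_label != field_label and not near_boundary

# Waypoint 0.5 m inside the bulrush polygon but labeled maidencane in
# the field: within GPS error of the edge, so not counted as an error.
print(confident_mismatch(0.5, "maidencane"))   # False
# Same field label 10 m inside the bulrush bed: a real disagreement.
print(confident_mismatch(10.0, "maidencane"))  # True
```

In a real GIS workflow the same idea can be applied by buffering validation points by the GPS error and discarding mismatches whose buffer intersects more than one classified polygon.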

Harness the power of the cloud to iteratively learn and rapidly map vegetation and coastal habitats

As more processing rules are established and species classification libraries grow (not to mention steady increases in computing power and sensor resolution), outputs will be even more precise and accurate, faster, and cheaper.  This will empower natural resource managers with more and better information about the status of natural habitats and facilitate more effective conservation.


Author: biobasemaps

BioBase is a cloud platform for the automated mapping of aquatic habitats (lakes, rivers, ponds, coasts). Standard algorithms process sonar data files (EcoSound) and high resolution satellite imagery (EcoSat). Depth and vegetation maps and data reports are rapidly created and stored in a private cloud account for analysis and sharing. This blog highlights a range of internal and external research, frequently asked questions, feature descriptions and highlights, tips and tricks, and photo galleries.
