3.2: Methods and Algorithms
11:10am - 11:30am
Sentinel-2 cloud free surface reflectance composites for Land Cover Climate Change Initiative’s long-term data record extension
1Brockmann Consult GmbH, Germany; 2ESA ESRIN, Italy; 3Université catholique de Louvain, Belgium
Long-term Earth Observation data records are a key input for climate change analysis and climate models. The goal of this research is to create cloud-free surface reflectance composites over Africa using Sentinel-2 L1C TOA products, to extend and enrich a time series from multiple sensors (MERIS, SPOT VGT, Proba-V and AVHRR). While the focus of previous work was to merge the best available missions, providing near-weekly optical surface reflectance data at global scale, to produce the most complete and consistent long-term data record possible, Sentinel-2 data will be used to map a prototype African land cover at 10-20 metres. To achieve this goal, the following processing methodology was developed for Sentinel-2: pixel identification, atmospheric correction and compositing. The term “pixel identification” – IdePix – refers to a classification of a measurement made by a spaceborne radiometer, for the purpose of identifying properties of the measurement that influence further algorithmic processing steps. Most important is the classification of a measurement as being made over cloud and cloud shadow, a clear-sky land surface or a clear-sky ocean surface. This step is followed by atmospheric correction, including aerosol retrieval, to compute surface directional reflectance. The atmospheric correction comprises the correction for the absorbing and scattering effects of atmospheric gases, in particular ozone and water vapour, for the scattering by air molecules (Rayleigh scattering), and for absorption and scattering due to aerosol particles. All components except aerosols can be corrected rather easily, because they can be taken from external sources or retrieved from the measurements themselves. Aerosols are spatially and temporally highly variable, and the aerosol correction is the largest error contributor in the atmospheric correction.
The atmospheric correction, particularly in the case of high-resolution data such as Sentinel-2, has to take into account the effects of the adjacent topography or terrain. Furthermore, the final step of the atmospheric correction should be an approximate correction of the adjacency effect, which is caused by atmospheric scattering over adjacent areas of different surface reflectance and is required for high spatial resolution satellite sensors. The sources of uncertainty associated with the atmospheric correction are the observation and viewing geometry angles, aerosol optical thickness and aerosol type, the digital elevation model, the accuracy of ortho-rectification, pixel identification, atmospheric parameters (e.g. water vapour column), and the accuracy of spectral/radiometric calibration. All sources of error except pixel identification are taken into account in the uncertainty calculation. For the uncertainty estimation of the Sentinel-2 data, a Monte Carlo simulation, a widely used modelling approach, will be applied. Afterwards, the data are binned to 10-day cloud-free surface reflectance composites, including uncertainty information, on a specified grid. The compositing technique includes multi-temporal cloud and cloud shadow detection to reduce their influence. The results will be validated against measurements from CEOS LANDNET and RadCalNet sites. This very large-scale feasibility study should pave the way for regular global high-resolution land cover mapping.
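As an illustration of the Monte Carlo approach mentioned above, the following sketch propagates assumed uncertainties in aerosol optical thickness and water vapour column through a toy correction function. The function, parameter values and uncertainties are hypothetical stand-ins for illustration only, not the operational algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)

def surface_reflectance(toa, aot, wv):
    # Toy stand-in for a full atmospheric correction: the real
    # algorithm inverts a radiative transfer model.
    return toa - 0.08 * aot - 0.01 * wv

toa = 0.25                      # TOA reflectance of one pixel (hypothetical)
aot_mean, aot_sd = 0.2, 0.05    # aerosol optical thickness and its uncertainty
wv_mean, wv_sd = 2.0, 0.3       # water vapour column [g/cm^2] and uncertainty

# Draw perturbed inputs and run the correction on each realization.
n = 10_000
aot = rng.normal(aot_mean, aot_sd, n)
wv = rng.normal(wv_mean, wv_sd, n)
samples = surface_reflectance(toa, aot, wv)

# The spread of the ensemble is the propagated uncertainty.
print(samples.mean(), samples.std())
```

The standard deviation of the ensemble then accompanies each composite pixel as its uncertainty estimate.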
11:30am - 11:50am
Wide area multi-temporal radar backscatter composite products
1University of Zurich, Switzerland; 2ESA-ESRIN, Frascati, Italy
Mapping land cover signatures with satellite SAR sensors has in the past been significantly constrained by topographic effects on both the geometry and radiometry of the backscatter products used. To avoid the significant distortions introduced into radiometric signatures by strong topography, many established methods rely on single-track exact-repeat evaluations, at the cost of not integrating information from revisits on other tracks.
Modern SAR sensors offer wide swaths, enabling shorter revisit intervals than previously possible. The open data policy of Sentinel-1 enables the development of higher-level products, built on a foundation of level 1 SAR imagery that meets a high standard of geometric and radiometric calibration. We systematically process slant- or ground-range Sentinel-1 data to terrain-flattened gamma nought backscatter. After terrain-geocoding, multiple observations are then integrated into a single composite in map geometry.
Although composite products are ubiquitous in the optical remote sensing community (e.g. MODIS), no composite SAR backscatter products have yet seen similar widespread use. In the same way that optical composites are useful to avoid single-scene obstructions such as cloud cover, composite SAR products can help to avoid terrain-induced local resolution variations, providing full coverage backscatter information that can help expedite multitemporal analysis across wide regions. The composite products we propose exhibit improved spatial resolution (in comparison to any single acquisition-based product), as well as lower noise. Backscatter variability measures can easily be added as auxiliary channels.
We present and demonstrate methods that can be applied to strongly reduce the effects of topography, allowing nearly full seamless coverage even in Alpine terrain, with only minimal residual effects from fore- vs. backslopes.
We use data from the Sentinel-1A (S1A), Sentinel-1B (S1B), and Radarsat-2 (RS2) satellites, demonstrating the generation of hybrid backscatter products based on multiple sources. Unlike some other processing schemes, data combinations here are not restricted to single modes or tracks. We define temporal windows that support ascending/descending combinations given the data revisit rates seen in archival data. That temporal window is then cycled forward in time, merging all available acquisitions from the chosen set of satellites into a time series of composite backscatter images that seamlessly cover the region under study. We demonstrate such processing over the entirety of the Alps, as well as coastal British Columbia and northern Nunavut, Canada. With S1A/S1B combinations, we demonstrate full coverage over the Alps with time windows of 6 days. Results generated at medium resolution (~90 m) are presented together with higher-resolution samples at 10 m.
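A minimal sketch of the merging step is shown below, assuming terrain-flattened gamma-nought images in dB and simple unweighted averaging; the operational processing may weight observations differently (e.g. by local resolution), and the tile values are invented for illustration.

```python
import numpy as np

def composite_gamma0(acquisitions_db):
    """Merge several gamma-nought backscatter images (in dB) acquired
    within one temporal window into a single composite (in dB).

    Averaging is done in linear power, not in dB, because dB values
    are logarithmic and cannot be averaged directly."""
    stack = np.array(acquisitions_db, dtype=float)
    linear = 10.0 ** (stack / 10.0)           # dB -> linear power
    mean_linear = np.nanmean(linear, axis=0)  # NaN marks gaps in single tracks
    return 10.0 * np.log10(mean_linear)       # linear power -> dB

# Three hypothetical 2x2 gamma-nought tiles from different tracks; NaN marks
# pixels shadowed or not covered in a given acquisition.
a = np.array([[-8.0, -9.0], [np.nan, -7.0]])
b = np.array([[-8.5, np.nan], [-10.0, -7.5]])
c = np.array([[-7.5, -9.5], [-9.0, np.nan]])
print(composite_gamma0([a, b, c]))
```

Because every pixel is observed by at least one track within the window, the composite has no gaps even where single acquisitions are affected by layover or shadow.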
The radar composites demonstrated offer a potential level 3 product that simplifies analysis of wide-area multi-temporal land cover signatures, just as e.g. 16-day MODIS composite products have done in the optical domain.
Use of the Radarsat-2 data was made possible through the SOAR-EU programme, and an initiative of the WMO’s Polar Space Task Group SAR Coordination Working Group (SARCWG). This work was supported by a subcontract from ESA Contract No. VEGA/AG/15/01757.
11:50am - 12:10pm
Large area land cover mapping and monitoring using satellite image time series
Wageningen University, The Netherlands
Time series remote sensing data provide important features for land cover and cover change mapping and monitoring, thanks to their capability of capturing intra- and inter-annual variation in land reflectance. Time series with higher spatial and temporal resolution are particularly useful for mapping land cover types in areas with heterogeneous landscapes and highly fluctuating vegetation dynamics. For large-area land monitoring, satellite data such as PROBA-V, which provides five-daily time series at 100 m spatial resolution, improve spatial detail and resilience against high cloud cover, but also create challenges in handling the increased data volume. Cloud-based processing platforms such as the ESA (European Space Agency) Cloud Toolbox infrastructure can enable large-scale time series monitoring of land cover and its change.
We demonstrate current activities of Wageningen University and Research in time-series-based land cover mapping, change monitoring and map updating based on PROBA-V 100 m time series data. Using Proba-V-based temporal metrics and cloud filtering in combination with machine learning algorithms, our approach resulted in improved land and forest cover maps for a large study area in West Africa. We further introduce an open-source package for Proba-V data processing.
Aiming to address varied map users’ requirements, different machine learning algorithms are tested to map cover percentages of land cover types in a Boreal region. Our study also extends to the automatic updating of land cover maps based on observed land cover changes using the full Proba-V time series.
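To illustrate what cloud-filtered temporal metrics of this kind can look like, here is a minimal numpy sketch; the metric choice, array shapes and values are illustrative only and do not reflect the actual API of the package mentioned above.

```python
import numpy as np

def cloud_filtered_metrics(ndvi, cloud_flag):
    """Per-pixel temporal metrics from an NDVI time series, ignoring
    cloud-flagged observations.

    ndvi:       (time, rows, cols) array
    cloud_flag: boolean array of the same shape, True where cloudy
    """
    masked = np.where(cloud_flag, np.nan, ndvi)
    return {
        # Median is robust against residual undetected clouds.
        "median": np.nanmedian(masked, axis=0),
        # Amplitude captures seasonal vegetation dynamics.
        "amplitude": np.nanmax(masked, axis=0) - np.nanmin(masked, axis=0),
    }

# Hypothetical 4-step series over a 1x2 area; the second pixel is cloudy twice.
ndvi = np.array([[[0.2, 0.9]], [[0.4, 0.3]], [[0.6, 0.8]], [[0.5, 0.4]]])
flag = np.array([[[False, True]], [[False, False]],
                 [[False, True]], [[False, False]]])
m = cloud_filtered_metrics(ndvi, flag)
print(m["median"], m["amplitude"])
```

Metrics such as these form the per-pixel feature vectors that are then passed to the machine learning classifiers.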
Cloud-based, “big-data”-driven land cover and change monitoring approaches showed clear advantages in large-area monitoring. The advent of cloud-based platforms (e.g., the PROBA-V mission exploitation platform) will not only revolutionize the way we deal with satellite data, but also provide the capacity to create multiple land cover maps for different end-users using various input data.
12:10pm - 12:30pm
Towards a new baseline layer for global land-cover classification derived from multitemporal satellite optical imagery
German Aerospace Center - DLR, Germany
In the last decades, satellite optical imagery has proved to be one of the most effective means of supporting land-cover classification; in this framework, the availability of data has lately been growing as never before, mostly due to the launch of new missions such as Landsat-8 and Sentinel-2. Accordingly, methodologies capable of properly handling huge amounts of information are becoming more and more important.
So far, most of the techniques proposed in the literature have made use of single-date acquisitions. However, such an approach often results in poor or sub-optimal performance, for instance due to specific acquisition conditions or, above all, the presence of clouds obscuring what lies underneath. Moreover, the problem becomes even more critical when investigating large areas which cannot be covered by a single scene, as in the case of national, continental or global analyses. In such circumstances, products are derived from data necessarily acquired at different times for different locations, and are thus generally not spatially consistent.
In order to overcome these limitations, we propose a novel paradigm for the exploitation of optical data based on the use of multitemporal imagery, which can be effectively applied from local to global scale. First, for the given study area and time frame of interest, all the available scenes acquired by the chosen sensor are taken into consideration and pre-processed if necessary (e.g., radiometric calibration, orthorectification, spatial registration). Afterwards, cloud masking and, optionally, atmospheric correction are performed. Next, a series of features suitable for the specific investigated application are derived for all scenes, for instance spectral indices [e.g., the normalized difference vegetation index (NDVI), the atmospherically resistant vegetation index (ARVI), the normalized difference water index (NDWI)] or texture features (e.g., occurrence textures, co-occurrence textures, the local coefficient of variation). The core idea is then to compute, for each pixel, key temporal statistics of all the extracted features, such as the temporal maximum, minimum, mean, variance and median. This compresses all the information contained in the different multi-temporal acquisitions while at the same time easily and effectively characterizing the underlying dynamics.
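The per-pixel temporal statistics described above can be sketched in a few lines of numpy; the band values, array shapes and function name here are hypothetical, chosen only to make the idea concrete.

```python
import numpy as np

def temporal_statistics(feature_stack):
    """Compress a multi-temporal feature stack of shape (time, rows, cols)
    into per-pixel temporal statistics (max, min, mean, variance, median),
    ignoring cloud-masked (NaN) observations."""
    return np.stack([
        np.nanmax(feature_stack, axis=0),
        np.nanmin(feature_stack, axis=0),
        np.nanmean(feature_stack, axis=0),
        np.nanvar(feature_stack, axis=0),
        np.nanmedian(feature_stack, axis=0),
    ])

# NDVI from hypothetical red/NIR bands of three cloud-masked scenes
# over a 1x2 pixel area; NaN marks a cloudy observation.
red = np.array([[[0.10, 0.20]], [[0.12, np.nan]], [[0.08, 0.25]]])
nir = np.array([[[0.50, 0.30]], [[0.55, np.nan]], [[0.45, 0.35]]])
ndvi = (nir - red) / (nir + red)
stats = temporal_statistics(ndvi)   # shape: (5, rows, cols)
print(stats.shape)
```

The resulting statistic layers, rather than the full scene stack, are then what a classifier consumes, which is what makes the approach tractable at continental or global scale.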
In our experiments, we focused on Landsat data. Specifically, we generated the so-called TimeScan-Landsat 2015 global product, derived from almost 420,000 Landsat-7/8 scenes collected at 30 m spatial resolution between 2013 and 2015 (for a total of ~500 terabytes of input data and more than 1.5 petabytes of intermediate products). So far, the dataset is being employed to support the detection of urban areas globally and to estimate the corresponding built-up density. Additionally, it has also been tested for deriving a land-cover classification map of Germany. In the latter case, an ensemble of Support Vector Machine (SVM) classifiers was trained using labelled samples derived from the CORINE land-cover inventory (according to a novel strategy which properly takes its lower spatial resolution into account). Preliminary results are very promising and confirm the great potential of the proposed approach, which is planned to be applied at a larger, continental scale in the coming months.
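As a loose sketch of an SVM-ensemble classification of this kind, assuming scikit-learn is available, the snippet below trains a small bagged ensemble of SVMs; the synthetic features and labels stand in for the TimeScan statistics and CORINE-derived samples, and the ensemble strategy is a generic one, not the novel strategy of the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: per-pixel temporal statistics as features,
# with labels sampled from a coarser land-cover inventory.
n, n_features = 300, 5
X = rng.normal(size=(n, n_features))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic two-class labels

# An ensemble of SVMs, each trained on a bootstrap sample of the
# labelled data, whose votes are aggregated at prediction time.
model = BaggingClassifier(SVC(kernel="rbf"), n_estimators=5, random_state=0)
model.fit(X, y)
print(model.score(X, y))
```

Bagging over bootstrap samples is one common way to make an SVM ensemble robust to noisy labels, which is relevant when training data are transferred from a lower-resolution inventory.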
12:30pm - 12:50pm
Advancing Global Land Cover Monitoring
University of Maryland College Park, Department of Geographical Sciences
Mapping and monitoring of global land cover and land use is a challenge, as each theme requires different inputs for accurate characterization.