Conference Agenda

Overview and details of the sessions of this conference, with abstracts where available.

 
Session Overview
Date: Thursday, 16/Mar/2017
9:00am - 10:40am  3.1: Validation and Accuracy
Session Chair: Thomas R. Loveland, U.S. Geological Survey
Session Chair: Sophie Bontemps, Université catholique de Louvain
Big Hall 
 
9:00am - 9:20am

Validation of global annual land cover map series and land cover change: experience from the Land Cover component of the ESA Climate Change Initiative

Sophie Bontemps1, Frédéric Achard2, Céline Lamarche1, Philippe Mayaux3, Olivier Arino4, Pierre Defourny1

1UCLouvain-Geomatics (Belgium), Belgium; 2Joint Research Center, Italy; 3European Commission, Belgium; 4European Space Agency, Italy

In the framework of the ESA Climate Change Initiative (CCI), the Land Cover (LC) team delivered a new generation of satellite-derived series of 300 m global annual LC products spanning the period from 1992 to 2015. These maps, consistent in space and time, were obtained from AVHRR (1992 - 1999), SPOT-Vegetation (1999 - 2012), MERIS (2003 - 2012) and PROBA-V (2012 - 2015) time series. The typology was defined using the UN Land Cover Classification System (LCCS) and comprises 22 classes compatible with the GLC2000, GlobCover 2005 and 2009 products.

A critical step in the acceptance of these products by users is providing confidence in the products’ quality and in their uncertainties through validation against independent data. Building on the GlobCover experience, a validation strategy was designed to globally validate the map series and its changes at four points in time: 2000, 2005, 2010 and 2015.

In order to optimize data and resource availability, 2600 Primary Sample Units (PSUs), each defined as a 20 × 20 km box, were selected based on stratified random sampling. Within each PSU, five Secondary Sample Units (SSUs) were defined, located at the centre of the 20 × 20 km box. This cluster sampling strategy increased the number of sample sites and thus lowered the standard error of the accuracy estimates.
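
A minimal sketch of the two-stage selection described above; the strata, their PSU populations and the proportional allocation below are illustrative assumptions, not the project's actual design parameters:

```python
import random

# Hypothetical strata with their PSU populations (20 x 20 km boxes); the real
# CCI stratification and the allocation of the 2600 PSUs differ.
strata = {"cropland": 40000, "forest": 60000, "sparse": 25000}
total_psus = 2600

random.seed(42)
total_boxes = sum(strata.values())
sample = {}
for name, n_boxes in strata.items():
    # Proportional allocation of PSUs to strata, then simple random selection
    n_sel = round(total_psus * n_boxes / total_boxes)
    sample[name] = random.sample(range(n_boxes), n_sel)

# Within each selected PSU, five SSUs form a cluster; here simply indexed 0..4.
ssus = {(name, psu): list(range(5)) for name, psus in sample.items() for psu in psus}
print(len(ssus), "SSU clusters selected")
```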

An international network of land cover specialists with regional expertise was in charge of interpreting high spatial resolution images over the SSUs to build the reference database that ultimately allowed the accuracy of the CCI global LC map series to be assessed.

Over each SSU, visual interpretation of very high resolution imagery acquired close to 2010 allowed each object derived by segmentation to be labeled according to the CCI LC legend. Change between 2010, 2005 and 2000 was then systematically evaluated on Landsat TM or ETM+ scenes acquired from the Global Land Surveys (GLS) for the respective years. Annual NDVI profiles derived from the SPOT-Vegetation time series facilitated image interpretation by providing seasonal variations of vegetation greenness. This reference validation database was then complemented for 2015 thanks to the large FAO data set obtained using the Collect Earth tool.

Overall, user’s and producer’s accuracies of the CCI LC map series are then derived. In addition, various quality indices, related to specific uses in climate models (carbon content, net primary productivity, methane emissions, etc.), will be constructed by taking into account the semantic distance between LC classes.
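
For reference, user’s and producer’s accuracies follow directly from the error (confusion) matrix; the sketch below uses a made-up three-class matrix rather than CCI results:

```python
import numpy as np

# Rows = map (predicted) class, columns = reference class; counts are invented.
cm = np.array([[85, 5, 10],
               [ 8, 70, 12],
               [ 4,  6, 90]], dtype=float)

overall = np.trace(cm) / cm.sum()
users = np.diag(cm) / cm.sum(axis=1)      # commission side: per mapped class
producers = np.diag(cm) / cm.sum(axis=0)  # omission side: per reference class
print(f"overall accuracy: {overall:.3f}")
print("user's accuracies:", np.round(users, 3))
print("producer's accuracies:", np.round(producers, 3))
```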

Such a validation scheme was made possible by a tailored, user-friendly validation web interface that integrated a large amount of very high spatial resolution imagery, pixel-based NDVI profiles from various sensors, as well as interactive GIS tools to facilitate LC and change interpretation. This efficient web platform has already been selected by several other international initiatives to collect reference data sets, as this unique object-based approach provides a reference database that can be used to validate land cover maps at different resolutions.


9:20am - 9:40am

Comparative validation of Copernicus Pan-European High Resolution Layer (HRL) on Tree Cover Density (TCD) and University of Maryland (UMd) Global Forest Change (GFC) over Europe

Christophe Sannier1, Heinz Gallaun2, Javier Gallego3, Ronald McRoberts4, Hans Dufourmont5, Alexandre Pennec1

1Systèmes d'Information à Référence Spatiale (SIRS), France; 2Joanneum Research, Austria; 3Joint Research Centre, Italy; 4US Forest Service, USA; 5European Environment Agency, Denmark

The validation of datasets such as the Copernicus Pan-European High Resolution Layer on Tree Cover Density (TCD) and the UMD Global Forest Change (GFC) tree percent layer requires considerable effort to provide validation results at Pan-European level over nearly 6 million km². A stratified systematic sampling approach was developed based on the LUCAS sampling frame. A two-stage stratified sample of 17,296 1 ha square primary sampling units (PSUs) was selected over EEA39, based on countries or groups of countries whose area was greater than 90,000 km² and on a series of omission and commission strata. In each PSU, a grid of 5 x 5 secondary sample units (SSUs) with a 20 m step was applied. These points were photo-interpreted on orthophotos with a resolution better than 2.5 m.

The UMD GFC data were processed to provide a 2012 tree percent layer comparable to the Copernicus High Resolution Layer by including tree losses and gains over the selected period. An appropriate interpolation procedure was then applied to both the UMD GFC and Copernicus HRL TCD data to provide a precise match with the PSU validation data and to account for potential geographic differences between validation and map data and for SSU sampling errors.

Initial results based on the binary conversion of the HRL TCD data by applying 10, 15 and 30% thresholds indicate a level of omission errors in line with the required maximum of 15%, whereas commission errors exceed the target level of 10% set in the product specifications. However, disaggregated results were also computed at country/group-of-countries level as well as for biogeographical regions, and showed considerable geographical variability. The best results were obtained in countries or biogeographical regions with high tree cover (e.g. the Continental and Boreal regions) and the worst results in countries or biogeographical regions with low tree cover (e.g. Anatolian, Arctic). There is less variability between production lots, and the analysis of the scatterplots shows a strong relationship between validation and map data, with the best results in the countries and biogeographical regions mentioned previously. However, there seems to be a general trend to slightly underestimate TCD. Results for the UMD GFC dataset are currently in progress but will be made available in time for the conference. Results provided at HRL TCD production lot, biogeographical region and country/group-of-countries level should provide a sound basis for targeting further improvements to the products.
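
The binary comparison described above can be reproduced conceptually as below; the map and reference tree cover percentages are synthetic, and the simple counts ignore the survey weights used in the actual study:

```python
import numpy as np

rng = np.random.default_rng(0)
tcd_map = rng.integers(0, 101, 10000)                           # mapped tree cover density (%)
tcd_ref = np.clip(tcd_map + rng.normal(0, 15, 10000), 0, 100)   # synthetic reference values

for thr in (10, 15, 30):
    map_tree = tcd_map >= thr
    ref_tree = tcd_ref >= thr
    omission = np.sum(~map_tree & ref_tree) / np.sum(ref_tree)    # missed tree cover
    commission = np.sum(map_tree & ~ref_tree) / np.sum(map_tree)  # falsely mapped tree cover
    print(f"threshold {thr:2d}%: omission {omission:.2%}, commission {commission:.2%}")
```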

The stratification procedure, based on a combination of the HRL TCD layer and CORINE Land Cover (CLC), was effective for commission errors, but less so for omission errors. Therefore, the stratification should be simplified to include only a commission stratum (tree cover mask 1-100) and an omission stratum (rest of the area), or an alternative stratification to CLC should be applied to better target omissions. Finally, the approach developed could be rolled out at global scale for the complete validation of global datasets.


9:40am - 10:00am

Copernicus Global Land Hot Spot Monitoring Service – Accuracy Assessment and Area Estimation Approach

Heinz Gallaun1, Gabriel Jaffrain2, Zoltan Szantoi3, Andreas Brink3, Adrien Moiret2, Stefan Kleeschulte4, Mathias Schardt1, Conrad Bielski5, Cedric Lardeux6, Alex Petre7

1JOANNEUM RESEARCH, Austria; 2IGN FI, France; 3Joint Research Centre (JRC), European Commission; 4space4environment (s4e), Luxembourg; 5EOXPLORE, Germany; 6ONF International, France; 7GISBOX, Romania

The main objective of the Copernicus Global Land – Hot Spot Monitoring Service is to provide detailed land information on specific areas of interest, including protected areas and hot spots for biodiversity and land degradation. For such areas of interest, land cover and land cover change products, mainly derived from medium (Landsat, Sentinel-2) and high resolution satellite data, are made available to global users. The service directly supports field projects and policies developed by the European Union (EU) in the framework of the EU’s international policy interests. It is coordinated by the Joint Research Centre of the European Commission, answers ad-hoc requests, and focuses mainly on the sustainable management of natural resources.

Comprehensive, independent validation and accuracy assessment of all thematic map products is performed before the land cover and land cover change products are provided to the global users. The following rigorous methodology is applied:

Spatial, temporal and logical consistency is assessed by determining the positional accuracy, assessing the validity of the data with respect to time, and checking the logical consistency of the data, e.g. topology, attribution and logical relationships.

A qualitative, systematic accuracy assessment is performed wall-to-wall by systematic visual examination of the land cover and land cover change maps within a geographic information system, and their accuracies are documented in terms of the types of errors.

For quantitative accuracy assessment, a stratified random sampling approach based on inclusion probabilities is implemented. A web-based interpretation tool built on PostgreSQL provides high resolution time series imagery, e.g. from Sentinel-2, derived temporal trajectories of reflectance, and ancillary information in addition to the very high resolution imagery. In order to quantify the uncertainty of the derived accuracy measures, confidence intervals are derived by analytic formulas as well as by bootstrapping.
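
A minimal illustration of the bootstrap alternative mentioned above, resampling interpreted sample units to obtain a confidence interval for overall accuracy; the agreement flags are synthetic, and the operational service also applies the analytic, inclusion-probability-based formulas:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic per-sample agreement flags: True = map label matches reference label.
agree = rng.random(500) < 0.87

boot = [rng.choice(agree, size=agree.size, replace=True).mean() for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"overall accuracy {agree.mean():.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```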

Area estimation is performed according to the requirements for international reporting. In general, the errors of omission and commission are not equal, and such bias shows up in the course of the quantitative accuracy assessment. As the implemented approach applies probability sampling and the error matrix is based on inclusion probabilities, area estimates are derived directly from the error matrix. The area estimates are complemented by confidence intervals.
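
A sketch of an error-matrix-based area estimator of the kind referred to above, using the standard stratified estimator from accuracy-assessment good-practice guidance; the stratum areas and sample counts are invented and do not come from the service:

```python
import numpy as np

# Sample counts: rows = map class (stratum), columns = reference class.
n = np.array([[280, 15],
              [ 20, 185]], dtype=float)
area_map = np.array([60000.0, 40000.0])      # mapped area per stratum, km² (invented)
W = area_map / area_map.sum()                # stratum weights

# Estimated area proportions p_ij = W_i * n_ij / n_i.
p = (W / n.sum(axis=1))[:, None] * n
area_est = p.sum(axis=0) * area_map.sum()    # reference-class area estimates (km²)

# Standard error of the estimated area proportion of each reference class.
n_i = n.sum(axis=1)
var = ((W[:, None] * p - p**2) / (n_i[:, None] - 1)).sum(axis=0)
ci95 = 1.96 * np.sqrt(var) * area_map.sum()
print("area estimates (km²):", np.round(area_est))
print("±95% CI (km²):", np.round(ci95))
```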

The approach and methodology will be discussed in detail on the basis of systematic accuracy assessment and area estimation results for sites in Africa, which are the focus of the first year of implementation of the Copernicus Global Land Hot Spot Monitoring Service.


10:00am - 10:20am

Forest degradation assessment: accuracy assessment of forest degradation products using HR data

Naila Yasmin, Remi D’annunzio, Inge Jonckheere

FAO Forestry Department, FAO of UN, Rome, Italy

REDD+ (Reducing Emissions from Deforestation and forest Degradation) is a mechanism developed by Parties to the United Nations Framework Convention on Climate Change (UNFCCC). It creates a financial value for the carbon stored in forests by offering incentives for developing countries to reduce emissions from forested lands and invest in low-carbon paths to sustainable development. Developing countries would receive results-based payments for results-based actions. Forest monitoring has always remained challenging to perform with minimal error. So far, many countries have started to report on deforestation at the national scale, but the technical issue of assessing degradation in-country is still a major point of research. Remote sensing options are currently being explored in order to map degradation.

The aim of the ForMosa project was to develop a sound methodology for forest degradation monitoring for REDD+ using medium-coarse satellite data sources such as Landsat-8, Sentinel-2 and SPOT 5, in addition to RapidEye imagery. The project is carried out by three partners: Planet and Wageningen University (service providers), and the Forestry Department of FAO, which carried out the accuracy assessment of the project products. Initially, three pilot study sites were selected: Kafa Tura in Ethiopia, Madre de Dios in Peru and Bac Kan in Vietnam. The initial product was developed at 10 m resolution with five classes representing different levels of forest degradation of increasing intensity.

The Forestry Department of FAO used in-house built open source tools for the accuracy assessment. The process consists of four steps: (i) map data, (ii) sample design, (iii) response design and (iv) analysis. A stratified random sampling approach was used to assess the product using high-resolution Google Earth and Bing Maps imagery. In this paper, the methodology of the work and its results will be presented.


10:20am - 10:40am

A New Open Reference Global Dataset for Land Cover Mapping at a 100m Resolution

Myroslava Lesiv1, Steffen Fritz1, Linda See1, Nandika Tsendbazar2, Martin Herold2, Martina Duerauer1, Ruben Van De Kerchove3, Marcel Buchhorn3, Inian Moorthy1, Bruno Smets3

1International Institute for Applied Systems Analysis, Austria; 2Wageningen University and Research, the Netherlands; 3VITO, Belgium

Classification techniques are dependent on the quality and quantity of reference data: the data should represent different land cover types and varying landscapes and be of a high overall quality. In general, there is currently a lack of reference data for large scale land cover mapping. In particular, reference data are needed for a new dynamic 100 m land cover product that will be added to the Copernicus Global Land services portfolio in the near future. These reference data must correspond to the 100 m Proba-V data spatially and temporally. Therefore the main objectives of this study are as follows: to develop algorithms and tools for collecting high quality reference data; to collect reference data that correspond to Proba-V data spatially and temporally; and to develop a strategy for validating the new dynamic land cover product and to collect validation data.

To aid the collection of reference data for the development of the new dynamic land cover layer, the Geo-Wiki Engagement Platform (http://www.geo-wiki.org/) was used to develop tools and to provide a user-friendly interface for collecting high quality reference data. Experts, trained by staff at IIASA (International Institute for Applied Systems Analysis), interpreted various land cover types at a network of reference locations via the Geo-Wiki platform. At each reference location, experts were presented with a 100 m x 100 m area, subdivided into 10 m x 10 m cells, superimposed on high-resolution Google Earth or Microsoft Bing imagery. Using visual cues acquired from multiple training sessions, the experts interpreted each cell into land cover classes including trees, shrubs, water, arable land, burnt areas, etc. This information is then translated into different legends using the UN LCCS (United Nations Land Cover Classification System) as a basis. The distribution of sample sites is systematic, with the same distance between sample sites. However, land cover data are not collected at every sample site, as the frequency depends on the heterogeneity of land cover types by region. Data quality is controlled by ensuring that multiple classifications are collected at the same sample site by different experts. In addition, there are control points that have been validated by IIASA staff for use as additional quality control measures.
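
The per-cell interpretations described above can be aggregated into fractional cover per 100 m reference site roughly as follows; the class names, counts and the 50% threshold are illustrative assumptions, and the actual LCCS translation rules are more elaborate:

```python
from collections import Counter

# One 100 m x 100 m reference site = 10 x 10 grid of 10 m cells, each labelled
# by an expert; the labels below are invented for illustration.
cells = ["trees"] * 55 + ["shrubs"] * 25 + ["grassland"] * 15 + ["water"] * 5

fractions = {cls: n / len(cells) for cls, n in Counter(cells).items()}
print(fractions)  # e.g. {'trees': 0.55, 'shrubs': 0.25, ...}

# A simple translation into a mosaic-style legend class (threshold is an assumption).
label = "tree cover" if fractions.get("trees", 0) >= 0.5 else "mosaic vegetation"
print(label)
```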

As a result, we have collected reference data (at approximately 20,000 sites) for Africa, which was chosen due to its high density of complex landscapes and areas covered by vegetation mosaics. Current efforts are underway to expand the reference data collection to a global scale.

For validation of the new global land cover product, a separate Geo-Wiki branch was developed, accessed by local experts who did not participate in the training data collection. A stratified random sampling procedure is used due to its flexibility and statistical rigour. For the stratification, a global stratification based on climate zones and human density by Olofsson et al. (2012) was chosen owing to its independence from existing land cover maps and thus its suitability for future map evolutions.

The reference datasets will be freely available for use by the scientific community.

 
10:40am - 11:10am  Coffee Break
Big Hall 
11:10am - 12:50pm  3.2: Methods and Algorithms
Session Chair: Carsten Brockmann, Brockmann Consult GmbH
Session Chair: Mattia Marconcini, German Aerospace Center - DLR
Big Hall 
 
11:10am - 11:30am

Sentinel-2 cloud free surface reflectance composites for Land Cover Climate Change Initiative’s long-term data record extension

Grit Kirches1, Jan Wevers1, Olivier Arino2, Martin Boettcher1, Sophie Bontemps3, Carsten Brockmann1, Pierre Defourny3, Olaf Danne1, Tonio Fincke1, Céline Lamarche3, Thomas de Maet3, Fabrizio Ramoino2

1Brockmann Consult GmbH, Germany; 2ESA ESRIN, Italy; 3Université catholique de Louvain, Belgium

Long-term data records of Earth Observation data are a key input for climate change analysis and climate models. The goal of this research is to create cloud free surface reflectance composites over Africa using Sentinel-2 L1C TOA products, to extend and enrich a time series from multiple sensors (MERIS, SPOT VGT, Proba-V and AVHRR). While the focus of previous work was to merge the best available missions, providing near-weekly optical surface reflectance data at global scale, to produce the most complete and consistent long-term data record possible, Sentinel-2 data will be used to map a prototype African land cover at 10-20 metres. To achieve this goal, the following processing methodology was developed for Sentinel-2: pixel identification, atmospheric correction and compositing. The term “pixel identification” – IdePix – refers to a classification of a measurement made by a space-borne radiometer, for the purpose of identifying properties of the measurement which influence further algorithmic processing steps. Most important is the classification of a measurement as being made over cloud and cloud shadow, a clear sky land surface or a clear sky ocean surface.

This step is followed by atmospheric correction, including aerosol retrieval, to compute surface directional reflectance. The atmospheric correction includes the correction for the absorbing and scattering effects of atmospheric gases, in particular ozone and water vapour, for the scattering by air molecules (Rayleigh scattering) and for the absorption and scattering due to aerosol particles. All components except aerosols can be corrected rather easily because they can be taken from external sources or retrieved from the measurements themselves. Aerosols are spatially and temporally highly variable, and the aerosol correction is the largest error contributor in the atmospheric correction. The atmospheric correction, particularly in the case of high-resolution data like Sentinel-2, has to take into account the effects of the adjacent topography or terrain. Furthermore, the final step of the atmospheric correction should be an approximate correction of the adjacency effect, which is caused by atmospheric scattering over adjacent areas of different surface reflectance and is required for high spatial resolution satellite sensors. The sources of uncertainty associated with the atmospheric correction are the observation and viewing geometry angles, the aerosol optical thickness and aerosol type, the digital elevation model, the accuracy of the ortho-rectification, the pixel identification, atmospheric parameters (e.g. water vapour column), and the accuracy of the spectral/radiometric calibration. All error sources except pixel identification are taken into account in the uncertainty calculation. For the uncertainty estimation of the Sentinel-2 data, a Monte Carlo simulation, a widely used modelling approach, will be applied.

Afterwards, the data are binned into 10-day cloud free surface reflectance composites, including uncertainty information, on a specified grid. The compositing technique includes multi-temporal cloud and cloud shadow detection to reduce their influence. The results will be validated against measurements from CEOS LANDNET and RadCalNet sites. This very large scale feasibility study should pave the way for regular global high resolution land cover mapping.
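
As a conceptual illustration only (not the operational IdePix/atmospheric-correction chain described above), the compositing step amounts to a cloud-masked temporal aggregation per pixel; the reflectance values and cloud masks below are synthetic:

```python
import numpy as np

# Synthetic stack: 10 daily surface reflectance scenes of 100 x 100 pixels,
# with a boolean cloud/cloud-shadow mask per scene (True = contaminated).
rng = np.random.default_rng(0)
reflectance = rng.uniform(0.0, 0.4, size=(10, 100, 100))
cloudy = rng.random(size=(10, 100, 100)) < 0.3

masked = np.where(cloudy, np.nan, reflectance)
composite = np.nanmean(masked, axis=0)           # 10-day mean of clear observations
n_clear = (~cloudy).sum(axis=0)                  # per-pixel count of clear observations
uncertainty = np.nanstd(masked, axis=0) / np.sqrt(np.maximum(n_clear, 1))
print(composite.shape, float(np.nanmean(uncertainty)))
```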


11:30am - 11:50am

Wide area multi-temporal radar backscatter composite products

David Small1, Christoph Rohner1, Adrian Schubert1, Nuno Miranda2, Michael Schaepman1

1University of Zurich, Switzerland; 2ESA-ESRIN, Frascati, Italy

Mapping land cover signatures with satellite SAR sensors has in the past been significantly constrained by topographic effects on both the geometry and radiometry of the backscatter products used. To avoid the significant distortions introduced by strong topography to radiometric signatures, many established methods rely on single track exact-repeat evaluations, at the cost of not integrating the information from revisits from other tracks.

Modern SAR sensors offer wide swaths, enabling shorter revisit intervals than previously possible. The open data policy of Sentinel-1 enables the development of higher level products, built on a foundation of level 1 SAR imagery that meets a high standard of geometric and radiometric calibration. We systematically process slant or ground range Sentinel-1 data to terrain-flattened gamma nought backscatter. After terrain-geocoding, multiple observations are then integrated into a single composite in map geometry.

Although composite products are ubiquitous in the optical remote sensing community (e.g. MODIS), no composite SAR backscatter products have yet seen similar widespread use. In the same way that optical composites are useful to avoid single-scene obstructions such as cloud cover, composite SAR products can help to avoid terrain-induced local resolution variations, providing full coverage backscatter information that can help expedite multitemporal analysis across wide regions. The composite products we propose exhibit improved spatial resolution (in comparison to any single acquisition-based product), as well as lower noise. Backscatter variability measures can easily be added as auxiliary channels.

We present and demonstrate methods that can be applied to strongly reduce the effects of topography, allowing nearly full seamless coverage even in Alpine terrain, with only minimal residual effects from fore- vs. backslopes.

We use data from the Sentinel-1A (S1A), Sentinel-1B (S1B), and Radarsat-2 (RS2) satellites, demonstrating the generation of hybrid backscatter products based on multiple sources. Unlike some other processing schemes, here data combinations are not restricted to single modes or tracks. We define temporal windows that support ascending/descending combinations given the data revisit rates seen in archival data. That temporal window is then cycled forward in time, merging all available acquisitions from the chosen set of satellites into a time series of composite backscatter images that seamlessly cover the region under study. We demonstrate such processing over the entirety of the Alps, as well as coastal British Columbia and northern Nunavut, Canada. With S1A/S1B combinations, we demonstrate full coverage over the Alps with time windows of 6 days. Results generated at medium resolution (~90 m) are presented together with higher resolution samples at 10 m.
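
A simplified sketch of merging terrain-flattened gamma nought observations from several tracks within one temporal window; it uses a plain per-pixel mean in linear power plus a variability channel on synthetic data, whereas the products described above rely on more sophisticated local-resolution weighting:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stack of terrain-flattened gamma0 backscatter in dB from several
# ascending/descending acquisitions falling inside one temporal window.
gamma0_db = rng.normal(-12.0, 2.0, size=(6, 200, 200))

linear = 10.0 ** (gamma0_db / 10.0)              # average in linear power, not in dB
composite_db = 10.0 * np.log10(linear.mean(axis=0))
variability_db = gamma0_db.std(axis=0)           # auxiliary backscatter-variability band
print(composite_db.shape, float(variability_db.mean()))
```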

The radar composites demonstrated offer a potential level 3 product that simplifies the analysis of wide area multi-temporal land cover signatures, just as e.g. 16-day MODIS composite products have done in the optical domain.

Use of the Radarsat-2 data was made possible through the SOAR-EU programme, and an initiative of the WMO’s Polar Space Task Group SAR Coordination Working Group (SARCWG). This work was supported by a subcontract from ESA Contract No. VEGA/AG/15/01757.


11:50am - 12:10pm

Large area land cover mapping and monitoring using satellite image time series

Jan Verbesselt, Nandin-Erdene Tsendbazar, Johannes Eberenz, Martin Herold, Johannes Reiche, Dainius Masiliunas, Eline Van Elburg

Wageningen University, The Netherlands

Time series remote sensing data offer important features for land cover and land cover change mapping and monitoring due to their capability of capturing intra- and inter-annual variation in land reflectance. Higher spatial and temporal resolution time series data are particularly useful for mapping land cover types in areas with heterogeneous landscapes and highly fluctuating vegetation dynamics. For large area land monitoring, satellite data such as PROBA-V, which provides five-daily time series at 100 m spatial resolution, improve spatial detail and resilience against high cloud cover, but also create challenges in handling the increased data volume. Cloud-based processing platforms, namely the ESA (European Space Agency) Cloud Toolbox infrastructure, can enable large scale time series monitoring of land cover and its change.

We demonstrate current activities of Wageningen University and Research in time series based land cover mapping, change monitoring and map updating based on PROBA-V 100 m time series data. Using Proba-V based temporal metrics and cloud filtering in combination with machine learning algorithms, our approach resulted in improved land and forest cover maps for a large study area in West Africa. We further introduce an open source package for Proba-V data processing.
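
A compact sketch of the kind of workflow described above: per-pixel temporal metrics from a synthetic PROBA-V-like NDVI time series fed to a machine-learning classifier. The metrics, class labels and scikit-learn random forest are placeholders standing in for whichever features and algorithms the study actually compared:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_pixels, n_obs = 5000, 72                       # e.g. five-daily observations over a year
ndvi = rng.uniform(-0.1, 0.9, size=(n_pixels, n_obs))

# Temporal metrics used as classification features.
features = np.column_stack([
    ndvi.mean(axis=1), ndvi.std(axis=1),
    np.percentile(ndvi, 10, axis=1), np.percentile(ndvi, 90, axis=1),
])
labels = rng.integers(0, 4, n_pixels)            # placeholder land cover classes

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(features[:4000], labels[:4000])
print("held-out accuracy:", clf.score(features[4000:], labels[4000:]))
```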

Aiming to address varied map users’ requirements, different machine learning algorithms are tested to map cover percentages of land cover types in a Boreal region. Our study also extends to automatic updating of land cover maps based on observed land cover changes using the Proba-V full time series.

Cloud-based, “big data”-driven land cover and change monitoring approaches showed clear advantages in large area monitoring. The advent of cloud-based platforms (e.g., the PROBA-V Mission Exploitation Platform) will not only revolutionize the way we deal with satellite data, but will also enable the creation of multiple land cover maps for different end-users using various input data.


12:10pm - 12:30pm

Towards a new baseline layer for global land-cover classification derived from multitemporal satellite optical imagery

Mattia Marconcini, Thomas Esch, Annekatrin Metz, Soner Üreyen, Julian Zeidler

German Aerospace Center - DLR, Germany

In the last decades, satellite optical imagery has proved to be one of the most effective means for supporting land-cover classification; in this framework, the availability of data has lately been growing as never before, mostly due to the launch of new missions such as Landsat-8 and Sentinel-2. Accordingly, methodologies capable of properly handling huge amounts of information are becoming more and more important.

So far, most of the techniques proposed in the literature have made use of single-date acquisitions. However, such an approach might often result in poor or sub-optimal performance, for instance due to specific acquisition conditions or, above all, the presence of clouds preventing the sensor from seeing what lies underneath. Moreover, the problem becomes even more critical when investigating large areas which cannot be covered by a single scene, as in the case of national, continental or global analyses. In such circumstances, products are derived from data necessarily acquired at different times for different locations, and thus are generally not spatially consistent.

In order to overcome these limitations, we propose a novel paradigm for the exploitation of optical data based on the use of multitemporal imagery, which can be effectively applied from local to global scale. First, for the given study area and time frame of interest, all the available scenes acquired by the chosen sensor are taken into consideration and pre-processed if necessary (e.g., radiometric calibration, orthorectification, spatial registration). Afterwards, cloud masking and, optionally, atmospheric correction are performed. Next, a series of features suitable for addressing the specific application under investigation are derived for all scenes, for instance spectral indices [e.g., the normalized difference vegetation index (NDVI), the atmospherically resistant vegetation index (ARVI), the normalized difference water index (NDWI), etc.] or texture features (e.g., occurrence textures, co-occurrence textures, local coefficient of variation, etc.). The core idea is then to compute, per pixel, key temporal statistics for all the extracted features, such as the temporal maximum, minimum, mean, variance, median, etc. This compresses all the information contained in the different multi-temporal acquisitions while still easily and effectively characterizing the underlying dynamics.
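
The per-pixel temporal statistics at the core of this paradigm reduce to a simple aggregation over the time axis, sketched here for NDVI on a synthetic two-band stack; band order, array shapes and statistics are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic multitemporal stack: (time, band, y, x) with red = band 0, NIR = band 1.
stack = rng.uniform(0.01, 0.5, size=(40, 2, 256, 256))

ndvi = (stack[:, 1] - stack[:, 0]) / (stack[:, 1] + stack[:, 0])
temporal_stats = {
    "min": ndvi.min(axis=0), "max": ndvi.max(axis=0),
    "mean": ndvi.mean(axis=0), "std": ndvi.std(axis=0),
    "median": np.median(ndvi, axis=0),
}
# Each statistic is one raster layer; stacking them compresses the full time
# series into a small, fixed number of features per pixel.
features = np.stack(list(temporal_stats.values()))
print(features.shape)   # (5, 256, 256)
```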

In our experiments, we focused on Landsat data. Specifically, we generated the so-called TimeScan-Landsat 2015 global product, derived from almost 420,000 Landsat-7/8 scenes collected at 30 m spatial resolution between 2013 and 2015 (for a total of ~500 terabytes of input data and more than 1.5 petabytes of intermediate products). So far, the dataset is being employed to support the detection of urban areas globally and to estimate the corresponding built-up density. Additionally, it has also been tested for deriving a land-cover classification map of Germany. In the latter case, an ensemble of Support Vector Machine (SVM) classifiers trained using labelled samples derived from the CORINE land-cover inventory was used (according to a novel strategy which properly takes into account its lower spatial resolution). Preliminary results are very promising and attest to the great potential of the proposed approach, which is planned to be applied at larger continental scale in the coming months.


12:30pm - 12:50pm

Advancing Global Land Cover Monitoring

Matthew Hansen

University of Maryland College Park, Department of Geographical Sciences

Mapping and monitoring of global land cover and land use is a challenge, as each theme requires different inputs for accurate characterization.
This talk presents results on global tree cover, bare ground, surface water and crop type, with the goal of realizing a generic approach. The ultimate goal is to map land themes and their change over time, using the map directly to estimate areas. However, much work needs to be done to demonstrate that maps, particularly of land change, may be used in area estimation. Good practices require the use of probability-based samples to provide unbiased area estimates of land cover extent and change. For all of the aforementioned themes, sample-based reference data will be presented to explain the challenges of generic mapping and monitoring at the global scale.

 
12:50pm - 1:50pm  Lunch
Canteen 
1:50pm - 3:10pm  3.3: Platforms
Session Chair: Chris Steenmans, European Environment Agency
Session Chair: Mark Doherty, ESA
Big Hall 
 
1:50pm - 2:10pm

Land monitoring integrated Data Access - status and outlook on platforms

Bianca Hoersch1, Susanne Mecklenburg1, Betlem Rosich1, Sveinung Loekken1, Philippe Mougnaud1, Erwin Goor2

1European Space Agency, Italy; 2VITO, Belgium

For more than 20 years, “Earth Observation” (EO) satellites developed or operated by ESA have provided a wealth of data. In the coming years, the Sentinel missions, along with the Copernicus Contributing Missions as well as Earth Explorers and other, Third Party missions will provide routine monitoring of our environment at the global scale, thereby delivering an unprecedented amount of data.

For global land monitoring and mapping, the fleet of heritage and operational missions allows relevant information on the status and change of our land cover to be analysed, extracted, condensed and derived, heading towards the development of a sustainable operational system for land cover classification that meets the various users’ needs for land monitoring purposes.

ESA, as either owner or operator, has been handling a variety of heritage and operational missions such as Sentinel-2 and Sentinel-3 on behalf of the European Commission, the latter building on 10 years of MERIS heritage. Furthermore, Proba-V has been delivering data to a growing land user base for more than 3 years, following on 15 years of SPOT VGT heritage operated by CNES, with data dissemination via Belgium and VITO as the current archive manager. Through missions such as Landsat, Earthnet builds on a land monitoring heritage of more than 35 years.

While the availability of the growing volume of environmental data from space represents a unique opportunity for science and applications, it also poses a major challenge to achieve its full potential in terms of data exploitation.

In this context, ESA started the EO Exploitation Platforms (EPs) initiative in 2014, a set of R&D activities that in the first phase (up to 2017) aims to create an ecosystem of interconnected Thematic Exploitation Platforms (TEPs) on a European footing, addressing a variety of thematic areas.

The PROBA-V Mission Exploitation Platform (MEP) complements the PROBA-V user segment by offering an operational exploitation platform for PROBA-V data, correlative data and derived products. The MEP PROBA-V addresses a broad vegetation user community, with the final aim of easing and increasing the use of PROBA-V data by any user. The data offering consists of the complete archive from SPOT-VEGETATION and PROBA-V, as well as selected high-resolution data/products in the land domain.

Together with the European Commission, ESA is furthermore preparing the way for a new era in Earth Observation, with a concept that brings users to the data, under ‘EO Innovation Europe’, responding to the paradigm shift in the exploitation of Earth Observation data.

The Copernicus Data Information and Access Service (DIAS) will focus on appropriate tools, concepts and processes that allow combining the Copernicus data and information with other, non-Earth Observation data sources to derive novel applications and services. It is foreseen that the data distribution and access initiatives will support, enable and complement the overall user and market uptake strategy for Copernicus.

The presentation will report on the current status and planning of exploitation platforms in operation and planned with ESA involvement, as relevant for land monitoring and land cover classification.


2:10pm - 2:30pm

Sentinel-powered land cover monitoring done efficiently

Grega Milcinski

Sinergise, Slovenia

Sentinel-2 data have been distributed for more than a year now. However, they are still not as widely used as their usefulness warrants. The reason probably lies in the technical complexity of using S-2 data, especially if one wants to exploit the full potential of multi-temporal and multi-spectral imaging. The vast volume of data to download, store and process is technically too challenging.
We will present a Copernicus Award [1] winning service for archiving, processing and distribution of Sentinel data, Sentinel Hub [2]. It makes it easy for anyone to tap into the global Sentinel archive and exploit its rich multi-sensor data to observe changes in the land. We will demonstrate how one can not only observe imagery anywhere in the world but also create one's own statistical analysis in a matter of seconds, comparing different sensors across various time segments. The result can be immediately viewed in any GIS tool or exported as a raster file for post-processing. All of these actions can be performed on the full, worldwide, multi-temporal and multi-spectral S-2 archive. To demonstrate the technology, we created a simple web application called "Sentinel Playground" [3], which makes it possible to query Sentinel-2 data anywhere in the world.
Sentinel-2 data are only as useful as the applications built on top of them. We would like people not to bother too much with basic processing and storage of data but rather to focus on value-added services. This is why we are looking for partners who would bring their remote sensing expertise and create interesting new services.
[1] http://www.copernicus-masters.com/index.php?anzeige=press-2016-03.html
[2] http://www.sentinel-hub.com
[3] http://apps.sentinel-hub.com/sentinel-playground/


2:30pm - 2:50pm

Bringing High Resolution to the Globe – A system for automatic Land Cover Mapping built on Sentinel data streams to fulfill multi-user application requirements

Michael Riffler, Andreas Walli, Jürgen Weichselbaum, Christian Hoffmann

GeoVille Information Systems, Austria

Accurate, detailed and continuously updated information on land cover is fundamental to fulfil the information requirements of new environmental legislation, directives and reporting obligations, to address sustainable development and land resource management, and to support climate change impact and mitigation studies. Each of these applications has different data requirements in terms of spatial detail, thematic content, topicality, accuracy and frequency of updates. To date, such demands have largely been covered through bespoke services based on a variety of relevant EO satellite sensors and customized, semi-automated processing steps.

To address public and industry multi-user requirements, we present a system for the retrieval of high resolution global land cover monitoring information, designed along Space 4.0 standards. The highly innovative framework provides flexible options to automatically retrieve land cover based on multi-temporal data streams from Sentinel-1, Sentinel-2 as well as third party missions. Users can specify the desired land cover data for any place on the globe and any given time period since the operational start of Sentinel-2, and receive a quality-controlled output within hours or days (depending on product level).

The core of the operational mapping system is a modular chain consisting of sequential components for operational data access, pre-processing, time-series image analysis and classification, calibration and validation supported by pre-acquired in-situ data, and service components related to product ordering and delivery. Based on the user’s selection of a target area, date/period and the type of the requested product, the system modules are automatically configured into a processing chain tailored to sector-specific information needs.

The data access component retrieves all necessary data by connecting the processing system to Sentinel data archives (e.g. the Austrian EODC) as well as other online image and in-situ databases. After pre-processing, all satellite data streams are converged into data cubes hosting the time-series data in a scalable, tile-based system. Targeted land cover information is extracted in a class-specific manner in the thematic image analysis, classification and monitoring module, which represents the core of the processing engine. Key to the retrieval of thematic land cover data is an automated, iterative training sample extraction approach based on data from existing regional and global land cover products and in-situ databases. The system is self-learning and self-improving, thereby continuously building a global database of spatially and temporally consistent training samples for calibration and validation. Finally, the class-specific land cover maps are assembled into a coherent land cover database according to the user’s specifications.

The developed system is currently being tested in various R&D as well as operational customer projects. The aim is to consolidate the performance of the various modules through a multi-staged opening of the system portal, starting with selected industry customers along B2B service models.

We will demonstrate the service capacity for a number of use cases which are already applied in the current production of the High Resolution Layers within the Copernicus Land Monitoring Service, GlobalWetland-Africa, the Land Information System Austria (CadastrENV) and related mapping services for the international development sector.


2:50pm - 3:10pm

Land Cover data to support a Change Detection system for the Space and Security community

Sergio Albani, Michele Lazzarini, Paulo Nunes, Emanuele Angiuli

European Union Satellite Centre, Spain

One of the main interests in exploiting Earth Observation (EO) data and collateral information in the Space and Security domain is the detection of changes on the Earth's surface; to this aim, having accurate and precise information on Land Cover is essential. It is therefore crucial to improve the capability to access and analyse the growing amount of data produced at high velocity by a variety of EO and other sources, as the current scenario presents an invaluable opportunity for constant monitoring of Land Cover changes.

The increasing amount of heterogeneous data imposes different approaches and solutions to exploit such huge and complex datasets; new paradigms are changing the traditional approach in which data are downloaded to users’ machines, and technologies such as Big Data and Cloud Computing are emerging as important enablers of productivity and better services, where the “processes are brought to the data”.

The European Union Satellite Centre (SatCen) is currently outlining a system using Big Data and Cloud Computing solutions, built on the results of two Horizon 2020 projects: BigDataEurope (Integrating Big Data, software and communities for addressing Europe’s Societal Challenges) and EVER-EST (European Virtual Environment for Research – Earth Science Themes). Its main aims are: to simplify access to satellite data (e.g. the Sentinel missions); to increase the efficiency of processing using distributed computing (e.g. via open source toolboxes); to detect and visualise changes potentially related to Land Cover variations; and to integrate the final output with collateral information.

Through a web-based Graphical User Interface, the user can define an Area of Interest (AoI) and a specific time range for the analysis. The system is directly connected to relevant catalogues (e.g. the Sentinels Data Hub), and the data (e.g. Sentinel-1) can be accessed and selected for processing. Several SNAP operators (e.g. subset, calibration and terrain correction) have been chained, so the user can automatically trigger the pre-processing chain and the subsequent change detection algorithm (based on an in-house tool). The output is then mapped as clustered changes over the specific AoI.
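
A hedged sketch of what such an operator chain might look like with the SNAP Python bindings (snappy); this is not SatCen's actual implementation, the input file name and AoI polygon are placeholders, and operator parameters should be checked against the installed SNAP version:

```python
from snappy import ProductIO, GPF, jpy

HashMap = jpy.get_type('java.util.HashMap')

def run_operator(name, source, **params):
    """Wrap a single SNAP GPF operator call with keyword parameters."""
    parameters = HashMap()
    for key, value in params.items():
        parameters.put(key, value)
    return GPF.createProduct(name, parameters, source)

# Hypothetical Sentinel-1 GRD product and an arbitrary example AoI polygon (WKT).
product = ProductIO.readProduct('S1A_IW_GRDH_example.zip')
aoi = 'POLYGON((12.4 41.8, 12.6 41.8, 12.6 42.0, 12.4 42.0, 12.4 41.8))'

subset = run_operator('Subset', product, geoRegion=aoi, copyMetadata='true')
calibrated = run_operator('Calibration', subset)            # radiometric calibration
corrected = run_operator('Terrain-Correction', calibrated)  # range-Doppler terrain correction
ProductIO.writeProduct(corrected, 'preprocessed_s1', 'GeoTIFF')
```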

The integration of the detected changes with Land Cover information and collateral data (e.g. from social media and news) allows changes to be characterized and validated in order to provide decision-makers with clear and useful information.

 
3:10pm - 4:10pm  Conclusions by Chairs
Big Hall 

 