Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Date: Tuesday, 14/Mar/2017
8:15am - 9:00am Registration
Big Hall 
9:00am - 10:30am 1.1: Opening Session
Session Chair: Bianca Hoersch, ESA
Session Chair: Olivier Arino, ESA
Big Hall 
 
9:00am - 9:10am

Welcome Address

Nicolaus Hanowski

ESA



9:10am - 9:30am

Copernicus Programme

Michel Massart

DG-GROW, Belgium

Copernicus is the European system for monitoring the Earth. It consists of a complex set of instruments which collect data from multiple sources: Earth observation satellites and in situ sensors. It processes these data and provides users with reliable and up-to-date information through a set of operational services related to environmental and security issues. The Copernicus Land service supports a wide range of applications including environment protection, management of urban areas, regional and local planning, agriculture, forestry, and sustainable development. Under its global component, the service delivers biophysical variables including land cover and land use information at different resolutions. The main users of Copernicus services are policymakers and public authorities who need the information to develop and monitor environmental legislation and policies, including development policies. The Copernicus programme is coordinated and managed by the European Commission.


9:30am - 9:50am

Sentinel-1 Mission Status

Pierre Potin

European Space Agency, Italy

As part of the European Copernicus programme, the Sentinel-1 mission, based on a constellation of two SAR satellites, ensures continuity for Europe of C-band SAR observations. Sentinel-1A and Sentinel-1B were launched from Kourou on 3 April 2014 and 25 April 2016, respectively.

Full Operations Capacity (FOC) of the mission is expected to be achieved, indicatively by mid-2017, following completion of the operational qualification of the Sentinel-1 constellation (Sentinel-1A and Sentinel-1B).

The presentation will give an overview of the overall mission status at the time of the workshop, focusing on the routine operations of the constellation. Topics to be presented include mission achievements, the constellation's observation scenario, ground segment operations performance, throughput, and data access.


9:50am - 10:10am

Sentinel-2 Mission Status

Bianca Hoersch

European Space Agency, Italy

Copernicus is a joint programme of the European Commission (EC) and the European Space Agency (ESA), designed to establish a European capacity for the provision and use of operational monitoring information for environment and security applications.

Within the Copernicus programme, ESA is responsible for the development of the Space Component, a fully operational space-based capability to supply Earth observation data to sustain environmental information services in Europe.

The Sentinel missions are Copernicus-dedicated Earth Observation missions that form the essential elements of the Space Component. In the global Copernicus framework, they are complemented by other satellites made available by third parties or by ESA, coordinated through the Copernicus Data Access system for use by the Copernicus Services.

The Copernicus Sentinel-2 mission provides continuity to services relying on multi-spectral high-resolution optical observations over global terrestrial surfaces [1]. Sentinel-2 capitalizes on the technology and the vast experience acquired in Europe and the US to sustain the operational supply of data for services such as forest monitoring, land-cover change detection and natural disaster management.

The Sentinel-2 mission offers an unprecedented combination of the following capabilities:

○ Systematic global coverage of land surfaces from 56°S to 84°N, coastal waters, and the Mediterranean Sea;

○ High revisit: every 5 days at the equator under the same viewing conditions, with two satellites;

○ High spatial resolution: 10m, 20m and 60m;

○ Multi-spectral information with 13 bands in the visible, near-infrared and short-wave-infrared parts of the spectrum;

○ Wide field of view: 290 km.

Data from the Sentinel-2 mission have been openly and freely available to all users, with easy online access, since December 2015. The presentation will give a status report on the Sentinel-2 mission, an outlook on the remaining ramp-up phase and the completion of the constellation, and a view of ongoing evolutions.


10:10am - 10:30am

Sentinel-3 Mission Status and Performance

Susanne Mecklenburg

European Space Agency, Italy

The first satellite of the Sentinel-3 constellation, Sentinel-3A, was launched in February 2016, with the launch of Sentinel-3B expected at the end of 2017. The main objectives of the Sentinel-3 constellation, building on the heritage of ESA’s ERS and ENVISAT missions, are to measure sea-surface topography, sea- and land-surface temperature and ocean- and land-surface colour in support of ocean forecasting systems, and for environmental and climate monitoring. The series of Sentinel-3 satellites will ensure global, frequent and near-real-time ocean, ice and land monitoring, with the provision of observation data in a routine, long-term (up to 20 years of operations) and continuous fashion, with a consistent quality and a high level of reliability and availability.
Sentinel-3A passed its commissioning phase in July 2016 and is now in the so-called ramp-up phase, leading to full operational capacity in spring 2017. The Sentinel-3 mission is jointly operated by ESA and EUMETSAT: ESA is responsible for data acquisition, long-term performance monitoring of the satellite and payload, and the operations, maintenance and evolution of the Sentinel-3 ground segment for land products; EUMETSAT is responsible for the marine products and for satellite monitoring and control.
The presentation will give an overview of the overall mission status at the time of the workshop and provide in particular an overview on the status of the Sentinel-3 core data products and their provision in the mission’s ramp-up phase.

 
10:30am - 11:00am Coffee Break
Big Hall 
11:00am - 12:20pm Opening Session (cont'd)
Big Hall 
 
11:00am - 11:20am

Meeting Evolving Needs for Large-area Land Cover and Land Change Information: The USGS Perspective

Thomas R. Loveland

U.S.Geological Survey, United States of America

The US Geological Survey has a long land cover history, starting with the landmark 1976 publication A Land Use and Land Cover Classification System for Use with Remote Sensor Data, and including global land cover mapping and the ongoing production of the National Land Cover Database. While these past projects have had a significant impact, land cover data needs are changing, driven by demand for increasingly innovative and timely products to meet the community’s appetite for science-quality geospatial land cover and land change data. New strategies are needed that generate higher-quality results, including additional land cover variables, more detailed legends, and more frequent land cover and land change geospatial and statistical information.

The USGS response to the growing requirements for land use, cover, and condition data, information, and knowledge is the Land Change Monitoring, Assessment, and Projection (LCMAP) initiative. LCMAP is an end-to-end capability that uses the rich Landsat record to continuously track and characterize changes in land cover, use, and condition and translate such information into assessments of current and historical processes of cover and change. LCMAP aims to generate science-quality land cover and land change products from current and near-real-time Landsat data. All available Landsat data for any given location are used to characterize land cover and change at any point across the full Landsat record and to detect and characterize land cover and land change as it occurs. The basis for this is the Continuous Change Detection and Classification (CCDC) algorithm developed by Zhu and Woodcock (2014). The initial LCMAP land cover and land change products are annual land cover maps, maps of key change attributes (e.g., date, type, and magnitude of change), corresponding accuracy assessments, and land cover and land change area estimates. This presentation focuses on the overall LCMAP strategy and reviews early results that demonstrate how improved land cover and land change information leads to a better understanding of the rates, causes, and consequences of land change.

Zhu, Z. and Woodcock, C.E., 2014. Continuous change detection and classification of land cover using all available Landsat data. Remote Sensing of Environment, 144: 152-171.
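The CCDC approach described above fits a model to each pixel's full Landsat time series and flags change when observations depart persistently from the model. A minimal, illustrative sketch of that idea is below; it is not the operational LCMAP/CCDC implementation, and the function names, the single-harmonic model, and all thresholds are assumptions:

```python
# Sketch of the core CCDC idea (after Zhu & Woodcock, 2014): fit a seasonal
# model to a per-pixel time series and flag a break when several consecutive
# observations deviate strongly from the model prediction.
import numpy as np

def harmonic_design(t, period=365.25):
    """Design matrix: intercept plus one annual harmonic (cos, sin)."""
    w = 2 * np.pi / period
    return np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])

def detect_break(t, y, train_n=24, consec=6, z_thresh=3.0):
    """Fit on the first `train_n` clear observations, then return the date of
    the first run of `consec` observations whose residuals all exceed
    z_thresh * RMSE; return None if no break is found."""
    X = harmonic_design(t)
    coefs, *_ = np.linalg.lstsq(X[:train_n], y[:train_n], rcond=None)
    rmse = np.sqrt(np.mean((X[:train_n] @ coefs - y[:train_n]) ** 2))
    run = 0
    for i in range(train_n, len(t)):
        resid = abs(y[i] - X[i] @ coefs)
        run = run + 1 if resid > z_thresh * rmse else 0
        if run >= consec:
            return t[i - consec + 1]  # date the anomalous run started
    return None
```

On a synthetic 16-day reflectance series with a step change, `detect_break` returns a date near the step; the operational algorithm additionally models trends, uses multiple spectral bands, and refits segments after each detected break.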


11:20am - 11:40am

Multi-Source Land Imaging for Developing Continental and Global Land-Cover/Use Products in the NASA LCLUC Program

Garik Gutman

NASA, United States of America

The NASA Land-Cover/Land-Use Change (LCLUC) program is developing interdisciplinary research combining aspects of the physical, social and economic sciences, with a high level of societal relevance, using remote sensing tools, methods and data. One of its stated goals is to develop the capability for periodic satellite-based inventories of land cover and for monitoring and characterizing land-cover and land-use change. Synergistic use of data from different satellites is an efficient way to get the most out of current remote sensing capabilities for studying land changes. This presentation will focus on recent achievements in multi-source land imaging projects under the LCLUC program, in which optical mid-resolution data from the Landsat system and Sentinel-2, as well as radar data from Sentinel-1, are combined to develop higher-level medium-spatial-resolution (10-60m) satellite products for analyzing changes in land cover at global and continental scales. The presentation will include illustrations of recent, significant results from the LCLUC program for various sectors, including forestry, agriculture and urban areas.


11:40am - 12:00pm

Land Cover – An Essential Element for Multilateral Environmental Agreements

Barbara J. Ryan, Gary N. Geller, Andre Obregon

GEO Secretariat, Geneva, Switzerland

The Group on Earth Observations (GEO) is the organization building the Global Earth Observation System of Systems (GEOSS). GEO comprises over 100 member countries and over 100 Participating Organizations that coordinate their efforts to improve the use of Earth observations for decision making. GEO has eight Societal Benefit Areas (SBAs) and more than 70 activities that fill observational gaps, increase data sharing, and enhance discovery, access and application of the data contained in the GEOSS Common Infrastructure (GCI).

Land cover is one of GEO’s top-priority areas due to its importance to many SBAs. It is critically important for ecosystems, biodiversity, water and disasters, and plays a major role in climate change processes. Land cover and land use information is required by a variety of the Multilateral Environmental Agreements (MEAs) such as the Sustainable Development Goals (SDGs), the Convention on Biological Diversity (UNCBD), the Convention to Combat Desertification (UNCCD), the Ramsar Convention on Wetlands, and several others. GEO is actively involved with the SDGs, focused on providing the Earth observations and derived products that can support achieving them and generating the indicators that mark progress. And at the national level, governments need land cover and land use information, particularly change information, to meet their many internal needs.

Contributors to the GEO Land Cover (LC) and Land Cover Change (LCC) Task work to improve the availability and quality of LC and LCC data by helping to convene and coordinate various sectors of the LC community, including data providers and consumers. Stakeholders include environmental agencies, science communities, national mapping agencies, commercial users, and MEAs. Activities include work to improve access to existing LC and LCC information, development of shared tools to facilitate validation of LC datasets, and working towards sustainable systems for LC product generation.

Because LC products are needed for so many users and applications, meeting these varied needs is a big challenge. Most current LC product generation approaches are labour intensive and unable to meet the needs of all users. Consequently, GEO is facilitating discussions within the LC community on how to move towards automated, sustainable, operational systems that can support a wider range of user needs.


12:00pm - 12:20pm

Challenges and Opportunities for Monitoring Land and its Cover Change through the use of Geospatial Information

John Latham

UN/FAO

Land cover is one of the most easily detected indicators of human interventions on the land. Information on land cover is therefore critical for the implementation of environmental, food security and humanitarian programmes of UN, international and national institutions.

The Food and Agriculture Organization (FAO) has long experience in developing land cover datasets and in using space-technology-based data for in situ monitoring, data collection, agriculture and environment monitoring, and the development of sustainable agriculture policy in member countries. Over the last three decades FAO has devoted considerable attention to developing techniques for land cover and land cover change mapping, using enhanced methodologies and tools underpinned by standards (LCML, LCCS).

FAO adopts two main approaches for producing estimates of land cover and land cover change: i) a wall-to-wall approach, an analysis covering the full spatial extent of the study area, derived from the FAO GLCN programme; and ii) a sample-based approach derived from the methodology developed under FAO’s EcoNet programme. Both methods take advantage of geospatial technology and the FAO Land Cover Classification System/LCML language (an ISO standard) for describing land cover features using a set of independent diagnostic criteria that allow correlation with existing classifications and legends. LCML provides a general framework of rules from which more exclusive conditions can be derived to create specific legends. LCCS describes the “real world” according to the specific needs of end users, combining LCML elements, by means of dedicated software, to form their own categories (classes).

The key application of the FAO land cover mapping approach is the exploitation of time-series geospatial information for assessing land use and land cover change over time and for generating specific information, in particular on changes in agriculture, forests, and rural and urban communities. Assessing land change, with a focus on crop changes over long periods, can significantly support analysis of the types, rates, and causes of change and their effects on the environment and climate.

The focus of this paper is to give an overview of FAO’s commitment to developing state-of-the-art techniques and methods, integrated with geospatial technology, for monitoring and analysing landscape evolution, as well as for assessing the impact of land use and land cover change in response to evolving economic, social, and biophysical conditions. Geospatial technology enables the integrated assessment of biophysical and socioeconomic variables for a coherent, reliable, science-based approach to a number of related parameters that are key to food security and sustainable agricultural landscapes.

The latest generation of free or low-cost remote sensing data products (including thermal and radar imagery) has now achieved a level of spatial, temporal and spectral resolution that can be directly applied to tracking the location and performance of complex, fragmented, low-input smallholder farming systems and landscapes in various environments.

 
12:20pm - 1:00pm 1.2: Responding to User Needs
Session Chair: Barbara Ryan, GEO
Session Chair: Martin Herold, Wageningen University & Research - WUR
Big Hall 
 
12:20pm - 12:40pm

Uncertainty in Land Cover observations and its impact on climate simulations

Goran Georgievski, Stefan Hagemann

Max Planck Institute for Meteorology, Germany

Land Cover (LC) and its bio-geo-physical feedbacks are important for the understanding of climate and its vulnerability to changes on the surface of the Earth. Recently, ESA published a new LC map derived by combining remotely sensed surface reflectance with ground-truth observations. For each grid box at 300m resolution, an estimate of confidence is provided. This LC dataset can be used in climate modelling to derive land surface boundary parameters for the respective Land Surface Model (LSM). However, the ESA LC classes are not directly suitable for LSMs and therefore need to be converted into model-specific surface representations. Owing to differences in design and in the processes implemented, climate models may differ in their treatment of artificial surfaces, water bodies, ice, and bare or vegetated surfaces. Vegetation distribution in models is usually represented by means of plant functional types (PFTs), a classification system used to simplify vegetation representation and group vegetation types according to their biophysical characteristics. The conversion of LC into PFTs is also called the “cross-walking” (CW) procedure. The CW procedure is a further source of uncertainty, since it depends on the model design and on the processes implemented and resolved by LSMs. These two sources of uncertainty, (i) from the conversion of surface reflectance into LC classes and (ii) from the CW procedure, were studied by Hartley et al. (2016) to investigate their impact on LSM state variables (albedo, evapotranspiration (ET) and primary productivity) using three standalone LSMs. The present study is a follow-up to that work and aims at quantifying the impact of these two uncertainties on climate simulations performed with the Max Planck Institute for Meteorology Earth System Model (MPI-ESM) using prescribed sea surface temperature and sea ice.
The main focus is on the terrestrial water cycle, but the impacts on surface albedo, wind patterns, 2m temperature, and plant productivity are also examined.

The analysis of vegetation-covered area indicates that the range of uncertainty might be of about the same order of magnitude as the estimated historical anthropogenic LC change. For example, the area covered with managed grasses (crops and pasture in the MPI-ESM PFT classification) varies from 17 to 26 million km², and the area covered with trees ranges from 15 million km² up to 51 million km². These uncertainties in vegetation distribution lead to noticeable variations in atmospheric temperature, humidity, cloud cover, circulation, and precipitation, as well as in local, regional and global climate forcing. For example, the amount of terrestrial ET ranges from 73 to 77 × 10³ km³ yr⁻¹ in MPI-ESM simulations, and this range is of about the same order of magnitude as the current estimate of the reduction in annual ET due to recent anthropogenic LC change. These and further impacts of LC uncertainty on the near-surface climate will be presented and discussed in the context of LC change.

Hartley, A.J., MacBean, N., Georgievski, G., Bontemps, S.: Uncertainty in plant functional type distributions and its impact on land surface models (in review with Remote Sensing of Environment Special Issue)


12:40pm - 1:00pm

Uncertainty in satellite-derived land cover information and its impact on land surface models

Andrew Hartley1, Natasha MacBean2, Goran Georgievski3, Sophie Bontemps4

1Met Office Hadley Centre, United Kingdom; 2Laboratoire des Sciences du Climat et l'Environnement, Institut Pierre Simon Laplace, France; 3Max Planck Institut für Meteorologie, Germany; 4Université catholique de Louvain, Belgium

The spatial distribution and fractional cover of plant functional types (PFTs) is a key uncertainty in land surface models (LSMs), closely linked to uncertainties in global carbon, hydrology and energy budgets. In this study, we assess the impact of the largest plausible range of PFT uncertainty, derived from land cover maps produced by the European Space Agency (ESA) Land Cover Climate Change Initiative (LC_CCI), on simulations of land surface fluxes using three leading LSMs. The PFT maps used in LSMs can be derived from a land cover (LC) class map and a cross-walking (CW) table that allocates the fraction of each PFT occurring within each LC class. We evaluate the impact of uncertainty due to both the LC classification algorithm and the CW procedure.
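The cross-walking step described above can be sketched as a table lookup that turns a map of LC class codes into per-cell PFT fractions. In the toy example below, the class codes and fractions are invented for illustration and do not reproduce the LC_CCI tables or any model's actual CW table:

```python
# Illustrative cross-walking (CW) sketch: LC class map -> PFT fraction map.
import numpy as np

# Hypothetical CW table: LC class code -> fractions per PFT.
# PFT order assumed here: [tree, grass, bare].
CW_TABLE = {
    50:  [0.9, 0.1, 0.0],  # e.g. a forest class
    130: [0.1, 0.8, 0.1],  # e.g. a grassland class
    200: [0.0, 0.1, 0.9],  # e.g. a bare-areas class
}

def lc_to_pft(lc_map):
    """Return an array of shape (*lc_map.shape, n_pft) of PFT fractions."""
    n_pft = len(next(iter(CW_TABLE.values())))
    out = np.zeros(lc_map.shape + (n_pft,))
    for code, fracs in CW_TABLE.items():
        out[lc_map == code] = fracs  # broadcast fractions to matching cells
    return out

lc = np.array([[50, 130], [200, 50]])
pft = lc_to_pft(lc)  # shape (2, 2, 3); fractions per mapped cell sum to 1
```

The CW uncertainty the abstract refers to enters through the fractions in the table: two models using the same LC map but different tables produce different PFT distributions.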

We examined the impact of this PFT uncertainty on 3 key variables in the carbon, water and energy cycles (gross primary production (GPP), evapo-transpiration (ET), and albedo), for 3 LSMs (JSBACH, JULES and ORCHIDEE) at global and regional scales. Results showed a greater uncertainty in PFT fraction due to CW as opposed to LC uncertainty, for all three variables. CW uncertainty in tree fraction was found to be particularly important in the northern boreal forests for simulated LSM albedo. Uncertainty in the balance between grass and bare soil fraction in arid parts of Africa, central Asia, and central Australia was also found to influence albedo and ET in all models.

These results show that inter-model uncertainty for key variables in LSMs can be reduced by more accurate representation of PFT distributions. Future efforts in land cover mapping should therefore be focused on reducing CW uncertainty through better understanding of the fractional cover of PFTs within a land cover class. Efforts to reduce LC uncertainty should particularly be focused on more accurate mapping of grass and bare soil fractions in arid areas. We suggest that both issues can be significantly improved through the integration of very high spatial resolution satellite observations with more frequent and thematically detailed medium resolution observations.

 
2:10pm - 3:30pm Responding to User Needs (cont'd)
Big Hall 
 
2:10pm - 2:30pm

Monitoring and Assessing Land Use: Progress for Land System Science through Climate Change Research and SDGs

Martin Herold1, Patrick Hostert2, Sebastian van der Linden2, Jan Verbesselt1

1Wageningen University & Research, The Netherlands; 2Humboldt-Universität zu Berlin, Germany

Given the new opportunities now at hand, we should rethink the common Earth Observation paradigm that land use and land use change (as opposed to land cover) cannot be observed from remote sensing. The observation density of the Sentinel constellation, plus the long-term legacy of the Landsat system, are major cornerstones of this development. Open data policies push the use of data with a spatial grain of 10m to 30m and observation densities of a few days, which allow land changes to be assessed with a focus on how the observed land is used.

The climate change context has been one of the driving forces in global, regional and national monitoring of land cover and land changes. Relatively clear and documented user requirements (i.e. ECVs, REDD+, climate modelling) have stimulated dedicated observation programs. We here use the experiences and progress in this arena to highlight that observations are becoming available that lead to much more data-driven evidence/analysis of land changes and dynamics. At the same time, a dialog on opportunities and limitations of Earth Observations with the land change and system science community is emerging that will lead to a novel focus on monitoring land use with highly automated procedures. We will present several case studies on how that can become possible.

The recently signed Paris Climate agreement will lead to more investments in assessing land use effects on climate and to mitigation activities related to the land use sector. However, it seems essential and possible to move beyond purely climate change focused progress when monitoring land change with novel Earth Observation data. An important point of departure was created with the Sustainable Development Goals (SDGs) and related indicators and monitoring needs. Next to SDG 13 on “Climate Action”, specific opportunities may therefore arise in support of SDG 2 “Zero hunger” (e.g. on sustainable agriculture) and of SDG 15 “Life on Land” with a focus on the sustainable use of terrestrial ecosystems.


2:30pm - 2:50pm

Land Cover Information Requirements for Supporting Countries to set Land Degradation Neutrality Targets and Ensure Further Monitoring

Sara Minelli1, Sven Walter2, Alain Retière3

1UNCCD Secretariat, Bonn, Germany; 2UNCCD Global Mechanism, Rome, Italy; 3UNCCD Global Mechanism, France

Ongoing climate change, biodiversity collapse and food insecurity are largely caused by the accelerating global land degradation process resulting from the destruction of natural ecosystems and from inappropriate land management and agricultural practices. The need for urgent and bold action to avoid, reduce and reverse land degradation, thereby achieving land degradation neutrality (LDN), became a firmly established global political target with the adoption of the 2030 Agenda for Sustainable Development and Sustainable Development Goal target 15.3. In close conjunction with the other Rio Conventions, the United Nations Convention to Combat Desertification (UNCCD) provides the international framework for LDN monitoring and implementation.

Building on lessons learned through a pilot project conducted in 14 voluntary countries and on recent efforts by the UNCCD Science-Policy Interface to design a scientific conceptual framework for LDN, the UNCCD is currently supporting over 100 countries in operationalizing the LDN concept. The approach consists of setting voluntary national LDN baselines as well as targets and associated measures, removing legal and economic barriers to sustainable land management, reinserting restored land into sustainable production, and monitoring progress based on a harmonized set of three measurable indicators.

The three indicators used as proxies of the ecosystem services that LDN is intended to deliver are 1) land cover, 2) land productivity and 3) soil organic carbon. Earth observations from space have proven their reliability for tracking land cover change and biomass activity over long periods. As many countries face difficulties in accessing this type of information, the UNCCD has established partnerships with the European Space Agency (ESA), the Joint Research Centre of the European Commission (JRC) and ISRIC - World Soil Information to provide all interested countries with national estimates derived from global datasets as default information for their national LDN target-setting processes.

Our presentation will provide an overview of the indicator framework and the associated data requirements at various scales. It will focus on the UNCCD experience in using the ESA Climate Change Initiative Global Land Cover product to provide default data on land cover and on land cover changes that occurred between 2000 and 2010, aggregated into 6 main categories (forest, shrub and grassland, cropland, wetland, artificial areas and bare land) and analyzed in conjunction with: i) NDVI-based land productivity dynamics data made available by the JRC; and ii) soil organic carbon estimates extracted from the ISRIC SoilGrids250m. The presentation will highlight the limitations encountered in different ecological contexts (climate, topography, hydrology, geology) with different categories of land cover, such as tree-based cropping systems, complex mosaics of small plots of crops and natural vegetation, and wetlands, and in identifying the expansion of urban areas over croplands; these should be better addressed in the next generation of higher-resolution Global Land Cover databases. It will also highlight the need to include a 1990 epoch in the new release.
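The aggregation described above can be sketched as a many-to-one mapping from CCI land cover class codes to the six UNCCD reporting categories. The code groupings below are an illustrative subset chosen for the sketch, not the official UNCCD conversion table:

```python
# Sketch: map CCI land-cover class codes onto the six UNCCD categories
# named in the abstract. Groupings are illustrative, not authoritative.
CCI_TO_UNCCD = {
    "forest":          {50, 60, 70, 80, 90, 100},
    "shrub_grassland": {110, 120, 130, 140},
    "cropland":        {10, 20, 30, 40},
    "wetland":         {160, 170, 180},
    "artificial":      {190},
    "bare_land":       {150, 200, 201, 202},
}

def to_unccd(cci_code):
    """Return the UNCCD category for one CCI class code, or None if unmapped."""
    for category, codes in CCI_TO_UNCCD.items():
        if cci_code in codes:
            return category
    return None
```

Cross-tabulating two epochs of such aggregated maps (e.g. 2000 vs 2010) then yields the category-to-category change matrix used in LDN baseline reporting.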


2:50pm - 3:10pm

Biodiversity and ecosystem service community user needs for global land cover and land use mapping

Brian O'Connor1, Neil Burgess1, Rachael Petersen2, Andrew Skidmore3

1UNEP-WCMC, 219 Huntingdon Road, Cambridge, CB3 0DL, United Kingdom; 2World Resources Institute, 10 G Street NE, Washington, DC 20002, United States of America; 3ITC, University Twente, Enschede, 7500 AA, Netherlands

Existing multi-purpose global land cover maps are of limited use in applications relating to biodiversity conservation. The reasons are twofold:

  • Most current global land cover products are not operational or are infrequently updated, preventing their use for monitoring status and trends in biodiversity.
  • Land cover maps that are frequently updated lack the detail in thematic classes required for conservation purposes.

Despite these shortcomings, multiple international environmental conventions cite land cover change as a key source of information to inform policy. The potential use cases for an annually-updated, synoptic view of land cover change are vast; however, the scientific community has yet to produce operational land cover monitoring systems that meet the thematic, spatial and temporal requirements of policy makers for tracking progress on global biodiversity commitments. For example, land use change is one of the four main causes of species extinction, linked to habitat loss, fragmentation and degradation and is a key pressure on biodiversity that needs to be tracked systematically. Yet land uses such as fertilizing a particular land cover (crop) can be very challenging to discern from the multispectral signal.

For scientific end users land cover can be both a discrete layer to monitor habitat cover and habitat change, as well as an input layer for species distribution models, the derivation of higher-level indicators of land use (in combination with contextual information) as well as for up-scaling Essential Biodiversity Variables (EBVs) on ecosystem structure, such as height, and function, such as the functional diversity of terrestrial plant communities.

Meanwhile, great advances have been made in the field of forest cover monitoring. The advent of freely available, multi-decadal and high-resolution satellite imagery has been a game-changer for monitoring global forest land cover change. High-performance cloud computing has made on-the-fly analysis of huge volumes of pixels feasible. The global community can now evaluate the extent of habitat loss for rare and endangered biodiversity, from tropical to boreal forests. For example, over one million visitors (1,500 daily) have logged on to the interactive online Global Forest Watch platform to view and analyse forest cover change datasets.

Yet many non-forested habitats harbour biodiversity of key importance for global conservation and are being lost at alarming rates that often outstrip the rate of forest loss, e.g. temperate grasslands of Eurasia, montane grasslands of Africa, and global wetlands. The progress of satellite remote sensing for mapping these land covers and their changes periodically and systematically lags behind that for forest cover. The 2000-2005-2010 ESA Climate Change Initiative (CCI) 300m land cover datasets show the potential for wall-to-wall land cover mapping using a standardised classification system.

This presentation will further explore these issues and present examples of international biodiversity and ecosystem service policy targets which urgently require timely and robust land cover data in order to track progress towards their achievement. These requirements will be framed in the context of the candidate set of EBVs proposed by the Group on Earth Observation Biodiversity Observation Network (GEO BON).


3:10pm - 3:30pm

Principles and Criteria for Creating, Disseminating, and Maintaining Operational Land Cover Monitoring Systems: Lessons Learned from More than 1 Million Users of Global Forest Watch

Mikaela J Weisse, Rachael Petersen, Fred Stolle

World Resources Institute

Improvements in the availability of earth observation satellite data, together with increasing computation power and decreasing storage costs, have made consistent, repeatable, global-scale forest change monitoring at medium resolutions a reality. The results of these monitoring efforts are delivered to the general public through the free, online Global Forest Watch platform. Since the launch of the Global Forest Watch platform in 2014, over 1 million unique users have accessed it from every country in the world, including users from national government agencies, commodity buying companies, journalists, researchers, and local communities. Forest change information from remotely sensed data is no longer simply a tool used by scientists, but increasingly an independent input into day-to-day forest management decisions by non-scientists.

The experience of Global Forest Watch in providing remotely sensed data to the non-science community in usable, interactive formats has yielded many insights into how to build, disseminate, and maintain operational land cover monitoring systems. This presentation will reflect upon lessons learned from our more than one million users to inform principles and criteria for future operational land cover monitoring.

This presentation will:

  1. Illuminate challenges in the use of remotely sensed data by non-scientists (e.g. communicating accuracy, data limitations, and appropriate applications of data to a variety of end-users)
  2. Suggest key principles and criteria for data generation, including accuracy, timeliness, transparency, repeatability, and sustainability
  3. Identify current data gaps related to land cover monitoring that are most relevant to our audiences

We will provide evidence for these discussions by reflecting on the myriad ways our audiences are using GFW data and tools. Successful examples include law enforcement officials in Peru and Uganda using near-real-time alerts to find and respond to illegal activities, palm oil traders in Indonesia identifying risk in their supply chains using GFW tools, and journalists raising awareness about ecologically important forests under threat with forest change data and satellite imagery.

The success of operational forest monitoring in adding value to decision-making has given rise to increasing demand for new and improved remotely sensed products. For example, law enforcement agencies would benefit from higher spatial and temporal resolution in near-real-time change detection. International policy-makers are interested in better information on forest recovery and regrowth, and in differentiation of primary, secondary, degraded, and planted forests. We have also seen increasing interest in global, annual data on land cover and land cover change beyond forests, to improve land-use planning by governments, civil society, and the private sector, as well as to provide an independent benchmark for progress on ambitious forest- and land-related commitments, like the New York Declaration, the Sustainable Development Goals and the Aichi Targets. In addition, users from different disciplines all require data to be accurate, easy to understand, and reliably updated into the future. This presentation will emphasize the essential task of considering end-user requirements in order to build, maintain, and disseminate land cover monitoring systems for maximum impact on decision-making at local-to-global scales.

 
3:30pm - 4:00pmCoffee Break
Big Hall 
4:00pm - 5:40pmResponding to User Needs (cont'd)
Big Hall 
 
4:00pm - 4:20pm

Using Earth Observation and Other Geospatial Data to Improve OECD's Environmental and Green Growth Indicators and Its Policy Guidance

Ivan Hascic, Alexander Mackie, Miguel Cardenas Rodriguez

Organisation for Economic Co-operation and Development (OECD), France

One of the OECD’s core functions is the production of internationally harmonised data, statistics and indicators. Earth observation and other geospatial data offer opportunities to develop new or improved indicators, particularly in the domain of the environment and green growth. Earth observation data are often a unique source of relevant information that is commensurable across countries and at multiple spatial scales, and thus provide opportunities to help fill the many information gaps that OECD countries face at the national and sub-national levels (especially when it comes to monitoring natural resources and environmental sinks) and in assessing environmental risks (to humans, built property and economic activity). Importantly, EO data can be combined with socio-demographic and economic data, thereby improving the policy relevance of the indicators. The OECD seeks to build on the growing body of EO and other geospatial data and to develop internationally harmonised indicator methodologies in response to the growing demand for a more granular and more policy-relevant information base and better targeted policy advice.

Applications of Earth observation and other geospatial data have been gaining momentum in the OECD's work on the environment and green growth. Examples drawing on ongoing work using geospatial data will be presented, such as air pollution exposure, the related economic costs, and distributional aspects. A specific current priority relates to land cover monitoring, aiming to quantify, in a comparable way across all member countries, the rate of conversion from natural and semi-natural land cover types to more anthropogenic land cover associations, as a proxy indicator of pressures on biodiversity and ecosystems. The associated user requirements for the underlying data will be discussed. Other data needs to support future developments include those arising from demands to better measure the environmental dimension of quality of life, resilience to environmental risks, and the availability of natural assets.


4:20pm - 4:40pm

National Forest and Land Use Monitoring in African Countries: Cameroon and Malawi

Thomas Haeusler, Sharon Gomez, Fabian Enssle

GAF AG, Germany

A formal requirement from the United Nations Framework Convention on Climate Change (UNFCCC) for developing countries implementing the Reducing Emissions from Deforestation and Degradation (REDD+) policy process is a national forest monitoring system (NFMS). African countries committed to this process require national forest and land use class definitions as well as mapping/monitoring systems adjusted to their national circumstances. This paper examines the history of developing these components into operational Earth Observation (EO) based monitoring systems over the past 8 years with the support of European Space Agency (ESA) projects in the Congo Basin and southern Africa; the user requirements from specific countries will be presented for Cameroon and Malawi. The challenges of translating policy and user requirements into technical specifications for EO products, and of addressing the variability in forest types and land use change assessment, will be noted. The NFMS in these countries started with the baseline year 2010 and aims to provide continuous monitoring of forest cover and forest cover changes from 2015 onwards using multi-sensor and multi-temporal satellite data (Sentinel-1/2, Landsat 8), validated with VHR optical data. The advent of the Sentinel-2 data series dramatically enhanced the use of dense multi-temporal time series of satellite imagery to resolve problems caused by phenological changes of forest canopies between seasons. Furthermore, such data are also needed to monitor forest degradation, which is better detected by assessing forest canopy disturbances with high-frequency time series. The improved data availability also addresses the problem of cloud cover in the tropics.
Based on Sentinel-2 data and the integration of Landsat 8 imagery, the automatic processing chain for the NFMS comprises geometric, radiometric and topographic pre-processing steps and an iterative classification procedure that includes a rule-based correction system, yielding thematic accuracies above 85%. It was noted that, especially for the dry forest biomes, these high accuracy levels could not be achieved using global datasets. Due to the data volume generated by the application of Sentinel data for near-real-time forest monitoring, cloud processing is a necessity in the operational systems; this further enables the user community to be directly involved in different aspects of the processing chain. The paper will emphasise the value and merit of user-driven approaches to developing national NFMS and land use monitoring systems in terms of in-country capacity and ownership of the processes.
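Thematic accuracies such as the 85% figure quoted above are conventionally derived from a confusion matrix that cross-tabulates map labels against reference labels at validation sites. A minimal sketch of that computation, using hypothetical labels rather than the project's actual validation data:

```python
import numpy as np

# Hypothetical map and reference labels at 10 validation sites
# (classes: 0 = forest, 1 = non-forest, 2 = water)
map_labels = np.array([0, 0, 1, 1, 2, 0, 1, 2, 0, 0])
ref_labels = np.array([0, 0, 1, 0, 2, 0, 1, 2, 1, 0])

n_classes = 3
# Confusion matrix: rows = map class, columns = reference class
cm = np.zeros((n_classes, n_classes), dtype=int)
for m, r in zip(map_labels, ref_labels):
    cm[m, r] += 1

overall_accuracy = np.trace(cm) / cm.sum()          # fraction correctly labelled
users_accuracy = np.diag(cm) / cm.sum(axis=1)       # per-class commission view
producers_accuracy = np.diag(cm) / cm.sum(axis=0)   # per-class omission view

print(overall_accuracy)  # 0.8 for this toy sample
```

In an operational setting the per-class user's and producer's accuracies matter as much as the overall figure, since a high overall accuracy can hide weak performance in rare classes such as degraded forest.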


4:40pm - 5:00pm

New land cover data requirements for environmental accounting in Australia and globally

Albert Van Dijk, Michael Vardon, David Summers

Australian National University, Australia

The need for and value of environmental accounting are well recognised internationally. However, the development of environmental accounts has exposed some important gaps in the available spatial land cover information. The same issues often also limit the usefulness of land cover classification data in other land and water management applications. This presentation highlights some of the key innovation needs for the future. Specifically, the requirements for land, water, carbon and environmental condition accounting will be discussed using Australian and global examples. Formal land accounting is currently complicated by the well-known confusion between land cover, land use and land tenure in available spatial data products. Water accounting specifically requires dynamic information on water bodies and irrigated crops, as well as delineation of other hydrological landscape elements such as floodplains, irrigable land and impermeable surfaces. Carbon accounting will require new land cover classification approaches that relate more closely to carbon stocks than to ecological communities. Finally, environmental condition accounting is a conceptually complex challenge, but in essence it requires new approaches to distinguish man-made from pre-existing land cover types, with an indication of the degree of disturbance. In all cases, accounting demands that mapping occur routinely on a regular (typically annual) basis using a consistent and transparent methodology. New technologies are rapidly changing the way in which land cover information is derived: data processing facilities such as Google Earth Engine empower a large community to develop bespoke land cover products, whereas new sensors (e.g. Sentinel-2) and sensor combinations relax the traditional trade-off between spatial and temporal resolution, supporting new classification approaches that simultaneously consider the spatial, temporal and spectral dimensions.


5:00pm - 5:20pm

Global Mapping of Forest Carbon Stocks using Spaceborne Radar

Oliver Cartus1, Maurizio Santoro1, Stephane Mermoz2, Alexandre Bouvet2, Thuy Le Toan2, Adam Erickson3, Nuno Carvalhais3, Valerio Avitabile4, Martin Herold4, Christiane Schmullius5

1GAMMA Remote Sensing, Switzerland; 2Centre d’Etudes Spatiales de la Biosphère, France; 3Max Planck Institute for Biogeochemistry, Germany; 4Wageningen University, The Netherlands; 5Friedrich-Schiller-University, Germany

Existing global inventories of forest carbon stocks are subject to debate because they diverge strongly between regions. Inventory-based inference generally allows carbon stocks to be estimated at national or sub-national scales with low uncertainty in countries with established national forest inventories. The scarcity of such information across large forest areas, however, argues for the use of spaceborne remote sensing imagery to obtain wall-to-wall, spatially explicit carbon stock information. However, spaceborne measurements from optical or radar sensors are only indirectly related to the carbon variable of interest. Spaceborne radar has so far found limited use in global forest mapping applications, despite the availability of global observations from, by now, several missions and the proven sensitivity of, in particular, long-wavelength radar backscatter to forest variables closely related to forest carbon stocks, e.g., growing stock volume (GSV) or aboveground biomass (AGB). Large-scale applications of radar data face a number of specific challenges, such as the pronounced sensitivity of radar measurements to changing environmental imaging conditions, and forest structural differences altering the relationship between SAR backscatter observations and the forest biophysical variable of interest.

In this paper, we discuss options for using spaceborne radar data jointly with optical and lidar data, auxiliary datasets from forest inventories, climatological variables and ecosystem classifications to map forest aboveground biomass globally, while minimizing the reliance on inventory data. A first set of global biomass maps, together with spatially explicit depictions of the associated uncertainties, will be presented; these have been produced from hyper-temporal stacks of ENVISAT ASAR C-band backscatter data (at 1 km resolution) and ALOS PALSAR L-band mosaics released by JAXA (at 25 m resolution). The maps are being produced in the frame of the ESA DUE GlobBiomass project. The spatially explicit datasets of forest aboveground biomass and carbon stocks are the first of their kind obtained with a single, globally consistent retrieval approach that allows local tuning to account for the spatial variability of forest structure. We present current results from our investigations of the reliability of the estimates and compare them with existing regional datasets, including inventory data and derived regional statistics as well as existing national or regional map products. While the systematic global assessment of the accuracy of the biomass estimates has not yet started, a few preliminary indications can be provided. The spatial distribution of aboveground biomass is well captured, with the largest values found in the tropical forests, in the temperate forests of the US Pacific Northwest, Chile and South Australia. Carbon stocks of the northern hemisphere are in line with existing observations from ground surveys. In the tropics, the estimates appear to agree with previous mapping activities, except for Southeast Asia, where we currently estimate less biomass.
A systematic validation, to be performed in the coming year, will help identify the strengths and limitations of carbon stock inventories relying on currently available global SAR datasets and will allow further regional optimization of the retrieval approaches.
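The sensitivity of backscatter to growing stock volume mentioned above is often described with a water-cloud-type model, in which forest backscatter saturates towards a canopy asymptote as volume increases; GSV can then be retrieved by inverting the model. A hedged sketch of that inversion (all coefficients are purely illustrative, not the GlobBiomass calibration):

```python
import math

# Illustrative water-cloud-type coefficients (linear backscatter units),
# NOT the project's calibrated values:
sigma_gr = 0.02   # backscatter of unvegetated ground
sigma_veg = 0.10  # asymptotic backscatter of a dense forest canopy
beta = 0.008      # attenuation per m^3/ha of growing stock volume

def forward(gsv):
    """Forward model: forest backscatter as a function of GSV (m^3/ha)."""
    t = math.exp(-beta * gsv)  # two-way canopy transmissivity
    return sigma_gr * t + sigma_veg * (1.0 - t)

def invert(sigma0):
    """Retrieve GSV from an observed backscatter value by model inversion."""
    ratio = (sigma_veg - sigma0) / (sigma_veg - sigma_gr)
    return -math.log(ratio) / beta

# Round-trip check: inverting the forward model recovers the input GSV
v = 150.0
assert abs(invert(forward(v)) - v) < 1e-9
```

The saturation built into the model explains the retrieval difficulty in high-biomass tropical forests: as `forward(gsv)` approaches `sigma_veg`, small backscatter errors translate into large GSV errors, which is why multi-temporal stacks and local tuning of the coefficients are needed.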


5:20pm - 5:40pm

Towards a New Philosophy for Generating Land Cover Products

Gary Neil Geller1, Andre Obregon1, Alan Belward2, Jun Chen3, Ivan Hascic4, Martin Herold5, Peng Gong6, Thomas Loveland7, Brice Mora5,8, Gregory Scott9, Zoltan Szantoi2

1Group on Earth Observations (GEO) Secretariat, Switzerland; 2Joint Research Centre (JRC), Italy; 3National Geomatics Center of China (NGCC), China; 4Organisation for Economic Co-operation and Development (OECD); 5Wageningen University, The Netherlands; 6Tsinghua University, China; 7United States Geological Survey (USGS), United States; 8GOFC-GOLD Land Cover Office, The Netherlands; 9UN Global Geospatial Information Management (GGIM)

Up-to-date information on land cover and how it is changing is required by many Sustainable Development Goals and other Multilateral Environmental Agreements. National governments need this information to meet their commitments to these agreements and for their internal regulations and applications. Various assessment bodies and other entities also have important needs. However, because most current approaches to generating land cover products are labor-intensive, they have difficulty meeting the varied needs of these users. This results in a number of important limitations that leave many user needs unmet, including: a fixed number and set of classes; difficulty in generating products for large areas; infrequent and irregular updates; and long latency periods, so that a product may be out of date by the time it is available. Additionally, because products are generated by a variety of organizations with different mandates, they are often inconsistent, making it difficult or impossible to combine or compare products. A better approach is needed.

Fortunately, advances in both science and technology now enable approaches that do not have these limitations. Specifically, improved algorithms that utilize multi-temporal and ancillary information are now practical, and increased data availability and decreased computing costs, among other advances, enable automated, on-demand systems that accept inputs from users to meet their specific needs. Several systems that take advantage of these advances and support on-demand requests are already being developed. Developing on-demand land cover product generation systems faces a variety of significant challenges, particularly for very large or, especially, global areas; reference data for training and validation are probably the most significant challenge at all scales, but there are others. These topics were the focus of a workshop held in May 2016 that explored concepts for a sustainable land cover generation approach able to meet the varied needs of users. The outcome of that workshop and follow-on discussions has led to a suggested generic architecture for land cover generation; while there are many good variants, a “data cube like” approach is a common theme. In this presentation we discuss this new approach, its challenges, and some key steps forward to help it become more widespread so that the needs of users can be better met.

 
Date: Wednesday, 15/Mar/2017
9:00am - 10:20am2.1: Global/Continental LC Products
Session Chair: Pierre Defourny, UCLouvain-Geomatics
Session Chair: Jun Chen, National Geomatics Center of China
Big Hall 
 
9:00am - 9:20am

Consistent 1992-2015 global land cover time series at 300 m thanks to a state-of-the-art reprocessing of multi-mission archives

Pierre Defourny1, Sophie Bontemps1, Céline Lamarche1, Carsten Brockmann2, Grit Kirches2, Martin Boettcher2, Julien Radoux1, Thomas De Maet1, Eric Vanbogaert1, Paolo Gamba3, Goran Georgievski4, Martin Herold5, Stefan Hagemann4, Andrew Hartley6, Gianni Lisini3, Natasha MacBean7, Inès Moreau1, Catherine Ottlé7, Philippe Peylin7, Maurizio Santoro8, Christiane Schmullius9, Marian Vittek1, Frédéric Achard10, Fabrizio Ramoino11, Olivier Arino11

1UCLouvain-Geomatics (Belgium), Belgium; 2Brockmann Consult, Germany; 3University of Pavia, Italy; 4Max Planck Institute, Germany; 5Wageningen University, The Netherlands; 6Met Office, United Kingdom; 7Laboratoire des Sciences du Climat et de l'Environnement, France; 8Gamma RS, Switzerland; 9Jena University, Germany; 10Joint Research Center, Italy; 11European Space Agency, Italy

Temporal consistency of land cover time series and detection of major land cover changes are key requirements from the user community for describing the terrestrial surface over time. Land cover was listed as an Essential Climate Variable (ECV) by the Global Climate Observing System (GCOS), as critical information for further understanding the climate system and supporting climate feedback modelling. Until now, however, global land cover maps have only been produced from single instruments, keeping the time series rather short. In the framework of the Climate Change Initiative (CCI) supported by the European Space Agency (ESA), climate modelling and remote sensing teams joined forces to design, implement and deliver global datasets matching the climate science needs for long-term global products.

Building on the ESA GlobCover experience, this research first revisited the land cover definition, splitting it into a stable component and a seasonal component, and designed a new approach to produce consistent land cover time series by decoupling consistent mapping from change detection. Then, the archives of several satellite missions, including ENVISAT MERIS FR and RR, SPOT-Vegetation, the more recent PROBA-V, and the 1990s' archive of 1 km AVHRR, were reprocessed using state-of-the-art methods to produce weekly surface reflectance composites and quality flags throughout the years. According to a stratification splitting the world into 22 equal-reasoning areas from ecological and remote sensing points of view, different seasonal composites were compiled to enhance land cover discrimination. A typology of 22 land cover classes was defined based on the UN Land Cover Classification System and its classifiers to support further conversion into the Plant Functional Type distributions required by Earth System Models (ESMs). The classification process first combined machine learning and unsupervised algorithms at 300 m resolution on the whole MERIS FR archive, using most of the MERIS bands, to establish the land cover baseline. Based on similar algorithms, annual global land cover maps from 1 km AVHRR HRPT, 1 km SPOT-Vegetation datasets and 300 m PROBA-V time series were then produced to serve as input to the land cover change detection algorithm. Systematic analysis of the temporal trajectory of each pixel allows the main changes to be depicted for a simplified land cover typology matching the IPCC classes. This new land cover change detection method was found to be quite reliable for SPOT-Vegetation and PROBA-V thanks to their excellent temporal co-registration. In contrast, the poorer radiometric and geometric quality of the AVHRR HRPT time series only provided major changes in contrasted landscapes.
The change detected at 1 km was then disaggregated to 300 m wherever higher resolution imagery was available. Finally, these products were validated against an independent reference dataset built by a network of international experts.

All land cover maps can be visualized in an interactive web interface and downloaded along with an aggregation tool, enabling re-projection and re-sampling as well as the translation from LC classes into Plant Functional Types for different climate models. Three major ESMs have successfully completed several joint experiments based on the CCI land cover products.
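The per-pixel trajectory analysis described above can be pictured with a small sketch: a change in a pixel's annual class series is only accepted if the new class persists, so that single-year classification noise does not register as a land cover change. The class codes and persistence rule below are illustrative assumptions, not the CCI algorithm itself:

```python
import numpy as np

# Hypothetical annual class series for one pixel, using a simplified
# IPCC-like typology: 0 = forest, 1 = cropland, 2 = grassland.
trajectory = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1])

def main_change(series, persistence=2):
    """Return (year_index, old_class, new_class) of the first change that
    persists for `persistence` consecutive years, or None.  Requiring
    persistence rejects single-year classification noise (e.g. the lone
    '1' at index 3 above)."""
    for i in range(1, len(series) - persistence + 1):
        if series[i] != series[i - 1]:
            window = series[i:i + persistence]
            if np.all(window == series[i]):
                return i, int(series[i - 1]), int(series[i])
    return None

print(main_change(trajectory))  # (5, 0, 1): forest -> cropland at index 5
```

A stricter persistence threshold trades sensitivity to genuine recent change against robustness to noise, which is one reason the method works better for well co-registered sensors than for the noisier AVHRR HRPT series.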


9:20am - 9:40am

The Dynamic Global Land Cover Layer at 100m Resolution from Copernicus Global Land

Marcel Buchhorn1, Ruben Van De Kerchove1, Martin Herold2, Nandika Tsendbazar2, Jan Verbesselt2, Steffen Fritz3, Myroslava Lesiv3, Bruno Smets1

1VITO, Belgium; 2University of Wageningen, Wageningen, the Netherlands; 3International Institute for Applied Systems Analysis, Laxenburg, Austria

The Copernicus Global Land Service is the component of the European Copernicus programme which ensures global systematic monitoring of the Earth’s land surface. It provides bio-geophysical variables in near real time describing the daily state, and changes in state, of vegetation and land surface processes, and is currently preparing the release of a Moderate resolution Dynamic Global Land Cover layer.

In this presentation the methodology and rationale behind the Moderate resolution Dynamic Global Land Cover layer are explained. This layer complements several global land cover ‘epoch’ datasets created at medium (and high) spatial resolution during the last decade by providing a yearly dynamic land cover layer at 100 m resolution. We will present the sub-product covering continental Africa for the year 2015. To build this global land cover layer, 100 m spatial resolution PROBA-V data are used as the primary EO data. Data fusion techniques are applied for areas with insufficient 5-daily PROBA-V 100 m data, fusing in the daily 300 m datasets. Next, time series metrics together with ancillary datasets (e.g. other Copernicus Global Land Service biophysical products) are used in a supervised classification approach. Finally, at a third level, we build upon the success of previous global mapping efforts and focus on improvement in areas where the thematic accuracy of the respective maps was insufficient to perform the final classification of each pixel. The map uses a hierarchical legend based on the United Nations Land Cover Classification System (LCCS). Compatibility with existing global land cover products is thereby taken into account and extended by providing several cover layers. Training data have been collected from multiple sources, among others by using existing reference datasets (e.g. GOFC-GOLD) and by collecting reference data through Geo-Wiki (http://geo-wiki.org/). The product has been validated by local experts. The validation followed a stratified random sample design, with each sample site classified by visual interpretation of high-resolution imagery (Google and Bing).
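A stratified random validation design of the kind mentioned above typically allocates sample sites per map stratum before drawing random locations within each. A minimal sketch of a proportional allocation with a per-class minimum (the strata, areas and sample size are hypothetical, not the product's actual design):

```python
# Hypothetical strata (map classes) with their mapped areas in km^2;
# illustrative numbers, not the actual product statistics.
strata_area = {"forest": 50000, "cropland": 30000, "urban": 5000, "water": 15000}
total_samples = 200

total_area = sum(strata_area.values())
# Proportional allocation: samples per stratum scale with mapped area,
# with a small minimum so rare classes are still assessed.
allocation = {
    s: max(10, round(total_samples * a / total_area))
    for s, a in strata_area.items()
}

print(allocation)
# Sample sites would then be drawn at random within each stratum and
# labelled by visual interpretation of high-resolution imagery.
```

The minimum per stratum matters in land cover validation because rare but important classes (e.g. urban) would otherwise receive too few samples to estimate their accuracy.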


9:40am - 10:00am

Validation and Change detection-based Updating of GlobeLand30

Jun Chen

National Geomatics Center of China, China, People's Republic of

GlobeLand30 is an open-access 30 m resolution global land cover (GLC) data product with 10 major classes for the years 2000 and 2010. Since its first release on 22 September 2014, it has been used by users from about 120 countries and has found applications in many Societal Benefit Areas. At the same time, users have put forward new demands, such as more land cover classes, greater up-to-dateness and time series. This has led to an international validation and the preparation of the updating of GlobeLand30.

The validation of 30 m GLC data products faces several critical challenges related to the high spatial heterogeneity of land cover across the entire land surface of the Earth, and the lack of standardized approaches and efficient online tools to support collaborative practices. With the support of GEO and UN-GGIM, a technical specification for validation has been formulated and a web-based validation system has been developed. About 40 GEO and UN-GGIM members have participated in the joint validation of GlobeLand30.

The updating of the 30 m GlobeLand30 differs from its original creation and aims to produce a 2015 version of the product. From a technical point of view, change detection with remotely sensed imagery is the major approach, and rapidly increasing crowdsourced information provides another valuable resource. Due to the extreme spectral heterogeneity of land cover classes, no single change detection algorithm is universally applicable to all kinds of imagery and geographic regions. In order to support efficient updating that takes the existing land cover datasets into consideration, a dedicated online system was developed to facilitate the design and execution of suitable change detection workflows with the help of a domain-knowledge-based service relation model and dynamic service composition.
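One of the simplest change-detection building blocks available to such an updating workflow is post-classification comparison: overlaying the two epoch maps pixel by pixel and encoding the class transitions. A toy sketch (the rasters and class codes are hypothetical, loosely modelled on a GlobeLand30-style legend):

```python
import numpy as np

# Hypothetical 2010 and 2015 class rasters (tiny 3x3 example);
# simplified codes: 10 = cropland, 20 = forest, 60 = water, 80 = artificial.
lc_2010 = np.array([[20, 20, 10],
                    [20, 10, 10],
                    [60, 60, 80]])
lc_2015 = np.array([[20, 10, 10],
                    [20, 10, 80],
                    [60, 60, 80]])

# Post-classification comparison: flag pixels whose class changed, and
# encode each transition as old*100 + new for a change matrix.
changed = lc_2010 != lc_2015
transitions = np.where(changed, lc_2010 * 100 + lc_2015, 0)

print(changed.sum())                              # number of changed pixels
print(sorted(np.unique(transitions[changed])))    # transition codes present
```

In practice this naive comparison compounds the classification errors of both maps, which is precisely why the abstract argues for imagery-based change detection workflows rather than simple map differencing.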


10:00am - 10:20am

Mapping Africa land cover at 10 m with Sentinel-2: challenges and current achievements of the Land Cover component of the ESA Climate Change Initiative

Céline Lamarche1, Pierre Defourny1, Frédéric Achard2, Martin Boettcher3, Carsten Brockmann3, Grit Kirches3, Thomas De Maet1, Julien Radoux1, Jan Militzer3, Maurizio Santoro4, Goran Georgievski5, Stefan Hagemann5, Martin Herold6, Andrew Hartley7, Natasha MacBean8, Catherine Ottlé8, Philippe Peylin8, Inès Moreau1, Christiane Schmullius9, Marian Vittek1, Fabrizio Ramoino10, Olivier Arino10

1UCLouvain-Geomatics (Belgium), Belgium; 2Joint Research Center, Italy; 3Brockmann Consult, Germany; 4Gamma RS, Switzerland; 5Max Planck Institute, Germany; 6Wageningen University, The Netherlands; 7Met Office, United Kingdom; 8Laboratoire des Sciences du Climat et de l'Environnement, France; 9Jena University, Germany; 10European Space Agency, Italy

In the context of the Climate Change Initiative supported by ESA, the Land Cover team aims to map the whole of Africa based on the entire archive of the Sentinel-2 mission. To address the requirement for a high spatial resolution LC map expressed by the climate science community, the research team will generate a prototype map at 10 m resolution over the whole of Africa with a consistent legend of 10 classes. This pioneering experiment faces several challenges, including big data management issues, the development of a preprocessing chain for improved, cloud-screened Sentinel-2 surface reflectance, the precise definition of a scalable land cover typology, the development of the land cover processing chain, and the design of the reference database collection for validation.
The increase in spatial resolution indeed requires significant methodological adjustments and innovations to the processing chains developed for medium spatial resolution imagery at global scale. For the pre-processing, for example, topography and adjacency effects have to be taken into account in the atmospheric correction. In addition, due to the lower revisit capacity of high spatial resolution sensors such as Sentinel-2, the spatial consistency of surface reflectance between the few available images becomes a critical aspect in the production of high spatial resolution composites.

Sentinel-2 imagery requires several quality control procedures in order to be processed by large-scale processing facilities. From a data management point of view, 72 TB of Sentinel-2 data have been downloaded and preprocessed into L3 surface reflectance. A surface reflectance product covering one month of acquisitions corresponds to 10 TB, and a continental surface reflectance mosaic reaches 2 TB of data.

For the LC classification, the challenges are of a different nature. The decametric resolution captures the diversity of landscape elements and their distinct evolution through time due to slightly different seasonality and ecological gradients. A specific effort is made to ensure consistency between this decametric map and the medium resolution global LC maps already developed within the Climate Change Initiative. While the rich literature on LC mapping at high resolution supports the processing chain development, the data flow provided by Sentinel-2 forces a revision of the classification strategy in order to map LC consistently over space and time.

Based on a review of various regional and global mapping efforts completed at different scales, from 30 to 300 m resolution, a land cover typology is proposed and is currently being tested over different regions. The compilation and harmonization of all available land cover maps have been completed in order to support the processing strategy. A set of 10 test sites widely distributed across Africa and representative of different EO conditions and ecoregions allows several automated methods to be benchmarked. These benchmarking results support the processing chain development in order to optimize the performance of each step. Finally, results over the whole of Africa as well as regional results are presented, highlighting the potential of Sentinel-2 for global land cover mapping and the challenges ahead.
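The surface reflectance compositing discussed above can be illustrated with a minimal per-pixel temporal median over a cloud-masked stack; the median ignores masked observations and is robust to residual cloud outliers. The reflectance values below are invented for illustration, not the project's data:

```python
import numpy as np

# Hypothetical stack of 4 acquisitions over a 2x2 tile: surface
# reflectance with np.nan marking cloud-masked observations.
stack = np.array([
    [[0.10, 0.12], [np.nan, 0.30]],
    [[0.11, np.nan], [0.25, 0.31]],
    [[0.50, 0.13], [0.26, np.nan]],   # 0.50: undetected cloud remnant
    [[0.12, 0.11], [0.27, 0.29]],
])

# Per-pixel median over time: masked (nan) observations are skipped, and
# the outlier 0.50 does not pull the composite value upward.
composite = np.nanmedian(stack, axis=0)
print(composite)
```

With only a handful of valid observations per pixel, as the abstract notes for high spatial resolution sensors, the spatial consistency of such composites becomes fragile, since neighbouring pixels may draw their medians from different acquisition dates.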

 
10:20am - 10:50amCoffee Break
Big Hall 
10:50am - 12:30pm2.2: Large-scale Mapping of Specific LC
Session Chair: Matthew C. Hansen, University of Maryland
Session Chair: Frédéric Achard, Joint Research Centre - European Commission
Big Hall 
 
10:50am - 11:10am

Global Mapping of Human settlement with Sentinel-1 and Sentinel-2 data: Recent developments in the GHSL

Christina Corbane, Martino Pesaresi, Vasileios Syrris, Thomas Kemper, Panagiotis Politis, Pierre Soille, Aneta J. Florczyk, Filip Sabo, Dario Rodriguez, Luca Maffenini, Stefano Ferri

Joint Research Centre, Italy

The new global policy framework for the sustainable development of urban areas calls for timely, consistent and accurate information on human settlements. Free and open earth observation data (e.g. Landsat, Sentinel) offer great potential for large-area mapping of human settlements. The Global Human Settlement Layer (GHSL) is the first open and free information layer describing the spatial evolution of human settlements over the past 40 years. It has been produced from Landsat image collections (1975, 1990, 2000 and 2014) and publicly released on the JRC open data portal. The recent availability of Sentinel-1 and Sentinel-2 data is expected to bring land cover mapping and monitoring to an unprecedented level. With the great advantage of being free and immediately available to users, Sentinel data can provide up-to-date global information on the status and evolution of human settlements. With the shift to Sentinel imagery, regular updates and incremental improvements of the GHSL will become more feasible and reliable. This study presents the recent developments in global mapping of human settlements with Sentinel-1 data. Taking advantage of the capabilities offered by the Symbolic Machine Learning approach and the functionalities of the JRC Big Data infrastructure, the challenges posed by the processing and analytics of the global Sentinel-1 coverage were effectively addressed. In view of the future deployment of the GHSL framework on Sentinel-2 data, a benchmark experiment over selected European cities has been performed in order to assess the added value of Sentinel-1 and Sentinel-2 with respect to Landsat for improving global high-resolution human settlement mapping. The results show that noticeable improvement can be gained from the increased spatial detail and the thematic content of Sentinel-2 compared to the Landsat-derived product, as well as from the complementarity between Sentinel-1 and Sentinel-2 images.


11:10am - 11:30am

Mapping urban areas globally by jointly exploiting optical and radar imagery – the GUF+ layer

Mattia Marconcini, Soner Üreyen, Thomas Esch, Annekatrin Metz, Julian Zeidler

German Aerospace Center - DLR, Germany

Since the beginning of the 2000s, more than half of the global population has been living in urban environments, and urbanization is advancing at an unprecedented speed. Accordingly, effective monitoring of urbanization is key to analyzing and understanding the complexity of human settlements and to ensuring their sustainable development.

To this purpose, over the last decade different global maps outlining urban areas have started being produced. The two currently most widely employed are JRC’s Global Human Settlement Layer (GHSL), derived at 38 m spatial resolution from Landsat data, and, especially, DLR’s Global Urban Footprint (GUF), derived at 12 m spatial resolution from TanDEM-X/TerraSAR-X data. However, although generally accurate, these layers still exhibit both over- and underestimation issues. This is mostly because they have been generated by means of: i) single-date scenes (which can be strongly affected by the specific acquisition conditions) and ii) either optical or radar data alone, which are sensitive to different structures on the ground (e.g., with optical imagery bare soil and sand tend to be misclassified as urban, while this does not occur with radar data; conversely, with radar imagery areas of complex topography or forested regions can be wrongly categorized as urban, which generally does not happen when optical data are employed).

To overcome these limitations, in the framework of the ESA SAR4Urban project we have developed a novel methodology that jointly exploits multitemporal optical and radar data for automatically outlining urban areas. The basic assumption of the proposed approach is that the temporal dynamics of urban settlements are markedly different from those of all other non-urban classes. Hence, given all the multitemporal images available over the region of interest in the selected time interval, we first extract key temporal statistics (i.e., temporal mean, minimum, maximum, etc.) of: i) the original backscattering value in the case of radar data; and ii) different spectral indices (e.g., vegetation index, built-up index, etc.) derived after cloud masking in the case of optical imagery. Then, classification schemes based on Support Vector Machines are applied separately to the optical and radar temporal features, and, finally, the two outputs are combined.
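The temporal-statistics-plus-SVM scheme described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the statistics, the probability-averaging fusion, and all synthetic data are assumptions for demonstration only.

```python
import numpy as np
from sklearn.svm import SVC

def temporal_stats(stack):
    """stack: (n_dates, n_pixels) array of backscatter or a spectral index.
    Returns per-pixel temporal statistics as feature columns."""
    return np.column_stack([
        stack.mean(axis=0),
        stack.min(axis=0),
        stack.max(axis=0),
        stack.std(axis=0),
    ])

rng = np.random.default_rng(0)
n_dates, n_pixels = 12, 200
sar = rng.normal(-10, 2, (n_dates, n_pixels))   # synthetic SAR backscatter [dB]
ndvi = rng.uniform(0, 1, (n_dates, n_pixels))   # synthetic cloud-masked NDVI
labels = rng.integers(0, 2, n_pixels)           # 1 = urban, 0 = non-urban

# Separate SVM classifiers on radar and optical temporal features ...
svm_sar = SVC(probability=True).fit(temporal_stats(sar), labels)
svm_opt = SVC(probability=True).fit(temporal_stats(ndvi), labels)

# ... then combine the two outputs (here: average of class probabilities).
p_urban = 0.5 * (svm_sar.predict_proba(temporal_stats(sar))[:, 1]
                 + svm_opt.predict_proba(temporal_stats(ndvi))[:, 1])
urban_mask = p_urban > 0.5
```

In practice the fusion step would be more elaborate than a simple probability average, but the two-branch structure (radar features and optical features classified separately, then merged) follows the description above.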

At present, the technique is being employed to generate the so-called GUF+ 2015, a global map of urban areas at 10 m spatial resolution derived jointly from the TimeScan-Landsat 2015 product (a dataset of temporal statistics for several spectral indices derived from ~420,000 Landsat-7/8 scenes, produced within the ESA Urban-TEP platform) and temporal statistics of Sentinel-1 IW GRDH data computed globally using Google Earth Engine. All classification activities are supported by the Urban-TEP infrastructure, and the GUF+ is expected to be completed by March 2017.

Experimental results are extremely promising and confirm the great potential of combining optical and radar imagery and the higher accuracy of the GUF+ compared to the other existing layers.


11:30am - 11:50am

Envisat ASAR and Sentinel-1: a decade of observations exploited to map inland water bodies

Maurizio Santoro1, Oliver Cartus1, Urs Wegmüller1, Andreas Wiesmann1, Penelope Kourkouli1, Celine Lamarche2, Sophie Bontemps2, Pierre Defourny2, Fabrizio Ramoino3, Olivier Arino3

1GAMMA Remote Sensing, Switzerland; 2Université catholique de Louvain, Belgium; 3ESA/ESRIN, Italy

Ten years of operations of the Envisat ASAR instrument have generated an invaluable archive of repeated observations of the SAR backscatter over land masses. The potential of such data is still being unravelled in applications, even though the Envisat mission ended in 2012. The CCI Water Bodies dataset is one example of a global thematic dataset built on the ASAR data archive. More generally, thematic applications over land have been possible in spite of an uncoordinated acquisition strategy. Only at coarse resolution (1,000 m) did the complement of all ASAR acquisition modes yield wall-to-wall repeated coverage throughout the Envisat mission. Multi-temporal metrics of the SAR backscatter were found to overcome the typical confusion between water and land occurring in single images under windy or frozen conditions. Nonetheless, such observables were not entirely unique to water, since specific land surface types such as glaciers and sand dunes were characterized by similar values under specific imaging conditions.

With the Sentinel-1 mission, it is envisaged that several caveats identified when mapping inland water bodies with ASAR will be overcome. This appears to be the case after two years of operations, now with two satellites in orbit. Repeated acquisitions are planned according to a predefined strategy aimed at maximizing the information content of the observed scene. Stacks of multi-temporal observations, often in dual polarization, are being created. The 6-12 day acquisition rate over Europe and other intensively observed regions even opens the possibility of tracking water seasonality, which was possible with the ASAR mission only locally or for a short time period.

Here, two global scale applications of the Envisat ASAR data archive to map water bodies and some examples of the contribution of Sentinel-1 to map water bodies are presented.

The SAR Water Body Indicator derived to support the CCI Water Body Dataset is briefly reviewed. We then present the contribution of ASAR multi-year observations (2005-2012) to capture inland water dynamics at 1,000 m with a weekly time step. A novel approach based on the functional relationship between ASAR backscatter and local incidence angle is applied. The water seasonality appears to be well identified in the northern hemisphere thanks to the very dense ASAR observations in time. In regions characterized by small water bodies and dynamics, or when the data sampling was irregular, the dynamics appear to be underrepresented.

While the detection of water bodies with ASAR had to rely on a sophisticated construct and required multi-temporal observations, the availability of cross-polarized backscatter from the Sentinel-1 satellites relaxes the constraints on the input data source and allows for improved thematic accuracy. In boreal landscapes, the detection of water bodies using a simple threshold-based approach on a summer mean of cross-polarized backscatter images performed at 20 m with over 90% accuracy when compared to samples interpreted in high-resolution images. We are currently extending our investigations to other landscapes in Europe and Africa, here with a focus to complement the land cover mapping activities based on Sentinel-2 within the CCI Land Cover Project. Results will be presented at the conference.
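The simple threshold-based approach mentioned above can be sketched as follows: water is flagged where the temporal mean of cross-polarized (VH) backscatter over summer scenes falls below a fixed threshold, since calm open water returns very little cross-polarized signal. The -22 dB threshold and the synthetic data are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def water_mask(vh_db_stack, threshold_db=-22.0):
    """vh_db_stack: (n_dates, height, width) VH backscatter in dB.
    Returns a boolean water mask from the temporal mean."""
    summer_mean = vh_db_stack.mean(axis=0)   # temporal mean over summer scenes
    return summer_mean < threshold_db        # calm water: low cross-pol return

rng = np.random.default_rng(1)
scene = rng.normal(-16, 1.5, (8, 50, 50))                   # typical land values
scene[:, 10:20, 10:20] = rng.normal(-26, 1.0, (8, 10, 10))  # a small lake
mask = water_mask(scene)
```

Averaging over the season before thresholding is what suppresses the single-image confusion (wind roughening, frozen surfaces) discussed for ASAR above.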


11:50am - 12:10pm

Global scale mapping of the when and where of inland and coastal waters over 32 years at 30 m resolution

Jean-Francois Pekel1, Andrew Cottam1, Noel Gorelick2, Alan Belward1

1European Commission - Joint Research Centre; 2Google Earth Outreach

The location and persistence of surface water is both affected by climate and human activity and affects climate, biological diversity and human wellbeing.
Global datasets documenting surface water location and seasonality have been produced, but measuring long-term changes at high resolution remains a challenge.
To address the dynamic nature of water, the European Commission’s Joint Research Centre (JRC), working with the Google Earth Engine (GEE) team, has processed each single pixel acquired by Landsat 5, 7, and 8 between 16th March 1984 and 10th October 2015 (> 3,000,000 Landsat scenes, representing > 1,823 Terabytes of data).
The produced dataset records the months and years when water was present across 32 years, where occurrence changed and what form changes took in terms of seasonality and persistence, and documents intra-annual persistence, inter-annual variability, and trends.
This validated dataset shows that impacts of climate change and climate oscillations on surface water occurrence can be measured and that evidence can be gathered showing how surface water is altered by human activities.
The dataset is freely available, and we anticipate that it will provide valuable information to those working in areas linked to security of water supply for agriculture, industry and human consumption, to water-related disaster reduction and recovery, and to the study of waterborne pollution and disease spread. The maps will also improve surface boundary condition setting in climate and weather models, improve carbon emission estimates, inform regional climate change impact studies, delimit wetlands for biodiversity and determine desertification trends. Issues such as dam building (and less widespread dam removal), disappearing rivers, the geopolitics of water distribution and coastal erosion are also addressed.
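The per-pixel statistics such a dataset records (occurrence across years, seasonality within a year) can be sketched from a stack of monthly water observations. The encoding (0 = land, 1 = water, 255 = no data) is an illustrative assumption, not the dataset's actual format.

```python
import numpy as np

def water_occurrence(monthly, nodata=255):
    """monthly: (n_months, height, width) uint8 observations.
    Returns per-pixel occurrence: share of valid observations that are water."""
    valid = monthly != nodata
    water = (monthly == 1) & valid
    return water.sum(axis=0) / valid.sum(axis=0)

obs = np.zeros((24, 20, 20), dtype=np.uint8)  # two years of monthly maps, all land
obs[:, :5, :5] = 1                            # permanent water corner
obs[::2, 5:10, 5:10] = 1                      # seasonal: water half the months
occ = water_occurrence(obs)
```

The same stack supports seasonality (months per year with water) and change metrics by comparing per-year summaries, which is the kind of intra-annual persistence and inter-annual variability the abstract describes.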


12:10pm - 12:30pm

Large-Scale Decametric Cropland Mapping from Sentinel-2 and Validation: Lessons Learned from 2016 Nationwide Demonstrations in Different Countries

Pierre Defourny1, Sophie Bontemps1, Bellemans Nicolas1, Matton Nicolas1, Cara Cosmin2, Dedieu Gerard3, Hagolle Olivier3, Inglada Jordi3, Guzzonato Eric4, Savinaud Michael4, Udroiu Cosmin2, Grosu Alex2, Rabaute Thierry4, Nicola Laurentiu2, Koetz Benjamin5

1UCLouvain-Geomatics, Belgium; 2CS-Romania, Romania; 3CESBIO, France; 4CS-France; 5ESA-ESRIN

Amongst all land use change processes, agricultural area change due to spatial expansion or abandonment of cultivated lands is one of the most dynamic forms of land cover change. Furthermore, cropland and agricultural areas correspond to very diverse land features which vary over time as the result of the interaction between crop management practices and seasonal weather conditions. In order to capture cropland evolution, the JECAM network has adopted a restrictive definition of cropland corresponding to annually cultivated lands; despite its remote sensing perspective, this definition still makes cropland and crop type mapping particularly challenging.

In early 2017, the Sentinel-2 (S2) mission will reach its optimal capacity for cropland mapping and agriculture monitoring in terms of resolution (10-20 m), revisit frequency (5 days with two satellites) and systematic coverage (global). In order to exploit these new capabilities, specific methods for dynamic cropland mapping and main crop type classification have been developed in the framework of the Sentinel-2 for Agriculture project funded by ESA.

Dynamic cropland masks correspond to a set of successive masks depicting annually cultivated areas. Their production can rely on two alternative approaches, depending on whether in-situ data are available. Both methods are based on a random forest classifier, trained in the first case with in-situ data and in the second case with samples collected from an existing reference land cover map. The crop type map classifies the main crop groups, i.e. irrigated versus rainfed and summer versus winter crops. The map is produced at mid-season and at the end of the season using a random forest classifier over a combination of S2 and L8 time series.
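The random-forest step described above can be sketched as follows. The feature layout (flattened per-sample S2/L8 time series), the class codes and the synthetic training data are illustrative assumptions, not the Sen2Agri configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n_samples, n_dates, n_bands = 500, 10, 4
# Per-sample time series flattened into one feature vector (dates x bands);
# samples would come either from in-situ data or from a reference LC map.
features = rng.normal(size=(n_samples, n_dates * n_bands))
# Illustrative class codes: 0 = non-cropland, 1 = winter crop, 2 = summer crop.
labels = rng.integers(0, 3, n_samples)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(features, labels)
predicted = rf.predict(features)
```

Running the same trained model at mid-season simply means restricting the feature vector to the dates acquired so far, which is why the abstract can deliver both a mid-season and an end-of-season map.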

During the 2016 growing season, these methods were applied at national scale over Ukraine, Mali and South Africa, covering more than 500,000 km² in each country, where fast-track nationwide field campaigns were organized by national partners. These demonstration cases produced a new type of land mask delineating cropland at 10 m resolution over an entire country for a given season, delivered less than a few weeks after the last observation.

The mapping results obtained over Ukraine for 2016 have been thoroughly validated through an independent accuracy assessment. National workshops are also being organized with various key users to discuss the timeliness of the products and the relevance of their accuracy for different operational applications. A similar approach is under way for Mali and South Africa. The accuracy assessment results are very high for Ukraine and will become available for Mali and South Africa.

Beyond these very encouraging results of the Sen2Agri system, such automatic production of high-resolution land maps from freely and continuously available time series completely changes the classical remote sensing approach. Indeed, the system is designed to deliver products on a yearly basis and to run over very large areas, providing a new capability for regional to continental mapping. Based on the lessons learned from the Sen2Agri system demonstration, the challenges ahead on the way to a more general land cover mapping system are discussed.

 
1:30pm - 2:50pmLarge-scale Mapping of Specific LC (cont'd)
Big Hall 
 
1:30pm - 1:50pm

Mapping Paddy Rice in Asia - A Multi-Sensor, Time-Series Approach

Kersten Clauss1, Marco Ottinger1, Wolfgang Wagner2, Claudia Kuenzer3

1Department of Remote Sensing, Institute of Geography and Geology, University of Wuerzburg, Germany; 2Department of Geodesy and Geoinformation, Vienna University of Technology, Austria; 3German Remote Sensing Data Center (DFD), Earth Observation Center (EOC), German Aerospace Center

Rice is the most important food crop in Asia and the mapping and monitoring of paddy rice fields is an important task in the context of food security, food trade policy, water management and greenhouse gas emissions modelling. Asia’s biggest rice producers are facing increasing pressure in terms of food security due to population and economic growth while agricultural areas are confronted with urban encroachment and the limits of yield increase. At the same time demand for rice imports is increasing, spurred by global population growth.

Despite the importance of knowledge about rice production, the countries' official land cover products and rice production statistics are of varying quality and sometimes even contradict each other. Available remote sensing studies have focused on time-series analysis from either optical sensors or Synthetic Aperture Radar (SAR) sensors. We address these sensor-specific limitations by proposing a paddy rice mapping approach that combines medium-spatial-resolution, temporally dense time series from the optical MODIS sensors with high-spatial-resolution time series from the Sentinel-1 A/B SAR sensors.

We developed a method that uses MODIS time series and a one-class classifier to create medium-resolution rice maps [1]. In a next step, we used these medium-resolution rice maps to mask Sentinel-1 Interferometric Wide Swath images, which limits the amount of data to process and allows efficient rice mapping over larger areas. The high-resolution rice masks are then created by segmenting multi-temporal SAR images into objects, from which backscatter time series are derived and classified. We created 10 m resolution rice maps that also allow seasonality extraction, given enough Sentinel-1 acquisitions. This method allows concurrent, accurate and high-resolution mapping of paddy rice areas from freely available data. Results of our paddy rice classification will be presented for selected study sites in Asia.
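The object-level classification step can be sketched as follows: paddy rice backscatter time series typically show a pronounced dip during agronomic flooding followed by a strong increase as plants grow through the water surface. The dip and range thresholds below are illustrative assumptions, not the values used by the authors.

```python
import numpy as np

def is_paddy_rice(vh_db_series, flood_db=-20.0, range_db=6.0):
    """vh_db_series: 1-D array of per-object mean VH backscatter [dB],
    ordered in time across one growing season."""
    flooded = vh_db_series.min() < flood_db              # flooding dip
    growth = vh_db_series.max() - vh_db_series.min() > range_db  # strong rise
    return bool(flooded and growth)

# Synthetic examples: a rice paddy (dip, then growth) and a stable forest object.
rice = np.array([-15.0, -22.0, -23.0, -19.0, -15.0, -13.0, -12.0])
forest = np.array([-14.0, -14.5, -13.8, -14.2, -14.0, -13.9, -14.1])
```

Applying such a rule per segmented object, only inside the MODIS-derived coarse rice mask, is what keeps the Sentinel-1 processing volume manageable over large areas.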

1. Clauss, K.; Yan, H.; Kuenzer, C. Mapping Paddy Rice in China in 2002, 2005, 2010 and 2014 with MODIS Time Series. Remote Sensing. 2016, 8, 434.


1:50pm - 2:10pm

Mapping disturbances in tropical humid forests over the past 33 years

Christelle Vancutsem, Frédéric Achard

Joint Research Centre (EU), Italy

The need for accurate information on the state and evolution of tropical forest types at regional and continental scales is widely recognized, particularly to analyze the forest diversity and dynamics, to assess degradation and deforestation processes and to better manage these natural resources (Achard et al. 2014).

A few global and continental land cover or forest cover products have been derived from Landsat satellite imagery at 30m resolution: they either contain detailed thematic information without temporal dynamics (Chen et al. 2015, Giri and Long 2010) or contain information on forest-cover changes over long time periods (10 to 30 years) without thematic classes such as the discrimination of evergreen forests (Kim et al. 2014, Hansen et al. 2013, Potapov et al. 2015).

The objective of this study is to map undisturbed evergreen and semi-deciduous forests at 30 m resolution over the full tropical humid domain and to better characterize the changes and disturbances that have occurred in these forests over the last 33 years. To this end, we exploited the full archive of Landsat imagery between 1984 and 2016 and developed a pixel-based automatic methodology comprising four steps: (i) pre-processing of the Landsat time series with cloud masking and filtering of sensor artefacts, (ii) single-date image classification (driven by a large spectral library) into three basic classes (evergreen forest, vegetative non-forest cover and poorly/non-vegetated cover), (iii) creation of forest/non-forest maps for three epochs based on the occurrence of non-forest classes, and (iv) production of a final map of detailed forest types based on the temporal succession of observed basic classes from 1984 to 2016.

The resulting map includes six classes: undisturbed forest cover, old and young vegetation regrowth, deforested areas (during the last 10 years), recently disturbed areas (during the last 3 years) and other land cover. This map at 30 m resolution allows the identification of small linear features such as gallery forests and of small disturbance events such as skid trails and logging decks. The use of a 33-year Landsat time series allows us (i) to identify most deforestation and degradation events (when > 0.1 ha) that occurred during this period, (ii) to provide the dates of the forest disturbances, and (iii) to considerably reduce the confusion that usually occurs with small-scale agricultural fields (shifting cultivation, tree plantations and irrigated crops…). Finally, we characterize the deforested and disturbed classes by providing their timing and occurrence (date of first and last events, number of events).
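Step (iv) of the methodology, turning a per-pixel succession of basic classes into a final class, can be sketched with simplified rules. The class codes ('F' = evergreen forest, 'V' = vegetative non-forest, 'N' = poorly/non-vegetated) follow the abstract, but the rule set below is an illustrative stand-in for the study's actual temporal logic.

```python
def classify_succession(series):
    """series: per-epoch basic class codes ('F', 'V', 'N'), oldest first.
    Returns a simplified final class from the temporal succession."""
    if "F" not in series:
        return "other land cover"            # never observed as forest
    if all(c == "F" for c in series):
        return "undisturbed forest"          # forest at every observation
    if series[-1] == "F":
        return "vegetation regrowth"         # non-forest in the past, forest now
    if "F" in series[-3:]:
        return "recently disturbed"          # forest lost within recent epochs
    return "deforested"                      # forest lost long ago

# Example: forest until two epochs ago -> recently disturbed.
example = classify_succession(["F", "F", "F", "V", "V"])
```

The dates of first and last non-forest observations in the same series give the timing and occurrence attributes mentioned at the end of the paragraph above.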

The accuracy of the forest map was assessed over Africa from an independent sample of reference data (3830 plots) created through visual expert interpretation of Landsat imagery at several dates and finer resolution satellite imagery with an overall agreement of 90%. The pan-tropical map and the accuracy assessment results over Africa will be presented at the conference.

It is intended in the future to adapt and apply the methodology on Sentinel-2 data for a better characterization of forest-cover disturbances.


2:10pm - 2:30pm

Mapping forest disturbances in European temperate forests using Landsat time series: Issues of disturbance attribution in coupled human and natural systems

Cornelius Senf1,2, Dirk Pflugmacher1, Rupert Seidl2, Patrick Hostert1,3

1Geography Department, Humboldt-Universität zu Berlin, Germany; 2Institute for Silviculture, University of Natural Resources and Life Sciences (BOKU) Vienna, Austria; 3Integrative Research Institute on Transformation of Human-Environment Systems (IRI THESys), Humboldt-Universität zu Berlin, Germany

Remote sensing is an important tool for understanding the spatial and temporal dynamics of forest disturbances over large areas. The most recent developments in disturbance detection algorithms utilize dense time series information that enables the detailed characterization of abrupt and gradual disturbance events. These advances allow the mapping of a wide variety of disturbance agents, including harvest, fire, blowdown, and insect attacks. However, current algorithms have primarily been developed and tested in the forest ecosystems of North America, which are characterized by relatively homogeneous coniferous forests, little to no management, and medium- to large-scale disturbance patches. These algorithms have not yet been validated for mapping forest disturbances in Europe, where forest ecosystems are much more variable in terms of species composition, landscape structure, and forest management. Consequently, our aim was to evaluate current disturbance detection algorithms for mapping forest disturbance in Europe by i) comparing spectral-temporal characteristics of forest disturbances across a sample of five protected forest areas in the temperate forests of central Europe; and ii) comparing disturbance characteristics inside the protected forests with those of the surrounding unprotected forests in order to understand the compound effect of natural and management disturbances on spectral-temporal characteristics. We utilized dense Landsat time series from the USGS and ESA archives covering the period 1985 to 2016. We mapped forest disturbance characteristics (i.e., magnitude and duration) using state-of-the-art time series tools in conjunction with random forests classification and a set of photo-interpreted reference plots. Preliminary results show that disturbances were detected with high accuracies (>90% overall accuracy) across all sites. Spectral-temporal characteristics varied substantially within and outside protected forests.
In particular, unmanaged areas showed more long-term disturbances – likely related to blowdown and subsequent bark beetle mortality – whereas disturbances in managed forests were mainly short and of high magnitude. However, temporal patterns of disturbances were similar inside and outside the protected areas, suggesting that spectral disturbance patterns outside protected forests are superimposed by a strong management signal. Our analysis improves the understanding of forest disturbances – and how to map them using remote sensing – for European temperate forests, a forest ecosystem of high economic and ecological value. As such, this study paves the way for using long and dense time series of optical satellite data, i.e. Landsat and Sentinel-2, for mapping and understanding forest dynamics across Europe.
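The two disturbance characteristics named above, magnitude and duration, can be sketched from a per-pixel spectral-index trajectory. The peak-to-minimum segment logic and the synthetic trajectories below are illustrative stand-ins for the time-series tools actually used.

```python
import numpy as np

def disturbance_magnitude_duration(index_series):
    """index_series: yearly values of a vegetation-sensitive spectral index.
    Returns (magnitude, duration_years) of the drop to the series minimum."""
    x = np.asarray(index_series, dtype=float)
    end = int(np.argmin(x))              # lowest point = peak of disturbance
    start = int(np.argmax(x[:end + 1]))  # preceding pre-disturbance peak
    return x[start] - x[end], end - start

# Abrupt harvest: short, high-magnitude drop.
harvest = [0.6, 0.7, 0.8, 0.2, 0.4, 0.6]
# Gradual decline, e.g. blowdown followed by bark-beetle mortality.
beetle = [0.8, 0.75, 0.6, 0.45, 0.3, 0.35]

harvest_mag, harvest_dur = disturbance_magnitude_duration(harvest)
beetle_mag, beetle_dur = disturbance_magnitude_duration(beetle)
```

The contrast in duration between the two synthetic trajectories mirrors the managed (short, high-magnitude) versus unmanaged (long-term) patterns described above.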


2:30pm - 2:50pm

Towards a global high resolution wetland inventory based on optical and radar imagery

Michael Riffler1, Christina Ludwig1, Wolfgang Wagner2, Vahid Naemi2, Christian Tottrup3, Marc Paganini4

1GeoVille Information Systems, Innsbruck, Austria; 2Vienna University of Technology (TU Wien), Vienna, Austria; 3DHI GRAS, Hørsholm, Denmark; 4European Space Agency, Esrin, Italy

Wetlands are amongst the planet‘s most productive ecosystems, providing a wealth of ecosystem services, e.g., nutrition, flood control and protection, and support of biodiversity. Nevertheless, wetlands are exposed to multiple threats from climate change, agricultural pressure, hydrological modifications, fragmentation, etc. Consistent mapping and monitoring of global wetland ecosystems is therefore very important to track changes and trends, with the aim of supporting wetland conservation and sustainable management. Although EO data are ideal for large-scale inventorying of wetlands, their large diversity makes remote detection particularly challenging. This diversity and the resulting challenge have been tackled by many researchers applying different sensors (optical and radar) and mapping techniques to delineate wetland from non-wetland areas. A global and homogeneous inventory, however, is still not available and remains the subject of ongoing research.

Herein, we present an innovative and operational water and wetness product building on data from Sentinel-1 SAR and Sentinel-2 MSI, complemented with historical data from the Landsat missions. Rather than trying to detect wetlands in the ecological sense, we derive wetlands in the physical sense by identifying the wetness of the underlying land surface.

Using a hybrid sensor approach, i.e., combining optical and radar observations, provides a more robust wetland delineation, with optical imagery being more sensitive to the vegetation cover and radar imagery to soil moisture content. Additionally, the higher frequency of observations stemming from the combined data streams contributes to a better characterization of seasonal dynamics, which is important so that seasonal and temporary changes do not lead to false conclusions about the overall long-term trend in wetland extent. Within the domain of optical remote sensing, the identification of wetlands is based on the enhancement of the spectral signature using biophysical indices sensitive to water and wetness, and the subsequent derivation of a water and wetness probability index. The radar-based algorithm builds on geophysical parameters, surface soil moisture dynamics and water bodies, derived from historical Envisat ASAR and Sentinel-1 backscatter time series, to identify permanently/temporarily wet and flooded areas. In addition, it is possible to identify flooded vegetation according to the double-bounce scattering principle in densely vegetated wetlands. Non-flood-prone areas are masked using the Height Above Nearest Drainage (HAND) index. After the separate processing of the optical and radar imagery, the data are fused into a combined water and wetness product. With our methods we aim not only to detect the current status of wetland areas, but also to capture their historical evolution over the past 25 years in a fully automated manner.
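The optical part of the chain can be sketched as follows: a water-sensitive index is computed per scene and the per-pixel frequency of "wet" observations serves as a simple water and wetness probability. The choice of NDWI, the threshold and the synthetic reflectances are assumptions for illustration; the operational index is more elaborate.

```python
import numpy as np

def ndwi(green, nir):
    """Normalized difference water index from green and NIR reflectance."""
    return (green - nir) / (green + nir + 1e-9)

def water_wetness_probability(green_stack, nir_stack, wet_threshold=0.0):
    """Stacks: (n_dates, height, width) reflectance. Returns the fraction of
    observations in which each pixel appears wet (NDWI above threshold)."""
    wet = ndwi(green_stack, nir_stack) > wet_threshold
    return wet.mean(axis=0)   # per-pixel wet frequency in [0, 1]

rng = np.random.default_rng(3)
green = rng.uniform(0.05, 0.2, (10, 40, 40))
nir = rng.uniform(0.2, 0.5, (10, 40, 40))   # dry land: NIR above green
nir[:, :10, :10] = 0.02                     # permanently wet corner
prob = water_wetness_probability(green, nir)
```

Fusing such an optical probability with the radar-derived soil moisture and water-body layers, after HAND masking, yields the combined product described in the paragraph above.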

The methods described above are currently being applied to several large regional sites throughout Africa within the GlobWetland Africa project and to the pan-European production of the “water and wetness” High Resolution Layer of the Copernicus Land Monitoring Service. We will further present a thorough validation of the product for different wetland ecosystems and discuss remaining issues, mainly related to global data availability and coverage.

 
2:50pm - 3:20pmCoffee Break
Big Hall 
3:20pm - 5:00pm2.3: Classification Systems
Session Chair: John Latham, UN/FAO
Session Chair: Curtis Woodcock, Boston University
Big Hall 
 
3:20pm - 3:40pm

Assessing and modelling a functional relationship of L.C. and L.U.: a possible new path forward. The LCHML (Land Characterization Meta-Language), a newly proposed FAO UML schema

Antonio DiGregorio

FAO Consultant

Land cover (LC) and land use (LU) information are important parameters in most studies related to the natural environment, ecosystem services and many other important disciplines. However, despite their importance and the many efforts toward data harmonization (especially for LC), no accepted model exists for linking and functionally correlating these two kinds of information. On the contrary, there is often a contamination of LC/LU terms in many LC nomenclatures (Anderson, CORINE, etc.) and, surprisingly, also in some LU classifications (UNFCCC, NLUD, etc.). Even when the two kinds of information are kept clearly distinct (as in the E.U. INSPIRE spatial data infrastructure), no effort is made to model/describe their functional relationship. FAO (and UNEP) have long played a prominent role in the efforts to develop standardized LC/LU classification and data harmonization. Especially for LC, the development of the LCCS parametric method and subsequently of the “object oriented” approach underlying the LCML (Land Cover Meta-Language) model (ISO Standard 19144-2) has opened a new path forward for the representation/harmonization of LC information. Based on this experience, and using part of the original LCML UML schema as a basis, a new model is under development: the LCHML (Land Characterization Meta-Language). LCHML not only proposes a revised LC model and a new LU model but also tries to create a comprehensive standardized framework in which an exhaustive and functional correlation of both biophysical features and human-related activities is possible. LCHML therefore tries to integrate both LC and LU in a unique model. The objective is to create a standardized framework in which it is possible to describe any geographic area from different perspectives: pure LC, pure LU, or a functional combination of the two called (tentatively) LCH (Land Characterization).


3:40pm - 4:00pm

Advances in Copernicus High-Resolution Land Monitoring

Gernot Ramminger1, Juergen Weichselbaum2, Baudouin Desclée3, Regine Richter1, David Herrmann1, Markus Probeck1, Linda Moser1, Christian Schleicher2, Andreas Walli2, Christophe Sannier3

1GAF AG; 2GeoVille Information Systems GmbH; 3Systèmes d’Information à Référence Spatiale (SIRS) SAS

The Copernicus Programme, headed by the European Commission (EC) in partnership with the European Space Agency (ESA), offers Earth observation-based services for six core thematic areas: Land, Atmosphere, Oceans, Climate Change, Emergency and Security. Among these services – mainly based on Earth Observation (EO) data provided by ESA through the Copernicus Space Component – the Copernicus Land Monitoring Service delivers products on local, continental and global levels. As part of the pan-European Copernicus Land Service, coordinated by the European Environment Agency (EEA), the High Resolution Layers (HRLs) map multi-temporal land cover characteristics for five thematic areas (Imperviousness, Forest, Grassland, Water/Wetness, Small Woody Features) at 20 m spatial resolution and in a consistent manner for 39 European countries. All thematic HRLs contain specific information on current environmental conditions and the temporal variance of major land cover types, with thematic accuracies exceeding 80–90% (depending on the product). The HRL products are tailored towards a multi-user community and are freely provided for download on the Copernicus website.

With the current production for the 2015 reference year, the HRLs are entering the era of big data multi-temporal image processing, incorporating large data volumes from different sensors in a decentralized processing framework in a network of industrial service providers. Our contribution will describe the framework, methodology and first results of the current HRL 2015 production comprising the update of the existing (2012) pan-European HRLs Imperviousness and Forest, including 2012–2015 change products, as well as 2015 mapping of other newly defined HRLs (Grassland, Water/Wetness, Small Woody Features).

The primary information sources are multi-temporal, high-resolution satellite images from Sentinel-1 and -2, as well as data from the SPOT, Resourcesat and Landsat contributing missions. Whereas the 2015 Forest and Imperviousness HRLs will be produced from optical time-series imagery, the newly defined Grassland and Water/Wetness products will benefit from innovative approaches based on a fusion of optical and synthetic aperture radar (SAR) time-series data. The novel HRL on Small Woody Features is the only HRL using very high resolution (VHR) data as primary input. The VHR data sets will also be used for reference data collection and validation, alongside national and pan-European in-situ data sets.

The full chain of image and in-situ data acquisition, pre-processing, generation of biophysical variables, multi-temporal image classification and validation will be demonstrated, and first results will be presented for all five HRLs. Semi-automatic classification techniques based on multi-temporal pixel-based as well as segment-based approaches, specifically tailored to each HRL, are applied, resulting in raster products at full 20 m resolution, as well as vector products at 1:5,000 scale for the Small Woody Features.

The Copernicus HRLs are designed for a broad user community as basis for environmental and regional geo-spatial analyses as well as for supporting political decision-making. With future updates, the HRLs will significantly benefit from ESA’s growing Sentinel-1/-2 archive, further improving the products’ consistency, timeliness and accuracy. An outlook concludes on the potential usability of the presented methods and products for future European to global LC/LU applications on a HR scale.


4:00pm - 4:20pm

The National Land Cover Database (NLCD): A Successful National Land Change Monitoring System

Jonathan Henry Smith

United States Geological Survey, United States of America

The National Land Cover Database (NLCD) is an example of a national land cover change monitoring system that incorporates user requirements, scientific advances and the results of rigorous accuracy assessments to provide accurate and current data products that are useful to land managers and the public. It is managed by a consortium of United States governmental agencies, the Multi-Resolution Land Characteristics (MRLC) consortium, that require land cover information to assess environmental quality and promote the sustainable use of natural resources. This consortium is a collaborative forum where members share research, methodological approaches, and data to establish protocols promoting the development and use of integrated land cover data products. The NLCD began as a one-time land cover thematic mapping effort of the conterminous US in 1992 and now encompasses four epochs (1992, 2001, 2006 and 2011) of thematic land cover data, as well as continuous field datasets such as percent impervious surface, required for water quality assessments, and percent canopy cover, required for biodiversity, biomass and carbon sequestration assessments. All datasets are derived from Landsat imagery and so have a spatial resolution of 30 metres by 30 metres. Monitoring land cover change is accomplished by integrating a remote sensing image change analysis with a knowledge-based system on land change. This system consists of rules and/or attributes derived from the spectral, spatial, and temporal characteristics of remote sensing data and historical knowledge on land cover change and its trajectories. All of the datasets have undergone rigorous accuracy assessments that have been used to guide continuous improvements in the data, as well as to advance remote sensing science.
The consortium has stressed advancing the technological aspects of land cover data production, including data preprocessing, classification methodologies and an integrated database paradigm that enables change monitoring over time. The result has been a dramatic decrease in the time and cost required to create a dataset, as well as the identification of new compatible datasets such as the newly formulated percent bare ground and shrub cover. The consortium’s efforts have produced a suite of land cover data products that provide valuable, tangible societal benefits. These benefits include: water quality monitoring and assessment; identifying potential relationships between land cover patterns and human well-being; assessing the impacts of natural disasters on ecosystem services; and influencing federal energy policies on land use change and the potential environmental degradation that may arise from it.
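The knowledge-based change monitoring described above (image change analysis filtered by rules on plausible land cover trajectories) can be sketched as follows; the class names and rule set are invented for illustration:

```python
# Hypothetical trajectory rules: conversions considered plausible a priori.
PLAUSIBLE = {
    ("forest", "shrub"),       # e.g. harvest or fire
    ("forest", "developed"),
    ("cropland", "developed"),
}

def label_change(prev_class, new_class, spectral_change):
    """Accept a conversion only when a spectral change flag and a
    trajectory rule agree; otherwise keep the previous label."""
    if not spectral_change or prev_class == new_class:
        return prev_class                  # no confirmed conversion
    if (prev_class, new_class) in PLAUSIBLE:
        return new_class                   # rule-confirmed conversion
    return prev_class                      # implausible trajectory: reject

print(label_change("forest", "developed", True))   # accepted conversion
print(label_change("developed", "forest", True))   # rejected as implausible
```

The actual NLCD rule base is far richer (spectral, spatial and temporal attributes); this only illustrates combining change detection with a rule filter.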


4:20pm - 4:40pm

Global to Local Land Cover and Habitat Mapping: The Ecopotential Approach

Richard Lucas1, Palma Blonda2, Ioannis Manakos3, Anthea Mitchell1, Joan Maso4, Cristina Domingo4, Antonello Provenzale2

1University of New South Wales, Australia; 2Consiglio Nazionale delle Ricerche, Italy; 3Centre for Research and Technology Hellas, Greece; 4Universitat Autonoma de Barcelona, Spain

A component of the EU ECOPOTENTIAL project, funded under the Horizon 2020 Programme (Reference 641762), has been the development of the EO Data for Ecosystem Monitoring (EODESM) system. This system allows classification of land covers according to the Food and Agriculture Organization (FAO) Land Cover Classification System (LCCS-2) taxonomy from EO-derived biophysical and thematic information. The presentation will a) demonstrate the EODESM system, giving examples from protected areas in Europe, Africa and the Middle East, and its wider application (including at the global level), and b) convey the uncertainty associated with including layers generated at different scales within the classification.

The EODESM system has been developed such that LCCS classifications can be generated from pre-prepared (including global) data layers as well as satellite data (e.g., as acquired by the Sentinels and Landsat). These layers relate to the broader LCCS Level 3 categories (natural/semi-natural and cultivated/managed landscapes, natural and artificial bare areas, and water) and include additional information (including lifeform, structure, phenology, hydroperiod, and urban density). Inputs can also include existing thematic information (e.g., cadastral and infrastructure maps), modeled outputs (e.g., hydrological) or knowledge. These layers are used to generate the component LCCS codes (e.g., A3 or A4 for trees or shrubs, D1 or D2 for broadleaved or evergreen), which are then combined. The system has also been adapted to consider changes in these codes as well as in biophysical and other attributes (including those unrelated to the classification, e.g., above-ground biomass), and can attribute changes to specific causes and indicate or suggest consequences and impacts on, for example, protected areas. An advantage is that it can be applied at any scale (local to global) and can also integrate data from sensors operating in different modes (lidar, radar, optical) and at different spatial and temporal resolutions. Furthermore, consistent and highly detailed land cover classifications are provided together with a diverse range of information describing landscapes (e.g., biomass, hydrology, Leaf Area Index, soil moisture). The system is also well suited to follow the LCCS-3 or Land Cover Macro Language (LCML) and is being modified accordingly. In the context of ECOPOTENTIAL, the EODESM approach has been applied to a wide range of protected areas and their surrounds with the intention of providing timely and historical information to ensure protection and facilitate restoration of ecosystem services.
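A minimal sketch of how per-layer LCCS component codes might be combined, in the spirit of EODESM; the layer names, the fixed ordering and the "-" separator are assumptions, with the example codes taken from the abstract above:

```python
def combine_lccs(components):
    """Join per-layer LCCS component codes into one class code.
    The layer names, ordering and '-' separator are illustrative only."""
    order = ["lifeform", "leaftype", "phenology"]
    return "-".join(components[k] for k in order if k in components)

# e.g. trees (A3) that are broadleaved (D1):
print(combine_lccs({"lifeform": "A3", "leaftype": "D1"}))  # -> A3-D1
```

Change detection would then compare such combined codes between dates, layer by layer.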
The EODESM system has potential for global application and this will be conveyed within the presentation with reference to other land cover classification schemes. The EODESM system will be made available through a Virtual Laboratory being prepared by the ECOPOTENTIAL project and integrated within GEOSS. Land cover maps and change maps will also be accompanied by comprehensive metadata including data quality documentation.


4:40pm - 5:00pm

Assessment of Trends in Ecosystem Health and Condition

Curtis Woodcock

Boston University, United States of America

Traditionally remote sensing of land cover has focused on mapping land cover types, with monitoring being devoted to finding the conversions between land cover types. As time series of observations have become available, it is now possible to detect more subtle characteristics about landscapes, including trends in ecosystem health and condition. Using many observations it is possible to separate effects related to seasonality from those providing longer term indications of the condition of ecosystems. For example, it is possible to observe trends indicative of growth in forests, decline related to pests or degradation, and interannual variability related to climate. One particularly interesting case is to monitor the recovery of ecosystems following disturbance. Additionally, it is possible to begin to identify events that influence ecosystems, such as extreme weather. The net effect is the ability to derive more subtle information about land cover that will prove helpful in both ecosystem modeling efforts and land management.

 
5:00pm - 6:00pmRound table discussion: Roadmap for High-resolution (10- to 30-meter) WorldCover2017
Chair: Stephen Briggs (ESA) Participants: John Latham (UN-FAO), Tobias Langanke (EU-EEA), Matthew Hansen (University of Maryland), Tom Loveland (USGS), Jun Chen (NGCC, China), Christian Hoffmann (EARSC)
Big Hall 
Date: Thursday, 16/Mar/2017
9:00am - 10:40am3.1: Validation and Accuracy
Session Chair: Thomas R. Loveland, U.S.Geological Survey
Session Chair: Sophie Bontemps, Université catholique de Louvain
Big Hall 
 
9:00am - 9:20am

Validation of global annual land cover map series and land cover change: experience from the Land Cover component of the ESA Climate Change Initiative

Sophie Bontemps1, Frédéric Achard2, Céline Lamarche1, Philippe Mayaux3, Olivier Arino4, Pierre Defourny1

1UCLouvain-Geomatics (Belgium), Belgium; 2Joint Research Center, Italy; 3European Commission, Belgium; 4European Space Agency, Italy

In the framework of the ESA Climate Change Initiative (CCI), the Land Cover (LC) team delivered a new generation of satellite-derived series of 300 m global annual LC products spanning the period from 1992 to 2015. These maps, consistent in space and time, were obtained from AVHRR (1992 - 1999), SPOT-Vegetation (1999 - 2012), MERIS (2003 - 2012) and PROBA-V (2012 - 2015) time series. The typology was defined using the UN Land Cover Classification System (LCCS) and comprises 22 classes compatible with the GLC2000, GlobCover 2005 and 2009 products.

A critical step in the acceptance of these products by users is providing confidence in the products’ quality and in their uncertainties through validation against independent data. Building on the GlobCover experience, a validation strategy was designed to globally validate the map series and changes at four points in time: 2000, 2005, 2010 and 2015.

In order to optimize data and resource availability, 2600 Primary Sample Units (PSUs), each defined as a 20 × 20 km box, were selected based on a stratified random sampling. Within each PSU, five Secondary Sample Units (SSUs) were defined, located at the center of the 20 × 20 km box. This cluster sampling strategy increased the number of sample sites and thus lowered the standard error of the accuracy estimates.
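To illustrate why increasing the number of sample sites lowers the standard error, a simplified calculation treating the SSUs as independent samples (i.e. ignoring the intra-cluster correlation that a full cluster-sampling estimator would account for, and which inflates the true standard error):

```python
from math import sqrt

def se_proportion(p, n):
    """Standard error of an estimated accuracy p under simple random sampling."""
    return sqrt(p * (1 - p) / n)

# 2600 PSUs alone vs. five SSUs per PSU, for an assumed 80% accuracy:
print(round(se_proportion(0.8, 2600), 4))      # ~0.0078
print(round(se_proportion(0.8, 2600 * 5), 4))  # ~0.0035
```

The true reduction is smaller than this sqrt(5) factor because SSUs within a PSU are spatially correlated, but the direction of the effect is the same.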

An international network of land cover specialists with regional expertise was in charge of interpreting high spatial resolution images over SSUs to build the reference database that eventually allowed assessing the accuracy of the CCI global LC map series.

Over each SSU, the visual interpretation of very high resolution imagery close to 2010 allowed labeling each object derived by segmentation according to the CCI LC legend. Change between 2010, 2005 and 2000 was then systematically evaluated on Landsat TM or ETM+ scenes acquired from the Global Land Surveys (GLS) for the respective years. Annual NDVI profiles derived from the SPOT-Vegetation time series facilitated image interpretation by providing seasonal variations of vegetation greenness. This reference validation database was then complemented for 2015 thanks to the large FAO data set obtained using the Collect Earth tool.

Overall, user’s and producer’s accuracies of the CCI LC map series are then derived. In addition, various quality indices, related to specific uses in climate models (carbon content, net primary productivity, methane emissions, etc.), will be constructed by taking into account the semantic distance between LC classes.
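The derivation of user’s, producer’s and overall accuracies from an error matrix can be sketched as follows (the counts are invented for illustration):

```python
def accuracy_measures(matrix, classes):
    """matrix[i][j]: count of samples with reference class i mapped as class j."""
    n = len(classes)
    ref_totals = [sum(matrix[i]) for i in range(n)]
    map_totals = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    producers = {classes[i]: matrix[i][i] / ref_totals[i] for i in range(n)}
    users = {classes[j]: matrix[j][j] / map_totals[j] for j in range(n)}
    overall = sum(matrix[i][i] for i in range(n)) / sum(ref_totals)
    return users, producers, overall

m = [[85, 15],   # reference forest: 85 mapped forest, 15 mapped other
     [10, 90]]   # reference other:  10 mapped forest, 90 mapped other
users, producers, overall = accuracy_measures(m, ["forest", "other"])
print(producers["forest"], users["forest"], overall)
```

The semantic-distance-weighted indices mentioned above would replace the binary diagonal agreement with a per-class-pair weight.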

Such a validation scheme was made possible by a tailored, user-friendly validation web interface which integrated a large amount of very high spatial resolution imagery, pixel-based NDVI profiles from various sensors, and interactive GIS tools to facilitate LC and change interpretation. This efficient web platform has already been selected by several other international initiatives to collect reference data sets, as this unique object-based approach provides a reference database which can be used to validate land cover maps at different resolutions.


9:20am - 9:40am

Comparative validation of Copernicus Pan-European High Resolution Layer (HRL) on Tree Cover Density (TCD) and University of Maryland (UMd) Global Forest Change (GFC) over Europe

Christophe Sannier1, Heinz Gallaun2, Javier Gallego3, Ronald McRoberts4, Hans Dufourmont5, Alexandre Pennec1

1Systèmes d'Information à Référence Spatiale (SIRS), France; 2Joanneum Research, Austria; 3Joint Research Centre, Italy; 4US Forest Service, USA; 5European Environment Agency, Denmark

The validation of datasets such as the Copernicus Pan-European High Resolution Layer on Tree Cover Density (HRL TCD) and the UMd Global Forest Change (GFC) tree percent layer requires considerable effort to provide validation results at the Pan-European level over nearly 6 million km². A stratified systematic sampling approach was developed based on the LUCAS sampling frame. A two-stage stratified sample of 17,296 1 ha square primary sampling units (PSUs) was selected over EEA39, based on countries or groups of countries whose area was greater than 90,000 km² and on a series of omission and commission strata. In each PSU, a grid of 5 × 5 secondary sampling units (SSUs) with a 20 m step was applied. These points were photo-interpreted on orthophotos with a resolution better than 2.5 m.

The UMd GFC data were processed to provide a 2012 tree percent layer comparable to the Copernicus High Resolution Layer, by including tree losses and gains over the selected period. An appropriate interpolation procedure was then applied to both the UMd GFC and Copernicus HRL TCD data to provide a precise match with the PSU validation data and to account for potential geographic differences between validation and map data, as well as SSU sampling errors.

Initial results based on binary conversion of the HRL TCD data, applying 10, 15 and 30% thresholds, indicate that omission errors are in line with the required maximum level of 15%, whereas commission errors exceed the target level of 10% set in the product specifications. However, disaggregated results were also computed at the country / group-of-countries level as well as for biogeographical regions, and showed considerable geographical variability. The best results were obtained in countries or biogeographical regions with high tree cover (e.g. Continental and Boreal regions) and the worst in those with low tree cover (e.g. Anatolian, Arctic). There is less variability between production lots, and the analysis of the scatterplots shows a strong relationship between validation and map data, with the best results in the countries and biogeographical regions mentioned previously. However, there seems to be a general trend to slightly underestimate TCD. Results for the UMd GFC dataset are currently in progress but will be made available in time for the conference. Results provided at HRL TCD production lot, biogeographical region and country/group-of-countries level should provide a sound basis for targeting further improvements to the products.
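The binary conversion and omission/commission computation described above can be sketched as follows, with invented TCD and reference values:

```python
def binarize(tcd_values, threshold):
    """Convert continuous tree cover density (0-100%) to a binary tree mask."""
    return [1 if v >= threshold else 0 for v in tcd_values]

def error_rates(reference, mapped):
    """Omission rate (missed reference trees) and commission rate (false trees)."""
    omission = sum(r == 1 and m == 0 for r, m in zip(reference, mapped))
    commission = sum(r == 0 and m == 1 for r, m in zip(reference, mapped))
    return omission / sum(reference), commission / sum(mapped)

tcd = [5, 12, 40, 80, 0, 25]    # map TCD values at validation points
ref = [0, 1, 1, 1, 0, 0]        # photo-interpreted reference (tree = 1)
mapped = binarize(tcd, 30)      # 30% threshold -> [0, 0, 1, 1, 0, 0]
print(error_rates(ref, mapped))
```

Repeating this for the 10, 15 and 30% thresholds shows how the omission/commission trade-off shifts with the chosen threshold.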

The stratification procedure, based on a combination of the HRL TCD layer and CORINE Land Cover (CLC), was effective for commission errors, but less so for omission errors. Therefore, the stratification should be simplified to include only a commission stratum (tree cover mask 1-100) and an omission stratum (the rest of the area), or an alternative stratification to CLC should be applied to better target omission. Finally, the approach developed could be rolled out at global scale for the complete validation of global datasets.


9:40am - 10:00am

Copernicus Global Land Hot Spot Monitoring Service – Accuracy Assessment and Area Estimation Approach

Heinz Gallaun1, Gabriel Jaffrain2, Zoltan Szantoi3, Andreas Brink3, Adrien Moiret2, Stefan Kleeschulte4, Mathias Schardt1, Conrad Bielski5, Cedric Lardeux6, Alex Petre7

1JOANNEUM RESEARCH, Austria; 2IGN FI, France; 3Joint Research Centre (JRC), European Commission; 4space4environment (s4e), Luxembourg; 5EOXPLORE, Germany; 6ONF International, France; 7GISBOX, Romania

The main objective of the Copernicus Global Land – Hot Spot Monitoring Service is to provide detailed land information on specific areas of interest, including protected areas and hot spots for biodiversity and land degradation. For such areas of interest, land cover and land cover change products, mainly derived from medium (Landsat, Sentinel-2) and high resolution satellite data, are made available to global users. The service directly supports field projects and policies developed by the European Union (EU) in the framework of the EU’s international policy interests. It is coordinated by the Joint Research Centre of the European Commission, answers ad-hoc requests and focuses mainly on the domain of the sustainable management of natural resources.

Comprehensive, independent validation and accuracy assessment of all thematic map products is performed before providing the land cover and land cover change products to the global users. The following rigorous methodology is applied:

Spatial, temporal and logical consistency is assessed by determination of the positional accuracy, the assessment of the validity of data with respect to time, and the logical consistency of the data e.g. topology, attribution and logical relationships.

A qualitative-systematic accuracy assessment is performed wall-to-wall by systematic visual examination of the land cover and land cover change maps within a geographic information system, and their accuracies are documented in terms of the types of errors.

For quantitative accuracy assessment, a stratified random sampling approach based on inclusion probabilities is implemented. A web-based interpretation tool built on PostgreSQL provides high resolution time series imagery (e.g. from Sentinel-2), derived temporal trajectories of reflectance, and ancillary information in addition to the very high resolution imagery. In order to quantify the uncertainty of the derived accuracy measures, confidence intervals are derived by analytic formulas as well as by applying bootstrapping.

Area estimation is performed according to the requirements for international reporting. In general, the errors of omission and the errors of commission are not equal, and such bias shows up in the course of the quantitative accuracy assessment. As the implemented approach applies probability sampling and the error matrix is based on inclusion probabilities, area estimates are directly derived from the error matrix. The area estimates are complemented by confidence intervals.
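The direct derivation of bias-adjusted area estimates from a stratified error matrix, as described above, can be sketched as follows; the strata, counts and areas are invented for illustration:

```python
def estimate_areas(counts, strata_areas):
    """Stratified (error-matrix-based) area estimation.
    counts[h][j]: number of samples in map stratum h whose reference class is j.
    strata_areas[h]: total mapped area of stratum h (same units as the output)."""
    n_classes = len(counts[0])
    areas = [0.0] * n_classes
    for h, row in enumerate(counts):
        n_h = sum(row)  # samples in stratum h
        for j, n_hj in enumerate(row):
            # each stratum contributes its area times the sample proportion
            areas[j] += strata_areas[h] * n_hj / n_h
    return areas

# Two map strata ("forest", "non-forest"), two reference classes:
counts = [[45, 5],    # forest stratum: 45 confirmed forest, 5 commission
          [10, 40]]   # non-forest stratum: 10 omitted forest, 40 non-forest
print(estimate_areas(counts, [1000.0, 4000.0]))  # -> [1700.0, 3300.0] km²
```

Note how the omitted forest in the large non-forest stratum pushes the forest area estimate well above the mapped 1000 km²; the confidence intervals mentioned above quantify the uncertainty of these adjusted areas.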

The approach and methodology will be discussed in detail on the basis of systematic accuracy assessment and area estimation results for sites in Africa, which are the focus of the first year of implementation of the Copernicus Global Land Hot Spot Monitoring Service.


10:00am - 10:20am

Forest degradation assessment: accuracy assessment of forest degradation products using HR data

Naila Yasmin, Remi D’annunzio, Inge Jonckheere

FAO Forestry Department, FAO of the UN, Rome, Italy

REDD+, Reducing Emissions from Deforestation and Forest Degradation, is a mechanism developed by Parties to the United Nations Framework Convention on Climate Change (UNFCCC). It creates a financial value for the carbon stored in forests by offering incentives for developing countries to reduce emissions from forested lands and invest in low-carbon paths to sustainable development. Developing countries would receive results-based payments for results-based actions. Forest monitoring has always remained challenging to perform with minimal error. So far, many countries have started to report on deforestation at the national scale, but assessing degradation in-country is still a major point of research. Remote sensing options are currently being explored in order to map degradation.

The aim of the ForMosa project was to develop a sound methodology for forest degradation monitoring for REDD+ using medium-resolution satellite data sources such as Landsat-8, Sentinel-2 and SPOT 5, in addition to RapidEye imagery. The project was carried out by three partners: Planet and Wageningen University (the service providers), and the Forestry Department of FAO, which carried out the accuracy assessment of the project products. Initially, three pilot study sites were selected: Kafa Tura in Ethiopia, Madre de Dios in Peru and Bac Kan in Vietnam. The initial product was developed at 10 m resolution with five classes representing levels of forest degradation of increasing intensity.

The Forestry Department of FAO used in-house open-source tools for the accuracy assessment. The process consists of four steps: (i) map data, (ii) sample design, (iii) response design and (iv) analysis. A stratified random sampling approach was used to assess the product, using high-resolution Google Earth and Bing Maps imagery. In this paper, the methodology of the work and its results will be presented.


10:20am - 10:40am

A New Open Reference Global Dataset for Land Cover Mapping at a 100m Resolution

Myroslava Lesiv1, Steffen Fritz1, Linda See1, Nandika Tsendbazar2, Martin Herold2, Martina Duerauer1, Ruben Van De Kerchove3, Marcel Buchhorn3, Inian Moorthy1, Bruno Smets3

1International Institute for Applied Systems Analysis, Austria; 2Wageningen University and Research, the Netherlands; 3VITO, Belgium

Classification techniques depend on the quality and quantity of reference data, which should represent different land cover types and varying landscapes, and be of a high overall quality. In general, there is currently a lack of reference data for large-scale land cover mapping. In particular, reference data are needed for a new dynamic 100 m land cover product that will be added to the Copernicus Global Land services portfolio in the near future. These reference data must correspond to the 100 m Proba-V data spatially and temporally. Therefore, the main objectives of this study are as follows: to develop algorithms and tools for collecting high quality reference data; to collect reference data that correspond to Proba-V data spatially and temporally; and to develop a strategy for validating the new dynamic land cover product and to collect validation data.

To aid in the collection of reference data for the development of the new dynamic land cover layer, the Geo-Wiki Engagement Platform (http://www.geo-wiki.org/) was used to develop tools and provide a user-friendly interface for collecting high quality reference data. Experts, trained by staff at IIASA (International Institute for Applied Systems Analysis), interpreted various land cover types at a network of reference locations via the Geo-Wiki platform. At each reference location, experts were presented with a 100 m × 100 m area, subdivided into 10 m × 10 m cells, superimposed on high-resolution Google Earth or Microsoft Bing imagery. Using visual cues acquired from multiple training sessions, the experts assigned each cell to a land cover class, including trees, shrubs, water, arable land, burnt areas, etc. This information is then translated into different legends using the UN LCCS (United Nations Land Cover Classification System) as a basis. The distribution of sample sites is systematic, with the same distance between sample sites. However, land cover data are not collected at every sample site, as the sampling frequency depends on the heterogeneity of land cover types by region. Data quality is controlled by ensuring that multiple classifications are collected at the same sample site by different experts. In addition, control points that have been validated by IIASA staff are used as an additional quality control measure.
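The multiple-interpretation quality control described above might be consolidated per cell as follows (a simple majority rule; the tie-handling policy of flagging for review is an assumption):

```python
from collections import Counter

def consolidate(labels):
    """Majority land cover label among several experts for one 10 m cell;
    None on a tie, flagging the cell for additional review."""
    ranked = Counter(labels).most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return None
    return ranked[0][0]

print(consolidate(["trees", "trees", "shrubs"]))   # majority -> trees
print(consolidate(["water", "wetland"]))           # tie -> None (review)
```

Comparison against the IIASA-validated control points would then give a per-expert quality score.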

As a result, we have collected reference data at approximately 20,000 sites across Africa, which was chosen due to its high density of complex landscapes and areas covered by vegetation mosaics. Current efforts are underway to expand the reference data collection to a global scale.

For validation of the new global land cover product, a separate Geo-Wiki branch was developed and is accessed by local experts who did not participate in the training data collection. A stratified random sampling procedure is used due to its flexibility and statistical rigour. For the stratification, a global stratification based on climate zones and human density by Olofsson et al. (2012) was chosen, owing to its independence from existing land cover maps and thus its suitability for future map evolutions.

The reference datasets will be freely available for use by the scientific community.

 
10:40am - 11:10amCoffee Break
Big Hall 
11:10am - 12:50pm3.2: Methods and Algorithms
Session Chair: Carsten Brockmann, Brockmann Consult GmbH
Session Chair: Mattia Marconcini, German Aerospace Center - DLR
Big Hall 
 
11:10am - 11:30am

Sentinel-2 cloud free surface reflectance composites for Land Cover Climate Change Initiative’s long-term data record extension

Grit Kirches1, Jan Wevers1, Olivier Arino2, Martin Boettcher1, Sophie Bontemps3, Carsten Brockmann1, Pierre Defourny3, Olaf Danne1, Tonio Fincke1, Céline Lamarche3, Thomas de Maet3, Fabrizio Ramoino2

1Brockmann Consult GmbH, Germany; 2ESA ESRIN, Italy; 3Université catholique de Louvain, Belgium

Long-term data records of Earth Observation data are a key input for climate change analysis and climate models. The goal of this research is to create cloud free surface reflectance composites over Africa using Sentinel-2 L1C TOA products, to extend and enrich a time series from multiple sensors (MERIS, SPOT VGT, Proba-V and AVHRR). While the focus of previous work was to merge the best available missions, providing near weekly optical surface reflectance data at global scale, to produce the most complete and consistent possible long-term data record, Sentinel-2 data will be used to map a prototype African land cover at 10-20 meters. To achieve this goal, the following processing methodology was developed for Sentinel-2: pixel identification, atmospheric correction and compositing. The term “pixel identification” – IdePix – refers to a classification of a measurement made by a space borne radiometer, for the purpose of identifying properties of the measurement which influence further algorithmic processing steps. Most important is the classification of a measurement as being made over cloud and cloud shadow, a clear sky land surface or a clear sky ocean surface. This step is followed by atmospheric correction, including aerosol retrieval, to compute surface directional reflectance. The atmospheric correction includes the correction for the absorbing and scattering effects of atmospheric gases, in particular ozone and water vapour, for the scattering of air molecules (Rayleigh scattering), and for absorption and scattering due to aerosol particles. All components except aerosols can be rather easily corrected because they can be taken from external sources or retrieved from the measurements themselves. Aerosols are spatially and temporally highly variable, and the aerosol correction is the largest error contributor of the atmospheric correction.
The atmospheric correction, particularly in the case of high-resolution data like Sentinel-2, has to take into account the effects of the adjacent topography or terrain. Furthermore, the final step of the atmospheric correction should be an approximate correction of the adjacency effect, which is caused by atmospheric scattering over adjacent areas of different surface reflectance and is required for high spatial resolution satellite sensors. The sources of uncertainty associated with the atmospheric correction are the observation and viewing geometry angles, aerosol optical thickness and aerosol type, the digital elevation model, the accuracy of ortho-rectification, pixel identification, atmospheric parameters (e.g. water vapour column), and the accuracy of spectral / radiometric calibration. All sources of error are taken into account for the uncertainty calculation, with one exception, pixel identification. For the uncertainty estimation of the Sentinel-2 data, Monte Carlo simulation, a widely used modelling approach, will be applied. Afterwards the data were binned to 10-day cloud free surface reflectance composites, including uncertainty information, on a specified grid. The compositing technique includes multi-temporal cloud and cloud shadow detection to reduce the influence of residual clouds and shadows. The results will be validated against measurements from CEOS LANDNET and RadCalNet sites. This very large scale feasibility study should pave the way for regular global high resolution land cover mapping.
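The 10-day compositing step can be sketched as follows; the use of the median as the aggregation statistic is an assumption (the abstract does not state which statistic is used):

```python
from statistics import median

def ten_day_composites(observations, window=10):
    """observations: (day_of_year, surface_reflectance, is_clear) tuples.
    Returns {bin_index: median clear-sky reflectance} per 10-day window."""
    bins = {}
    for day, reflectance, is_clear in observations:
        if is_clear:  # IdePix-style flags drop cloud / cloud shadow pixels
            bins.setdefault(day // window, []).append(reflectance)
    return {b: median(v) for b, v in sorted(bins.items())}

obs = [(1, 0.30, True), (4, 0.90, False),   # day 4 cloudy: excluded
       (7, 0.32, True), (12, 0.28, True)]
print(ten_day_composites(obs))  # one median value per 10-day bin
```

A production chain would additionally propagate the per-observation uncertainties into each composite value.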


11:30am - 11:50am

Wide area multi-temporal radar backscatter composite products

David Small1, Christoph Rohner1, Adrian Schubert1, Nuno Miranda2, Michael Schaepman1

1University of Zurich, Switzerland; 2ESA-ESRIN, Frascati, Italy

Mapping land cover signatures with satellite SAR sensors has in the past been significantly constrained by topographic effects on both the geometry and radiometry of the backscatter products used. To avoid the significant distortions introduced by strong topography to radiometric signatures, many established methods rely on single track exact-repeat evaluations, at the cost of not integrating the information from revisits from other tracks.

Modern SAR sensors offer wide swaths, enabling shorter revisit intervals than previously possible. The open data policy of Sentinel-1 enables the development of higher level products, built on a foundation of level 1 SAR imagery that meets a high standard of geometric and radiometric calibration. We systematically process slant or ground range Sentinel-1 data to terrain-flattened gamma nought backscatter. After terrain-geocoding, multiple observations are then integrated into a single composite in map geometry.

Although composite products are ubiquitous in the optical remote sensing community (e.g. MODIS), no composite SAR backscatter products have yet seen similar widespread use. In the same way that optical composites are useful to avoid single-scene obstructions such as cloud cover, composite SAR products can help to avoid terrain-induced local resolution variations, providing full coverage backscatter information that can help expedite multitemporal analysis across wide regions. The composite products we propose exhibit improved spatial resolution (in comparison to any single acquisition-based product), as well as lower noise. Backscatter variability measures can easily be added as auxiliary channels.

We present and demonstrate methods that can be applied to strongly reduce the effects of topography, allowing nearly full seamless coverage even in Alpine terrain, with only minimal residual effects from fore- vs. backslopes.

We use data from the Sentinel-1A (S1A), Sentinel-1B (S1B), and Radarsat-2 (RS2) satellites, demonstrating the generation of hybrid backscatter products based on multiple sources. Unlike some other processing schemes, here data combinations are not restricted to single modes or tracks. We define temporal windows that support ascending/descending combinations given data revisit rates seen in archival data. Next, that temporal window is cycled forward in time merging all available acquisitions from the set of satellites chosen into a time series of composite backscatter images that seamlessly cover the region under study. We demonstrate such processing over the entirety of the Alps, as well as coastal British Columbia, and northern Nunavut, Canada. With S1A/S1B combinations, we demonstrate full coverage over the Alps with time windows of 6 days. Results generated at medium resolution (~90m) are presented together with higher resolution samples at 10m.
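One plausible weighting scheme for such multi-track backscatter composites is to weight each acquisition by its local illuminated area, a proxy for local resolution that down-weights poorly resolved slopes; the exact weighting used is not stated in the abstract:

```python
def composite_gamma0(acquisitions):
    """Combine terrain-flattened gamma nought (linear units) from multiple
    tracks for one map pixel, weighted by local illuminated area so that
    well-resolved geometries dominate over foreshortened ones."""
    weighted = sum(gamma0 * area for gamma0, area in acquisitions)
    total = sum(area for _, area in acquisitions)
    return weighted / total

# hypothetical ascending (well resolved) and descending (foreshortened) looks:
print(composite_gamma0([(0.10, 1.0), (0.30, 0.2)]))
```

Applied per pixel over all acquisitions in the temporal window, this yields the seamless wide-area composites described, with per-pixel backscatter variability available as an auxiliary channel.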

The radar composites demonstrated offer a potential level 3 product that simplify analysis of wide area multi-temporal land cover signatures, just as e.g. 16-day MODIS composite products have in the optical domain.

Use of the Radarsat-2 data was made possible through the SOAR-EU programme, and an initiative of the WMO’s Polar Space Task Group SAR Coordination Working Group (SARCWG). This work was supported by a subcontract from ESA Contract No. VEGA/AG/15/01757.


11:50am - 12:10pm

Large area land cover mapping and monitoring using satellite image time series

Jan Verbesselt, Nandin-Erdene Tsendbazar, Johannes Eberenz, Martin Herold, Johannes Reiche, Dainius Masiliunas, Eline Van Elburg

Wageningen University, The Netherlands

Time series remote sensing data offer important features for land cover and land cover change mapping and monitoring, owing to their capability to capture intra- and inter-annual variation in land reflectance. Higher spatial and temporal resolution time series data are particularly useful for mapping land cover types in areas with heterogeneous landscapes and highly fluctuating vegetation dynamics. For large area land monitoring, satellite data such as PROBA-V, which provides five-daily time series at 100 m spatial resolution, improve spatial detail and resilience against high cloud cover, but also create challenges in handling the increased data volume. Cloud-based processing platforms, such as the ESA (European Space Agency) Cloud Toolbox infrastructure, can enable large-scale time series monitoring of land cover and its change.

We demonstrate current activities of Wageningen University and Research in time series based land cover mapping, change monitoring and map updating based on PROBA-V 100 m time series data. Using Proba-V based temporal metrics and cloud filtering in combination with machine learning algorithms, our approach resulted in improved land and forest cover maps for a large study area in West Africa. We further introduce an open source package for Proba-V data processing.

To address varied map users' requirements, different machine learning algorithms are tested to map cover percentages of land cover types in a Boreal region. Our study also extends to the automatic updating of land cover maps based on land cover changes observed in the full Proba-V time series.

Cloud-based, “big data”-driven land cover and change monitoring approaches show clear advantages for large area monitoring. The advent of cloud-based platforms (e.g., the PROBA-V mission exploitation platform) will not only revolutionize the way we deal with satellite data, but also enable the creation of multiple land cover maps for different end-users from various input data.


12:10pm - 12:30pm

Towards a new baseline layer for global land-cover classification derived from multitemporal satellite optical imagery

Mattia Marconcini, Thomas Esch, Annekatrin Metz, Soner Üreyen, Julian Zeidler

German Aerospace Center - DLR, Germany

In the last decades, satellite optical imagery has proved to be one of the most effective means of supporting land-cover classification; in this framework, the availability of data has lately been growing as never before, mostly due to the launch of new missions such as Landsat-8 and Sentinel-2. Accordingly, methodologies capable of properly handling huge amounts of information are becoming more and more important.

So far, most of the techniques proposed in the literature have made use of single-date acquisitions. However, such an approach often results in poor or sub-optimal performance, for instance due to specific acquisition conditions or, above all, the presence of clouds preventing the sensor from seeing what lies beneath them. The problem becomes even more critical when investigating large areas which cannot be covered by a single scene, as in the case of national, continental or global analyses. In such circumstances, products are derived from data necessarily acquired at different times for different locations, and are therefore generally not spatially consistent.

In order to overcome these limitations we propose a novel paradigm for the exploitation of optical data based on the use of multitemporal imagery, which can be effectively applied from local to global scale. First, for the given study area and time frame of interest, all the available scenes acquired by the chosen sensor are taken into consideration and pre-processed if necessary (e.g., radiometric calibration, orthorectification, spatial registration). Afterwards, cloud masking and, optionally, atmospheric correction are performed. Next, a series of features suitable for the specific application under investigation are derived for all scenes, for instance spectral indices [e.g., the normalized difference vegetation index (NDVI), the atmospherically resistant vegetation index (ARVI), the normalized difference water index (NDWI)] or texture features (e.g., occurrence textures, co-occurrence textures, local coefficient of variation). The core idea is then to compute, for each pixel, key temporal statistics of all the extracted features, such as the temporal maximum, minimum, mean, variance and median. This compresses all the information contained in the different multitemporal acquisitions while still easily and effectively characterizing the underlying dynamics.
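
The per-pixel temporal statistics at the core of this idea can be sketched as follows; this is a minimal illustration assuming the feature stack is already co-registered and cloud-masked to NaN, not the TimeScan production code:

```python
import numpy as np

def temporal_stats(feature_stack):
    """Compress a per-pixel feature time series (e.g. NDVI) into key
    temporal statistics, ignoring cloud-masked (NaN) observations.

    feature_stack : (T, H, W) array, NaN where the pixel was cloudy
    returns       : dict of (H, W) arrays
    """
    return {
        "min":    np.nanmin(feature_stack, axis=0),
        "max":    np.nanmax(feature_stack, axis=0),
        "mean":   np.nanmean(feature_stack, axis=0),
        "median": np.nanmedian(feature_stack, axis=0),
        "var":    np.nanvar(feature_stack, axis=0),
    }

# toy NDVI series for a 1x2 area over four acquisitions
ndvi = np.array([
    [[0.2, 0.6]],
    [[0.4, np.nan]],   # second pixel cloud-masked here
    [[0.6, 0.7]],
    [[0.8, 0.8]],
])
stats = temporal_stats(ndvi)
print(stats["mean"])   # → [[0.5 0.7]]
```

The resulting statistic layers replace the full time series as classifier input, which is what keeps the data volume tractable at global scale.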

In our experiments, we focused on Landsat data. Specifically, we generated the so-called TimeScan-Landsat 2015 global product derived from almost 420,000 Landsat-7/8 scenes collected at 30m spatial resolution between 2013 and 2015 (for a total of ~500 terabytes of input data and more than 1.5 petabytes of intermediate products). So far, the dataset is being employed to support the detection of urban areas globally and to estimate the corresponding built-up density. Additionally, it has also been tested for deriving a land-cover classification map of Germany. In the latter case, an ensemble of Support Vector Machine (SVM) classifiers was trained using labelled samples derived from the CORINE land-cover inventory (according to a novel strategy which properly takes its lower spatial resolution into account). Preliminary results are very promising and confirm the great potential of the proposed approach, which we plan to apply at larger continental scale in the next months.


12:30pm - 12:50pm

Advancing Global Land Cover Monitoring

Matthew Hansen

University of Maryland College Park, Department of Geographical Sciences

Mapping and monitoring of global land cover and land use is a challenge, as each theme requires different inputs for accurate characterization.
This talk presents results on global tree cover, bare ground, surface water and crop type, with the goal of realizing a generic approach. The ultimate goal is to map land themes and their change over time, using the map directly to estimate areas. However, much work needs to be done in demonstrating that maps, particularly of land change, may be used in area estimation. Good practices require the use of probability-based samples in providing unbiased area estimates of land cover extent and change. All of the aforementioned themes will have sample-based reference data presented in explaining the challenges of generic mapping and monitoring at the global scale.
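
The probability-based area estimation that good practices call for can be illustrated with a minimal stratified estimator; the strata, sample sizes and counts below are hypothetical numbers, not results from the talk:

```python
def stratified_area_estimate(strata):
    """Unbiased area estimate of a target class from a stratified random
    sample: sum over strata of (stratum area) x (sample proportion of the
    target class among that stratum's reference observations).

    strata : list of (map_area_ha, n_sampled, n_target_class) per stratum
    """
    total = 0.0
    for area, n, hits in strata:
        total += area * (hits / n)
    return total

# hypothetical example: area of a 'change' class from two map strata
strata = [
    (90000.0, 100, 5),    # mapped 'stable' stratum: 5% reference change
    (10000.0, 100, 80),   # mapped 'change' stratum: 80% reference change
]
print(stratified_area_estimate(strata))  # → 12500.0 ha
```

Note that the estimate (12,500 ha) differs from the mapped change area (10,000 ha): the sample corrects for omission and commission errors in the map, which is why the map alone should not be used for area estimation.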

 
1:50pm - 3:10pm  3.3: Platforms
Session Chair: Chris Steenmans, European Environment Agency
Session Chair: Mark Doherty, ESA
Big Hall 
 
1:50pm - 2:10pm

Land monitoring integrated Data Access - status and outlook on platforms

Bianca Hoersch1, Susanne Mecklenburg1, Betlem Rosich1, Sveinung Loekken1, Philippe Mougnaud1, Erwin Goor2

1European Space Agency, Italy; 2VITO, Belgium

For more than 20 years, Earth Observation (EO) satellites developed or operated by ESA have provided a wealth of data. In the coming years, the Sentinel missions, along with the Copernicus Contributing Missions, the Earth Explorers and other Third Party missions, will provide routine monitoring of our environment at the global scale, thereby delivering an unprecedented amount of data.

For global land monitoring and mapping, this fleet of heritage and operational missions makes it possible to analyse, extract, condense and derive relevant information on the status and change of our land cover, heading towards the development of a sustainable operational system for land cover classification that meets users' various land monitoring needs.

ESA, as owner or operator, has handled and continues to handle a variety of heritage and operational missions, such as Sentinel-2 and Sentinel-3 on behalf of the European Commission, the latter building on 10 years of MERIS heritage. Furthermore, Proba-V has been delivering data to a growing land user base for more than 3 years, following on from 15 years of SPOT VGT heritage operated by CNES, with data dissemination via Belgium and VITO as the current archive manager. Through missions such as Landsat, Earthnet builds on a land monitoring heritage of more than 35 years.

While the availability of the growing volume of environmental data from space represents a unique opportunity for science and applications, it also poses a major challenge to achieve its full potential in terms of data exploitation.

In this context, ESA started the EO Exploitation Platforms (EPs) initiative in 2014, a set of R&D activities whose first phase (up to 2017) aims to create an ecosystem of interconnected Thematic Exploitation Platforms (TEPs) on a European footing, addressing a variety of thematic areas.

The PROBA-V Mission Exploitation Platform (MEP) complements the PROBA-V user segment by offering an operational Exploitation Platform for PROBA-V data, correlative data and derived products. The MEP PROBA-V addresses a broad vegetation user community, with the final aim of easing and increasing the use of PROBA-V data by any user. The data offering consists of the complete archive from SPOT-VEGETATION and PROBA-V, as well as selected high-resolution data/products in the land domain.

Together with the European Commission, ESA is furthermore preparing the way for a new era in Earth Observation, with a concept of bringing users to the data under 'EO Innovation Europe', responding to a paradigm shift in the exploitation of Earth Observation data.

The Copernicus Data Information and Access Service (DIAS) will focus on appropriate tools, concepts and processes that allow combining the Copernicus data and information with other, non-Earth Observation data sources to derive novel applications and services. It is foreseen that the data distribution and access initiatives will support, enable and complement the overall user and market uptake strategy for Copernicus.

The presentation will report on the current status of, and planning for, the operational and planned exploitation platforms with ESA involvement, as relevant for land monitoring and land cover classification.


2:10pm - 2:30pm

Sentinel-powered land cover monitoring done efficiently

Grega Milcinski

Sinergise, Slovenia

Sentinel-2 data have been distributed for more than a year now. However, they are still not as widely used as their usefulness warrants. The reason probably lies in the technical complexity of using S-2 data, especially if one wants to use the full potential of multi-temporal and multi-spectral imaging. The vast volume of data to download, store and process is technically too challenging.
We will present a Copernicus Award [1] winning service for archiving, processing and distribution of Sentinel data, Sentinel Hub [2]. It makes it easy for anyone to tap into the global Sentinel archive and exploit its rich multi-sensor data to observe changes in the land. We will demonstrate how one can not only browse imagery anywhere in the world but also create one's own statistical analyses in a matter of seconds, comparing different sensors across various time segments. The result can be immediately viewed in any GIS tool or exported as a raster file for post-processing. All of these actions can be performed on the full, worldwide, multi-temporal and multi-spectral S-2 archive. To demonstrate the technology, we created a simple web application called "Sentinel Playground" [3], which makes it possible to query Sentinel-2 data anywhere in the world.
Sentinel-2 data are only as useful as the applications built on top of them. We would like people not to spend their effort on basic processing and storage of data, but rather to focus on value-added services. This is why we are looking for partners who would bring their remote sensing expertise and create interesting new services.
[1] http://www.copernicus-masters.com/index.php?anzeige=press-2016-03.html
[2] http://www.sentinel-hub.com
[3] http://apps.sentinel-hub.com/sentinel-playground/


2:30pm - 2:50pm

Bringing High Resolution to the Globe – A system for automatic Land Cover Mapping built on Sentinel data streams to fulfill multi-user application requirements

Michael Riffler, Andreas Walli, Jürgen Weichselbaum, Christian Hoffmann

GeoVille Information Systems, Austria

Accurate, detailed and continuously updated information on land cover is fundamental to fulfil the information requirements of new environmental legislation, directives and reporting obligations, to address sustainable development and land resource management, and to support climate change impact and mitigation studies. Each of these applications has different data requirements in terms of spatial detail, thematic content, topicality, accuracy, and frequency of updates. To date, such demands have largely been covered through bespoke services based on a variety of relevant EO satellite sensors and customized, semi-automated processing steps.

To address public and industry multi-user requirements, we present a system for the retrieval of high resolution global land cover monitoring information, designed along Space 4.0 standards. The highly innovative framework provides flexible options to automatically retrieve land cover based on multi-temporal data streams from the Sentinel-1 and Sentinel-2 missions as well as third party missions. Users can specify the desired land cover data for any place on the globe and any time period since the operational start of Sentinel-2, and receive a quality-controlled output within hours or days (depending on product level).

The core of the operational mapping system is a modular chain consisting of sequential components for operational data access, pre-processing, time-series image analysis and classification, pre-acquired in-situ data supported calibration and validation, and service components related to product ordering and delivery. Based on the user’s selection for a target area, date/period, and the type of the requested product, the system modules are automatically configured into a processing chain tailored to sector-specific information needs.

The data access component retrieves all necessary data by connecting the processing system to Sentinel data archives (e.g. the Austrian EODC) as well as other online image and in-situ databases. After pre-processing, all satellite data streams are converged into data cubes hosting the time-series data in a scalable, tile-based system. The targeted land cover information is extracted in a class-specific manner in the thematic image analysis, classification and monitoring module, which represents the core of the processing engine. Key to the retrieval of thematic land cover data is an automated, iterative training sample extraction approach based on data from existing regional and global land cover products and in-situ databases. The system is self-learning and self-improving, thereby continuously building a global database of spatially and temporally consistent training samples for calibration and validation. Finally, the class-specific land cover maps are assembled into a coherent land cover database according to the user's specifications.
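
One ingredient of such automated training-sample extraction can be sketched as follows; this is a minimal illustration under our own assumptions (not GeoVille's actual module) that keeps only pixels lying in homogeneous 3x3 neighbourhoods of an existing land cover map, discarding mixed patch borders, a common source of label noise:

```python
import numpy as np

def homogeneous_training_pixels(label_map):
    """Select candidate training pixels from an existing land cover map:
    keep only pixels whose full 3x3 neighbourhood carries the same label.
    Returns a boolean mask (border pixels are always False).
    """
    h, w = label_map.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = label_map[i - 1:i + 2, j - 1:j + 2]
            mask[i, j] = np.all(win == label_map[i, j])
    return mask

# toy map: forest (1) with a water (2) patch in one corner
lm = np.array([
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2],
    [1, 1, 1, 2, 2],
])
print(homogeneous_training_pixels(lm))
```

Pixels passing such a filter could then seed the calibration of a classifier, with the iterative refinement described above gradually improving the global sample database.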

The developed system is currently tested in various R&D as well as operational customer projects. The aim is to solidify the performance of the various modules with a multi-staged opening of the system portal, starting with selected industry customers along B2B service models.

We will demonstrate the service capacity for a number of use cases, which are already applied for the current production of the High Resolution Layers within the Copernicus Land Monitoring Service, GlobalWetland-Africa, the Land Information System Austria (CadastrENV) and related mapping services for the international development sectors.


2:50pm - 3:10pm

Land Cover data to support a Change Detection system for the Space and Security community

Sergio Albani, Michele Lazzarini, Paulo Nunes, Emanuele Angiuli

European Union Satellite Centre, Spain

One of the main interests in exploiting Earth Observation (EO) data and collateral information in the Space and Security domain is the detection of changes on the Earth's surface; to this aim, accurate and precise information on Land Cover is essential. It is therefore crucial to improve the capability to access and analyse the growing amount of data produced at high velocity by a variety of EO and other sources, as the current scenario presents an invaluable opportunity for constant monitoring of Land Cover changes.

The increasing amount of heterogeneous data demands different approaches and solutions to exploit such huge and complex datasets; new paradigms are replacing the traditional approach in which data are downloaded to users' machines, and technologies such as Big Data and Cloud Computing are emerging as important enablers of productivity and better services, where the “processes are brought to the data”.

The European Union Satellite Centre (SatCen) is currently outlining a system using Big Data and Cloud Computing solutions, built on the results of two Horizon 2020 projects: BigDataEurope (Integrating Big Data, software & communities for addressing Europe's Societal Challenges) and EVER-EST (European Virtual Environment for Research – Earth Science Themes). Its main aims are: to simplify access to satellite data (e.g. from the Sentinel missions); to increase processing efficiency using distributed computing (e.g. via open source toolboxes); to detect and visualise changes potentially related to Land Cover variations; and to integrate the final output with collateral information.

Through a web-based Graphical User Interface the user can define an Area of Interest (AoI) and a specific time range for the analysis. The system is directly connected to relevant catalogues (e.g. the Sentinels Data Hub), and the data (e.g. Sentinel-1) can be accessed and selected for processing. Several SNAP operators (e.g. subset, calibration and terrain correction) have been chained, so the user can automatically trigger the pre-processing chain and the subsequent change detection algorithm (based on an in-house tool). The output is then mapped as clustered changes over the specified AoI.
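
The in-house change detection tool is not described in the abstract; as a stand-in, a common SAR change detection step, log-ratio thresholding of two pre-processed, co-registered intensity images, can be sketched as follows (the threshold and toy values are assumptions):

```python
import numpy as np

def log_ratio_change(img_t1, img_t2, threshold_db=3.0):
    """Flag changed pixels between two co-registered, calibrated SAR
    intensity images via the log-ratio operator: pixels whose backscatter
    changed by more than `threshold_db` dB are marked as change.
    """
    eps = 1e-10  # guard against division by zero / log of zero
    ratio_db = 10.0 * np.log10((img_t2 + eps) / (img_t1 + eps))
    return np.abs(ratio_db) > threshold_db

# toy example: one pixel roughly doubles in power (+3.1 dB), others stable
t1 = np.array([[0.10, 0.20], [0.30, 0.40]])
t2 = np.array([[0.10, 0.41], [0.30, 0.40]])
print(log_ratio_change(t1, t2))
```

Clustering the resulting change mask over the AoI then yields the kind of mapped output the system delivers to the user.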

The integration of the detected changes with Land Cover information and collateral data (e.g. from social media and news) makes it possible to characterize and validate changes, in order to provide decision-makers with clear and useful information.

 
3:10pm - 4:10pm  Conclusions by Chairs
Big Hall 

 
Conference: WorldCover 2017