Step by step: Radiometric intercalibration of multi-source nighttime light images at high resolution for disaster mapping

SDGSAT-1


The SDGSAT-1 satellite is the first Earth observation satellite developed to support the implementation of the United Nations 2030 Agenda for Sustainable Development Goals (SDGs). It was developed and operated by the International Research Center of Big Data for Sustainable Development Goals (CBAS) and was launched on 5 November 2021.
SDGSAT-1 carries multiple onboard sensors, including a thermal infrared spectrometer, a glimmer imager (GLI), and a multispectral imager. The GLI sensor is designed for nighttime light (NTL) observation and includes one panchromatic band with 10 m spatial resolution and three multi-color bands with 40 m spatial resolution.
SDGSAT-1 data can be accessed through the SDGSAT-1 open data platform provided by CBAS (https://www.sdgsat1.org.cn/) after user registration. In this recommended practice, the RGB bands from the SDGSAT-1 GLI imagery are used for the analysis.


Yangwang-1:


The Yangwang-1 satellite (also called “Look Up 1”) was launched on 11 June 2021 and developed by Origin Space Corporation in China. It is a dual-band commercial space telescope equipped with both an optical camera and an ultraviolet camera.
The optical camera operates in the visible wavelength range of approximately 420–700 nm, while the ultraviolet camera covers a narrow spectral range around 250–280 nm. In this recommended practice, nighttime light (NTL) imagery is derived exclusively from the optical camera.
Owing to its high spatial resolution and low-light detection capability, the Yangwang-1 optical sensor can acquire high-resolution nighttime light imagery for Earth observation applications, including road network extraction, disaster monitoring, and other NTL-based analyses.

Before intercalibration, we perform geometric registration of the Yangwang-1 image to the SDGSAT-1 image by selecting ground control points, achieving a spatial error of less than 30 meters (i.e., one pixel) to ensure spatial alignment. For both SDGSAT-1 and Yangwang-1 images, background noise was removed by subtracting a background threshold, defined as the 90th percentile of brightness values within the delineated unlit areas, from each image.
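The background-subtraction step can be sketched as follows. This is a minimal illustration, not the authors' exact implementation; `image` and `unlit_mask` are hypothetical inputs (2-D arrays, with the unlit area assumed to be delineated beforehand).

```python
# Sketch of background-noise removal: the threshold is the 90th percentile
# of brightness inside a delineated unlit area, subtracted from the image.
import numpy as np

def remove_background(image, unlit_mask):
    """Subtract the 90th-percentile brightness of unlit pixels; clip at zero."""
    threshold = np.percentile(image[unlit_mask], 90)
    return np.clip(image - threshold, 0.0, None)

# Synthetic demonstration: a dark scene with one lit patch
rng = np.random.default_rng(0)
image = rng.uniform(0.0, 5.0, (100, 100))   # faint background noise
image[40:60, 40:60] += 50.0                 # a lit "urban" patch
unlit_mask = np.ones(image.shape, dtype=bool)
unlit_mask[40:60, 40:60] = False            # delineated unlit area
cleaned = remove_background(image, unlit_mask)
```

After subtraction, unlit pixels are driven to zero while lit pixels keep their signal above the noise floor.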

Radiometric Intercalibration Workflow


Remote sensing images acquired from different satellites may show differences in brightness values because of sensor characteristics and imaging conditions. To make multi-source nighttime light (NTL) images comparable, a radiometric intercalibration process is required. In this practice, SDGSAT-1 and Yangwang-1 images are used as an example to demonstrate the intercalibration workflow.
The procedure consists of three main steps: identifying stable pixels, estimating the regression relationship between sensors, and applying the derived transformation to perform radiometric correction.

 

1. Identification of Stable Pixels


The first step of radiometric intercalibration is to identify stable pixels that show little change in nighttime light intensity. These pixels are often referred to as pseudo-invariant pixels (PIPs).
Because disasters may cause large variations in nighttime light intensity, not all pixels can be used for model fitting. Therefore, only pixels with relatively stable brightness values between the two images are selected as training samples.
To identify candidate pixels, threshold segmentation is applied to both SDGSAT-1 and Yangwang-1 images to determine lit areas. Only pixels that belong to the common lit area in both images are retained for further analysis.
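The common-lit-area selection can be sketched with simple boolean masks. The threshold values below are arbitrary placeholders for illustration, not the values used in the study.

```python
import numpy as np

def common_lit_mask(img_a, img_b, thr_a, thr_b):
    """Pixels whose brightness exceeds the lit-area threshold in BOTH images."""
    return (img_a > thr_a) & (img_b > thr_b)

# Toy example with 2 x 2 images: only the top-left pixel is lit in both
img_sdgsat = np.array([[10.0, 0.5], [8.0, 0.2]])
img_yangwang = np.array([[12.0, 9.0], [0.3, 0.1]])
mask = common_lit_mask(img_sdgsat, img_yangwang, thr_a=1.0, thr_b=1.0)
```

Only pixels where `mask` is True would be passed on as candidate pseudo-invariant pixels.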
Let the selected stable pixels from Yangwang-1 be

L = {l_1, l_2, …, l_n}

and the corresponding RGB values from SDGSAT-1 be

R = {(r_1, g_1, b_1), (r_2, g_2, b_2), …, (r_n, g_n, b_n)}

where n denotes the number of selected stable pixels.


2. Regression-Based Sensor Relationship Estimation


After selecting stable pixels, a regression model is used to establish the relationship between the two sensors.
In this practice, the Yangwang-1 brightness value is modeled as a linear combination of the SDGSAT-1 RGB bands. The regression model can be written as

l_i = α0 + α1 r_i + α2 g_i + α3 b_i

where

l_i is the brightness value of the i-th Yangwang-1 pixel,

r_i, g_i, b_i are the RGB values of the corresponding SDGSAT-1 pixel, and

α0, α1, α2, α3 are regression coefficients.

 

To improve robustness, an iterative regression process is used to remove outliers. The difference between observed and predicted values is calculated as

ΔL = L − L′

where L′ denotes the predicted Yangwang-1 brightness values obtained from the regression model.
Pixels with large residual errors are removed from the training dataset, and the regression model is recalculated until the model converges.
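The iterative fit-and-trim loop can be sketched as below. This is an illustrative implementation under assumptions (ordinary least squares, a 2-sigma residual cutoff, a fixed iteration cap); the study's actual cutoff and convergence rule may differ.

```python
import numpy as np

def iterative_regression(rgb, l, n_sigma=2.0, max_iter=10):
    """Fit l ~ a0 + a1*r + a2*g + a3*b, iteratively dropping large residuals."""
    X = np.column_stack([np.ones(len(l)), rgb])
    y = np.asarray(l, dtype=float)
    for _ in range(max_iter):
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        sigma = resid.std()
        if sigma == 0:          # perfect fit: nothing left to trim
            break
        keep = np.abs(resid) < n_sigma * sigma
        if keep.all():          # converged: no outliers removed this round
            break
        X, y = X[keep], y[keep]
    return coef

# Synthetic stable pixels: known coefficients, noise, and a few "changed" pixels
rng = np.random.default_rng(42)
rgb = rng.uniform(0.0, 100.0, (200, 3))
true = np.array([2.0, 0.5, 0.3, 0.2])
l = true[0] + rgb @ true[1:] + rng.normal(0.0, 1.0, 200)
l[:10] += 50.0                  # simulate disaster-changed (outlier) pixels
coef = iterative_regression(rgb, l)
```

With the outliers trimmed away, the recovered coefficients land close to the true values despite the contaminated training set.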


3. Radiometric Intercalibration


Once the regression relationship between the two sensors has been obtained, it can be applied to all pixels of the SDGSAT-1 image.
The transformed SDGSAT-1 brightness value can be calculated as

I_Yangwang-1-like = α0 + α1 R + α2 G + α3 B

where I_Yangwang-1-like represents the Yangwang-1-like image generated from SDGSAT-1 data, and R, G, B denote the SDGSAT-1 RGB bands.

 

After this transformation, the resulting image has brightness values that are consistent with Yangwang-1 observations, allowing the two datasets to be used together for further analysis.
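Applying the fitted relation to every pixel is a simple per-band weighted sum. The coefficients below are hypothetical placeholders, not fitted values.

```python
import numpy as np

def to_yangwang_like(rgb_image, coef):
    """Apply l = a0 + a1*R + a2*G + a3*B per pixel; rgb_image is (H, W, 3)."""
    a0, a1, a2, a3 = coef
    out = (a0 + a1 * rgb_image[..., 0]
              + a2 * rgb_image[..., 1]
              + a3 * rgb_image[..., 2])
    return np.clip(out, 0.0, None)   # brightness cannot be negative

# Tiny example with hypothetical coefficients
coef = (1.0, 0.5, 0.3, 0.2)
rgb_image = np.array([[[10.0, 10.0, 10.0], [0.0, 0.0, 0.0]]])  # shape (1, 2, 3)
result = to_yangwang_like(rgb_image, coef)
```

The output is a single-band Yangwang-1-like image with the same spatial dimensions as the input.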

 

The calibrated Yangwang-1-like image generated from SDGSAT-1 data enables consistent comparison with the observed Yangwang-1 imagery. By combining the pre-disaster Yangwang-1-like image and the post-disaster Yangwang-1 image, a high-resolution nighttime light loss rate map can be derived.


The results shown in Figure 1 demonstrate that the spatial distribution of light loss clearly reflects the impact of disasters. In the Turkey–Syria earthquake case, large areas with significant nighttime light reduction can be identified in the city of Antakya.


These high-resolution light loss patterns provide important information about changes in human activities and infrastructure conditions after disasters. Therefore, the radiometric intercalibration approach enables the generation of detailed nighttime light change maps that can support disaster impact assessment, damage evaluation, and post-disaster recovery monitoring.

 


Figure 1. Nighttime light changes in Antakya, Hatay before and after the disaster. The post-disaster Yangwang-1-like image is generated from SDGSAT-1, while the pre-disaster image is acquired by Yangwang-1, enabling consistent comparison of nighttime light intensity.
Image source: UNOSAT report, 2023 (https://unosat.org/products/3497).

In Detail: Radiometric intercalibration of multi-source nighttime light images at high resolution for disaster mapping

Disasters such as earthquakes and conflicts can cause sudden changes in nighttime light. Monitoring these changes is important for understanding disaster impacts and supporting recovery efforts. High-resolution nighttime light (NTL) remote sensing provides detailed observations of human activities and infrastructure conditions at night, making it a valuable data source for disaster assessment. However, NTL images acquired by different satellites often have inconsistent radiometric characteristics, which makes their brightness values difficult to compare directly. Radiometric intercalibration is therefore required to reduce these sensor differences and enable the integration of multi-source NTL datasets.
The recommended practice introduces an automatic radiometric intercalibration workflow for high-resolution nighttime light imagery using SDGSAT-1 and Yangwang-1 data. The procedure identifies stable pixels, estimates the regression relationship between sensors, and applies the derived transformation to generate radiometrically consistent images. The workflow is implemented in Python and provided through an open-source GitHub repository, enabling users to integrate multi-source nighttime light datasets and improve disaster monitoring capabilities.

Background


Disasters such as earthquakes and conflicts can cause sudden changes in nighttime light. These changes can be observed by satellites and analyzed using nighttime light (NTL) remote sensing data. High-resolution NTL imagery provides detailed information about human activities and infrastructure conditions at night, making it useful for disaster assessment and recovery monitoring.
To improve monitoring frequency, researchers often combine NTL images from different satellites. However, images acquired by different sensors may have different radiometric characteristics. These differences make the brightness values difficult to compare directly. Traditional radiometric intercalibration methods usually rely on pseudo-invariant pixels (PIPs), which are areas assumed to remain stable over time. In disaster situations, however, stable reference areas can be difficult to identify because large regions may experience significant changes in nighttime illumination. 
In this procedure, an automatic radiometric intercalibration approach is introduced to improve the comparability of high-resolution NTL images from different sensors. The workflow is implemented using Python, and the processing scripts are provided through a public GitHub repository. This approach helps integrate multi-source nighttime light datasets and supports more frequent observations for disaster monitoring.

 
Radiometric Intercalibration Principle


Radiometric intercalibration is a process used to make satellite images from different sensors comparable. Because different satellites may have different sensor designs, spectral responses, and imaging conditions, the brightness values recorded by each sensor may not be directly consistent. Radiometric intercalibration adjusts these differences so that images from different sensors can be analyzed together. 
A common approach for radiometric intercalibration is to first identify stable regions that remain relatively unchanged over time. These areas are often referred to as pseudo-invariant regions or pixels. The brightness values from the stable regions in the two images are then used to establish a regression relationship between the sensors. Once the relationship is obtained, it can be applied to transform the image from one sensor into a radiometrically consistent image with the reference sensor.

Radiometric Intercalibration Workflow


Remote sensing images acquired from different satellites may show differences in brightness values because of sensor characteristics and imaging conditions. To make multi-source nighttime light (NTL) images comparable, a radiometric intercalibration process is required. In this practice, SDGSAT-1 and Yangwang-1 images are used as an example to demonstrate the intercalibration workflow.
The procedure consists of three main steps: identifying stable pixels, estimating the regression relationship between sensors, and applying the derived transformation to perform radiometric correction.


1. Identification of Stable Pixels


The first step of radiometric intercalibration is to identify stable pixels that show little change in nighttime light intensity. These pixels are often referred to as pseudo-invariant pixels (PIPs).
Because disasters may cause large variations in nighttime light intensity, not all pixels can be used for model fitting. Therefore, only pixels with relatively stable brightness values between the two images are selected as training samples.
To identify candidate pixels, threshold segmentation is applied to both SDGSAT-1 and Yangwang-1 images to determine lit areas. Only pixels that belong to the common lit area in both images are retained for further analysis.


Let the selected stable pixels from Yangwang-1 be

L = {l_1, l_2, …, l_n}

and the corresponding RGB values from SDGSAT-1 be

R = {(r_1, g_1, b_1), (r_2, g_2, b_2), …, (r_n, g_n, b_n)}

where n denotes the number of selected stable pixels.


2. Regression-Based Sensor Relationship Estimation


After selecting stable pixels, a regression model is used to establish the relationship between the two sensors.
In this practice, the Yangwang-1 brightness value is modeled as a linear combination of the SDGSAT-1 RGB bands. The regression model can be written as

l_i = α0 + α1 r_i + α2 g_i + α3 b_i

where

l_i is the brightness value of the i-th Yangwang-1 pixel,

r_i, g_i, b_i are the RGB values of the corresponding SDGSAT-1 pixel, and

α0, α1, α2, α3 are regression coefficients.

 

To improve robustness, an iterative regression process is used to remove outliers. The difference between observed and predicted values is calculated as

ΔL = L − L′

where L′ denotes the predicted Yangwang-1 brightness values obtained from the regression model.
Pixels with large residual errors are removed from the training dataset, and the regression model is recalculated until the model converges.


3. Radiometric Intercalibration
Once the regression relationship between the two sensors has been obtained, it can be applied to all pixels of the SDGSAT-1 image.
The transformed SDGSAT-1 brightness value can be calculated as

I_Yangwang-1-like = α0 + α1 R + α2 G + α3 B

where I_Yangwang-1-like represents the Yangwang-1-like image generated from SDGSAT-1 data, and R, G, B denote the SDGSAT-1 RGB bands.

 

After this transformation, the resulting image has brightness values that are consistent with Yangwang-1 observations, allowing the two datasets to be used together for further analysis.

This practice can be applied to disaster events anywhere in the world.

Advantages

  • Robustness and Flexibility: The workflow can be applied to different regions and disaster cases. Yangwang-1 is used as a case example, while the method can be extended to intercalibrate other high-resolution nighttime light datasets.
  • Consistency: The regression model converts SDGSAT-1 RGB imagery into Yangwang-1-like images, ensuring similar radiometric scales and spatial patterns so that multi-sensor nighttime light datasets can be directly compared.
  • Temporal coverage: By integrating multi-source nighttime light imagery, the method improves observation frequency and enables monitoring of dynamic changes in human activities after disasters.
  • Reproducibility: The workflow is computationally efficient and implemented in Python, with open-source scripts available on GitHub so users can reproduce the process and adapt it to other datasets.

Disadvantages

  • Data availability: The method requires high-resolution nighttime light datasets from multiple sensors. In regions where such data are unavailable or limited, the intercalibration workflow may be difficult to apply.
  • Stable pixels: The approach relies on the presence of sufficient pseudo-invariant pixels. In areas with rapid nighttime light changes, such as large disasters or fast urbanization, identifying stable pixels may be challenging.


GitHub repository
https://github.com/UN-SPIDER-Wuhan/ntl_rad_intercalibration.git 

 

  • Yin, Z., Li, X., Tong, F., Li, Z., & Jendryke, M. (2020). Mapping urban expansion using night-time light images from Luojia1-01 and International Space Station. International Journal of Remote Sensing, 41, 2603–2623.

Step by Step: Angular Normalization of Daily Night-time Light Data

This Recommended Practice utilizes Python for processing NASA's Black Marble daily night-time light (NTL) data. The procedure involves data filtering, angular normalization to remove viewing zenith angle (VZA) effects, gap-filling, and calculating disaster recovery indices.

1. Data Acquisition:
  • Download the daily at-sensor TOA night-time radiance (VNP46A1) and daily moonlight-adjusted NTL (VNP46A2) from NASA's Black Marble product suite for your study area and timeframe (e.g., 1 month pre-disaster to several months post-disaster).
  • Download the GlobeLand30 land-cover dataset (30m resolution) to identify and extract urban built-up areas (artificial surfaces), as these areas concentrate human activity and artificial light.
 
2. Data Filtering & Quality Control:

To remove low-quality data and non-artificial light, apply the following strict selection criteria to each pixel:

  • Remove pixels with a solar zenith angle less than 108 degrees to eliminate solar illumination interference.
  • Filter out cloud-contaminated pixels using the QF cloud mask from the VNP46A2 product.
  • Perform a secondary selection using the mandatory quality flag to remove abnormal values.
  • Remove pixels with a moon illumination fraction above 60%, since strong moonlight contaminates the signal and makes residual cloud-polluted pixels difficult to identify.
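The four criteria above can be combined into one boolean mask. The field names and flag conventions below (0 meaning clear / good quality) are assumptions for illustration, not the exact Black Marble bit encodings; consult the VNP46A2 user guide for the real quality-flag layout.

```python
import numpy as np

def quality_mask(solar_zenith, cloud_flag, quality_flag, moon_fraction):
    """True where a pixel passes all four filtering criteria."""
    return (
        (solar_zenith >= 108.0)    # keep night-time observations only
        & (cloud_flag == 0)        # clear per the QF cloud mask (0 assumed = clear)
        & (quality_flag == 0)      # good mandatory quality (0 assumed = good)
        & (moon_fraction <= 60.0)  # limit moonlight effects
    )

# Three example pixels: one passes, one fails the SZA test, one is cloudy
sza = np.array([120.0, 90.0, 120.0])
cloud = np.array([0, 0, 1])
qf = np.array([0, 0, 0])
moon = np.array([10.0, 10.0, 10.0])
mask = quality_mask(sza, cloud, qf, moon)
```

Pixels failing any criterion are set aside and later restored by the gap-filling step.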

3. Spatial Smoothing:
  • Apply a 3 × 3 moving window to calculate the average radiance. This alleviates geometric errors and image noise in the night-time light images.
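The 3 × 3 moving-window average can be done with a standard uniform filter; a minimal sketch (edge pixels are handled by reflection, SciPy's default, which may differ from the study's edge treatment):

```python
import numpy as np
from scipy.ndimage import uniform_filter

# 3 x 3 moving-window mean over a toy radiance grid
radiance = np.arange(9, dtype=float).reshape(3, 3)
smoothed = uniform_filter(radiance, size=3)
```

The center pixel becomes the mean of all nine neighbors, damping isolated noise spikes.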
Step 1: Angular Normalization


Satellite-observed NTL radiance has a strong nonlinear relationship with the Viewing Zenith Angle (VZA), causing significant time-series fluctuations. This step normalizes the radiance as if the VZA is always zero.


1. Principle:


The angular normalization algorithm is designed to remove the variations in observed night-time light radiance derived from changes in the Viewing Zenith Angle (VZA). Previous research identified a strong nonlinear relationship between night-time light radiance and VZA, which can be expressed as:

R(Z) = a Z² + b Z + c

where Z denotes the VZA, R(Z) denotes the night-time light radiance, and a, b, and c represent the coefficients. This model is called the Zenith-Radiance Quadratic (ZRQ) model.
The purpose of the normalization algorithm is to estimate the radiance time series assuming the VZA is equal to zero over time. Based on previous studies, we assume that the anisotropy of night-time light radiance remains constant if the land use of an area does not change over a short period. Therefore, the radiance in all directions will change by the same percentage even if the total light emission of the region changes. Based on this basic hypothesis, the radiance of night-time light over a period is modeled as:

 

R(Z, t) = c(t) (a′Z² + b′Z + 1)

where R(Z, t) represents the night-time light radiance under the VZA of Z at moment t, c(t) is the actual radiance at moment t assuming the VZA is zero, (a′Z² + b′Z + 1) is the function changing with VZA, and a′ and b′ are the coefficients. This model effectively decomposes the satellite-observed time-series radiance dynamic into two components: the real light emission changes (represented by the radiance at a VZA of zero, c(t)), and the VZA-change-derived radiance variation due to the anisotropy.


2. Implementation Steps:


1) Define the Objective Function: The algorithm assumes that if land use remains unchanged, the anisotropy of NTL radiance remains consistent over a short period. The goal is to estimate the time series radiance c(t) at a VZA of zero. The objective function minimizes the correlation (R2) between the angle-normalized time series and the VZA using a Zenith-Radiance Quadratic (ZRQ) model.


2) Optimize and Solve:

  • Utilize the Nelder-Mead algorithm to minimize the objective function and solve for the required coefficients.
  • In Python, this can be implemented using the scipy.optimize.fmin package.
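The two steps above can be sketched as one function: minimize the squared correlation between the angle-normalized series and the VZA with Nelder-Mead (`scipy.optimize.fmin`), then divide by the fitted angular factor. This is a simplified sketch of the idea, not the authors' exact objective or starting values; the synthetic data below are invented for the demonstration.

```python
import numpy as np
from scipy.optimize import fmin

def angular_normalize(radiance, vza):
    """Fit a', b' of the ZRQ factor (a'Z^2 + b'Z + 1) so that the normalized
    series c(t) = R(Z, t) / (a'Z^2 + b'Z + 1) is decorrelated from the VZA."""
    def objective(params):
        a, b = params
        denom = a * vza**2 + b * vza + 1.0
        if np.any(denom <= 0):
            return 1e6                      # reject non-physical factor shapes
        c = radiance / denom
        return np.corrcoef(c, vza)[0, 1] ** 2
    a_opt, b_opt = fmin(objective, x0=[0.0, 0.0], disp=False)
    return radiance / (a_opt * vza**2 + b_opt * vza + 1.0)

# Synthetic series: true emission c(t) independent of VZA, observations
# modulated by a quadratic angular factor
rng = np.random.default_rng(1)
vza = rng.uniform(0.0, 60.0, 300)
c_true = 100.0 * (1.0 + 0.05 * rng.normal(size=300))
radiance = c_true * (0.0002 * vza**2 + 0.002 * vza + 1.0)
normalized = angular_normalize(radiance, vza)
corr_raw = np.corrcoef(radiance, vza)[0, 1]
corr_norm = np.corrcoef(normalized, vza)[0, 1]
```

On this synthetic series the raw radiance correlates strongly with the VZA, while the normalized series is largely decorrelated from it.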

 

Fig. 1. The pixels of night-time light time series curves before and after the angular normalization in two different regions: (a) Arecibo; (b) Bayamon. Image Source: Jia et al. 2023 https://doi.org/10.1016/j.jag.2023.103359

 

Step 2: Time Series Gap-Filling


After obtaining the angle-normalized time series T, we use an additive time series model named Prophet to gap-fill the missing data, making the time series more complete.


The Prophet model is a generalized time series model that can handle various patterns, including seasonal and non-seasonal characteristics; it mainly comprises a trend term, a seasonal term, and an error term. The three terms are optimized by the L-BFGS algorithm to obtain the fitted values. We fit the model to the real observations and fill in fitted night-time light radiance values only at the missing moments, thereby completing the time series gap-filling.


Due to the strict filtering criteria in the pre-processing stage (e.g., removing cloud or moonlight contaminated pixels), the resulting time series will have missing data points.

  • Apply the Prophet additive time series model to fill in the missing gaps.
  • The Prophet model handles seasonal and non-seasonal characteristics by optimizing trend, seasonal, and error terms using the L-BFGS algorithm. Fill in the fitted values only at the missing moments to complete the time series.
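The workflow itself uses the Prophet package. As a dependency-free sketch of the same pattern — fit an additive model on the observed points, then fill only the gaps — the toy below hand-rolls a linear trend plus a weekly sin/cos seasonal term solved by least squares. It is a stand-in for illustration, not Prophet itself.

```python
import numpy as np

def fill_gaps_additive(t, y, period=7.0):
    """Fit trend + seasonal terms on observed points; fill only NaN gaps."""
    obs = ~np.isnan(y)
    w = 2.0 * np.pi / period
    X = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
    coef, *_ = np.linalg.lstsq(X[obs], y[obs], rcond=None)
    out = y.copy()
    out[~obs] = (X @ coef)[~obs]     # fitted values only at missing moments
    return out

# Noiseless toy series (trend + weekly cycle) with three gaps
t = np.arange(60, dtype=float)
y = 10.0 + 0.1 * t + 2.0 * np.sin(2.0 * np.pi * t / 7.0)
truth_20 = y[20]
y[[5, 20, 33]] = np.nan
filled = fill_gaps_additive(t, y)
```

As in the Prophet-based workflow, observed values are left untouched; only the missing moments receive fitted values.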

Step 3: Estimation of Power Restoration

Once the stable and continuous time series is generated, it can be used to assess disaster damage and track power recovery.


1. Power Supply Index (PSI): Calculate the PSI to quantify the current power supply relative to the pre-disaster baseline.

PSI_i = TNL_i / TNL_pre-disaster

(Where TNL_i is the total night-time light radiance at time i, and TNL_pre-disaster is the stable total night-time light before the disaster.)

 

2. Power Restoration Index (PRI): Calculate the PRI to measure resilience and the chronological progression of recovery from the maximum point of damage.

 

PRI_i = (TNL_i − TNL_darkest) / (TNL_pre-disaster − TNL_darkest)

(Where TNL_darkest represents the total night-time light at the most damaged moment.)
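Both indices reduce to a few lines of array arithmetic. A small sketch: PSI is the ratio to the pre-disaster baseline, and PRI rescales each moment between the darkest point and that baseline (so PRI = 0 at maximum damage and 1 at full restoration); the daily totals below are invented for the demonstration.

```python
import numpy as np

def power_indices(tnl, tnl_pre):
    """PSI and PRI from a total night-time light (TNL) time series."""
    tnl = np.asarray(tnl, dtype=float)
    psi = tnl / tnl_pre                               # power supply index
    tnl_dark = tnl.min()                              # most damaged moment
    pri = (tnl - tnl_dark) / (tnl_pre - tnl_dark)     # power restoration index
    return psi, pri

tnl = [100.0, 20.0, 50.0, 80.0]   # hypothetical daily TNL totals
psi, pri = power_indices(tnl, tnl_pre=100.0)
```

Here PSI drops to 0.2 at the outage and climbs back to 0.8, while PRI tracks recovery from 0 (darkest day) toward 1.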

 

 

Fig. 2. Estimation of power supply index in Puerto Rico: (a) non-angle-normalized time series estimation; (b) angle-normalized time series estimation.
Image Source: Jia et al. 2023 https://doi.org/10.1016/j.jag.2023.103359 

In Detail: Angular Normalization of Daily Night-time Light Data

Satellite-observed night-time light (NTL) data is a widely utilized proxy for human activity and economic health. Following rapid-onset natural disasters such as hurricanes and earthquakes, disaster-affected regions often experience severe power outages, resulting in a sharp decline in NTL. While monthly or annual NTL composites obscure the chronological dynamics of disaster impact and recovery, daily NTL data offers the high temporal frequency necessary to track these rapid changes. However, daily data incorporates strong uncertainty—primarily the angular effect caused by variations in the satellite's viewing zenith angle (VZA)—which hinders accurate time-series analysis. This recommended practice introduces an angular normalization algorithm to generate a highly stable NTL time series, enabling highly accurate estimations of post-disaster power outages and their corresponding economic losses.

Background 

Evaluating the progress of United Nations Sustainable Development Goals (SDGs), particularly Goal 11 (sustainable cities and communities) and Goal 13 (climate action), requires accurate measurements of disaster-induced economic losses and infrastructure disruptions. Traditional in-situ investigations for statistical damage data are difficult, time-consuming, and sometimes impossible immediately following a severe disaster.
Compared to day-time remote sensing imagery, which struggles to directly capture socioeconomic dimensions like power outages and GDP, NTL remote sensing has a unique advantage in recording human activities. By utilizing the daily Black Marble product suite (VNP46A1 and VNP46A2) from the Suomi-NPP VIIRS sensor, this practice establishes a robust methodology to evaluate community resilience and recovery speeds based on electricity restoration.


Assessing Economic Impact via Night-time Light 

Disasters generate significant economic impacts extending far beyond physical, structural damage. Damaged electrical infrastructure forces residents to reduce or entirely halt industrial production and service activities, directly leading to a decline in Gross Domestic Product (GDP). This practice operates on the assumption that the loss rate of GDP in the industry and services sectors is strongly correlated to the loss rate of power supply. By quantifying the total night-time light loss rate, stakeholders can mathematically estimate the regional GDP loss rate.

This practice is globally applicable for monitoring power disruptions and tracking the recovery phases following major natural disasters, such as hurricanes, typhoons, and earthquakes. It provides vital decision-making evidence for authorities to allocate rescue resources, prioritize infrastructure repairs in heavily affected municipalities, and evaluate a region's overall adaptive capacity to climate-related hazards.

Advantages

  • High Temporal Resolution: Utilizing daily NTL data captures precise and timely information on sudden electricity demand changes and the immediate impact of natural disasters, which monthly composites would obscure.
  • High Accuracy: The improved time series achieves a high Pearson correlation coefficient with official power authority reports, proving it to be a highly reliable reflection of true power restoration.
  • Economic Proxy: Demonstrates a strong correlation between NTL loss and GDP loss in service and industry sectors.
     

Disadvantages

  • Assumption of Unchanged Land Cover: The angular normalization algorithm operates on the hypothesis that the region's land use does not change during the short observation period.
  • Resolution Limits: Currently evaluated at the regional/municipal scale; further optimization is needed to accurately assess disaster impacts at the micro/community scale.


GitHub repository

https://github.com/UN-SPIDER-Wuhan/ntl_angle_normalization.git 

  • Li, X., Ma, R., Zhang, Q., Li, D., Liu, S., He, T., Zhao, L., 2019. Anisotropic characteristic of artificial light at night—Systematic investigation with VIIRS DNB multi-temporal observations. Remote Sens. Environ. 233, 111357.
  • Román, M.O., Stokes, E.C., Shrestha, R., Wang, Z., ... & Enenkel, M., 2019. Satellite-based assessment of electricity restoration efforts in Puerto Rico after Hurricane Maria. PLoS One 14 (6), e218883.
  • Wang, Z., Román, M.O., Kalb, V.L., Miller, S.D., Zhang, J., Shrestha, R.M., 2021. Quantifying uncertainties in nighttime light retrievals from Suomi-NPP and NOAA-20 VIIRS Day/Night Band data. Remote Sens. Environ. 263, 112557.
  • Jia, M., Li, X., Gong, Y., Belabbes, S., Dell'Oro, L., 2023. Estimating natural disaster loss using improved daily night-time light data. International Journal of Applied Earth Observation and Geoinformation, 120, 103359.