Published April 10, 2024 | Version 1.0.0
Software | Open Access

Road Network Mapping from Multispectral Satellite Imagery: Leveraging Deep Learning and Spectral Bands

Description


Submitted to AGILE24

Abstract

Updating road networks in rapidly changing urban landscapes is an important but difficult task, often challenged by the complexity and errors of manual mapping processes. Traditional methods that primarily use RGB satellite imagery struggle with obstacles in the environment and varying road structures, leading to limitations in global data processing. This paper presents an innovative approach that utilizes deep learning and multispectral satellite imagery to improve road network extraction and mapping. By exploring U-Net models with DenseNet backbones and integrating different spectral bands, we apply semantic segmentation and extensive post-processing techniques to create georeferenced road networks. We trained two identical models to evaluate the impact of using images created from specially selected multispectral bands rather than conventional RGB images. Our experiments demonstrate the positive impact of using multispectral bands, improving Intersection over Union (IoU) by 6.5%, F1 score by 5.4%, and the newly proposed relative graph edit distance (relGED) and topology metrics by 2.2% and 2.6%, respectively.

Data

To use the code in this repository, download the required data from SpaceNet Challenge 3 (https://spacenet.ai/spacenet-roads-dataset/) via AWS.

The SpaceNet Dataset by SpaceNet Partners is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

SpaceNet was accessed on 05.01.2023 from https://registry.opendata.aws/spacenet

Software

The analysis and results of this research were achieved with Python and several software packages, including:

- tensorflow

- networkx

- Pillow, OpenCV (cv2)

- GDAL, rasterio, shapely

- APLS

For a fully reproducible environment and the exact software versions, refer to 'environment.yml'.
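The environment can be recreated with conda from the provided file; a minimal sketch, assuming 'environment.yml' sits in the repository root (the placeholder environment name below must be replaced with whatever the file defines):

```shell
# Recreate the pinned environment from the repository's environment.yml
conda env create -f environment.yml

# Activate it (replace <env-name> with the name defined in environment.yml)
conda activate <env-name>
```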

All data is licensed under CC BY 4.0, all software files are licensed under the MIT License.

Reproducibility

To execute the scripts and train your own model, proceed as follows:

1. Refer to the 'Data' section of this file and download the data from the providers.

2. Apply the preprocessing steps from 'preprocessing.py'. Note that, to avoid redundancy, some preprocessing steps are not included in this repository: the conversion of geojson road data into training images, the reduction of satellite images to an 8-bit format, and their conversion into '.png' files. These steps can be performed by applying and, if necessary, modifying the APLS library, which is publicly available at https://github.com/CosmiQ/apls.

3. Apply preprocessing to both RGB and MS images. To generate the latter, execute the 'ms_channel_seperation.py' script, specifying the desired multispectral channels.

4. Execute the 'train_model.py' script to train your semantic segmentation model.

5. Apply the post-processing procedures with 'postprocessing.py'.

6. Generate the metric results by executing 'evaluation.py'.
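The band-selection and 8-bit reduction steps above can be sketched in NumPy. This is not the repository's implementation ('ms_channel_seperation.py' and the APLS tooling are authoritative); the array shape, the per-band min-max rescaling, and the band indices chosen here are illustrative assumptions:

```python
import numpy as np

def select_bands(ms_image: np.ndarray, bands: list) -> np.ndarray:
    """Select a subset of spectral bands from an (H, W, C) multispectral array."""
    return ms_image[:, :, bands]

def to_8bit(image: np.ndarray) -> np.ndarray:
    """Linearly rescale each band to 0-255 and cast to uint8 (simple min-max stretch)."""
    img = image.astype(np.float64)
    lo = img.min(axis=(0, 1), keepdims=True)
    hi = img.max(axis=(0, 1), keepdims=True)
    scaled = (img - lo) / np.maximum(hi - lo, 1e-9) * 255.0
    return scaled.astype(np.uint8)

# Example: a fake 8-band 16-bit tile; the band indices [4, 2, 1] are hypothetical
tile = np.random.randint(0, 2048, size=(512, 512, 8), dtype=np.uint16)
subset = to_8bit(select_bands(tile, [4, 2, 1]))
```

In practice the source imagery would be read from GeoTIFF files (e.g. with rasterio, listed in the Software section) rather than generated randomly.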

To save storage space, not all of the data used is made available in this repository. Please refer to the 'Data' section of this file to access and download the data from the providers. Exemplary preprocessed training data (100 split images of Las Vegas) is included in the folders './data/tiled512/small_test_sample/ms/' and './data/tiled512/small_test_sample/rgb/'. Post-processed results are provided in the corresponding folders './results/UNetDense_MS_512/' and './results/UNetDense_RGB_512/'. These include the stitched and recombined images (without any post-processing applied to them) as well as the extracted and post-processed graphs as '.pickle' files. This data was used to calculate the metrics presented in the paper: Intersection over Union (IoU), F1 score, relGED, and the topology metric.
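The pixel-wise IoU and F1 scores reported above follow their standard definitions and can be computed from binary road masks as below. This is a minimal sketch, not the code of 'evaluation.py'; the function name and the toy masks are illustrative:

```python
import numpy as np

def iou_f1(pred: np.ndarray, gt: np.ndarray):
    """Compute pixel-wise IoU and F1 for binary road masks (1 = road, 0 = background)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()      # road predicted and present
    fp = np.logical_and(pred, ~gt).sum()     # road predicted but absent
    fn = np.logical_and(~pred, gt).sum()     # road missed
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    return float(iou), float(f1)

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt = np.array([[1, 0, 0], [0, 1, 1]])
iou, f1 = iou_f1(pred, gt)  # tp=2, fp=1, fn=1 -> IoU = 0.5, F1 = 2/3
```

The relGED and topology metrics are graph-based and newly proposed in the paper; refer to 'evaluation.py' for their definitions.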

The figures included in the paper can be reproduced by saving images created during the preprocessing, training, and post-processing steps. To generate the plots of resulting graphs, refer to the corresponding functions and enable the boolean parameter 'plot'. Bounding boxes seen in the figures were drawn manually and only serve an explanatory purpose.

Please be advised that file paths and the folder structure have to be adapted manually in the scripts to suit the user's setup. Make sure to select uniform file paths and to store results in folders named after their model. Furthermore, the code is not meant to be executed from the terminal; running the individual scripts in an IDE is recommended.

Files

code.zip

Files (359.3 MiB)

- md5:fa810708c873bccc492320d29620d608 (87.0 MiB)
- md5:b08b72ed2c09458718cf591b1569d75f (240.6 MiB)
- md5:ceb002490aab74c3ec499ba391fd326f (31.7 MiB)
- md5:9065e86c4242938ea19d75f4592f57c4 (25.1 KiB)

Additional details

Created:
April 16, 2024
Modified:
April 17, 2024