Characterisation of urban environment and activity across space and time using street images and deep learning in Accra
Creators
- Nathvani, Ricky1
- Clark, Sierra1
- Muller, Emily1
- Alli, Abosede2
- Bennett, James1
- Nimo, James3
- Bedford Moses, Josephine3
- Baah, Solomon3
- Metzler, Antje Barbara1
- Brauer, Michael4
- Suel, Esra1
- Hughes, Allison3
- Rashid, Theo1
- Gemmell, Emily4
- Moulds, Simon1
- Baumgartner, Jill5
- Toledano, Mireille1
- Agyemang, Ernest3
- Owusu, George3
- Agyei-Mensah, Samuel3
- Arku, Raphael2
- Ezzati, Majid1
- 1. Imperial College London
- 2. University of Massachusetts, Amherst
- 3. University of Ghana
- 4. University of British Columbia
- 5. McGill University
Description
This repository contains the image labelling protocol, analysis code, trained object detection model, object count data and site metadata for "Characterisation of urban environment and activity across space and time using street images and deep learning in Accra", Scientific Reports (2022).
To reproduce all main figures of the paper, extract the object_detection_analysis.zip file, navigate into the directory "OD_paper_release", and run the Jupyter notebook "Data processing and plotting.ipynb".
So that results are reproducible and consistent, we have prepared a Docker image with all necessary libraries pre-installed, allowing the notebook to run out of the box.
On a Unix system with Docker installed, launch it with the following two bash commands:
> docker pull thicknavyrain/tensorflow-object-detection-api:latest
> docker run -v "$PWD":/local -w /local -p 8888:8888 -e GRANT_SUDO=yes --user root thicknavyrain/tensorflow-object-detection-api:latest jupyter-notebook --allow-root --ip=0.0.0.0 --port=8888 --no-browser
This should open a Jupyter environment inside the Docker container, from which the notebook can be run without installing any libraries locally.
To use our pre-trained object detection model, extract "object_detection_model.zip"; inside is a "scripts" directory containing the file "OD_to_file.py". Run this script with Python 3 and the "-h" flag on the command line for full usage instructions. Note that the TensorFlow Object Detection API dependencies it requires are pre-installed in the same Docker image as above, so we strongly recommend running inference inside a container of that image.
To extend functionality across platforms, an ONNX export of the model is also packaged in "object_detection/models/onnx_format/".
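As a minimal sketch, the ONNX export can be loaded with the onnxruntime library. The helper names below (`to_batch`, `detect`) are ours, and the assumption that the export takes a single [1, H, W, 3] uint8 tensor (the usual input for TensorFlow Object Detection API exports) should be confirmed by inspecting the exported graph:

```python
import numpy as np


def to_batch(image: np.ndarray) -> np.ndarray:
    """Add a batch dimension to an HxWx3 uint8 image."""
    assert image.ndim == 3 and image.shape[-1] == 3, "expected an HxWx3 image"
    return image[np.newaxis, ...].astype(np.uint8)


def detect(model_path: str, image: np.ndarray):
    """Run the ONNX model on one image and return its raw outputs
    (typically detection boxes, classes, and scores for TF OD API exports)."""
    import onnxruntime as ort  # pip install onnxruntime

    session = ort.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name
    return session.run(None, {input_name: to_batch(image)})
```

Post-processing (thresholding scores, mapping class indices to labels) follows the same conventions as the TensorFlow model, so the label map shipped with the repository applies to both formats.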
We highly encourage use of our model, with appropriate attribution:
Nathvani, R., Clark, S.N., Muller, E. et al. Characterisation of urban environment and activity across space and time using street images and deep learning in Accra. Sci Rep 12, 20470 (2022). https://doi.org/10.1038/s41598-022-24474-1
Contact r.nathvani@imperial.ac.uk for any troubleshooting issues.
Files
Object Detection Labeling Examples.pdf
Additional details
Related works
- Is source of
- Journal article: 10.1038/s41598-022-24474-1 (DOI)
References
- Nathvani, R., Clark, S.N., Muller, E. et al. Characterisation of urban environment and activity across space and time using street images and deep learning in Accra. Sci Rep 12, 20470 (2022). https://doi.org/10.1038/s41598-022-24474-1