Published April 18, 2023 | Version v1
Other Open

Energy-efficient Medical Image Processing

  • 1. Experimental Radiology, University of Ulm
  • 2. SCC, KIT Karlsruhe

Description

Saving energy is key to curbing the impact that climate change has on our socio-economic lives. Data centers in general and modern-day AI applications in particular are heavy electricity users. Recent studies have attempted to estimate the carbon footprint of common large-scale AI applications, highlighting the unsustainable, environmentally questionable path of current AI research. Despite this work, reducing or even monitoring the energy consumption of computational approaches in medical imaging is still poorly investigated. To counteract this situation, i.e. model development purely from the perspective of predictive performance while disregarding the accompanying environmental consequences, we aim to organize a challenge on energy-efficient Medical Image Processing, similar to the AI-HERO Hackathon on energy-efficient AI [https://ai-hero-hackathon.de/ , https://doi.org/10.48550/arXiv.2212.01698 ].

The goal of the challenge is to raise awareness of the energy consumption of training and inference, and to foster the development of novel best-practice approaches and solutions that improve the energy efficiency of commonly used DL/ML/MIP models. This will hopefully increase awareness of the energy consumption needed for medical image processing and lead to novel, more efficient algorithms. In addition, we will gather more information about the current situation w.r.t. energy-efficient computation in medical image processing, for example the ratio of training to inference runs, with an additional survey. To this end, the challenge will offer two pathways to develop energy-efficient medical image processing models:

- The original challenge will call for submissions of training and inference on a dedicated public dataset (the actual training/test split is hidden from the participants) for three common tasks: segmentation, detection, and classification.
- To foster best practices and the reporting of energy consumption in general AI model development, co-submission for inference, and possibly training, for other challenges will be offered.

Each submission will be evaluated on the tier-2 supercomputer HoreKa, located at the Karlsruhe Institute of Technology (KIT), Germany. HoreKa allows for precise measurement of whole-compute-node energy consumption per run via internal power sensors that are part of Lenovo's XClarity Controller (XCC) and can be read via IPMI. The whole-workload energy consumption of submitted solutions will be measured for a full training run and for inference on the hold-out test set.
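As a rough illustration (not the challenge's official tooling), whole-node energy can be approximated by periodically sampling the power reading exposed via IPMI and integrating over time. The sketch below assumes the output format of `ipmitool dcmi power reading` ("Instantaneous power reading: N Watts"); the helper names are illustrative.

```python
import re


def parse_power_watts(ipmi_output: str) -> float:
    """Extract the instantaneous power reading in watts from the
    (assumed) output of `ipmitool dcmi power reading`."""
    match = re.search(r"Instantaneous power reading:\s*(\d+(?:\.\d+)?)\s*Watts",
                      ipmi_output)
    if match is None:
        raise ValueError("no power reading found in IPMI output")
    return float(match.group(1))


def energy_joules(samples_watts, interval_s: float) -> float:
    """Integrate equally spaced power samples (trapezoidal rule)
    to obtain energy in joules (1 J = 1 W * 1 s)."""
    total = 0.0
    for p0, p1 in zip(samples_watts, samples_watts[1:]):
        total += 0.5 * (p0 + p1) * interval_s
    return total
```

In practice one would collect the samples by invoking `ipmitool` every few seconds for the duration of the run; dividing the resulting joules by 3.6e6 converts to kWh.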

Participants will be offered a dedicated amount of compute resources (one full node, equipped with 4 NVIDIA A100 GPUs with 48 GB of VRAM each, connected via NVLink, and Intel Xeon Platinum 8368 CPUs with a total of 76 cores) to run training and testing.

For energy measurements, participants have to submit their solutions prior to the on-site event. The final solutions will then be run and evaluated over a span of two to three weeks prior to MICCAI, and the final results will be reported during the on-site event. To allow participants to estimate the success of their approach in advance, we will present guidelines and tools to measure energy consumption on standard hardware during an envisioned initial workshop and on the challenge website.

We expect a trade-off between required energy consumption and achieved performance. To account for this, we will not report a single winning approach but rather a Pareto front between achieved performance and energy consumption. A selection of high-performing approaches on the Pareto front will be given the chance to present their solutions during the on-site event. The actual selection will depend on the number of submissions; presentations will be chosen based on audience interest, performance, completeness of pre-experiments, and originality.
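A minimal sketch of how such a Pareto front could be extracted from submission results, assuming higher performance is better and lower energy is better (the function name and data layout are illustrative, not the challenge's actual evaluation code):

```python
def pareto_front(results):
    """Return the non-dominated (performance, energy) pairs.

    A submission is dominated if some other submission achieves at
    least the same performance (higher is better) with at most the
    same energy (lower is better), and is strictly better in at
    least one of the two criteria.
    """
    front = []
    for perf, energy in results:
        dominated = any(
            (p >= perf and e <= energy) and (p > perf or e < energy)
            for p, e in results
        )
        if not dominated:
            front.append((perf, energy))
    return sorted(front, key=lambda r: r[1])  # order by energy
```

For example, a submission with lower accuracy but much lower energy consumption stays on the front, while one that costs more energy without any accuracy gain is dropped.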

Files

Energyefficientdeeplearningformedicalimaging_04-14-2023_12-23-12.pdf

Files (5.0 MB)