Environmental alignment for artificial intelligence
Authors/Creators
Description
A critical gap exists between artificial intelligence (AI) alignment research and environmental work on AI's use and energy footprint: neither field yet ensures that AI systems' behaviors and outputs are consistent with biospheric health and ecological viability, including but extending beyond human well-being. As AI systems become more autonomous and are woven into the core digital infrastructure driving human activity, the goals and values we imbue in them will increasingly determine environmental outcomes at scale. This paper introduces the concept of the environmental alignment problem in AI development and anchors questions about what goals to align to, how to achieve such alignment, and how to course correct. By naming the problem explicitly, this work invites further exploration and solution-building by the broader community.
Files (230.3 kB)
| Name | Size |
|---|---|
| [September] Environmental Alignment.pdf (md5:8d4f77e09e323f316ce74e1b4d4ee839) | 230.3 kB |
Additional details
Identifiers
- Other
- EA-NOTE-0.1
Related works
- Cites
- 10.1038/s41893-025-01536-6 (DOI)
Dates
- Accepted: 2025-09-09 (Brief report framing environmental considerations in AI alignment)
References
- Amodei, D. (2023). Machines of Loving Grace. https://www.darioamodei.com/essay/machines-of-loving-grace
- Branwen, G., Christiano, P., Gao, L., Hubinger, E., Krueger, D., Shah, R., et al. (2025). Frontiers of AI alignment research. arXiv:2501.10390. https://arxiv.org/abs/2501.10390
- Creutzig, F. (2024, March 5). High-Level Political Forum presentation. United Nations. https://sdgs.un.org/sites/default/files/2024-03/Felix%20Creutzig%20HLPF%20Mar%205th%202024.pdf
- Gaffney, O., Luers, A., Carrero-Martinez, F., Oztekin-Gunaydin, B., Creutzig, F., Dignum, V., Galaz, V., Ishii, N., Larosa, F., Leptin, M., & Takahashi Guevara, K. (2025). The Earth alignment principle for artificial intelligence. Nature Sustainability. Advance online publication. https://doi.org/10.1038/s41893-025-01536-6
- Gomes, C. P., van Hoeve, W. J., & Sabharwal, A. (2019). Computational sustainability: Computing for a better world and a sustainable future. Communications of the ACM, 62(9), 56–65. https://doi.org/10.1145/3339399
- Hendrycks, D., Zou, A., Mazeika, M., Li, M., Song, D., Steinhardt, J., et al. (2023). An overview of catastrophic AI risks. arXiv:2311.17017. https://arxiv.org/abs/2311.17017
- Hubinger, E., Christiano, P., Jones, A., Steinhardt, J., & Shah, R. (2024). AI alignment faking. Anthropic. https://www.anthropic.com/research/alignment-faking
- Kenton, Z., Krakovna, V., Leike, J., Shah, R., Uesato, J., et al. (2024). AI alignment: A comprehensive survey. arXiv:2401.17805. https://arxiv.org/abs/2401.17805
- Ngo, R., Chan, L., Aschenbrenner, R., Steinhardt, J., & Shah, R. (2023). The alignment problem from a deep learning perspective. arXiv:2301.11047. https://arxiv.org/abs/2301.11047
- OpenAI. (2024). O1 system card (December 2024). OpenAI. https://cdn.openai.com/o1-system-card-20241205.pdf
- Öko-Institut. (2023). The European Parliament's amendments to the AI Act: Policy paper. Öko-Institut e.V. https://www.oeko.de/fileadmin/oekodoc/Policy_Paper_The-European-Parliaments-amendments-to-the-AI-Act.pdf
- United Nations University Centre for Policy Research (UNU-CPR). (2024). Hamburg Declaration on Responsible AI for the SDGs. United Nations University. https://unu.edu/cpr/news/hamburg-declaration-responsible-ai-sdgs