ACEsuit/mace: v0.3.15
Authors/Creators
- Ilyes Batatia (1)
- davkovacs
- ttompa
- bernstei
- Hatem Helal
- Janosh Riebesell (2)
- WillBaldwin0
- EszterVU
- Matthew Avaylon
- Rokas Elijosius
- Alin Marin Elena
- Vivek Bharadwaj (3)
- wcwitt
- ThomasWarford
- Tamas Stenczel
- Nils Goennheimer
- Rhys Goodall (4)
- Elliott Kasoar
- Andrew S. Rosen (5)
- Cheuk Hin Ho
- Felix Musil (6)
- Alexander Spears
- Hubert Beck (7)
- Eric Sivonxay (8)
- Lars Schaaf
- Chaitanya Joshi (1)
- Kush
- Sandip De (9)
- Leo (10)
- Harry Moore
- 1. University of Cambridge
- 2. Periodic Labs
- 3. @PASSIONLab
- 4. @Radical-AI
- 5. Princeton University
- 6. CuspAI
- 7. Charles University Prague
- 8. Lawrence Berkeley National Laboratory
- 9. BASF
- 10. Max Planck Institute for the Structure and Dynamics of Matter
Description
MACE v0.3.15 Release Notes
We are excited to announce MACE v0.3.15, featuring two new cross-domain foundation models, LoRA fine-tuning, weight freezing, improved LAMMPS MLIAP support for non-linear models, and a range of bug fixes and training improvements.
Foundation Models
MACE-MH-1
Introduced MACE-MH-1, a state-of-the-art multi-head foundation machine-learning interatomic potential that unifies molecular, surface, and inorganic crystal chemistry in a single model. MACE-MH-1 covers 89 chemical elements, achieves state-of-the-art accuracy across solids, molecular systems, and surfaces, and is built on an enhanced MACE architecture with improved weight sharing and non-linear tensor decomposition.
MACE-MH-1 is pre-trained on OMAT-24 (100M inorganic crystal configurations) and fine-tuned with six heads:
- `omat_pbe` (default) → PBE/PBE+U, general inorganic materials
- `omol` → ωB97M-VV10, organic and organometallic molecules
- `spice_wB97M` → ωB97M-D3(BJ), molecular systems
- `rgd1_b3lyp` → B3LYP, reaction chemistry intermediates
- `oc20_usemppbe` → PBE, surface catalysis
- `matpes_r2scan` → r²SCAN, high-accuracy inorganic materials
MACE-MH-1 is supported in LAMMPS via the MLIAP interface, including support for non-linear interaction blocks (see MLIAP section below). Note that MACE-MH-1 is not yet supported in SYMMETRIX.
Example usage:
```python
from ase.build import bulk
from mace.calculators import mace_mp

calc = mace_mp(
    model="mh-1",
    default_dtype="float64",
    device="cuda",
    head="omat_pbe",
)

atoms = bulk("Cu", "fcc", a=3.6)  # any ASE Atoms object
atoms.calc = calc
energy = atoms.get_potential_energy()
forces = atoms.get_forces()
```
Model weights and documentation: Hugging Face, mace-mh-1.
Reference: Batatia et al., "Cross Learning between Electronic Structure Theories for Unifying Molecular, Surface, and Inorganic Crystal Foundation Force Fields", arXiv:2510.25380
MACE-MH-0
Added MACE-MH-0, the predecessor of MACE-MH-1. It covers the same 89 elements and is trained on the same family of datasets (OMAT, OMOL, OC20, MATPES), providing strong cross-domain performance on bulk, surfaces, and molecules. MACE-MH-0 is supported in SYMMETRIX and via LAMMPS MLIAP.
Example usage:
```python
from mace.calculators import mace_mp

calc = mace_mp(model="mh-0", default_dtype="float64", device="cuda")
```
Reference: Batatia et al., "Cross Learning between Electronic Structure Theories for Unifying Molecular, Surface, and Inorganic Crystal Foundation Force Fields", arXiv:2510.25380
Fine-tuning
LoRA Fine-tuning
Added Low-Rank Adaptation (LoRA) fine-tuning support, enabling parameter-efficient adaptation of foundation models. LoRA injects trainable low-rank matrices into equivariant o3.Linear, dense nn.Linear, and FullyConnectedNet layers, keeping the base model frozen. LoRA deltas are cached in eval mode to speed up validation. After training, LoRA weights are automatically merged back into the base model, producing a standard MACE model with no runtime overhead.
Example usage:
```bash
python mace_run_train.py \
    --name="lora_mh1" \
    --foundation_model="mh-1" \
    --train_file=data.xyz \
    --lora=True \
    --lora_rank=4 \
    --lora_alpha=1.0
```
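The mechanics described above can be sketched numerically. This is an illustrative NumPy sketch of the LoRA idea, not MACE's internal code: a frozen base weight `W` gains a trainable low-rank correction `B @ A` scaled by `alpha / rank`, and after training the delta is merged into `W` once, so inference has no extra cost.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank, alpha = 16, 8, 4, 1.0

W = rng.normal(size=(d_out, d_in))        # frozen base weight
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
# LoRA usually initialises B to zeros so training starts from the base
# model; we use small random values here so the correction is visible.
B = rng.normal(size=(d_out, rank)) * 0.01

def lora_forward(x):
    # Base path plus scaled low-rank correction: W x + (alpha/rank) B (A x)
    return W @ x + (alpha / rank) * (B @ (A @ x))

# After training, the delta is merged into the base weight once:
W_merged = W + (alpha / rank) * (B @ A)

x = rng.normal(size=d_in)
# The merged weight reproduces the adapted forward pass exactly.
assert np.allclose(lora_forward(x), W_merged @ x)
```

Because only `A` and `B` (rank × d_in + d_out × rank values) are trained, the number of trainable parameters is far smaller than the full `d_out × d_in` weight.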
Weight Freezing
Added support for selectively freezing layers during fine-tuning, enabling faster training and reduced overfitting on small datasets. The --freeze N flag freezes the first N layers of the model (embedding, interactions, products, readouts). This approach follows the frozen transfer learning strategy demonstrated in Radova et al.
Example usage:
```bash
python mace_run_train.py \
    --name="frozen_ft" \
    --foundation_model="mh-1" \
    --train_file=data.xyz \
    --freeze=5
```
See code.
Reference: Radova et al., "Fine-tuning foundation models of materials interatomic potentials with frozen transfer learning", npj Comput. Mater. 11, 237 (2025)
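The freeze-first-N idea can be sketched in plain Python. This mimics, rather than reproduces, MACE's implementation; in practice the flag sets `requires_grad=False` on the parameters of the first N torch sub-modules so the optimizer never updates them.

```python
class Param:
    """Stand-in for a torch parameter with a requires_grad flag."""
    def __init__(self):
        self.requires_grad = True

class Layer:
    """Stand-in for one model block (embedding, interaction, ...)."""
    def __init__(self, n_params=2):
        self.params = [Param() for _ in range(n_params)]

def freeze_first_n(layers, n):
    # Disable gradients for the first n layers; later layers stay trainable.
    for layer in layers[:n]:
        for p in layer.params:
            p.requires_grad = False

layers = [Layer() for _ in range(8)]
freeze_first_n(layers, 5)
n_frozen = sum(all(not p.requires_grad for p in l.params) for l in layers)
print(n_frozen)  # 5
```

Only the unfrozen tail of the model is adapted to the new data, which is what makes training faster and less prone to overfitting on small datasets.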
Estimated E0s for Fine-tuning
Added a robust least-squares procedure to automatically re-estimate atomic reference energies (E0s) when fine-tuning on a new dataset. Because MACE predicts atomisation energies rather than total energies, the E0s must be adapted to the target level of theory when fine-tuning. The new --E0s=foundation option formulates this as an overdetermined linear system: for each configuration, the prediction error is decomposed per element, and the correction to each element's E0 is obtained via least squares. This is more robust than simple averaging and avoids the need to run expensive isolated-atom DFT calculations. The E0s of any replay head are kept fixed at their original pre-training values.
Example usage:
```bash
python mace_run_train.py \
    --name="ft_model" \
    --foundation_model="mh-1" \
    --train_file=data.xyz \
    --E0s="foundation"
```
See code.
Reference: Appendix XV.A of Batatia et al., "Cross Learning between Electronic Structure Theories for Unifying Molecular, Surface, and Inorganic Crystal Foundation Force Fields", arXiv:2510.25380
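The least-squares step described above can be sketched with NumPy. This is a hedged illustration of the idea, not MACE's code: the design matrix counts atoms of each element per configuration, the right-hand side is the per-configuration energy residual, and the per-element E0 corrections fall out of an ordinary least-squares solve.

```python
import numpy as np

# Rows: configurations; columns: element counts (e.g. H, C, O).
counts = np.array([
    [4, 1, 0],
    [6, 2, 1],
    [0, 1, 2],
    [2, 2, 2],
], dtype=float)

# Synthetic ground truth: a hidden per-element E0 offset plus small noise
# standing in for the model's genuine prediction errors.
true_shift = np.array([-0.5, -1.2, -2.0])
rng = np.random.default_rng(1)
residual = counts @ true_shift + 1e-3 * rng.normal(size=4)

# Overdetermined system: counts @ delta_e0 ~= residual.
delta_e0, *_ = np.linalg.lstsq(counts, residual, rcond=None)
print(delta_e0)  # recovers approximately [-0.5, -1.2, -2.0]
```

With more configurations than elements the system is overdetermined, so outlier configurations are averaged down rather than dominating the estimate, which is why this is more robust than simple per-element averaging.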
Fine-tuning for MH-1, MH-0 and MACE-OMOL
Added fine-tuning support for MACE-MH-1, MACE-MH-0, and MACE-OMOL foundation models, with fixed head extraction and corrected pseudo-label replay.
Models
Arbitrary Per-Atom Embeddings
Extended the embedding framework to support arbitrary per-atom vector inputs alongside the existing per-graph embeddings (charge, spin, electronic temperature). This enables models to condition on additional per-site features such as local magnetic moments.
Example usage:
```bash
python mace_run_train.py \
    --train_file=data.xyz \
    --embedding_specs='{"charge": {"embed_type": "continuous", "per": "graph", "in_dim": 1, "emb_dim": 32}, "local_moment": {"embed_type": "continuous", "per": "atom", "in_dim": 1, "emb_dim": 32}}'
```
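Per-atom inputs like the `local_moment` above would typically be stored as a per-atom column in the extended-XYZ training file. The fragment below is a sketch of what such a file could look like; the exact array name that MACE matches against the embedding spec key is an assumption here.

```
4
Lattice="5.0 0.0 0.0 0.0 5.0 0.0 0.0 0.0 5.0" Properties=species:S:1:pos:R:3:local_moment:R:1 energy=-12.34 pbc="T T T"
Fe 0.00 0.00 0.00  2.2
Fe 2.50 2.50 0.00 -2.2
O  1.25 1.25 1.25  0.0
O  3.75 3.75 1.25  0.0
```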
Dielectric MACE Improvements
Fixed weight shape errors for polarizability models and corrected support for AtomicDielectricMACE models trained only on dipoles (without polarizability). Updated PBC tensor handling for polarizability calculations. Added training tests for dipole and polarizability models.
See code.
Performance Improvements
MLIAP Support for Non-Linear Interaction Blocks
Fixed the MLIAP interface to correctly support models with non-linear interaction blocks, enabling MACE-MH-1 to be used in LAMMPS. This required fixing the exchange symmetrisation in the non-linear block and correcting the total charge and total spin tensor shapes in the MLIAP forward pass.
Example LAMMPS usage:
```bash
# Create LAMMPS-compatible model file
python -m mace.cli.create_lammps_model mace-mh-1.model
```

```
# In LAMMPS input script:
pair_style mliap model mace_mh1-mliap_lammps.pt
pair_coeff * * NULL H C O N ...
```
See code.
PyTorch Compile Fixes
Fixed torch.compile graph breaks for PyTorch 2.8, using fullgraph=True and scoping the symbolic trace more narrowly to avoid recompilation. CuEq wrapper operations were updated to the latest API.
Training and Infrastructure Improvements
Improved Multi-GPU Support
- Fixed a bug preventing stage-2 model evaluation in multi-GPU (DDP) environments
- Fixed patience-based early stopping in distributed training to correctly exit the epoch loop across all ranks
Calculator Refactoring
Refactored result extraction from model output in MACECalculator to avoid branching on model_type, making it easier to support new model architectures. MACECalculator.implemented_properties is now set more robustly based on the actual model output.
See code.
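The refactoring idea can be sketched in a few lines. This is an illustration of the pattern, not the actual `MACECalculator` code: the supported properties are inferred from the keys the model actually returns, so adding a new architecture needs no new `model_type` branch.

```python
def infer_implemented_properties(model_output):
    """Map non-None model output keys to ASE-style property names."""
    key_map = {
        "energy": "energy",
        "forces": "forces",
        "stress": "stress",
        "dipole": "dipole",
    }
    return [prop for key, prop in key_map.items()
            if model_output.get(key) is not None]

output = {"energy": -1.0, "forces": [[0.0, 0.0, 0.0]], "stress": None}
print(infer_implemented_properties(output))  # ['energy', 'forces']
```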
Bug Fixes and Improvements
- Fixed 1-layer model with CuEq
- Fixed OMOL CuEq code path
- Fixed cutoff dtype bug causing incorrect neighbour lists
- Fixed `apply_cutoff` default value in config
- Fixed `filter_nonzero_weight` API calls in `visualise_train.py`
- Fixed heads bug in `eval_configs` when using multihead models
- Fixed array equality checks in info dict
- Explicitly cast atomic numbers to `int` to prevent type errors
- Fixed parser default values for several training arguments
- Fixed `model_dir` creation before saving model files
- Fixed `ScheduleFree` optimizer beta parameter
- Support for newer versions of pandas in `visualise_train`
Infrastructure
- Python 3.8 support dropped; the minimum supported version is now Python 3.9
- CI disk space management improvements
- Disk usage logging and verbose pytest output added to CI
- Test duration reporting added to pytest logs
- CI now also runs on the `develop` branch
Acknowledgments
We thank all contributors to this release, including new contributors @vue1999, @hatemhelal, @janosh, @Nilsgoe, @stenczelt, @zekunlou, @ThomasWarford, @Student204161, @pb1414, @Enry99, @alinelena, and @kush2803.
Full Changelog: https://github.com/ACEsuit/mace/compare/v0.3.14...v0.3.15
For detailed documentation and examples, visit our GitHub repository and documentation.
Files
| Name | Size |
|---|---|
| ACEsuit/mace-v0.3.15.zip (md5:45791e762cbf130c38669d3859cf920c) | 121.3 MB |
Additional details
Related works
- Is supplement to
- Software: https://github.com/ACEsuit/mace/tree/v0.3.15 (URL)
Software
- Repository URL
- https://github.com/ACEsuit/mace