Published May 27, 2025 | Version v1
Model · Open

Beyond performance: How design choices shape chemical language models

Description

File content

Pre-trained fairseq models and tokenizers for all 16 combinations of SMILES/SELFIES chemical language, atomwise/SentencePiece tokenizer, implicit/explicit chirality representation, and BART/RoBERTa model architecture.
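As a minimal usage sketch (not part of the deposit's documentation): the checkpoints should load through fairseq's standard hub interface. The directory and file names below are hypothetical placeholders for one of the 16 configurations, and the exact preprocessing may differ from what the released tokenizers expect.

# A minimal loading sketch, assuming fairseq is installed and the archive
# unpacks into one checkpoint directory per configuration. All paths and
# file names below are hypothetical placeholders.
from fairseq.models.bart import BARTModel

# e.g. the SMILES / atomwise / explicit-chirality / BART configuration
bart = BARTModel.from_pretrained(
    "smiles_atomwise_explicit_bart",  # placeholder checkpoint directory
    checkpoint_file="model.pt",       # placeholder checkpoint file name
)
bart.eval()  # disable dropout for deterministic inference

# With an atomwise tokenizer the input is assumed to be pre-tokenized into
# space-separated tokens; SentencePiece checkpoints would instead be loaded
# with from_pretrained(..., bpe="sentencepiece", sentencepiece_model="<path>").
tokens = bart.encode("C C ( = O ) O")     # acetic acid as atomwise tokens
features = bart.extract_features(tokens)  # (1, n_tokens, hidden_dim)
print(features.shape)

RoBERTa-architecture checkpoints load analogously via fairseq.models.roberta.RobertaModel.from_pretrained.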

Abstract

Chemical language models (CLMs) have shown strong performance in molecular property prediction and generation tasks. However, the impact of design choices, such as molecular representation format, tokenization strategy, and model architecture, on both performance and chemical interpretability remains underexplored. In this study, we systematically evaluate how these factors influence CLM performance and chemical understanding. We evaluate models by fine-tuning them on downstream tasks and by probing the structure of their latent spaces with simple classifiers and dimensionality reduction techniques. Despite similar performance on downstream tasks across model configurations, we observed substantial differences in the structure and interpretability of their internal representations. The SMILES representation format combined with the atomwise tokenization strategy consistently produced more chemically meaningful embeddings, while models based on the BART and RoBERTa architectures yielded comparably interpretable representations. These findings highlight that design choices meaningfully shape how chemical information is represented, even when external metrics appear unchanged. This insight can inform future model development, encouraging more chemically grounded and interpretable CLMs.
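To make the probing setup concrete, the following is an illustrative sketch of the general technique named in the abstract (a simple classifier plus dimensionality reduction over frozen embeddings). It is not the authors' exact protocol; the arrays are placeholders standing in for real pooled model outputs and per-molecule chemical labels.

# An illustrative probing sketch, not the authors' exact protocol.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 768))  # placeholder for frozen CLM embeddings
labels = rng.integers(0, 2, size=1000)     # placeholder chemical class labels

# Linear probe: high held-out accuracy suggests the property is encoded
# in a linearly accessible way in the latent space.
X_tr, X_te, y_tr, y_te = train_test_split(embeddings, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"probe accuracy: {probe.score(X_te, y_te):.3f}")

# Dimensionality reduction: project embeddings to 2D to visually inspect
# whether chemically similar molecules cluster together.
coords = PCA(n_components=2).fit_transform(embeddings)
print(coords.shape)  # (1000, 2)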

Files (16.5 GB)

Name: Beyond_performance_models.zip
Size: 16.5 GB
md5: 1d23cdf97c362addec051a3521672cf7

Additional details

Related works

Is published in
Preprint: 10.1101/2025.05.23.655735 (DOI)

Software

Repository URL
https://github.com/ibmm-unibe-ch/SMILES_or_SELFIES
Programming language
Python