
Published November 21, 2022 | Version v1
Preprint | Open Access

On Pre-trained language models for antibody

  • 1. University of California, Santa Barbara
  • 2. ByteDance AI Lab

Description

Antibodies, produced by B cells, are vital proteins that offer robust protection for the human body against pathogens. The development of both general protein and antibody-specific pre-trained language models has facilitated antibody prediction tasks. However, few studies have comprehensively explored the representation capability of distinct pre-trained language models on different antibody problems, and the lack of an available benchmark has largely hindered such an investigation. We provide the AnTibody Understanding Evaluation (ATUE) benchmark to facilitate this investigation, and we comprehensively evaluate the performance of protein pre-trained language models in an empirical study, reporting conclusions and new insights. The related manuscript is available on bioRxiv under the title "On Pre-trained language models for antibody".
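As an illustration of the kind of evaluation described above, the sketch below embeds an antibody heavy-chain fragment with a general protein pre-trained language model and passes a pooled representation to a simple task head. The checkpoint name (facebook/esm2_t6_8M_UR50D), the example sequence, and the linear classifier are assumptions chosen for illustration; they are not part of the ATUE release or the manuscript's exact setup.

# Minimal sketch: embedding an antibody sequence with a general protein
# pre-trained language model (assumed checkpoint; not part of ATUE).
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "facebook/esm2_t6_8M_UR50D"  # assumed general protein PLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

# Hypothetical antibody heavy-chain fragment (illustrative only).
sequence = "EVQLVESGGGLVQPGGSLRLSCAASGFTFS"

with torch.no_grad():
    inputs = tokenizer(sequence, return_tensors="pt")
    hidden = model(**inputs).last_hidden_state   # (1, seq_len, dim)
    embedding = hidden.mean(dim=1)               # mean-pooled sequence embedding

# A downstream ATUE-style task head would consume this embedding,
# e.g. a linear classifier for a binary antibody property prediction task.
classifier = torch.nn.Linear(embedding.shape[-1], 2)
logits = classifier(embedding)
print(logits.shape)  # torch.Size([1, 2])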

Files

ATUE.zip (8.5 MB)
md5:fe2a0be65c7fdf4ef8badd5d5e486622
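
After downloading the archive, its integrity can be checked against the md5 listed above. This is a minimal sketch, assuming ATUE.zip sits in the current working directory.

# Sketch: verifying the downloaded archive against the published md5 checksum.
import hashlib

expected = "fe2a0be65c7fdf4ef8badd5d5e486622"  # md5 listed for ATUE.zip

md5 = hashlib.md5()
with open("ATUE.zip", "rb") as f:
    for chunk in iter(lambda: f.read(8192), b""):
        md5.update(chunk)

assert md5.hexdigest() == expected, "checksum mismatch: re-download ATUE.zip"
print("ATUE.zip checksum verified")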