
EnzChemRED, a rich enzyme chemistry relation extraction dataset.

Po-Ting Lai, Elisabeth Coudert, Lucila Aimo, Kristian B Axelsen, Lionel Breuza, Edouard de Castro, Marc Feuermann, Anne Morgat, Lucille Pourcel, Ivo Pedruzzi, Sylvain Poux, Nicole Redaschi, Catherine Rivoire, Anastasia Sveshnikova, Chih-Hsuan Wei, Robert Leaman, Ling Luo, Zhiyong Lu, Alan Bridge
Published in: Scientific Data (2024)
Expert curation is essential to capture knowledge of enzyme functions from the scientific literature in FAIR open knowledgebases but cannot keep pace with the rate of new discoveries and new publications. In this work we present EnzChemRED, for Enzyme Chemistry Relation Extraction Dataset, a new training and benchmarking dataset to support the development of Natural Language Processing (NLP) methods such as (large) language models that can assist enzyme curation. EnzChemRED consists of 1,210 expert curated PubMed abstracts where enzymes and the chemical reactions they catalyze are annotated using identifiers from the protein knowledgebase UniProtKB and the chemical ontology ChEBI. We show that fine-tuning language models with EnzChemRED significantly boosts their ability to identify proteins and chemicals in text (86.30% F1 score) and to extract the chemical conversions (86.66% F1 score) and the enzymes that catalyze those conversions (83.79% F1 score). We apply our methods to abstracts at PubMed scale to create a draft map of enzyme functions in literature to guide curation efforts in UniProtKB and the reaction knowledgebase Rhea.
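The F1 scores reported above are the standard harmonic mean of precision and recall over extracted items. The sketch below is illustrative only: it shows a strict exact-match evaluation of predicted versus gold entity annotations; the tuple layout (document id, span offsets, entity label) is an assumption for the example, not the EnzChemRED release format.

```python
# Illustrative precision/recall/F1 for extracted entities (strict exact match).
# The (doc_id, start, end, label) tuple format is assumed for this example.

def prf1(gold: set, pred: set) -> tuple[float, float, float]:
    """Precision, recall, and F1 over exact-match annotation tuples."""
    tp = len(gold & pred)                              # exact matches only
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

if __name__ == "__main__":
    # Hypothetical gold and predicted entity spans for one abstract.
    gold = {("PMID0001", 0, 7, "Chemical"), ("PMID0001", 20, 28, "Protein")}
    pred = {("PMID0001", 0, 7, "Chemical"), ("PMID0001", 30, 35, "Protein")}
    p, r, f = prf1(gold, pred)
    print(f"P={p:.2%}  R={r:.2%}  F1={f:.2%}")
```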