The ENCODE Uniform Analysis Pipelines.
Benjamin Hitz, Jin-Wook Lee, Otto Jolanki, Meenakshi S Kagda, Keenan Graham, Paul Sud, Idan Gabdank, J Seth Strattan, Cricket Sloan, Timothy Dreszer, Laurence Rowe, Nikhil Podduturi, Venkat Malladi, Esther T Chan, Jean Davidson, Marcus Ho, Stuart Miyasato, Matt Simison, Forrest Tanaka, Yunhai Luo, Ian Wahling, Khine Zin Lin, Jennifer Jou, Eurie Hong, Laurence D Rowe, Richard Sandstrom, Eric Rynes, Jemma Nelson, Andrew Nishida, Alyssa Ingersoll, Michael Buckley, Mark Frerker, Daniel Kim, Nathan Boley, Diane Trout, Alexander Dobin, Sorena Rahmanian, Dana Wyman, Gabriela Balderrama-Gutierrez, Fairlie Reese, Neva C Durand, Olga Dudchenko, David Weisz, Suhas Rao, Alyssa Blackburn, Dimos Gkountaroulis, Mahdi Sadr, Moshe Olshansky, Yossi Eliaz, Dat Nguyen, Ivan Bochkov, Muhammad Saad Shamim, Ragini Mahajan, Erez Aiden, Thomas Gingeras, Simon Heath, Martin Hirst, W James Kent, Anshul Kundaje, Ali Mortazavi, Barbara J Wold, J Michael Cherry
Published in: Research Square (2023)
The Encyclopedia of DNA Elements (ENCODE) project is a collaborative effort to create a comprehensive catalog of functional elements in the human genome. The current database comprises more than 19,000 functional genomics experiments across more than 1,000 cell lines and tissues, using a wide array of experimental techniques to study the chromatin structure and the regulatory and transcriptional landscapes of the Homo sapiens and Mus musculus genomes. All experimental data, metadata, and associated computational analyses created by the ENCODE consortium are submitted to the Data Coordination Center (DCC) for validation, tracking, storage, and distribution to community resources and the broader scientific community. The ENCODE project has engineered and distributed uniform processing pipelines to promote data provenance and reproducibility and to allow interoperability between genomic resources and other consortia. All data files, reference genome versions, software versions, and parameters used by the pipelines are captured and available via the ENCODE Portal. The pipeline code, developed using Docker and the Workflow Description Language (WDL; https://openwdl.org/), is publicly available on GitHub, with images available on Docker Hub (https://hub.docker.com), enabling access for a diverse range of biomedical researchers. ENCODE pipelines maintained and used by the DCC can be installed and run on personal computers, local HPC clusters, or cloud computing environments via Cromwell. Cloud access to the pipelines and data allows small labs to use the data and software without institutional compute clusters. Standardization of the computational methodologies for analysis and quality control leads to comparable results across different ENCODE collections, a prerequisite for successful integrative analyses.
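To make the Docker-plus-WDL pattern concrete, below is a minimal, hypothetical sketch of the kind of workflow the abstract describes. It is not an actual ENCODE pipeline step: the task name, the samtools command, and the Docker image tag are illustrative placeholders. The key idea it demonstrates is that each task pins an exact container image, so the same code, software versions, and parameters execute identically on a laptop, an HPC cluster, or a cloud backend.

```wdl
version 1.0

# Hypothetical example in the style of the ENCODE uniform pipelines:
# a single task whose software environment is fixed by a pinned Docker
# image, wrapped in a workflow that Cromwell can run on any backend.

task count_reads {
  input {
    File bam
  }
  command <<<
    # Count the number of alignment records in the input BAM.
    samtools view -c ~{bam} > read_count.txt
  >>>
  output {
    Int read_count = read_int("read_count.txt")
  }
  runtime {
    # Pinning an exact image tag (placeholder shown) is what makes the
    # run reproducible: every execution uses the same samtools build.
    docker: "quay.io/biocontainers/samtools:1.17--h00cdaf9_0"
    cpu: 1
    memory: "2 GB"
  }
}

workflow alignment_qc {
  input {
    File bam
  }
  call count_reads { input: bam = bam }
  output {
    Int read_count = count_reads.read_count
  }
}
```

Assuming a local Cromwell installation, such a workflow would run with something like `java -jar cromwell.jar run alignment_qc.wdl --inputs inputs.json`, where inputs.json supplies the `alignment_qc.bam` path; pointing Cromwell at a cloud backend uses the same workflow unchanged.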