Adaptable control policies for variable liquid chromatography columns using deep reinforcement learning.
David Andersson, Christoffer Edlund, Brandon Corbett, Rickard Sjögren. Published in: Scientific Reports (2023)
Controlling chromatography systems for downstream processing of biotherapeutics is challenging because of the highly nonlinear behavior of feed components and their complex interactions with binding phases. This challenge is exacerbated by the highly variable binding properties of chromatography columns. Furthermore, the inability to collect information from inside chromatography columns makes real-time control even more problematic. Typical static control policies either perform suboptimally on average owing to column variability or must be adapted for each column, which requires expensive experimentation. Exploiting recent advances in simulation-based data generation and deep reinforcement learning, we present an adaptable control policy that is learned in a data-driven manner. Our controller learns a control policy by directly manipulating the inlet and outlet flow rates to optimize a reward function that specifies the desired outcome. Training our controller on columns with high variability enables us to create a single policy that adapts to multiple variable columns. Moreover, we show that our learned policy achieves higher productivity, albeit with somewhat lower purity, than a human-designed benchmark policy. Our study shows that deep reinforcement learning offers a promising route to developing adaptable control policies for more efficient liquid chromatography processing.
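To make the control setup concrete, the minimal sketch below shows the kind of environment/reward loop the abstract describes, in Python. Everything here is an illustrative assumption rather than the authors' simulator: the `ToyColumnEnv` dynamics, the saturation-style uptake term, the reward weights, and the episode length are invented for the example, and the random policy in the rollout merely stands in for the trained deep reinforcement learning agent.

```python
import numpy as np

class ToyColumnEnv:
    """Toy chromatography column environment (hypothetical, not the authors' simulator).

    Action: (inlet_flow, outlet_flow), each clipped to [0, 1].
    Column-to-column variability is emulated by sampling the binding
    capacity at every reset, mirroring the training setup in the abstract.
    """

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)

    def reset(self):
        # Sample a variable binding capacity to emulate column variability.
        self.capacity = self.rng.uniform(0.7, 1.3)
        self.bound = 0.0      # product currently bound to the column
        self.impurity = 0.0   # impurity carried along with loading
        self.t = 0
        return self._obs()

    def _obs(self):
        # Observation: fractional loading, impurity level, normalized time.
        return np.array([self.bound / self.capacity, self.impurity, self.t / 50.0])

    def step(self, action):
        inlet, outlet = np.clip(action, 0.0, 1.0)
        # Nonlinear binding: uptake saturates as the column approaches capacity.
        uptake = inlet * (1.0 - self.bound / self.capacity)
        self.bound = min(self.capacity, self.bound + 0.1 * uptake)
        self.impurity += 0.02 * inlet        # impurities accumulate while loading
        collected = outlet * self.bound      # elution recovers bound product
        self.bound -= 0.1 * collected
        self.t += 1
        # Reward trades off productivity (collected product) against purity,
        # matching the productivity/purity trade-off reported in the abstract.
        reward = collected - 0.5 * self.impurity * outlet
        done = self.t >= 50
        return self._obs(), reward, done

# Rollout with a random policy as a placeholder; in practice a deep RL
# agent would be trained across many randomized columns so that a single
# policy adapts to their variability.
env = ToyColumnEnv()
for episode in range(3):
    obs, total, done = env.reset(), 0.0, False
    while not done:
        action = np.random.uniform(0.0, 1.0, size=2)  # stand-in for policy(obs)
        obs, reward, done = env.step(action)
        total += reward
    print(f"episode {episode}: return {total:.3f}")
```

Resampling the binding capacity at every reset reflects the paper's central idea: rather than tuning one policy per column, a single policy is trained across highly variable columns so it generalizes to new ones.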
Keyphrases
- liquid chromatography
- mass spectrometry
- tandem mass spectrometry
- high resolution mass spectrometry
- simultaneous determination
- high performance liquid chromatography
- solid phase extraction
- gas chromatography
- machine learning
- artificial intelligence
- big data
- data analysis
- deep learning