
A Data-Driven Approach to Complex Voxel Predictions in Grayscale Digital Light Processing Additive Manufacturing Using U-Nets and Generative Adversarial Networks.

Jason P Killgore, Thomas J Kolibaba, Benjamin W Caplins, Callie I Higgins, Jacob D Rezac
Published in: Small (Weinheim an der Bergstrasse, Germany) (2023)
Data-driven U-net machine learning (ML) models, including the pix2pix conditional generative adversarial network (cGAN), are shown to predict 3D-printed voxel geometry in digital light processing (DLP) additive manufacturing. A confocal microscopy-based workflow allows high-throughput acquisition of data on thousands of voxel interactions arising from randomly grayscaled digital photomasks. Validation between prints and predictions shows accurate predictions with sub-pixel-scale resolution. The trained cGAN performs virtual DLP experiments such as feature-size-dependent cure depth, anti-aliasing, and sub-pixel geometry control. The pix2pix model also applies to masks larger than those it was trained on; as a result, it can qualitatively inform layer-scale and voxel-scale print failures in real 3D-printed parts. Overall, machine learning models and the data-driven methodology, exemplified by U-nets and cGANs, show considerable promise for predicting and correcting photomasks to achieve increased precision in DLP additive manufacturing.
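
The mapping described above, from a grayscale photomask to the resulting cured-voxel geometry, is the kind of image-to-image translation a pix2pix-style U-net generator performs. The following is a minimal PyTorch sketch of such a generator, not the authors' published architecture; the layer counts, channel widths, and class names are illustrative assumptions. Because the network is fully convolutional, it can also be applied to masks larger than the training patches, consistent with the behavior noted in the abstract.

    # Minimal sketch of a pix2pix-style U-net generator that maps a
    # single-channel grayscale photomask to a predicted cured-voxel map.
    # Layer sizes and names are illustrative assumptions, not the
    # architecture reported in the paper.
    import torch
    import torch.nn as nn

    class UNetGenerator(nn.Module):
        def __init__(self, features=64):
            super().__init__()
            # Encoder: downsample the photomask.
            self.down1 = nn.Sequential(
                nn.Conv2d(1, features, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2))
            self.down2 = nn.Sequential(
                nn.Conv2d(features, features * 2, 4, stride=2, padding=1),
                nn.BatchNorm2d(features * 2),
                nn.LeakyReLU(0.2))
            # Decoder: upsample back to the input resolution.
            self.up1 = nn.Sequential(
                nn.ConvTranspose2d(features * 2, features, 4, stride=2, padding=1),
                nn.BatchNorm2d(features),
                nn.ReLU())
            self.up2 = nn.Sequential(
                nn.ConvTranspose2d(features * 2, 1, 4, stride=2, padding=1),
                nn.Sigmoid())  # predicted degree-of-cure map in [0, 1]

        def forward(self, mask):
            d1 = self.down1(mask)
            d2 = self.down2(d1)
            u1 = self.up1(d2)
            # Skip connection between encoder and decoder, the defining
            # feature of the U-net architecture.
            return self.up2(torch.cat([u1, d1], dim=1))

    # Usage: predict cured geometry for a batch of 256 x 256 grayscale masks.
    masks = torch.rand(4, 1, 256, 256)
    pred = UNetGenerator()(masks)
    print(pred.shape)  # torch.Size([4, 1, 256, 256])

In the full pix2pix setup, a generator like this would be trained adversarially against a patch-based discriminator, typically with an added L1 reconstruction loss between predicted and measured voxel geometry.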
Keyphrases
  • machine learning
  • big data
  • high throughput
  • artificial intelligence
  • deep learning