Tomographic imaging using penetrating waves generates cross-sectional views of the internal anatomy of a living subject. For artefact-free volumetric imaging, projection views from a large number of angular positions are required. Here we show that a deep-learning model trained to map projection radiographs of a patient to the corresponding 3D anatomy can subsequently generate volumetric tomographic X-ray images of the patient from a single projection view. We demonstrate the feasibility of the approach with upper-abdomen, lung, and head-and-neck computed tomography scans from three patients. Volumetric reconstruction via deep learning could be useful in image-guided interventional procedures such as radiation therapy and needle biopsy, and might help simplify the hardware of tomographic imaging systems.
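To make the described mapping concrete, below is a minimal sketch, not the authors' implementation, of the kind of 2D-to-3D encoder-decoder convolutional network the abstract describes: a stack of 2D convolutions encodes the single projection radiograph, the feature map is reshaped into a 3D tensor, and 3D transposed convolutions decode it into a volumetric image. The framework (PyTorch), the class name `SingleViewCT`, the 128-pixel input size, and all layer widths are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of a single-view 2D-to-3D reconstruction network.
# Layer sizes and input resolution are assumptions for illustration only.
import torch
import torch.nn as nn

class SingleViewCT(nn.Module):
    def __init__(self):
        super().__init__()
        # 2D encoder: 1x128x128 radiograph -> 256 feature maps of 16x16
        self.encoder2d = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),    # -> 64x64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x32
            nn.Conv2d(64, 256, 4, stride=2, padding=1), nn.ReLU(),  # -> 16x16
        )
        # 3D decoder: (16, 16, 16, 16) features -> 1x128x128x128 volume
        self.decoder3d = nn.Sequential(
            nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.ReLU(),  # -> 32^3
            nn.ConvTranspose3d(8, 4, 4, stride=2, padding=1), nn.ReLU(),   # -> 64^3
            nn.ConvTranspose3d(4, 1, 4, stride=2, padding=1),              # -> 128^3
        )

    def forward(self, x):                # x: (batch, 1, 128, 128)
        f = self.encoder2d(x)            # (batch, 256, 16, 16)
        f = f.view(-1, 16, 16, 16, 16)   # split 2D channels into a depth axis
        return self.decoder3d(f)         # (batch, 1, 128, 128, 128)

model = SingleViewCT()
radiograph = torch.randn(1, 1, 128, 128)  # a single projection view
volume = model(radiograph)
print(volume.shape)  # torch.Size([1, 1, 128, 128, 128])
```

The reshape step is the key design choice in this sketch: it reinterprets learned 2D feature channels as a depth axis, giving the 3D decoder a volumetric representation to upsample. Consistent with the abstract, such a model would be trained on pairs of projection radiographs and the corresponding CT volume of the patient before being used to reconstruct from a single view.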
Keyphrases
- deep learning
- computed tomography
- image quality
- high resolution
- convolutional neural network
- radiation therapy
- artificial intelligence
- cross sectional