
Low-rate smartphone videoscopy for microsecond luminescence lifetime imaging with machine learning.

Yan Wang, Sina Sadeghi, Alireza Velayati, Rajesh Paul, Zach Hetzler, Evgeny Danilov, Frances S. Ligler, Qingshan Wei
Published in: PNAS Nexus (2023)
Time-resolved techniques have been widely used in time-gated and luminescence lifetime imaging. However, traditional time-resolved systems require expensive lab equipment, such as high-speed excitation sources and detectors, or complicated mechanical choppers to achieve high repetition rates. Here, we present a cost-effective and miniaturized smartphone lifetime imaging system integrated with a pulsed ultraviolet (UV) light-emitting diode (LED) for 2D luminescence lifetime imaging, using a videoscopy-based virtual chopper (V-chopper) mechanism combined with machine learning. The V-chopper method generates a series of time-delayed images between excitation pulses and smartphone gating so that the luminescence lifetime can be measured at each pixel using a relatively low acquisition frame rate (e.g. 30 frames per second [fps]) without the need for excitation synchronization. Europium (Eu) complex dyes with different luminescence lifetimes ranging from microseconds to seconds were used to demonstrate and evaluate the principle of the V-chopper on a 3D-printed smartphone microscopy platform. A convolutional neural network (CNN) model was developed to automatically distinguish the gated images in different decay cycles with an accuracy of >99.5%. The current smartphone V-chopper system can detect lifetimes down to ∼75 µs utilizing the default phase shift between the smartphone video rate and excitation pulses, and in principle can detect much shorter lifetimes by accurately programming the time delay. This V-chopper methodology eliminates the need for the expensive and complicated instruments used in traditional time-resolved detection and can greatly expand the applications of time-resolved lifetime technologies.
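As a minimal sketch of the per-pixel lifetime-estimation step described above (not the authors' code), the snippet below assumes the time-delayed gated frames and their effective delays after the excitation pulse have already been extracted and ordered (e.g. after the CNN has sorted frames by decay cycle). The array names `frames`, `delays_us`, and the helper `estimate_lifetime_map` are hypothetical, and a simple mono-exponential model is assumed for the Eu complex decay.

```python
# Hedged illustration: per-pixel single-exponential lifetime fit from a stack
# of time-delayed gated frames, as produced by a virtual-chopper acquisition.
# Inputs (hypothetical): frames with shape (n_delays, H, W), delays_us (1D, in microseconds).
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, a, tau, c):
    """Single-exponential decay model: I(t) = a * exp(-t / tau) + c."""
    return a * np.exp(-t / tau) + c

def estimate_lifetime_map(frames, delays_us):
    """Fit a mono-exponential decay at each pixel; return a lifetime map in microseconds."""
    n, h, w = frames.shape
    tau_map = np.full((h, w), np.nan)
    for i in range(h):
        for j in range(w):
            trace = frames[:, i, j].astype(float)
            if trace.max() <= trace.min():          # skip flat (background) pixels
                continue
            p0 = (trace.max() - trace.min(), delays_us.mean(), trace.min())
            try:
                popt, _ = curve_fit(mono_exp, delays_us, trace, p0=p0, maxfev=2000)
                tau_map[i, j] = popt[1]             # fitted lifetime tau (µs)
            except RuntimeError:
                pass                                # leave unconverged pixels as NaN
    return tau_map
```

In practice, the effective delays come from the phase shift between the fixed smartphone frame rate (e.g. 30 fps) and the LED pulse repetition rate, so each successive frame samples the decay at a slightly later point after excitation; the fit above simply maps those samples back onto a single decay curve per pixel.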