
Food Image Segmentation Using Multi-Modal Imaging Sensors with Color and Thermal Data.

Viprav B. Raju, Masudul H. Imtiaz, Edward Sazonov
Published in: Sensors (Basel, Switzerland) (2023)
Sensor-based food intake monitoring has become one of the fastest-growing fields in dietary assessment. Researchers are exploring imaging-sensor-based food detection, food recognition, and food portion size estimation. A major problem still being tackled in this field is the segmentation of food regions when multiple food items are present, particularly when the foods look similar in color and/or texture. Food image segmentation is a relatively under-explored area compared with other fields. This paper proposes a novel approach to food imaging consisting of two imaging sensors: color (Red-Green-Blue) and thermal. Furthermore, we propose a multi-modal four-dimensional (RGB-T) image segmentation using a k-means clustering algorithm to segment regions of similar-looking food items in multiple combinations of hot, cold, and warm (at room temperature) foods. Six food combinations of two food items each were used to capture RGB and thermal image data. RGB and thermal data were superimposed to form a combined RGB-T image, and three sets of data (RGB, thermal, and RGB-T) were tested. A bootstrapped optimization of the within-cluster sum of squares (WSS) was employed to determine the optimal number of clusters for each case. The combined RGB-T data achieved better results than RGB and thermal data used individually. The mean ± standard deviation (std. dev.) of the F1 score for RGB-T data was 0.87 ± 0.1, compared with 0.66 ± 0.13 and 0.64 ± 0.39 for RGB and thermal data, respectively.
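As a rough illustration of the pipeline described in the abstract, the sketch below fuses a registered RGB image and a thermal map into a per-pixel four-dimensional (R, G, B, T) feature vector, clusters the pixels with k-means, and computes the within-cluster sum of squares (WSS) over a range of cluster counts for elbow-style selection. The function names, normalization choices, and use of scikit-learn are assumptions for illustration only; the authors' actual implementation additionally bootstraps the WSS optimization and evaluates the resulting segmentations with F1 scores.

```python
import numpy as np
from sklearn.cluster import KMeans


def rgbt_features(rgb, thermal):
    """Build per-pixel RGB-T features from a registered image pair.

    rgb:     (H, W, 3) color image, uint8
    thermal: (H, W) thermal map aligned (superimposed) onto the RGB frame

    Normalization of each modality to [0, 1] is an assumption made here so
    that color and temperature contribute on comparable scales.
    """
    rgb_n = rgb.reshape(-1, 3).astype(float) / 255.0
    t = thermal.reshape(-1, 1).astype(float)
    t_n = (t - t.min()) / (t.max() - t.min() + 1e-9)
    return np.hstack([rgb_n, t_n])            # shape (H*W, 4)


def segment_rgbt(rgb, thermal, n_clusters):
    """Segment a fused RGB-T image by k-means clustering of pixel features."""
    features = rgbt_features(rgb, thermal)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(features)
    return labels.reshape(rgb.shape[:2])       # (H, W) cluster label map


def wss_curve(features, k_max=8):
    """Within-cluster sum of squares (k-means inertia) for k = 1..k_max.

    The curve can be inspected (elbow criterion) to pick the number of
    clusters; the paper's bootstrapped optimization of WSS is not
    reproduced here.
    """
    return [KMeans(n_clusters=k, n_init=10, random_state=0)
            .fit(features).inertia_ for k in range(1, k_max + 1)]
```

For example, given a registered pair `rgb` and `thermal`, one could call `wss_curve(rgbt_features(rgb, thermal))` to choose a cluster count and then `segment_rgbt(rgb, thermal, k)` to obtain a label map in which similar-looking foods at different temperatures fall into separate clusters.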