Computer-vision object tracking for monitoring bottlenose dolphin habitat use and kinematics.
Joaquin T. Gabaldon, Ding Zhang, Lisa Lauderdale, Lance Miller, Matthew Johnson-Roberson, Kira Barton, K. Alex Shorter
Published in: PLOS ONE (2022)
This research presents a framework for computer-automated observation and monitoring of bottlenose dolphins (Tursiops truncatus) in a zoo environment. The resulting approach enables detailed, persistent monitoring of the animals that is not possible with manual annotation methods. To demonstrate the viability of the framework, fixed overhead cameras were used to opportunistically collect ∼100 hours of observations recorded over multiple days, including time both during and outside of formal training sessions. Animal locations were estimated using convolutional neural network (CNN) object detectors with Kalman filter post-processing. The resulting animal tracks were used to quantify habitat use and animal kinematics. Additionally, Kolmogorov-Smirnov analyses of the swimming kinematics were used for high-level behavioral mode classification. The object detectors achieved a minimum Average Precision of 0.76, and the post-processed results yielded 1.24 × 10⁷ estimated dolphin locations. Animal kinematic diversity was found to be lowest in the morning and peaked immediately before noon. Regions of the zoo habitat with the highest activity levels corresponded to locations associated with animal care specialists, conspecifics, or enrichment. The work presented here demonstrates that CNN object detection is viable for large-scale marine mammal tracking, and results from the proposed framework will enable future research offering new insights into dolphin behavior, biomechanics, and how environmental context affects movement and activity.
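To make the detection-to-track step concrete, the sketch below shows Kalman filter post-processing of per-frame detection centroids under an assumed constant-velocity motion model with position-only measurements. The function name, noise parameters, and synthetic data are illustrative assumptions, not the paper's actual filter design.

```python
import numpy as np

def kalman_track(detections, dt=1.0, q=1e-2, r=1.0):
    """Smooth a sequence of (x, y) detection centroids with a
    constant-velocity Kalman filter. State vector: [x, y, vx, vy]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)   # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # observe position only
    Q = q * np.eye(4)                            # process noise
    R = r * np.eye(2)                            # measurement noise

    x = np.array([*detections[0], 0.0, 0.0])     # initialize at first detection
    P = np.eye(4)
    track = []
    for z in detections:
        # Predict forward one frame
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the detected centroid
        y = np.asarray(z) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
        track.append(x[:2].copy())
    return np.array(track)

# Example: noisy detections along a straight swim path
rng = np.random.default_rng(0)
truth = np.stack([np.linspace(0, 50, 60), np.linspace(0, 20, 60)], axis=1)
noisy = truth + rng.normal(scale=1.5, size=truth.shape)
smoothed = kalman_track(noisy)
```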
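Habitat use from the estimated positions can be summarized as a spatial occupancy histogram. This is one plausible way to compute the fraction of observed time spent per cell; the habitat extent, bin counts, and uniform synthetic track are hypothetical, not the paper's specific binning.

```python
import numpy as np

def occupancy_map(track, habitat_extent, bins=(40, 20)):
    """Fraction of observed time spent in each spatial cell.
    `track` is an (N, 2) array of estimated positions; `habitat_extent`
    is ((xmin, xmax), (ymin, ymax)) in the same units."""
    counts, _, _ = np.histogram2d(track[:, 0], track[:, 1],
                                  bins=bins, range=habitat_extent)
    return counts / counts.sum()

# Hypothetical track confined to a 30 m x 15 m habitat
rng = np.random.default_rng(2)
track = np.column_stack([rng.uniform(0, 30, 10_000),
                         rng.uniform(0, 15, 10_000)])
occ = occupancy_map(track, ((0, 30), (0, 15)))
print(occ.max(), occ.sum())  # peak occupancy fraction; fractions sum to 1
```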
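Similarly, the Kolmogorov-Smirnov comparison of swimming kinematics can be illustrated with SciPy's two-sample KS test on frame-to-frame speed distributions. The synthetic tracks and time windows below are stand-ins for the paper's data, and the decision logic is a sketch rather than the authors' classification procedure.

```python
import numpy as np
from scipy.stats import ks_2samp

def speeds(track, dt=1.0):
    """Frame-to-frame speeds from an (N, 2) position track."""
    return np.linalg.norm(np.diff(track, axis=0), axis=1) / dt

rng = np.random.default_rng(1)
# Hypothetical tracks from two observation windows
track_morning = np.cumsum(rng.normal(0.5, 0.2, size=(500, 2)), axis=0)
track_midday = np.cumsum(rng.normal(1.2, 0.5, size=(500, 2)), axis=0)

stat, p = ks_2samp(speeds(track_morning), speeds(track_midday))
print(f"KS statistic = {stat:.3f}, p = {p:.3g}")
# A large statistic with a small p-value indicates the two kinematic
# distributions differ, suggesting a change in behavioral mode.
```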