
Department of Biological Sciences

Undergraduate students Rachel Maile and Matthew Duggan published a new study in Ecology and Evolution

Camera traps, i.e., motion-activated cameras, have been used for decades to observe animal species in a wide variety of habitats. They have become cost-effective enough for widespread deployment in the field and are routinely used to estimate population parameters such as animal density and abundance. However, estimating animal distribution and abundance effectively requires many camera traps and a high sampling effort, producing an enormous number of images that must be filtered and labeled. Because classifying camera trap images demands considerable time and effort from researchers, many studies have turned to machine learning to rapidly classify animal species and anthropogenic objects. One of the most popular machine-learning architectures is the convolutional neural network (CNN), a deep-learning algorithm whose construction can be adapted to suit a wide range of problems in ecology. One drawback for most networks, however, is that a very large number of images (often millions) is needed to train an effective identification or classification model.
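
For readers unfamiliar with the architecture, the following is a minimal sketch of a CNN image classifier, written in PyTorch purely for illustration; the framework, layer sizes, image size, and 16-class output are assumptions for demonstration and are not drawn from the study.

```python
# Minimal sketch of a CNN image classifier (illustrative only).
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self, num_classes: int = 16):
        super().__init__()
        # Two convolution/pooling stages extract visual features from the image.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A small fully connected head maps pooled features to class scores.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Training a network like this from random weights typically requires very
# large labeled datasets, which motivates the transfer learning used below.
model = SimpleCNN(num_classes=16)
scores = model(torch.randn(1, 3, 224, 224))  # one RGB image -> 16 class scores
```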

In their new study, titled "The successes and pitfalls: Deep-learning effectiveness in a Chernobyl field camera trap application", undergraduate students Rachel Maile and Matthew Duggan, together with their mentor Dr. Tim Mousseau, examined factors related to camera trap placement in the field that may influence the accuracy of a deep-learning model trained on a small image set. They transfer-trained a CNN to detect 16 object classes (14 animal species, humans, and fires) across 9,576 images taken from camera traps placed in the Chernobyl Exclusion Zone and analyzed the effects of wind speed, cloud cover, temperature, image contrast, and precipitation. Although no significant correlation between CNN success and ambient conditions was observed, the model performed better on images taken during the day and in the absence of precipitation. Altogether, their study suggests that while qualitative site-specific factors may confuse quantitative classification algorithms such as CNNs, training on a dynamic, varied image set can account for ambient conditions so that they do not significantly affect CNN success.
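
As an illustration of the general transfer-training approach described above, the sketch below adapts a pretrained ResNet-18 from torchvision to a 16-class classification problem. The backbone, framework, learning rate, and batch dimensions are assumptions chosen for demonstration; the study's actual model and training configuration may differ.

```python
# Minimal transfer-learning sketch, assuming a torchvision ResNet-18 backbone.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 16  # e.g., 14 animal species, humans, and fires

# Start from weights pretrained on ImageNet so only a small camera-trap
# dataset is needed to adapt the model to the new classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new 16-class head; only this layer is trained.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of camera-trap-sized images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```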

Sample photographs taken from camera traps in Chernobyl. From the left: boar (Sus scrofa), red fox (Vulpes vulpes), moose (Alces alces).

