The Navy regularly conducts studies of marine mammal distribution and occurrence in association with training exercises to better monitor potential interactions between marine mammals and naval activities. Methods used for these studies include visual surveys and acoustic monitoring via passive acoustic recorders; however, both methods have significant drawbacks. Visual surveys from ships and airplanes are expensive, and they cannot be conducted at night or during periods of high winds, rough seas, or poor visibility. Although passive acoustic recorders have large detection ranges and can persistently detect vocalizing marine mammals regardless of weather conditions, recordings can be accessed only after recovery of the recording instrument. In addition, acoustic analysis by a trained person is time consuming and expensive.
Recent advances in low-power digital signal processors, detection algorithms, and satellite communications have made near real-time (within hours of sound detection) audio processing, sound detection, classification, and reporting from autonomous platforms feasible. This project will demonstrate a passive acoustic detection and classification hardware/software system that is capable of detecting the calls of four species of endangered baleen whales—fin (Balaenoptera physalus), humpback (Megaptera novaeangliae), sei (Balaenoptera borealis), and right (Eubalaena glacialis)—from three different autonomous platforms (Slocum gliders, wave gliders, moored buoys). In particular, the project seeks to: (1) demonstrate year-round, large-scale near real-time acoustic surveillance from these autonomous platforms; (2) validate near real-time acoustic detections using audio recorded in situ and airplane-, ship-, and land-based visual observations; and (3) develop best practices for integrating near real-time acoustic detections from autonomous platforms into persistent visual monitoring programs such as the current National Oceanic and Atmospheric Administration and Navy marine mammal aerial survey programs off the U.S. east coast.
The enabling technology for this project is the digital acoustic monitoring (DMON) instrument/low-frequency detection and classification system (LFDCS), a combined hardware (DMON) and software (LFDCS) system capable of detecting and reporting a wide variety of low-frequency vocalizations in near real time. The DMON collects, conditions, processes, and records audio from up to three attached hydrophones. Because it is programmable, applications can be developed to detect, classify, and report sounds from the collected audio in near real time. The LFDCS is software that detects and describes sounds using pitch tracking, and it classifies those sounds using quadratic discriminant function analysis. The LFDCS has recently been converted into a DMON application, and the DMON/LFDCS has been integrated with several autonomous platforms. Whereas other available systems focus primarily on a single call type produced by a single species, often from a single location, the DMON/LFDCS can report detection information on a wide variety of calls produced by several species from both mobile and stationary autonomous platforms. This project will demonstrate and validate this technology in the Gulf of Maine, a region of high baleen whale abundance.
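The classification approach described above, in which each detected sound is summarized by pitch-track features and assigned to a call class via quadratic discriminant analysis, can be illustrated with a minimal sketch. This is not the LFDCS implementation; the feature set (duration, start frequency, frequency slope), the two call classes, and the synthetic training data are all illustrative assumptions, using scikit-learn's QDA classifier to stand in for the system's discriminant function analysis.

```python
# Illustrative sketch of pitch-track classification via quadratic
# discriminant analysis (QDA). All feature definitions and data below are
# synthetic stand-ins, not the actual LFDCS feature set or training library.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)

# Hypothetical per-detection features: [duration (s), start freq (Hz), slope (Hz/s)].
# Fin whale 20-Hz pulses are short, low, and downswept; humpback units are
# higher-frequency and more variable (rough caricatures for illustration).
fin_calls = rng.normal([0.8, 23.0, -8.0], [0.2, 2.0, 2.0], size=(50, 3))
humpback_calls = rng.normal([1.5, 300.0, 25.0], [0.4, 40.0, 8.0], size=(50, 3))

X = np.vstack([fin_calls, humpback_calls])
y = np.array(["fin"] * 50 + ["humpback"] * 50)

# Fit one quadratic discriminant per call class (class-specific covariances).
qda = QuadraticDiscriminantAnalysis()
qda.fit(X, y)

# Classify a new pitch track extracted from incoming audio.
new_track = np.array([[0.9, 22.0, -7.5]])
print(qda.predict(new_track)[0])  # → fin
```

In an onboard setting, only the fitted discriminant parameters and the per-detection feature vectors would need to live on the instrument, which is what makes this style of classifier practical on low-power hardware.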
This project will provide flexible tools for reducing analytical effort over the long term and improving the efficiency of existing monitoring technologies (e.g., visual surveys). It is expected that the Navy will be able to significantly enhance its monitoring efforts using near real-time detection information to identify areas of persistent marine mammal occurrence and to direct airplane- or ship-based surveys to regions that require additional visual surveillance. (Anticipated Project Completion - 2017)