A central theme of our acoustic monitoring is the
ability to identify species from their acoustic
signatures in near real time. A sensor data stream
is a time series comprising continuous or periodic
sensor readings. Typically, each reading can be
attributed to the sensor that produced it, and
readings appear in the time series in the order
in which they were acquired. These sequences can be
clustered and fused with other data to support species
detection and classification.
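For concreteness, the sketch below shows one way such a stream might be represented in memory; the Reading fields and the sensor identifier are illustrative assumptions, not a description of the actual system.

```python
# A minimal sketch of a sensor data stream, assuming a simple in-memory
# representation; field names and values are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class Reading:
    sensor_id: str      # identifies the sensor that produced the reading
    timestamp: float    # acquisition time, e.g. seconds since epoch
    value: float        # the sensed quantity, e.g. sound pressure level

# A stream is a time series of readings kept in acquisition order.
stream: List[Reading] = [
    Reading("mic-03", 1.0, 0.12),
    Reading("mic-03", 2.0, 0.47),
]
stream.sort(key=lambda r: r.timestamp)  # preserve acquisition order
```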
Classification attempts to recognize which species
produced a particular vocalization, whereas detection
indicates the likelihood that an acoustic clip
contains a song voiced by a particular species.
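The distinction can be made concrete with a small sketch; the species names, score values, and function names below are hypothetical placeholders standing in for whatever model produces the scores, not the system's actual interface.

```python
# Hedged sketch contrasting detection and classification outputs.
from typing import Dict

def detect(clip_scores: Dict[str, float], species: str) -> float:
    """Detection: likelihood that the clip contains a call from `species`."""
    return clip_scores.get(species, 0.0)

def classify(clip_scores: Dict[str, float]) -> str:
    """Classification: the species most likely to have produced the call."""
    return max(clip_scores, key=clip_scores.get)

scores = {"Pseudacris crucifer": 0.92, "Hyla versicolor": 0.05}
print(detect(scores, "Pseudacris crucifer"))  # 0.92 -> detection likelihood
print(classify(scores))                       # 'Pseudacris crucifer' -> class label
```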
Figures 1a and 1b depict two common methods for
visualizing an acoustic clip. Figure 1a shows
an oscillogram, which plots the signal's
normalized amplitude over time. Figure 1b
shows the same clip plotted as an acoustic
spectrogram. A spectrogram depicts frequency on
the vertical axis and time on the horizontal axis.
Color indicates the intensity of the signal at a
particular frequency and time, where frequency is
measured in cycles per second or hertz (Hz).
Spectrograms are useful for visualizing how an
acoustic signal's frequency content varies over time.
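As a rough illustration of how these two views might be produced, the following sketch uses SciPy and Matplotlib; the file name clip.wav, the assumption of a mono clip, and the FFT window size are illustrative choices rather than the settings used for Figure 1.

```python
# Hedged sketch of an oscillogram (Figure 1a style) and spectrogram
# (Figure 1b style) for a mono WAV clip; "clip.wav" is hypothetical.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("clip.wav")      # sample rate (Hz) and raw samples
samples = samples.astype(np.float64)
samples /= np.max(np.abs(samples))            # normalize amplitude to [-1, 1]
t = np.arange(len(samples)) / rate

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 6))

# Oscillogram: normalized amplitude over time.
ax1.plot(t, samples, linewidth=0.5)
ax1.set_xlabel("Time (s)")
ax1.set_ylabel("Normalized amplitude")

# Spectrogram: frequency vs. time, color indicating intensity (in dB here).
f, tt, Sxx = spectrogram(samples, fs=rate, nperseg=1024)
ax2.pcolormesh(tt, f, 10 * np.log10(Sxx + 1e-12), shading="gouraud")
ax2.set_xlabel("Time (s)")
ax2.set_ylabel("Frequency (Hz)")

plt.tight_layout()
plt.show()
```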
Figures 1c and 1d plot the power spectral
density (PSD) of the signal. The PSD describes
how the signal's strength, or power, is
distributed across frequency bands. Figure 1c
shows the PSD at a fine granularity, while Figure 1d
plots the PSD integrated over bins with a 200 Hz
width. As shown, the spring peeper
(Pseudacris crucifer crucifer) vocalizes
at approximately 3 kilohertz (kHz).
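A comparable PSD computation can be sketched with Welch's method, integrating the estimate over 200 Hz-wide bins to mimic Figure 1d; the file name and FFT parameters below are illustrative assumptions, not the authors' settings.

```python
# Hedged sketch: fine-grained PSD (Figure 1c style) via Welch's method,
# then power integrated over 200 Hz bins (Figure 1d style).
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

rate, samples = wavfile.read("clip.wav")      # "clip.wav" is hypothetical
samples = samples.astype(np.float64)

# Fine-grained PSD estimate.
freqs, psd = welch(samples, fs=rate, nperseg=4096)

# Integrate the PSD over 200 Hz-wide frequency bins.
bin_width = 200.0
edges = np.arange(0.0, freqs[-1] + bin_width, bin_width)
binned_power = [np.trapz(psd[(freqs >= lo) & (freqs < hi)],
                         freqs[(freqs >= lo) & (freqs < hi)])
                for lo, hi in zip(edges[:-1], edges[1:])]

# Report the band with the most power; for a spring peeper chorus this
# would be expected to fall near 3 kHz.
peak_bin = int(np.argmax(binned_power))
print(f"Dominant band: {edges[peak_bin]:.0f}-{edges[peak_bin + 1]:.0f} Hz")
```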
Such visual and other representations of acoustic
and related sensor signals enable automated
classification and detection of acoustic events,
including the identification of vocalizing species.
Moreover, this signal processing and analysis
supports the computation of numerous acoustic
indices and the fusion of different sensor data
types for ecosystem assessment.
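The text does not name a specific index, but as one illustrative example, the sketch below computes a simplified Acoustic Complexity Index (ACI) from a spectrogram; the FFT window size and the omission of temporal sub-windows are simplifying assumptions.

```python
# Hedged sketch of one acoustic index: a simplified Acoustic Complexity
# Index (ACI), summing relative intensity change over time per frequency bin.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def acoustic_complexity_index(samples: np.ndarray, rate: int) -> float:
    """Simplified ACI computed over the whole clip (no sub-windows)."""
    _, _, Sxx = spectrogram(samples, fs=rate, nperseg=512)
    diffs = np.abs(np.diff(Sxx, axis=1)).sum(axis=1)   # change per frequency bin
    totals = Sxx.sum(axis=1) + 1e-12                   # total intensity per bin
    return float((diffs / totals).sum())

rate, samples = wavfile.read("clip.wav")               # hypothetical clip
print(acoustic_complexity_index(samples.astype(np.float64), rate))
```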