Published February 23, 2024 | Version v1
Other Open

Vibroscape analysis reveals acoustic niche overlap and plastic alteration of vibratory courtship signals in ground-dwelling wolf spiders

  • 1. Max Planck Institute of Animal Behavior
  • 2. University of Mississippi
  • 3. University of Nebraska - Lincoln

Description

Soundscape ecology has enabled researchers to investigate natural interactions among biotic and abiotic sounds as well as their influence on local animals. To expand the scope of soundscape ecology to encompass substrate-borne vibrations (i.e., vibroscapes), we developed methods for recording and analyzing sounds produced by ground-dwelling arthropods, characterizing the vibroscape of a deciduous forest floor using inexpensive contact microphone arrays followed by automated sound filtering and detection in large audio datasets. With the collected data, we tested the hypothesis that closely related species of Schizocosa wolf spider partition their acoustic niche. In contrast to previous studies on acoustic niche partitioning, two closely related species - S. stridulans and S. uetzi - showed high acoustic niche overlap across space, time, and/or signal structure. Finally, we examined whether substrate-borne noise, including anthropogenic noise (e.g., airplanes) and heterospecific signals, promotes behavioral plasticity in signaling behavior to reduce the risk of signal interference. We found that all three focal Schizocosa species increased the dominant frequency of their vibratory courtship signals in noisier signaling environments. In addition, S. stridulans males displayed increased vibratory signal complexity with an increased abundance of S. uetzi, their sister species, with which they overlap heavily in the acoustic niche.

Notes

Python packages: pydub (Robert & Webbie, 2018), kneed (Satopää et al., 2011), noisereduce (Sainburg et al., 2020), SciPy (Virtanen et al., 2020), and scikit-learn (Pedregosa et al., 2011).

R was used for statistical analysis.

Funding provided by: National Science Foundation
Crossref Funder Registry ID: https://ror.org/021nxhr62
Award Number: IOS 1037901

Funding provided by: National Science Foundation
Crossref Funder Registry ID: https://ror.org/021nxhr62
Award Number: IOS 1556153

Funding provided by: Animal Behavior Society
Crossref Funder Registry ID: https://ror.org/031nh9x49
Award Number:

Funding provided by: American Arachnological Society
Crossref Funder Registry ID: https://ror.org/02pfy0v45
Award Number:

Funding provided by: Society for the Study of Evolution
Crossref Funder Registry ID: https://ror.org/057kr0a20
Award Number:

Funding provided by: American Philosophical Society
Crossref Funder Registry ID: https://ror.org/04egvf158
Award Number:

Funding provided by: University of Nebraska–Lincoln
Crossref Funder Registry ID: https://ror.org/043mer456
Award Number:

Methods

Field recording:

For field recordings, we chose five 10 m × 10 m study plots at the University of Mississippi field station at Abbeville, Mississippi, USA (34°43' N, 89°39' W). In each study plot, we deployed a TemLog20 temperature logger (Tamtop, Milpitas, California, USA), 25 recording units, each consisting of a contact microphone (35 mm diameter; Goedrum Co., Changhua, Taiwan) and a Toobom R01 8 GB acoustic recorder (Toobom, China), and four pitfall traps (Carolina Biological Supply Company, Burlington, North Carolina, USA). The temperature loggers recorded the temperature in each plot every 15 minutes during the experimental periods. In total, we deployed 125 recording units, 5 temperature loggers, and 20 pitfall traps across the five study plots. We conducted a 24-hour recording every three days from May 15 to July 15, 2018, resulting in a total of 1950 24-hour recordings across 13 days. Substrate-borne vibrations in the study plots were recorded continuously from 0800, except for a 10-minute pause at 1600 to replace the audio recorders, which was necessary because of their limited battery capacity.

After each ~24-hour recording, we transferred uncompressed WAV files (48 kHz sampling rate) from the recorders to an 8 TB external hard drive (Seagate Technology LLC, Cupertino, California, USA). On the same day, we collected specimens from the pitfall traps at three times (0800, 1600, and 0000) to capture temporal variation in species activity within the study plots. We sorted the collected specimens by collection time, collection date, and study plot, and preserved them in 95% ethanol for later species identification by PM. We used these specimens to corroborate the species identities assigned to sound recordings across locations.

Data processing:

To automate signal detection for classification, we wrote Python programs to filter background noise, detect pulses, and group pulses into biologically meaningful signal bouts. Before processing, we split each 24-hour WAV file into 10-minute chunks with FFmpeg to speed up processing.
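The chunking step can be scripted with FFmpeg's segment muxer. A minimal sketch (assuming `ffmpeg` is on the PATH; the file names and helper function are illustrative, not from the original pipeline):

```python
import subprocess

def build_chunk_cmd(infile, out_pattern="chunk_%03d.wav", chunk_secs=600):
    """Build an ffmpeg command that splits a long WAV into fixed-length chunks.

    Uses the segment muxer with stream copy, so the PCM samples are not
    re-encoded and splitting a 24-hour file is fast.
    """
    return [
        "ffmpeg", "-i", infile,
        "-f", "segment",                   # write output as numbered segments
        "-segment_time", str(chunk_secs),  # 600 s = 10-minute chunks
        "-c", "copy",                      # copy audio without re-encoding
        out_pattern,
    ]

cmd = build_chunk_cmd("plot1_unit07_24h.wav")
# subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
```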

Noise filtering: 

Because background noise varied in space and time, we performed adaptive noise filtering using the frequency spectrum of the background noise of each 10-minute WAV chunk. The program first set an amplitude threshold by sigma clipping: the threshold is m + ασ, where m is the median and σ the standard deviation of the amplitudes (mV) of all sampling points in the chunk. The constant α was chosen from values between 1 and 10, in increments of 0.3, by applying the elbow method to the number of sampling points above the amplitude threshold. Once the threshold was determined, the program extracted the frequency spectrum of the longest segment below the threshold with a fast Fourier transform (FFT) and filtered the WAV chunk against this 'background noise' spectrum.
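The threshold-selection step can be sketched with NumPy alone. The authors used the kneed package for the elbow method; the chord-distance elbow below is a stand-in for it, and all function names are illustrative:

```python
import numpy as np

def sigma_clip_threshold(samples, alphas=None):
    """Pick an amplitude threshold m + alpha*sigma, choosing alpha by the
    elbow of the curve 'number of samples above threshold' vs. alpha."""
    if alphas is None:
        alphas = np.arange(1.0, 10.0 + 1e-9, 0.3)
    x = np.abs(samples)
    m, s = np.median(x), np.std(x)
    counts = np.array([(x > m + a * s).sum() for a in alphas], dtype=float)

    # Elbow: the point farthest from the chord joining the curve's endpoints.
    an = (alphas - alphas[0]) / (alphas[-1] - alphas[0])
    cn = (counts - counts.min()) / (np.ptp(counts) + 1e-12)
    dist = np.abs((an[-1] - an[0]) * (cn[0] - cn)
                  - (an[0] - an) * (cn[-1] - cn[0]))
    best = int(np.argmax(dist))
    return m + alphas[best] * s, alphas[best]

def longest_quiet_segment(samples, threshold):
    """Return (start, end) indices of the longest run of samples whose
    absolute amplitude stays at or below the threshold; its spectrum would
    then be taken as the 'background noise' profile."""
    quiet = np.abs(samples) <= threshold
    best, start = (0, 0), None
    for i, q in enumerate(np.append(quiet, False)):
        if q and start is None:
            start = i
        elif not q and start is not None:
            if i - start > best[1] - best[0]:
                best = (start, i)
            start = None
    return best
```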

Sound detection and classification: 

After noise filtering, we updated each file's amplitude threshold, finding the optimal α with the same sigma-clipping procedure used for noise filtering. Using the updated threshold, we detected pulses exceeding it and recorded the amplitude and time of each detected pulse within a WAV chunk for pulse grouping and sound classification. For pulse grouping, the program computed the time intervals between adjacent detected pulses and fit a Gaussian mixture model (GMM) to classify the intervals into three categories: within a bout, between bouts within a single signaling activity, and between signaling activities. We then grouped pulses into bouts according to the GMM results for sound classification.
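The interval classification can be sketched with scikit-learn's GaussianMixture. Fitting in log time is our assumption (it separates gap scales that differ by orders of magnitude), and the function name and interval values are illustrative:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_intervals(pulse_times, n_classes=3, seed=0):
    """Fit a 3-component GMM to inter-pulse intervals and label each interval.

    The components are expected to capture within-bout, between-bout, and
    between-activity gaps. Labels are remapped so 0 = shortest gaps and
    n_classes-1 = longest gaps.
    """
    intervals = np.diff(np.sort(pulse_times))
    X = np.log10(intervals).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_classes, random_state=seed).fit(X)
    labels = gmm.predict(X)
    order = np.argsort(gmm.means_.ravel())          # sort components by mean gap
    remap = {old: new for new, old in enumerate(order)}
    return intervals, np.array([remap[l] for l in labels])
```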

An expert in spider sound analysis (NC) classified the detected sounds by visual inspection of spectrograms. To classify non-spider sounds, we used BirdNET (Kahl et al., 2021) and the library of Singing Insects of North America (SINA; Walker & Moore, 2003). For BirdNET, we accepted the species with the highest probability value from the online bird-sound identification system. If consecutive conspecific (or same-class) sounds were recorded at the same vibratory sensor within one minute of each other, we grouped them into a single signal bout. Likewise, if conspecific sounds from the same recording plot were detected by multiple sensors simultaneously, we classified them as airborne sounds that had been transmitted into the ground and counted them as a single signal bout. When we could not identify a reliable species or source for a sound, we labeled the sound type as 'unknown'.
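The one-minute bout-grouping rule can be sketched as follows. This is a minimal sketch: the detection-tuple layout and the `gap` parameter name are our assumptions, not the authors' data structures:

```python
from collections import defaultdict

def group_into_bouts(detections, gap=60.0):
    """Group detections of the form (time_s, sensor, label) into bouts.

    Consecutive detections of the same label at the same sensor separated
    by no more than `gap` seconds belong to one bout. (The cross-sensor
    'simultaneous = airborne' rule would be a separate merge pass.)
    """
    by_key = defaultdict(list)
    for t, sensor, label in sorted(detections):
        by_key[(sensor, label)].append(t)

    bouts = []
    for (sensor, label), times in by_key.items():
        bout = [times[0]]
        for t in times[1:]:
            if t - bout[-1] <= gap:
                bout.append(t)          # still within the same bout
            else:
                bouts.append((sensor, label, bout))
                bout = [t]              # gap too long: start a new bout
        bouts.append((sensor, label, bout))
    return bouts
```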

Files

Supplementary_S3.zip

Files (71.8 MB)

md5:eb418c3be193097114961b60fadd9772 (4.1 MB)
md5:154ae17d865cd63576f39b4edebe7cdc (67.7 MB)

Additional details

Related works

Is derived from
10.5061/dryad.0gb5mkm5w (DOI)