Passive acoustic monitoring has become an established methodology for low-cost, non-invasive study of vocal wildlife. Through rapid evolution in both hardware and machine learning, microphones and hydrophones can now be deployed to record in rugged environments for months at a time, and the audio data subsequently analyzed to automatically detect animal vocalizations and classify them by species. However, a single microphone or hydrophone is insufficient to determine the position of a sound source or to disambiguate multiple sources, limiting key aspects of further ecological inference and motivating the use of multiple sensors. In this talk, we examine computational methods for the analysis of sensor arrays, with a focus on two case studies: acoustic localization of whale vocalizations, and source separation of birdsong. First, we consider an array of five hydrophones deployed offshore of Guam to monitor goose-beaked whales, elusive deep-diving cetaceans that regularly emit echolocation clicks. Applying time-difference-of-arrival likelihood-surface localization yields approximate source locations, informing abundance estimates of goose-beaked whales in this region. Next, we consider a compact tetrahedral microphone array that measures both pressure and acoustic velocity, thus enabling streamlined calculation of acoustic direction-of-arrival. By distinguishing directions of high intensity within a recording, we can decompose a dawn chorus into its constituent birdsong sources, numerically approximate their signals, and improve species-level classification accuracy. Altogether, the use of sensor arrays in bioacoustics can advance the capabilities of passive acoustic monitoring and thereby contribute to data-driven conservation measures.
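
The time-difference-of-arrival (TDOA) likelihood-surface approach mentioned above can be sketched as follows. This is a minimal 2-D illustration under assumed parameters: the hydrophone layout, sound speed, noise level, and Gaussian TDOA error model are all hypothetical choices for demonstration, not the parameters of the actual Guam deployment.

```python
import numpy as np

C = 1500.0  # assumed constant speed of sound in seawater, m/s

# Five hypothetical hydrophone positions (x, y) in metres.
hydrophones = np.array(
    [[0, 0], [1000, 0], [0, 1000], [1000, 1000], [500, 500]], dtype=float
)

def tdoas(source, sensors, c=C):
    """Arrival-time differences at each sensor relative to sensor 0."""
    ranges = np.linalg.norm(sensors - source, axis=1)
    times = ranges / c
    return times[1:] - times[0]

def likelihood_surface(measured, sensors, grid_x, grid_y, sigma=1e-3):
    """Gaussian log-likelihood of measured TDOAs over a grid of candidate
    source positions (sigma = assumed TDOA measurement error, seconds)."""
    surf = np.empty((len(grid_y), len(grid_x)))
    for i, y in enumerate(grid_y):
        for j, x in enumerate(grid_x):
            resid = measured - tdoas(np.array([x, y]), sensors)
            surf[i, j] = -0.5 * np.sum((resid / sigma) ** 2)
    return surf

# Synthetic example: simulate noisy TDOA measurements from a known source,
# then recover its position as the argmax of the likelihood surface.
true_src = np.array([620.0, 340.0])
rng = np.random.default_rng(0)
measured = tdoas(true_src, hydrophones) + rng.normal(0.0, 1e-4, size=4)

gx = np.linspace(0, 1000, 101)
gy = np.linspace(0, 1000, 101)
surf = likelihood_surface(measured, hydrophones, gx, gy)
i, j = np.unravel_index(np.argmax(surf), surf.shape)
print(f"estimated source: ({gx[j]:.0f}, {gy[i]:.0f}) m")
```

In practice the search is over three dimensions (with depth especially important for deep-diving species), the error model accounts for uncertainty in sensor positions and sound-speed profile, and the full surface, not just its peak, conveys localization uncertainty.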
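
The direction-of-arrival calculation enabled by joint pressure and acoustic-velocity measurement can likewise be sketched. The example below is a simplified illustration with a synthetic plane wave; the signal, sample rate, and azimuth are assumed values, and real recordings would require per-frequency processing and sensor calibration.

```python
import numpy as np

fs = 48000                      # assumed sample rate, Hz
t = np.arange(fs) / fs          # one second of samples

# Hypothetical plane wave propagating toward azimuth 60 degrees; for a
# plane wave, pressure and particle velocity are in phase, and velocity
# is aligned with the propagation direction.
az_true = np.deg2rad(60.0)
p = np.sin(2 * np.pi * 1000 * t)   # pressure channel
vx = p * np.cos(az_true)           # velocity components
vy = p * np.sin(az_true)

# Active acoustic intensity: the time average of pressure times velocity.
# It points along the propagation direction, so the source bearing is the
# opposite direction.
Ix, Iy = np.mean(p * vx), np.mean(p * vy)
az_est = np.degrees(np.arctan2(Iy, Ix))
print(f"estimated propagation azimuth: {az_est:.1f} degrees")
```

Applying this per time–frequency bin yields a directional map of a recording, from which bins can be grouped by bearing to separate simultaneous singers in a dawn chorus.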