React Native: How to Load and Play Audio

Working with audio clips in React Native and Expo AV

Expo AV: Reliable Native Audio Support

This article introduces the expo-av package, a universal audio (playing and recording) and video module for React Native projects. The package is maintained by Expo, the go-to library for bootstrapping React Native projects targeting a range of platforms.

Audio packages have come and gone in the React Native ecosystem; most either do not work or have not been updated in a significant amount of time. When choosing a package to adopt, especially for critical tasks like audio playback, a reliable and well maintained package is needed. Major iOS updates launch on a yearly basis, and Android updates arrive considerably more often. These updates sometimes come with breaking changes, so choosing reliably maintained source code on the React Native side is in every developer’s interest.

Note that react-native-audio has not been updated in over 2 years, and react-native-sound has been stagnant for over 1 year (as of the time of writing); these packages should be avoided, as they will not support the latest native APIs and will lead to dead-end troubleshooting when they fail in your project.

With all this being said, expo-av ticks all the boxes in terms of maintenance and platform support. With weekly downloads above 25,000 and the last update published a month ago (at the time of writing), it is a strongly adopted package that developers can use with confidence. Not all apps require audio, so it is hard to gauge its popularity relative to the whole React Native ecosystem.

expo-av also has strong documentation that is kept up to date. The reader can visit the package on GitHub as the true source, but the majority of documentation is hosted on the Expo website, where there is a dedicated audio page along with an API reference page for the Expo AV module. It is this documentation that we’ll be referencing throughout this article as we build some audio tools.

What this article will cover

Now that I have hopefully persuaded you that Expo AV is the way to go with React Native audio, it is time to delve into some development. Instead of writing a standard tutorial of loading and playing an audio clip, we will make things more interesting to more closely reflect real-world use cases. After installation, this piece will:

  • Explain the audio loading and playing workflow and the key APIs needed to get audio working.
  • Walk through an audio controller class that loads, plays, resets and stops audio clips. This class is kept separate from React components so as to separate the audio logic from component logic.
  • Demonstrate how to load and manage multiple audio clips simultaneously. For this piece we will assume that a male and female audio clip need to be loaded, whereby the end-user can choose which gender to listen to. Of course, switching the gender can be achieved simply by toggling that gender via state management, but the corresponding audio itself must be loaded and available, ready to be played.
  • Walk through a <PlayButton /> component that initialises the audio (requesting the audio files from a remote server) and manages the audio state. More specifically, we will visit concepts such as auto play, tap to play and tap to stop, and reflect these in component state. We also need to manage component state while the audio is being retrieved: the audio is requested asynchronously and must be fully loaded before it can be played. This adds further (but needed) complexity to the component, so that it does not attempt to play an unloaded audio file, which would result in a runtime error.

GitHub Gists will be available throughout this piece to demonstrate these concepts. Let’s get started by covering some key APIs of expo-av needed to get audio working.

Getting Started with Expo AV

expo-av can be installed with expo or with yarn:

expo install expo-av
yarn add expo-av

The only object we’ll be importing from the package is Audio, which will handle all the audio clips we’ll be downloading and playing:

import { Audio } from 'expo-av'

From here, Sound objects can be initialised. The Sound object exposes all the necessary methods needed to load and unload a sound, listen to changes to the Sound state via event listeners, and more. Initialising a Sound object can be done as a variable, state, or even a class property:

// initialising Sound as a variable
const audioClip = new Audio.Sound();

// as component state
const [audioClip] = useState(new Audio.Sound());

// or as a class property (recommended)
class SomeClass {
  audioClip = new Audio.Sound();
}

With your Sound objects initialised, it then becomes possible to:

  • Load audio files from an external source, with the loadAsync() method. When we are done with the audio, the unloadAsync() method removes the audio file — this can be called when the component housing the audio file is unmounted, for example.
  • Once the audio file is loaded, it can be played with the playAsync() method or replayed with replayAsync(). Note that playAsync() plays the audio file from its current position, whereas replayAsync() restarts the clip from the beginning, even if it is part-way through its timeline or has ended.
  • To get the current status of the Sound in question, use the getStatusAsync() method. This returns an object with metadata such as isLoaded, used to determine whether the audio clip can indeed be played.
  • Listen to status updates and react to them via an event listener. For this, the setOnPlaybackStatusUpdate(({ shouldPlay, isLoaded }) => { ... }) event listener is used. This function is a key component in the demos to follow further down.
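The methods above can be sketched as a minimal load, check, play flow. Since expo-av only runs inside React Native, a tiny mock stands in for Audio.Sound here so the sketch is self-contained; treat it as an illustration of the call sequence, not a definitive implementation:

```javascript
// In an app, `sound` would be `new Audio.Sound()` from 'expo-av'; this
// mock mirrors the small part of its API used below so the sketch runs
// outside React Native.
class SoundMock {
  status = { isLoaded: false, shouldPlay: false };
  async loadAsync (source) { this.status.isLoaded = true; }
  async playAsync () { this.status.shouldPlay = true; }
  async unloadAsync () { this.status = { isLoaded: false, shouldPlay: false }; }
  async getStatusAsync () { return this.status; }
}

// Load a clip, confirm it is loaded, then play it.
async function loadAndPlay (sound, uri) {
  await sound.loadAsync({ uri });
  const { isLoaded } = await sound.getStatusAsync();
  if (isLoaded) {
    await sound.playAsync(); // never play an unloaded clip
  }
  // later (e.g. on component unmount): await sound.unloadAsync();
}
```

The isLoaded check before playAsync() is the key habit: playing an unloaded clip is what produces the runtime errors mentioned earlier.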

Sources of audio: local or remote

The main limitation with expo-av is that we cannot persist audio files on device, such as in a cache or some key-value system like AsyncStorage. Instead, we are limited to either locally bundled audio (not very useful for dynamic content), or remotely accessed audio.

So for dynamic content that cannot be bundled into your final app build, this leaves us loading audio clips from an external server, perhaps your own Node server serving endpoints for your app. This has clear downsides, such as requiring an internet connection to stream the audio clips in question, and having to provide real-time UX for the audio state transitions (loading audio, ready to play, and failed to load). But this is a limitation we currently have to cater for.

With this being said, the next section showcases a Controller class that will use some of the aforementioned methods to manage two audio clips concurrently.

Controller Class Walkthrough

The idea of the following Controller class is to manage multiple audio clips (using the APIs discussed above) within one simplified object that can be used within the React Components themselves. A <PlayButton /> component will rely on this Controller class in the next section.

The controller class here manages two audio files concurrently, labeled audioFemale and audioMale. The idea behind this setup is a virtual assistant scenario whereby the user can switch between genders on the fly: both male and female audio will be loaded and available to play, removing any additional loading state when switching. There will undoubtedly be other ways you can leverage concurrent audio management depending on how you deliver dynamic content.

The Controller class is quite straightforward, and is embedded in a Gist further down. This class contains the following methods:

  • loadClips(token, uriFemale, uriMale): Uses Sound.loadAsync() to contact an API endpoint serving the audio files in question. Although loadAsync() does not support a request body, it does support request headers, which can be used to carry authentication tokens.
  • playAudio(gender): Calls Sound.replayAsync() on the corresponding audio file.
  • stopAudio(): Calls Sound.stopAsync() on all audio clips.

The last two methods deal with unloading the audio files in question. One blocks execution (using await to pause the containing function until the unload tasks are resolved) and performs more audio status checks than the other:

  • resetAudioClips(): Calls Sound.unloadAsync() on all audio clips, without blocking execution.
  • cautiousResetAudioClips(): Calls Sound.unloadAsync(), blocking further execution until the unload calls are resolved. In addition, the status of each audio file is checked (via Sound.getStatusAsync()) to confirm whether it actually has audio loaded.

If the next component state is reliant on those sound files being unloaded, where concurrent actions or state updates rely on those unloads to be resolved, then cautiousResetAudioClips() should be used. If however there are no direct effects from unloading the audio, it is safe to use resetAudioClips(). In this latter case, the containing function will not wait for the resolve, and will continue execution of its logic.

React Native does flag warnings if there are errors in audio loading or unloading — such as if an audio file loads after the containing component is unmounted. Such events will not crash your app, but they should be avoided as much as possible.

With an understanding of what the Controller does, the following Gist contains the class in its entirety:
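A rough sketch of such a controller (not the Gist itself) might look like the following. A mocked Sound class stands in for Audio.Sound so the sketch runs outside React Native, and the endpoint and header names are assumptions for illustration:

```javascript
// Mock of the Audio.Sound API surface used by the controller. In a real
// app you would `import { Audio } from 'expo-av'` and use `new Audio.Sound()`.
class MockSound {
  loaded = false;
  playing = false;
  async loadAsync (source, initialStatus = {}) {
    this.loaded = true;
    this.playing = !!initialStatus.shouldPlay;
  }
  async replayAsync () { this.playing = true; }
  async stopAsync () { this.playing = false; }
  async unloadAsync () { this.loaded = false; this.playing = false; }
  async getStatusAsync () { return { isLoaded: this.loaded }; }
}

class Controller {
  apiEndpoint = 'https://example.com/audio'; // hypothetical; tweak per project

  audioFemale = new MockSound(); // would be new Audio.Sound()
  audioMale = new MockSound();   // would be new Audio.Sound()

  async loadClips (token, uriFemale, uriMale) {
    // loadAsync has no request body, but headers can carry the auth token
    // and (hypothetically) the path of the requested file
    await this.audioFemale.loadAsync({
      uri: this.apiEndpoint,
      headers: { Authorization: `Bearer ${token}`, 'Audio-Path': uriFemale },
    });
    await this.audioMale.loadAsync({
      uri: this.apiEndpoint,
      headers: { Authorization: `Bearer ${token}`, 'Audio-Path': uriMale },
    });
  }

  async playAudio (gender) {
    // replayAsync restarts the clip even if it is mid-way or has ended
    const clip = gender === 'female' ? this.audioFemale : this.audioMale;
    await clip.replayAsync();
  }

  async stopAudio () {
    await Promise.all([this.audioFemale.stopAsync(), this.audioMale.stopAsync()]);
  }

  resetAudioClips () {
    // fire-and-forget: does not block the caller
    this.audioFemale.unloadAsync();
    this.audioMale.unloadAsync();
  }

  async cautiousResetAudioClips () {
    // check each clip's status and block until the unloads resolve
    for (const clip of [this.audioFemale, this.audioMale]) {
      const { isLoaded } = await clip.getStatusAsync();
      if (isLoaded) await clip.unloadAsync();
    }
  }
}
```

Note the design choice: the controller exposes only high-level verbs (load, play, stop, reset), keeping the expo-av specifics out of the component.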

The reader can tweak the methods to suit their use case, such as the apiEndpoint class property.

It is worth pointing out some details within this implementation:

  • Sound.loadAsync() can take a generic URL in its uri parameter, one that does not point directly to the audio file in question. This allows you to serve a single endpoint for all audio file requests, using the request-header support to include the path of the audio file. Your endpoint can then check that this file exists, and perform other checks such as authentication.
  • Also within Sound.loadAsync(), the shouldPlay and volume properties are provided. shouldPlay tells the API to start playing as soon as the audio is ready, whereas volume gives more granularity over how loud the audio should be (relative to the device volume).
  • Note that cautiousResetAudioClips() adds complexity in all of its checks, and could noticeably slow down app responsiveness while waiting for each promise to resolve. Keep this in mind when using this function and testing your app performance.
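To make the first two points concrete, the source object passed to Sound.loadAsync() might be built like this (a sketch: the endpoint, the Audio-Path header name, and the buildSource helper are all hypothetical):

```javascript
// Hypothetical helper: builds the loadAsync source object for one generic
// audio endpoint, passing auth and the requested file path as headers.
function buildSource (apiEndpoint, audioPath, token) {
  return {
    uri: apiEndpoint, // one endpoint serves all audio requests
    headers: {
      Authorization: `Bearer ${token}`, // server-side auth check
      'Audio-Path': audioPath,          // which file the server should stream
    },
  };
}

// Usage with expo-av (not run here):
// await sound.loadAsync(
//   buildSource('https://example.com/audio', 'clips/female.mp3', token),
//   { shouldPlay: true, volume: 0.9 } // start on load, at 90% of device volume
// );
```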

Controller can be instantiated within any React component and used accordingly with any pair of audio clips.

The next section puts all this logic together in a <PlayButton /> component. It is within this component that the aforementioned event listeners will be defined and managed, allowing the component to react to audio status updates, such as by updating its local state to reflect the new status. Let’s take a look at this now.

PlayButton Component Walkthrough

The <PlayButton /> component represents a simple button that can play or stop the currently active audio clip. The button itself has 3 main states:

  • A loading state, whereby the button cannot be interacted with. This is the initial state until all audio clips have been downloaded and are ready to play.
  • A playing state, whereby tapping the button stops the audio.
  • A ready state, whereby tapping the button plays the audio.

A Gist containing the entire component is included after this section. Before that, key elements and design choices are discussed so the reader can gain intuition into how the component operates.

If preferred, refer to the Gist here as the following section discusses its implementation.

Component state design decisions

The Controller class we defined above is initialised as a component state variable, ensuring that re-renders of the component will not reset our audio state; the component does indeed re-render frequently to reflect changes in audio status.

At the import stage, Controller is renamed for further clarity that it is related to audio:

import { Controller as AudioController } from './Controller'

And is embedded within the component state:

state = {
  audioController: new AudioController(),
  audioFemaleReady: false,
  audioMaleReady: false,
  audioPlaying: false,
  autoPlayed: false
}

Notice that the component tracks key audio states, including whether the clips are ready to be played, whether audio is playing, and whether auto play has been triggered. Any changes here will trigger re-renders.

My exhaustive testing of Expo AV on both class components and functional components concluded that class components are far better suited to managing audio. This will become apparent next when defining the component lifecycle methods, which effectively organise the audio lifecycle in relation to the component lifecycle.

Lifecycle methods and their audio management roles

Component lifecycle methods are key in managing audio. As soon as the component mounts, we’ll want to initialise the audio clips. This is done within a separate asynchronous function, initAudioClips:

componentDidMount () {
  this.initAudioClips();
}

async initAudioClips () {
  // initiate audio clips and event listeners
  // (explored further down)
}

The componentDidUpdate(prevProps) lifecycle method is triggered upon a re-render of the component, or whenever state relating to the component changes. What needs to be checked here is whether the audio clips themselves have changed, or whether the user has swapped audio clips (in this case, changed gender). This simply pertains to comparing the previously rendered props with the current props:

componentDidUpdate (prevProps) {
  // if audio has changed, unload the current audio clip and trigger reset
  if (prevProps.audioMale !== this.props.audioMale) {
    // ...
  }

  // handle gender change
  if (prevProps.audioGender !== this.props.audioGender) {
    // ...
  }
}

The un-mounting stage also plays a key role, where any loaded audio will need to be unloaded, with their event listeners removed. This can be done within componentWillUnmount:

componentWillUnmount () {
  const { audioController } = this.state;

  // remove status update event listeners

  // reset (unload) audio clips
}

A handleReplaceAudio function acts in a similar way to componentWillUnmount, but also updates the component state so as to completely reset the audio status:

async handleReplaceAudio () {
  const { audioController } = this.state;

  // remove status update event listeners
  audioController.audioFemale.setOnPlaybackStatusUpdate(null);

  // unload the clips and reset audio state
  audioController.resetAudioClips();

  this.setState({
    audioFemaleReady: false,
    audioMaleReady: false,
    audioPlaying: false,
    autoPlayed: false,
  });
}

This function is called when new audio files have been fetched (and passed as component props), and therefore calls initAudioClips to reload them with their event listeners. It is within initAudioClips where audio status update logic is defined.

Initiating audio clips and their event listeners

If you refer to the implementation, notice that initAudioClips does two things:

  • Loads the audio files via audioController.loadClips().
  • Defines an event listener for each audio clip, the two containing almost identical logic.

Remember, initAudioClips is called as soon as the component mounts, and whenever the audio files change (which pertains to the component props changing). It is this function that triggers the audio requests and updates component state every time an audio status update happens.

Loading the audio clips is very straightforward, as we already defined the logic in our Controller class. The component must wait for this step to complete before defining the event listeners:

// await clips to load (from external endpoint)
await this.state.audioController.loadClips(authToken, uriFemale, uriMale);

Now for both audioController.audioFemale and audioController.audioMale, their event listeners are defined. Here is a snippet of the first one — notice that we have direct access to shouldPlay and isLoaded, directly supplied by Expo AV:

this.state.audioController.audioFemale.setOnPlaybackStatusUpdate(({ shouldPlay, isLoaded }) => {
  // handle any status updates
});

The logic within these event listeners is relatively straightforward to read, albeit with a little complexity. Each execution of this handler function will:

  • Update the audioPlaying state value if shouldPlay is true. React will ignore this step if shouldPlay already matches state.audioPlaying.
  • If isLoaded is true, then the corresponding audioFemaleReady or audioMaleReady state value will update to true. This allows the button to change from a loading state to a ready state.
  • If auto play is turned on (via some global state management), and the audio has not yet auto played, the audio starts playing via audioController.playAudio(), and another state update records that the auto play has happened (we don’t want continuous auto plays and re-render loops!).

Note that the whole auto play logic is wrapped in a setTimeout statement of 700 milliseconds. This is purely for user experience, so the button updates to the ready state, and then to the playing state, in a timely manner for the user to acknowledge.
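The handler logic above can be sketched as a plain function. This is a sketch only: setState, playClip and the autoPlay flag are stand-ins for the component’s state management, and a single generic audioReady flag stands in for the per-gender ready flags:

```javascript
// Sketch of the per-clip status update handler described above.
// `state` is a plain object mirroring the component state; `setState`
// merges updates into it; `playClip` triggers playback for this clip.
function makeStatusHandler (state, setState, playClip, autoPlayDelay = 700) {
  return ({ shouldPlay, isLoaded }) => {
    // mirror the clip's play state in component state
    if (state.audioPlaying !== shouldPlay) {
      setState({ audioPlaying: shouldPlay });
    }
    // flip the button from its loading state to its ready state
    if (isLoaded && !state.audioReady) {
      setState({ audioReady: true });
    }
    // auto play once only, after a short delay so the user sees the
    // ready state first (no continuous auto plays or re-render loops)
    if (isLoaded && state.autoPlay && !state.autoPlayed) {
      setState({ autoPlayed: true });
      setTimeout(() => playClip(), autoPlayDelay);
    }
  };
}
```

Marking autoPlayed before the timer fires is what guards against the handler scheduling a second auto play on the next status update.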

As stated earlier, this logic is defined for each audio clip, as they are both being processed concurrently.

Rendering the component

The remainder of the component deals with rendering the button itself. This is not a focus of this piece; the reader can rely on the React Native Button, or create their own UI.

Material UI contains some very useful Button components that support icons, labels and key functionalities out of the box. To integrate a Material UI button with your audio controllers, refer to my article: React: Theming with Material UI.

The Gist to follow contains the entire <PlayButton /> component implementation:

In Summary

This article has introduced the reader to Expo AV, the best React Native solution for audio, video and audio recording at this current time. The Expo AV API was introduced along with its key functions. After this we walked through a comprehensive solution for managing multiple audio clips, that splits logic between a Controller class and <PlayButton /> component.

This controller / component approach has worked best in my experimentation with the API. Having the expo-av methods wrapped in a simpler class definition makes managing the audio clips more streamlined on the component side. The component can focus on state updates, event listener logic, and managing the component lifecycle, whereas the controller can handle the audio requests and audio control, such as playing and stopping the audio.

I hope the reader can now apply or adapt these solutions in their own projects!

Expo AV documentation can be found on their Audio page and AV API Reference. The Controller class can be found here, and PlayButton component here.
