How to Create an Audio Waveform Visualization using JavaScript

If you’re building an audio player for your website or application, you may want to include a waveform visualization so your users can see the shape of the sound as it plays. Fortunately, this is straightforward to do using the HTML5 Canvas element and the Web Audio API.

In this tutorial, we’ll walk through how to create an audio waveform visualization using HTML5 Canvas and the Web Audio API.

Prerequisites

Before we get started, you should have a basic understanding of HTML, CSS, and JavaScript. You’ll also need to have an audio file that you want to visualize.

Step 1: Set Up Your HTML

First, we’ll create the HTML structure for our audio player and waveform visualization. Here’s an example HTML structure:

<div class="audio-player">
  <audio src="audio-file.mp3" controls></audio>
  <canvas id="waveform"></canvas>
</div>

This creates a div with a class of “audio-player” that contains an audio element and a canvas element with an ID of “waveform”.
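
Note that a canvas’s drawing buffer has a default size of 300×150 pixels, independent of any CSS sizing. We’ll size the buffer from JavaScript in Step 3, but if you prefer, you could instead set it directly in the markup with the width and height attributes, for example:

<div class="audio-player">
  <audio src="audio-file.mp3" controls></audio>
  <canvas id="waveform" width="500" height="80"></canvas>
</div>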

Step 2: Style Your Audio Player

Next, we’ll add some CSS to style our audio player and waveform visualization. Here’s some example CSS:

.audio-player {
    display: flex;
    flex-direction: column;
    align-items: center;
}
.audio-player canvas {
    height: 80px;
}
.audio-player canvas,
.audio-player audio {
    width: 500px;
}

Step 3: Set Up Your JavaScript

Now, we’ll add the JavaScript that draws the audio waveform using the HTML5 Canvas element and the Web Audio API. Here’s the example JavaScript:

const audio = document.querySelector("audio");
const canvas = document.querySelector("#waveform");
const context = canvas.getContext("2d");

// The canvas drawing buffer defaults to 300x150 regardless of the CSS size,
// so resize it to match the element's displayed size before drawing.
canvas.width = canvas.clientWidth;
canvas.height = canvas.clientHeight;

const width = canvas.width;
const height = canvas.height;

// Route the audio element through the Web Audio API so we can analyse it.
const audioCtx = new AudioContext();
const source = audioCtx.createMediaElementSource(audio);
const analyser = audioCtx.createAnalyser();

// Connect: audio element -> analyser -> speakers.
source.connect(analyser);
analyser.connect(audioCtx.destination);

// fftSize is the number of samples per frame; frequencyBinCount is half of it.
analyser.fftSize = 2048;
const bufferLength = analyser.frequencyBinCount;
const dataArray = new Uint8Array(bufferLength);

context.clearRect(0, 0, width, height);
context.fillStyle = "#fff";
context.fillRect(0, 0, width, height);

let animationFrameID;

function draw() {
    // Store the ID so the loop can be cancelled when playback stops.
    animationFrameID = requestAnimationFrame(draw);

    analyser.getByteTimeDomainData(dataArray);

    context.clearRect(0, 0, width, height);
    context.lineWidth = 2;
    context.strokeStyle = "#ff6600";
    context.beginPath();

    const sliceWidth = width * 1.0 / bufferLength;
    let x = 0;

    for (let i = 0; i < bufferLength; i++) {
        // Byte values run 0-255 with 128 meaning silence, so v is roughly
        // in [0, 2] and y spans the full canvas height.
        const v = dataArray[i] / 128.0;
        const y = v * height / 2;

        if (i === 0) {
            context.moveTo(x, y);
        } else {
            context.lineTo(x, y);
        }

        x += sliceWidth;
    }

    context.stroke();
}

audio.addEventListener('play', () => {
    // The AudioContext may start "suspended" until a user gesture, so resume it.
    if (audioCtx.state === 'suspended') {
        audioCtx.resume();
    }
    draw();
});

// addEventListener takes one event type, so register "pause" and "ended" separately.
['pause', 'ended'].forEach((eventName) => {
    audio.addEventListener(eventName, () => {
        cancelAnimationFrame(animationFrameID);
    });
});

Let’s go through this code step by step:

  • We first get references to our audio and canvas elements using document.querySelector(), along with the canvas’s 2D drawing context via canvas.getContext("2d").
  • We then size the canvas’s drawing buffer to match its displayed size, store the width and height, and create an AudioContext along with a MediaElementAudioSourceNode from our audio element (see the note after this list for keeping the canvas crisp on high-DPI screens).
  • We then create an AnalyserNode and connect it to our audio source and audio destination using source.connect(analyser) and analyser.connect(audioCtx.destination).
  • We set the fftSize property of our AnalyserNode to 2048 and create a Uint8Array to hold the waveform data.
  • We then clear our canvas element and fill it with a white background using context.clearRect() and context.fillRect().
  • Next, we define a draw() function that is called using requestAnimationFrame() to continuously redraw our waveform visualization.
  • Inside the draw() function, we use the getByteTimeDomainData() method of our AnalyserNode to get the waveform data and clear our canvas element using context.clearRect().
  • We then set the line width and color of our waveform visualization using context.lineWidth and context.strokeStyle, and begin drawing our waveform visualization using context.beginPath().
  • We calculate the sliceWidth based on the width of our canvas element and the number of data points in our dataArray, and iterate over each data point in dataArray.
  • For each data point, we calculate the y-coordinate using const y = v * height / 2, where v is the byte value divided by 128. Since the values run from 0 to 255 with 128 representing silence, v falls roughly in the range [0, 2], so y covers the full canvas height; a silent sample (128), for example, gives v = 1 and lands on the midline at height / 2.
  • We use context.moveTo() to position the first point and context.lineTo() for each subsequent point, tracing a line segment from every data point to the next.
  • Finally, we call context.stroke() to draw the waveform visualization on our canvas element.
  • Finally, we add event listeners to the audio element: on play we resume the AudioContext if it is suspended and start the draw loop, and on pause or ended we call cancelAnimationFrame() to stop it.
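
One optional refinement, mentioned in the list above: on high-DPI (“Retina”) screens the waveform can look blurry, because we sized the drawing buffer in CSS pixels. Here’s a minimal sketch of one way to handle this by scaling the buffer with window.devicePixelRatio; it would replace the canvas-sizing lines near the top of the script in Step 3:

const canvas = document.querySelector("#waveform");
const context = canvas.getContext("2d");
const dpr = window.devicePixelRatio || 1;

// Size the drawing buffer in device pixels, but keep drawing in CSS pixels.
canvas.width = canvas.clientWidth * dpr;
canvas.height = canvas.clientHeight * dpr;
context.scale(dpr, dpr);

// Use the CSS size as the logical width and height for all drawing code.
const width = canvas.clientWidth;
const height = canvas.clientHeight;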

Step 4: Test Your Audio Waveform Visualization

That’s it! You should now have a working audio waveform visualization on your website or application. Try playing your audio file and watch the waveform visualization update in real-time.

Conclusion

In this tutorial, we learned how to create an audio waveform visualization using the HTML5 Canvas element and the Web Audio API. By using the Web Audio API’s AnalyserNode, we were able to extract the waveform data from our audio file and use the Canvas element to draw a waveform visualization in real-time.

While this tutorial only scratches the surface of what’s possible with the Web Audio API, it should give you a good starting point for building more complex audio applications in the future.
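
For example, the same AnalyserNode can report frequency data as well as time-domain samples. Here’s a rough sketch of what a bar-style frequency visualization might look like, reusing the analyser, context, and sizing variables from Step 3 (the orange colour and per-bin bar width are just arbitrary choices):

function drawBars() {
    requestAnimationFrame(drawBars);

    // Frequency magnitudes (0-255), one value per frequency bin.
    analyser.getByteFrequencyData(dataArray);

    context.clearRect(0, 0, width, height);
    context.fillStyle = "#ff6600";

    const barWidth = width / bufferLength;
    for (let i = 0; i < bufferLength; i++) {
        const barHeight = (dataArray[i] / 255) * height;
        context.fillRect(i * barWidth, height - barHeight, barWidth, barHeight);
    }
}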