Motivation#
Apple Music, in its fullscreen now-playing view, shows an animated, flowing gradient in the background that appears to sample colors from the current song’s album art.

I spent a lot of time trying to figure out how this could work before giving in and actually doing a bit of research. I ended up going down a multi-layered rabbit hole (as most of my projects do) to reproduce it myself.
This was a relatively short project: I initially started looking into it in the afternoon of October 20…

…and wrapped up in the evening of October 25. The vast majority of the work that made it into the final project was done in the last two days, which ought to give you a sense of how finicky these projects are. There are still things my reproduction doesn’t do that drive me crazy, but that’s for later.
Initial exploration#
This all started off with a message I sent to Gemini on a whim, after about five minutes of googling:
Any good analyses of how the full-screen background works in Apple Music?
i.e. https://www.reddit.com/r/AppleMusic/comments/k6434f/til_apple_music_on_macos_has_a_fullscreen_view/
Gemini managed to dig up this wonderful thread from developer/designer Sam Henri Gold, which was the key resource I used for practically the entire project. Notably, the effect does not work by sampling colors from the album art and blending them, as most gradients of this type do; here’s how Sam explains it:
Some background on this: the visualizer is one of those mesh/fluid gradients that designers have been swooning over for a couple of years now.
the way they’ve constructed theirs is by layering copies of the artwork and “twisting” each copy [and then layering a blur shader on top]. All done in a Metal shader.
the actual twist effect itself works something like this: it distorts coords within some radius around an offset by applying a rotation whose angle increases with distance to the offset. the rotation angle is modulated by a squared ratio of the distance to the radius.
This is quite smart; the key advantage is that all of the functionality of the gradient can be implemented as a series of fragment shaders. There are three key steps (a minimal sketch of the render-pass plumbing follows below):
- overlaying several moving copies of the album art
- applying the “twist” effect described above
- applying a Gaussian blur
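Since each step is just a fragment shader over the previous step’s output, the whole effect reduces to chaining render-to-texture passes. Here’s a minimal sketch of that building block in WebGL/TypeScript; this is my own illustration, not Apple’s code, and the u_texture uniform name and the assumption that a full-screen quad is already bound are mine:

// One building block of the copies → twist → blur pipeline: render a fragment
// shader pass into an offscreen texture that the next pass can sample from.
function createRenderTarget(gl: WebGLRenderingContext, width: number, height: number) {
  const texture = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

  const framebuffer = gl.createFramebuffer()!;
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return { texture, framebuffer };
}

// Draw one full-screen pass: sample `input`, run `program`, write to `output`
// (pass null to render to the screen). Assumes a full-screen quad is bound and
// that the program reads its input from a uniform called "u_texture".
function runPass(
  gl: WebGLRenderingContext,
  program: WebGLProgram,
  input: WebGLTexture,
  output: WebGLFramebuffer | null
) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, output);
  gl.useProgram(program);
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, input);
  gl.uniform1i(gl.getUniformLocation(program, "u_texture"), 0);
  gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
}

Layering the artwork copies fills the first texture; chaining runPass for the twist and blur shaders then produces the final frame, with the last pass writing directly to the canvas.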
At the time, though, I didn’t fully understand the idea (mainly because I didn’t even know how shaders worked), so I shot off a DM to Sam:

At the time of writing, Sam hasn’t replied. I was on my own.
Also note that there are a huge number of associated configurable constants — how many copies of the artwork, the size of each copy, the speed at which they move, the standard deviation of the Gaussian blur, the blur kernel size, the twist angle, the twist radius, the number of twist effects, and so on. I had no idea what any of these should be either.
I decided the best way to resolve both of the issues (ambiguity with how the effect worked, and choice of constants) was to reverse-engineer the Apple Music app myself. This wasn’t easy, considering I was basically trying to debug a proprietary app on a proprietary platform, both owned by Apple, which has of course built in many safeguards to prevent reverse engineering.
Gemini threads:
- finding the original tweet by Sam
- initial research into how I could do similar debugging
- trying to use FLEX in a simulator
- more questions about FLEX compatibility
Trying to reverse-engineer the desktop app#
I first looked at how Sam had done his research: he had used FLEX, which is basically the equivalent of DevTools for iOS. However, FLEX requires an iPhone, and the iPhone needs to be jailbroken. I unfortunately do not have an iPhone, let alone a jailbroken one. I thus turned to the Xcode debugger as a last resort. Xcode wasn’t the silver bullet it might seem to be: it was extremely slow, had safeguards built in to prevent debugging Apple’s own apps, and often froze in the middle of a session. But it was a start.
Xcode has two main ways to debug an external app: loading its executable, or attaching to a running process. With the former, Apple Music would generally detect immediately that it was running in a debug environment and crash itself:

so I mostly used the latter. This let me debug Apple Music for several minutes (and, of course, much longer if/when I paused it), although it did occasionally still crash after detecting the debugger. It was enough to start doing basic work, though.

My main goal here was to disable the blur shader used, so that I could get a better internal look at how the effect worked, similar to what Sam had done.
Gemini recommended I use the Metal capture system built into Xcode to debug the shaders. Unfortunately, the capture button was disabled, I assume because I was debugging a first-party app.

I reviewed Sam’s FLEX demo again and noticed he didn’t do anything Metal-specific — rather, he found the blur object in the heap and directly edited its sigma value. I tried to do something similar using the LLDB debugger, but inevitably ran into issues. I was able to find the object:

But LLDB would complain something along the lines of
$ expr ((MPSImageGaussianBlur *)$test).sigma = 0.00
error: <user expression 20>:1:39: no setter method 'setSigma:' for assignment to property
1 | ((MPSImageGaussianBlur *)$test).sigma = 0.00
Gemini suggested a lot of other ways to get around this, such as injecting a dynamic framework that swizzles the MPSImageGaussianBlur initializer to force sigma to zero, but none of them ended up panning out.
This part of the project alone lasted around three days. By the second day I was growing concerned that it wouldn’t work, so I decided to go another route.
Gemini threads:
- installing FLEX tweaks on a simulator
- identifying how Sam used FLEXList
- editing heap objects in LLDB
- trying to swizzle the app to make sigma always 0 in MPSGaussianBlur
- more fighting LLDB
Reimplementing from scratch#
At this point, I concluded that the fastest route to answer my questions would be to write the effect myself and then tune it until it matched the Apple Music one.
I initially decided to try using WGPU and WGSL since my graphics-nerd friends recommended it (and also because Rust = ⚡blazing fast⚡), but quickly burned out after it took about 200 lines of code to draw a triangle, and 350 for a simple image. The concepts of textures, fragment shaders, and vertex shaders weren’t super clear to me at the time, either, which probably added to the confusion.
I decided to try using WebGL instead on a whim, which led to the fumble of the century:

Yeah, I simply hadn’t realized that WebGL and WebGPU are entirely different things, each with its own shader language. After getting acclimated, though, I managed to port a basic program to TypeScript with WebGL. And it was so much simpler 😅

I used this tutorial to figure out how to render images and also as a good source of truth for how WebGL works in general. I jumped in headfirst without reading any prior parts of the tutorial, which might have been a mistake, but oh well.
The development itself was relatively boring, but after a day or two of coding I ended up with this TypeScript code which implemented all of the needed effects. It looked like this (again, using the Breach album art for the demo):

It was relatively good, but not yet perfect:
- I didn’t know how to properly handle moving the image copies. My logic at the time just “respawned” images when they went out of frame by moving them back to a random position and sending them to the very back of the stack (roughly the sketch after this list). This fell apart when there were too few copies, or when multiple copies went out of frame at once, leading to visible “jumps” between frames.
- The Apple Music background (see the top of this article) seemed way more saturated, which confused me, since I wasn’t doing anything in particular to desaturate the colors.
- Tuning the constants was just very hard! I had no idea how the copies were positioned or sized, how quickly they moved, how many twist effects there were, the radius/theta of each effect, and so on.
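For reference, the respawn logic was roughly the following (a simplified reconstruction, not the exact code from the repo; the Copy shape here is made up for illustration):

// Simplified reconstruction of my original respawn logic. Each copy drifts
// across the viewport; once it leaves the frame it is teleported to a random
// position and pushed to the back of the draw order, which causes visible
// jumps when several copies respawn in the same frame.
interface Copy {
  x: number;
  y: number;
  vx: number;
  vy: number;
  size: number;
}

function update(copies: Copy[], dt: number, viewport: { w: number; h: number }) {
  for (const copy of [...copies]) {
    copy.x += copy.vx * dt;
    copy.y += copy.vy * dt;

    const outOfFrame =
      copy.x + copy.size < 0 || copy.x > viewport.w ||
      copy.y + copy.size < 0 || copy.y > viewport.h;

    if (outOfFrame) {
      // "Respawn": random position, moved to the very back of the stack
      copy.x = Math.random() * viewport.w;
      copy.y = Math.random() * viewport.h;
      copies.splice(copies.indexOf(copy), 1);
      copies.unshift(copy);
    }
  }
}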
Gemini threads:
- learning about Gaussian blurs & their optimizations
- looking into WGPU
- finding tutorials for loading images into WebGL
- understanding WebGL behavior
- figuring out how to implement multi-pass rendering
- understanding buffers
- writing a quick TypeScript function to generate a Gaussian kernel
- learning that GLSL ES 1.00 has mid array support
- learning more about vertex shaders
Reverse engineering the web app#
It was at this point that I realized the obvious: the Apple Music web app has a very similar view. It’s not exactly the same as the macOS app’s background (which I’ll talk a bit more about later), but it was close enough to continue with.
I initially searched for general terms I might expect to find in a shader — varying, uniform, etc. I ended up at a particular bundle file (I don’t remember the exact name) which contained code that clearly manipulated WebGL contexts and created shaders. I copied it into Zed and began to reverse-engineer it.
This wasn’t trivial, as the file itself was over 12,400 lines of untyped, minified JavaScript. The minification meant that no meaningful variable or class names remained, making it very hard to understand what was going on; unfortunately, Apple also remembered not to ship source maps, unlike in the recent incident with the App Store web app.
After manually hunting around a bit I compiled a list of shaders. There were a few common ones like one to apply a transformation matrix to a texture, and a few specialized ones. Notably, I found the shader which implemented the infamous twist effect!
vec2 twist(vec2 coord)
{
    coord -= offset;
    float dist = length(coord);
    if (dist < radius)
    {
        float ratioDist = (radius - dist) / radius;
        float angleMod = ratioDist * ratioDist * angle;
        float s = sin(angleMod);
        float c = cos(angleMod);
        coord = vec2(coord.x * c - coord.y * s, coord.x * s + coord.y * c);
    }
    coord += offset;
    return coord;
}
There were a few shaders which confused me, like one that ostensibly did a box blur — despite my knowledge that Apple Music on Mac used Gaussian blurs — although I later realized that shader was actually used to implement a Kawase blur (which approximates a Gaussian blur).
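For the curious, a Kawase blur approximates a Gaussian by running several cheap passes, each averaging four diagonal taps at a gradually growing offset. Here’s a minimal sketch of one such pass; it’s my own illustration, not the shader from Apple’s bundle, and the uniform names are mine:

// One Kawase blur pass: average four diagonal samples at an offset that grows
// with each successive pass. A handful of these passes approximates a Gaussian
// blur much more cheaply than a single large Gaussian kernel.
const kawasePassSource = /* glsl */ `
  precision mediump float;
  varying vec2 vUv;
  uniform sampler2D uTexture;
  uniform vec2 uTexelSize; // 1.0 / resolution
  uniform float uOffset;   // grows each pass: 0.0, 1.0, 2.0, ...

  void main() {
    vec2 o = (uOffset + 0.5) * uTexelSize;
    vec4 color = texture2D(uTexture, vUv + vec2( o.x,  o.y));
    color += texture2D(uTexture, vUv + vec2(-o.x,  o.y));
    color += texture2D(uTexture, vUv + vec2( o.x, -o.y));
    color += texture2D(uTexture, vUv + vec2(-o.x, -o.y));
    gl_FragColor = color * 0.25;
  }
`;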
It was at this point that I actually bothered to scroll to the bottom of the file and read the last line:
export {zh as LyricsScene};
This was very important, as it clarified that the zh class, which was defined immediately above, was the only export of the file, and thus probably the entry point for the actual functionality.
I looked into the constructor for zh:
constructor(t, s) {
    let {height: i, width: r} = t.getBoundingClientRect();
    this.app = new Ti({
        width: r,
        height: i,
        view: t,
        powerPreference: "low-power",
        backgroundAlpha: 0
    });
    const n = new Me;
    n.beginFill(16777215),
    n.drawRect(0, 0, this.app.renderer.width, this.app.renderer.height),
    n.endFill(),
    this.app.stage.addChild(n),
    this.reduceMotionQuery = window.matchMedia("(prefers-reduced-motion: reduce)"),
    this.app.ticker.maxFPS = 15,
    this.initAnimation(),
    this.updateArtwork(s)
}
The key thing to note here is that t is clearly used to provide some “view”; I managed to correctly guess that t could be a canvas element. Similarly, the definition of updateArtwork makes it clear that s can be a URL to an image. This was enough for me to set up a basic website to test the functionality. The following code:
import { LyricsScene } from "./bundle";
const canvas = document.getElementById("canvas");
const scene = new LyricsScene(canvas, "http://localhost:8000/breach.jpg");
worked perfectly!

The animation itself, however, was still a black box to me. I could make very broad changes by enabling or disabling full shaders, but otherwise couldn’t change the animation at all.
After a lot more poking around, I eventually realized that the vast, vast majority of the file was actually just a minified embedded version of Pixi.js! Pixi is a “wrapper” of sorts around WebGL and similar standards which provides a game engine-like API to access them. After checking the pixi.min.js for a few different versions, I matched it to Pixi 7.4.2. After this, the decompilation itself was pretty simple; I just needed to figure out which parts of the code were part of Pixi, remove them, and change references to use the relevant Pixi imports instead.
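To give a flavor of what that rewrite looked like, here’s the constructor from earlier re-expressed with real Pixi 7 imports. The mapping of minified names (Ti to Application, Me to Graphics) is my inference from the call shapes, not something stated in the bundle:

// My guess at how the minified constructor maps onto real Pixi 7 APIs
// (Ti ≈ Application, Me ≈ Graphics; inferred from the options and methods used).
import { Application, Graphics } from "pixi.js";

const canvas = document.getElementById("canvas") as HTMLCanvasElement;
const { width, height } = canvas.getBoundingClientRect();

const app = new Application({
  width,
  height,
  view: canvas,
  powerPreference: "low-power",
  backgroundAlpha: 0,
});

const background = new Graphics();
background.beginFill(0xffffff); // 16777215 in the minified code
background.drawRect(0, 0, app.renderer.width, app.renderer.height);
background.endFill();
app.stage.addChild(background);
app.ticker.maxFPS = 15;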
This took quite a bit of work, basically the entirety of Saturday, but I eventually managed to finish decompiling the code and get a TypeScript reconstruction working! You can see the TypeScript code here. The code itself is less than 100 lines, meaning that the other 12,300 lines of the file were just Pixi.js (and the adjacent pixi-filters library which Apple used to get the twist, blur, and saturation effects).
I could finally get a good look at the inner workings of the animation. It was quite interesting:
- Apple Music oversaturates the album art to make the gradient more appealing, which I had suspected but was still cool to see confirmed
- the overlaying of the album art is done by stacking four square copies of it, sized at 25%, 50%, 80%, and 125% of the viewport width. The two biggest copies only spin in place, while the two smaller ones spin while also moving along circular tracks (sketched below)
This likely isn’t how the native animations work (more on this later) but is good enough to get a mesmerizing animation for the web version.
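Here’s a rough Pixi sketch of that layout. It’s my own illustration, not the decompiled code; the rotation speeds, track radius, and saturation amount are placeholders, not Apple’s values:

// Four square copies of the artwork at 25%, 50%, 80%, and 125% of the viewport
// width. The two largest only spin in place; the two smallest also orbit along
// circular tracks. All numbers here are arbitrary placeholders.
import { Application, Sprite, ColorMatrixFilter } from "pixi.js";

const app = new Application({ width: 800, height: 800, backgroundAlpha: 0 });
document.body.appendChild(app.view as HTMLCanvasElement);

const scales = [1.25, 0.8, 0.5, 0.25]; // back to front
const sprites = scales.map((scale) => {
  const sprite = Sprite.from("breach.jpg");
  sprite.anchor.set(0.5);
  sprite.width = sprite.height = app.screen.width * scale;
  sprite.position.set(app.screen.width / 2, app.screen.height / 2);
  app.stage.addChild(sprite);
  return sprite;
});

// Apple also oversaturates the artwork; core Pixi's ColorMatrixFilter can
// approximate that (the twist and blur filters would stack on top of this).
const saturate = new ColorMatrixFilter();
saturate.saturate(1.5);
app.stage.filters = [saturate];

let t = 0;
app.ticker.add((delta) => {
  t += delta / 60;
  sprites.forEach((sprite, i) => {
    sprite.rotation += 0.002 * (i + 1) * delta; // every copy spins in place
    if (i >= 2) {
      // the two smallest copies also travel along circular tracks
      const track = app.screen.width * 0.2;
      sprite.x = app.screen.width / 2 + Math.cos(t + i) * track;
      sprite.y = app.screen.height / 2 + Math.sin(t + i) * track;
    }
  });
});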
I made a few more tweaks and have put up a demo here:
The demo was actually built with Solid, making this my first ever Solid project. It was quite an interesting development experience: I ran into a few weird issues with stores that led me to just use regular signals, which I think is a result of me not yet having a full intuition for Solid’s mental model, but it was overall very nice! Getting Astro to play nicely in a project with both React and Solid was not fun, but I managed to get it to work.
Gemini chats:
- understanding the twist effect
- understanding the saturation constants
- understanding the saturation shader
- understanding the blur shader
- understanding the blur shader (again)
- understanding the transformation shader
- asking about how the blur was implemented
- asking about how the LyricsScene interface worked
- asking about parts of the bundle
- help with Solid
Conclusion#
This was a very fun exploration! I learned a lot about reverse engineering and shaders/WebGL, so I think it was definitely worth the week of work.
It still doesn’t perfectly match the native Mac view, which drives me a little crazy. Take this example, where some parts of the background are clearly sharper (less blurred) than others:

A chat with Gemini seems to suggest there could be some kind of edge-preserving blur at play here, but I’m not sure.
A few future directions I could take this are actually learning WGPU (despite its verboseness…) and porting the visualization to run natively, perhaps converting the shaders via wgpu’s naga utility, or porting it to pure TypeScript through the insanely cool TypeGPU. GPU programming is definitely quite interesting and I think I’ll explore it more in the future, perhaps implementing more complex programs such as ray tracing.
As always, thanks for reading!