Under the Sun
Meaning, repetition, and the cost of concluding too soon
I found Under the Sun the way all of us find most music today: buried in some playlist, not really paying attention, just another track released on Warp, another name I already associated with a certain kind of electronic minimalism.
The track stuck around, though. Not dramatically. I didn’t look it up or try to figure out what it meant. It just kept showing up in my rotation, familiar enough to recognize but vague enough that I never felt like I needed to get it.
What stayed with me was the loop, which carries almost all of the track’s tone. A female voice singing “Under the sun there is a remedy,” over and over, almost testing the phrase rather than stating it. Sometimes, though not always, it resolves with the line “Or there is none.”
The voice doesn’t sound exactly “modern,” but it doesn’t sound dated either. Through repetition, you stop caring who’s singing; authorship becomes secondary, and the line starts to behave like something halfway between a lyric and a proverb, closer in spirit to minimalist composition than to narrative songwriting.
Last year, late one night at a bar in Guadalajara (the small Spanish provincial city, not the much larger Mexican one), I did what I usually do when I’m messing around with language models: I typed the line into ChatGPT, with no mention of music or anything else.
ChatGPT immediately framed it as some kind of classical saying, with Biblical echoes, Stoic-like philosophy, and Ecclesiastes vibes. It didn’t claim a source exactly, but it did something worse: it placed the line in this prestigious cultural space and treated it like timeless wisdom. It even offered to improve the phrasing, as if I’d written it myself and needed help polishing it.
Nothing it said was wrong, but the whole framing felt off.
The line got pulled out of its actual path and dropped into this canonical slot that sounded important but contextually made no sense. Not a hallucination in the usual way. It didn’t invent an author or fake a quote. It defaulted to pattern-matching, treating ambiguity as permission to elevate meaning instead of suspending judgment.
A couple of days later, while I was scrolling YouTube comments under the track, someone mentioned that the sampled line actually comes from an old Julie Andrews children’s record, looped until the original context almost disappears.
That didn’t “solve” anything (that was never my intention), but it shifted how I heard it. What sounded “eternal” turned out to be historical, and what felt abstract had a real source.
The meaning didn’t shrink at all; it just got more interesting. The voice sampled by Pritchard belongs to Andrews, singing on “School Days and Learning Songs” from the album “Songs of Sense and Nonsense – Tell It Again” with Martyn Green. But the rhyme itself dates back to the eighteenth century, appearing in “Nursery Rhymes and Traditional Poems,” published in London in 1765 by John Newbery.
This is the point. For me, humanizing machine language starts here, in moments like this one, when a system should hesitate instead of concluding, and leave space for context to surface later.
This whole sequence (hearing a loop, asking a model, finding the origin in a YouTube comment) ended up in my GitHub repo. Not because it’s spectacular, but because it’s typical. It shows how these models collapse meaning into static frames instead of leaving room for context to build over time.
Humans don’t work like that. We hear things out of order: fragments before sources, loops before explanations. Meaning piles up slowly, sometimes through detours no model would bother tracking.
That’s what I mean when I talk about humanizing machine language. Not making it “sound nice,” and not branding bullshit. Just knowing when not to conclude. When not to treat ambiguity like authority. When to leave things open long enough that context can actually surface.
The line didn’t need refining, it just needed its space... and that’s why this Substack exists.
