A few years back, I created portable-color for adding color to shell scripts. Then I deprecated it in favor of a new library, dye, that fixed a number of things that bothered me about portable-color.
dye eventually added its own built-in templating, which meant users could just print a string full of things like “{{red}}” to color text.
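To give a sense of what that looks like, here’s a rough sketch (the function name and tags are illustrative, not necessarily dye’s exact API):

```sh
# Illustrative only -- the function and tag names are hypothetical,
# not necessarily dye's real API. The template tags expand to
# terminal escape sequences before the string is printed:
dye_print "{{red}}error:{{reset}} something went wrong"

# ...roughly what you'd otherwise build by hand with tput:
printf '%s\n' "$(tput setaf 1)error:$(tput sgr0) something went wrong"
```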
And, unfortunately, as I realized a little while later, that templating also sprouted a security vulnerability.
Finding the vulnerability
dye, like portable-color before it, specifically targets the POSIX shell. By avoiding bashisms, it can work anywhere.
An early feature, adapted from portable-color, was the ability to wrap text in a control character that turned something on (like red text, or boldface), then turned it off. When those commands reached the point where they’d call tput, I had a problem: I needed to split a string like “setaf 1” into “setaf” and “1”, because tput expects them as two separate arguments.
I couldn’t find a way at the time to split them in the POSIX shell, so I settled for using the shell built-in eval. This takes a text string and processes it as if it were typed at a shell prompt. It did the job of splitting on spaces, and I figured I was in control of everything I sent to it.
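A minimal sketch of the pattern (my reconstruction, not dye’s exact code): quoted, the string is a single word to the shell, so tput sees one argument; eval re-parses the string and splits it, but treats it as shell code in the process.

```sh
cmd="setaf 1"

# Quoted, the whole string is a single argument, which tput rejects:
#   tput "$cmd"   # error: unknown terminfo capability "setaf 1"

# eval re-parses the string as shell input, splitting it into two
# words -- but it will just as happily execute anything else the
# string happens to contain:
eval "tput $cmd"
```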
Except then, a little while after I added template expressions, I realized that the two decisions together created a problem.
Code injection vulnerabilities
Code injection is a class of vulnerability where a program reads data but, while processing it, is tricked into executing it as code instead. All an attacker needs to do is control the data.
Many developers are familiar with the SQL variant of code injection (if not via experience, then via xkcd). These attacks target the kind of SQL use where user-supplied parameters are simply concatenated into executable SQL. A specially crafted bit of text, fed in from a user interface like a website, can end up giving the attacker the ability to search for hidden data, add their own—or even drop tables.
The vulnerability I found in dye was similar, and the exploit was simple: drop a specially named file on disk, and if a script printed that file’s name through dye—boom, the code embedded in the name was executed. Likely? Not terribly. Serious? Yes.
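To make the shape of the attack concrete, here’s a made-up illustration (not the actual proof of concept) of how a filename can smuggle code into an eval:

```sh
# Made-up illustration of the attack shape, not the real exploit.
# An attacker drops a file whose name contains a command substitution:
touch 'report-$(touch pwned).txt'

# Any script that interpolates that filename into a string which
# later reaches eval ends up running the embedded command:
for f in *.txt; do
    eval "printf '%s\n' $f"   # runs $(touch pwned) as a side effect
done
```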
I figured out how to fix the issue by excising eval (the fix landed in version 1.1.1). First I added a test that would fail as long as the vulnerability was present, and then I implemented the fix.
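For the curious, the usual eval-free way to split a string on whitespace in the POSIX shell is to let field splitting populate the positional parameters via set --. A sketch of the idea (my reconstruction; the actual fix may differ):

```sh
cmd="setaf 1"

# Field-split the unquoted expansion into the positional parameters.
# Unlike eval, this never re-parses the string as shell code, so
# nothing embedded in it can execute. (Run set -f first if glob
# characters could ever appear in the string.)
set -- $cmd
tput "$@"   # tput receives "setaf" and "1" as separate arguments
```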
But, instead of just releasing a fix, I thought I would take the opportunity to go through the whole security advisory process. Even if only a scant few people used dye, I wanted to know how the process worked from the maintainer’s end. (I have actually been involved in one other security advisory, in macOS, but that was as a reporter.)
Writing the advisory
GitHub actually makes it really easy to handle security advisories for your repositories. dye was already on GitHub in addition to Codeberg, so I decided to take advantage of this.
The first step was to head to “Security and quality” in the repository. From there, I could start a new draft security advisory. The supplied Markdown template helpfully structures the text of the advisory, making it easy even for a novice like me to think about what really needed to be in there.
“Impact” was where I gave one sentence describing what could happen, and one more noting that I had discovered and fixed the bug myself, and wasn’t aware of any exploitation. “Patches” was pretty straightforward: version 1.1.1, not yet released, would contain the fix.
The “Workarounds” section was really important to me. Maybe not so much for dye, but for other libraries, it’s not always possible to upgrade packages to secure versions. (Looking very side-eyed at you, JavaScript ecosystem.) How could someone in that position secure their software? I noted that if they avoided untrusted content, or just didn’t use template expressions, they’d be safe. The eval would still be there, but it would never see attacker-supplied data.
Finally, for the “References” section, I added a link to a blog post I’d write next—more or less a copy of the advisory, but with a “Discussion” section explaining in depth what caused the problem and how it was fixed. People can also read the diff, of course, but the discussion can be valuable. So I wrote that up too.
You can, now that it’s published, see the finished advisory. But I didn’t publish it right away.
Preparing for publication
I had the fix sitting locally on my computer. I had the drafted advisory. All I really had to do at this point was the release, but I was interested in taking it further.
CVE (Common Vulnerabilities and Exposures) records track publicly disclosed vulnerabilities in software. I had been associated with one other CVE record before, for the Apple issue I mentioned above.
In this case, GitHub could create a CVE record for the vulnerability I was fixing—I just needed to request it. They said it could take up to a couple of days; it took less than one. The vulnerability was now CVE-2026-35197—which, as I write this, is not yet published. GitHub holds the record back until I publish my own advisory.
Disclosure of a security vulnerability is a sensitive thing, and unsurprisingly has been debated quite a bit over the years. The right way in this case, I thought, was to do things in this order—and in quick succession:
- Release the fixed software. People can get the fix right away even if they don’t have information about it yet.
- Make sure the (still unlisted) blog post is publicly readable, to support the advisory.
- Publish the advisory on GitHub. dye is a plain shell library, outside any package ecosystem GitHub tracks, so this doesn’t apply to it; but if it were an NPM package or similar, publishing would kick off a bunch of other notifications from GitHub to spread the word about the security fix.
- List the blog post. Now folks who monitor my blog feed are made aware as well.
Time to figure things out
I was fortunate in this case that dye’s security vulnerability was not, as far as I knew, exploited. (I’m pretty sure very few people use it!) That gave me time to figure the process out.
Had I known of exploitation, I would probably have released the fix first, then worked through the rest: take the fast path to getting the information out next, and figure out the CVE as a last step.
Regardless, it was a very informative experience to go through—and I discovered that GitHub really makes it easy for maintainers. Ideally, I wouldn’t have created a vulnerability. But since I did, I did the next best thing.