Don’t trust open-source software. It’s inherently insecure.
The opinions stated here are my own, not those of my company.
I had been planning to write this article in response to the University of Minnesota debacle, but it feels even more apt now that the log4j vulnerability is being actively exploited.
A lot of pieces have been written online about how open-source software is secure, usually in comparison to closed-source. Don’t take their word for it: their rationales are too idealistic. That misplaced trust is a serious problem for modern software, which depends on a large stack of open-source libraries.
“Now, Nick,” you’ll say, “surely you aren’t saying that closed-source software is more secure.”
No, I’m not. My point is that open-source software is assumed to be secure for a few common reasons, and each of those reasons is easily refuted.
More Eyes, More Secure
The argument goes: everyone can see the source code, so everyone knows exactly how it works. This means that everyone can verify for themselves how secure it is and offer patches to make it more secure.
But in the real world, nobody does this. If I’m looking on NPM for a terminal styling library, I’m going to use chalk. I just install it and use it. Have you ever picked a library and then sat down to read all of its code? And then read all the code of its dependencies, ad nauseam?
No. Nobody does that. It would take too long. Everyone assumes the code is secure because someone else must surely be checking. This is the free-rider problem: our vast repository of freely available code gets used, but rarely analyzed.
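To make that concrete, here is roughly the entire extent of the “review” most of us do before shipping a new dependency. This is a minimal sketch, assuming chalk’s standard default export; the far larger pile of code that lands in node_modules never gets opened at all.

```typescript
// After a quick `npm install chalk`, this is the only interaction
// most developers ever have with the library.
import chalk from "chalk";

// We call a friendly, well-documented API...
console.log(chalk.green("✔ build succeeded"));
console.log(chalk.bold.red("✖ 3 tests failed"));

// ...and never open node_modules to see what chalk, or anything it
// depends on, actually executes inside our process.
```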
Many vulnerabilities only get fixed after they’re publicly disclosed. Closed-source software is not immune to this either. Yet there seems to be a false perception that vulnerabilities in critical software are less likely simply because the code is open.
Unfortunately, despite everybody having ample opportunity to view and audit this code for themselves, it doesn’t happen often enough. Researchers from the University of Minnesota pushed “bad faith” patches to the Linux kernel, software indispensable to computing today. Those patches were accepted without real scrutiny and only reverted two months after the paper was published.
Did these researchers act appropriately? Absolutely not. But they exposed the fact that nobody is checking. There is nobody behind the curtain. Had their intent actually been malicious, the damage could have been serious.
Open-source is not designed for security
It sucks to be an open-source maintainer. Maintainers are overworked and underpaid; if they drop the ball, it could cost companies millions or billions of dollars, and yet so many of them are volunteers.
Why do these developers participate in a system where they do the work of a senior engineer without receiving any compensation? The whole arrangement is absurd, a holdover from the idealism of the early Internet age.
While workers in other industries go on strike for better wages and working conditions, open-source maintainers don’t. That’s not to say they’re fine with the status quo, but they seem unable or unwilling to fix the system they’re actively participating in. Perhaps it’s unfair to expect that of engineers, but there is ample opportunity here for a business-minded person to build a company.
Instead, our thoughts turn towards sponsorships:
rgoers is a smart engineer who works on log4j… in his spare time. As of today, he has 16 sponsors on GitHub making a monthly payment to fund continued development. Seriously? This is how we’re going to treat critical infrastructure? It’s like asking Domino’s to fix potholes.
Sixteen sponsors, each paying maybe $10 a month in exchange for “a lot of love and gratitude”. That works out to not even $2,000 a year (16 × $10 × 12 ≈ $1,920), which is embarrassingly low. Even a full-time job at the federal minimum wage would pay a paltry $15,000 a year.
I’m grateful to these volunteers, because they could easily have ignored the problem without any personal consequences. Last year, Lodash, a popular NPM package, had a vulnerability that went unaddressed for a long time. As the maintainer put it: “I’m not letting some report ruin my week or plans.”
And that’s perfectly understandable. When maintainers are paid nothing, or only a handful of donations, it’s hard to incentivize a prompt fix. Sure, we can hope that volunteers do it out of the goodness of their hearts, but that is no way to run a business or build software.
Developers are vulnerable
Developers themselves are surprisingly vulnerable, and not just to library exploits: the entire development ecosystem asks us to install random software from the Internet, often as superuser. That is a massive opportunity for attackers.
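To illustrate how much access that grants: npm will run a package’s lifecycle scripts (such as postinstall) at install time, with whatever privileges you ran `npm install` under. The script below is purely hypothetical, not from any real package; it just sketches what any dependency, or any of its transitive dependencies, could do the moment it lands on your machine.

```typescript
// Hypothetical postinstall script. Nothing here exfiltrates anything;
// it only demonstrates the access an install-time hook is handed.
import os from "node:os";
import fs from "node:fs";

// Full read access to the developer's home directory...
const home = os.homedir();
const dotfiles = fs.readdirSync(home).filter((name) => name.startsWith("."));
console.log(`Could read ${dotfiles.length} dotfiles in ${home}`);

// ...and to every environment variable, which on CI machines often
// includes publish tokens and cloud credentials.
console.log(`Could read ${Object.keys(process.env).length} env vars`);
```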
event-stream was a widely used library whose burnt-out maintainer handed it over to a stranger. The new owner shipped a malicious update that, once installed, tried to steal the cryptocurrency wallet on the device. It took about two months for this to be discovered and resolved.
Conclusion time
From Heartbleed to Log4Shell, we have seen a number of critical bugs in software packages that are maintained by far too few volunteers. Employees have to scramble to patch their systems, while these bugs may be exploited for months and years to come.
How can we actually fix the underlying issues?
- We need to professionalize open-source software. Side hustles are fine, but once a project reaches critical mass we need a way to ensure continued maintenance.
- Maintenance includes security. Experts need to be paid to professionally audit this code regularly.
- Our ability to patch software is inadequate because of the way dependencies are compiled and bundled into applications today. Decoupling untrusted software so it can be updated easily is difficult but important.
- We need to stop assuming that others are ensuring this software is secure. No software is secure, but some are more secure than others.
It may seem silly to pay millions of dollars to fund professional engineering on libraries that are available for free, but nobody is laughing when a vulnerability costs the company tens of millions.
Projects need to figure out how to take payments from large corporations, because there is no such thing as a corporate Patreon account. Handling purchase orders and dealing with business interests is not fun, but it’s necessary if we want to be serious about an open-source ecosystem.
Patching software is a job that will never be finished, but we can get a lot smarter about it. As with many other things in life, a stitch in time may save nine.