Long-term concerns about programming as a business

Being in the programming and technology sector comes with some unique concerns from the business angle.

Being able to copy what we build infinitely and for free is an entirely new condition, one we haven't yet figured out how to think about or work within. There are copyright concerns here, but those are already extensively covered across the internet, so I'll focus on how this affects the business of software development in the long term. Foremost, I see issues with how easily our sector can lose skills and lose understanding of the systems we build on. I also have concerns about how we hold ourselves accountable for bugs of various kinds - and, in particular, how society may begin to hold our profession accountable for them.

Loss of skills in the programming workforce

The most talked-about difference of working in computing is that, in this domain, copying is free. The obvious upside is that once a programmer has done a job, it should never need to be done again - we can largely reuse the old work. But that same fact has serious downsides which get less consideration. The things we build are now more often carried forward than reimplemented wholesale. As individuals, we see this when we build prototypes quickly and end up shipping them without properly thinking them through. As teams, we find ourselves facing systems that have run for years without being reconsidered, even as programming paradigms, technology standards, and our teams' skill makeup change.

These two concerns are relatively easy to see (and to see solutions to) compared with the way we experience this as a society. Because we can largely reuse earlier work, down to the lowest-level implementations of device drivers and networking stacks, it's easy for the computing sector to lose skills. In general, as technology progresses, we as a species forget skills once we stop needing them. In computing, we can forget skills before we stop needing them. We can keep depending on artifacts produced in the earliest days of computing long after the people who built them - and who know how to troubleshoot them - have retired. As an example, look at NASA's call for someone who knows COBOL, ALGOL, and FORTRAN. They still have systems running that were built with those technologies (namely, Voyager 1 and Voyager 2) and only one engineer who knew how they worked, and he wanted to retire. NASA, and the programming workforce in general, had lost these skills without noticing, because there had been no pressure to keep them up.

As computers get better, there's less pressure to understand how they work. When I was growing up, my computers didn't just work. They needed constant troubleshooting, and because of that, some knowledge of how the systems were put together. My youngest siblings have not had that experience - sometimes their phones are slow or overheat, but a reboot generally does the trick. Their computers just work. I think I was right on the cusp of this change, and I expect the generation before me looks at me and thinks the same thing: their computers were even harder to get working, and even simpler. Technical troubleshooting knowledge once considered ubiquitous can be - and is being - lost wholesale, because there's no pressure to keep it around.

The cost of bugs

Software isn't generally delivered and forgotten. Often, bugfixes are included for free; at the very least, security holes are patched for free. Delivering a piece of software with significant bugs therefore means unpaid future work for the company that shipped it. Compare this to delivering a physical product. Products don't generally come with fixes for flaws included (unless they're warrantied, and even then the warranty usually covers only manufacturing defects that cause a loss of functionality). It would be irrational to expect otherwise - repairing each physical unit has a real cost. Distributing a software bugfix, by contrast, has essentially zero marginal cost: once the fix exists, shipping it to one more customer is free. So we include bugfixes in our software contracts, and customers have come to expect them. It's therefore important to deliver software with minimal bugs and security issues the first time, to reduce the cost of long-term support.
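To make the marginal-cost asymmetry concrete, here's a toy calculation. Every number in it - the per-unit repair cost, the cost of developing a patch, the size of the customer base - is an illustrative assumption, not data from any real product:

```python
# Toy model: total cost of fixing one defect across a customer base.
# All figures below are illustrative assumptions, not real data.

def physical_recall_cost(customers: int, per_unit_repair: int) -> int:
    """Each physical unit must be repaired individually, so the
    total cost scales linearly with the number of customers."""
    return customers * per_unit_repair

def software_patch_cost(customers: int, fix_development: int) -> int:
    """The fix is developed once; copying it to each additional
    customer is essentially free (zero marginal cost)."""
    marginal_cost_per_customer = 0
    return fix_development + customers * marginal_cost_per_customer

customers = 100_000
print(physical_recall_cost(customers, per_unit_repair=40))      # 4000000
print(software_patch_cost(customers, fix_development=20_000))   # 20000
```

With these made-up numbers, the physical recall costs 200 times more than the software patch, and the gap only widens as the customer base grows - which is why "free bugfixes" is a rational promise for software vendors but not for manufacturers.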

This is especially true given the changing social attitudes towards security issues. The Equifax breach in 2017 resulted in the disclosure of the personal credit information of roughly 143 million Americans. As a result, there was (and is) significant pressure to hold Equifax accountable for failing to secure that information. While we have yet to see whether anything will come of this, it's historically interesting: we're beginning to work out how we as a society view security flaws in software. If the various class-action lawsuits against Equifax pan out and we hold a company accountable for operational negligence, we're one (large) step closer to holding the programmers of security-deficient software accountable for negligence. That might seem far off, but other engineering professions can already be held accountable for negligence; it's hard to compare the damage a security breach causes to a bridge collapse, but the sheer number of people affected by the Equifax breach suggests we'll have to try.
I don't know whether anything will come of these concerns. I believe rising salaries for people with so-called "forgotten" knowledge will drive more people to learn the necessary skills, esoteric though they may be. I do think the concern about software engineers being held accountable for significant security issues is legitimate. Computer science is still on its way to becoming a respected engineering discipline. A certification in developing secure software may become common, or a professional engineering license may be required to sign off on security-conscious projects. We might see a mandated standard, something like MISRA C, for web applications handling sensitive data - or just for some important libraries, like OpenSSL. For now, though, these are just hypothetical musings I've had recently. Let me know what you think.