AI can’t make software engineers obsolete even if it writes the best code

Nick Felker
7 min read · Jan 24, 2023


I am a software engineer. I went to work this morning. I wrote code for two hours. Is this an admission of my laziness or low productivity? No.

Over the weekend I read a short article from Matt Welsh predicting that AI will get so good at writing code that the role of developer will become entirely automated. Yet the piece diminishes what the role actually involves, and in doing so reaches the wrong conclusion.

I have used AI-based code generation tools. Are they impressive? In a way, sure. They autocomplete code that I was going to write anyway. They can get a few lines right, though often they don’t.

Writing a function or a unit test is the easiest part of my job. It is the kind of work that a junior engineer is assigned. But you can’t extrapolate that to the entire position.

Engineers do more than write code

Matt states that a PRD can be turned into code in under a second, which is admittedly far faster than even the best programmers. However, a PRD is a technical document that lays out the requirements and core design of the software, and it is one that engineers help write in order to weigh in on what is technically possible.

As he writes: “My guess is that in the not-too-distant future — maybe 3 years — it will be possible to instruct an AI to take a high-level, natural language spec of a piece of software — a PRD, or a bug report, or a Slack thread, say — and generate ‘perfectly fine’ code from it.”

What if there were a way to define exactly what you want your code to do using natural language instead of complicated machine language? We’ve done that already. It’s called Python.

As you get higher and higher level, your control over the implementation grows weaker. AI tools in other domains can’t just be given loose-fitting ideas; their prompts need to be crafted to get the precise output you want. That takes time, and a business task that takes time becomes a job.

Matt seems to believe, wrongly, that you can turn a PRD into working software and then wash your hands of it, but that has never been how software works. The philosophy behind agile methodology is that a product is constantly evolving and iterating based on user feedback.

The PRD is a good start, but if you want people to use your product you’ll need to regularly improve it. And your improvements will need to be precise: you can’t have AI constantly changing the UI or database schemas, or corrupting user data. The result still needs to build. This kind of context is currently unavailable to these tools, and it’s unclear whether it will be meaningfully available in the future.

Code as a liability

As Matt puts it: “In the real world, we tolerate a huge amount of slop and error in human-produced software, so why set expectations for AI-generated code any differently?”

One reason I only spent two hours on coding was that I had meetings. One meeting was with folks on the security team. We’ve been discussing security requirements for an upcoming project, and they have a lot of notes.

We do tolerate a lot of bad code, particularly in open source, and that has become a growing vulnerability in our systems. Our infrastructure, across the public and private sectors, is under constant attack by all kinds of malicious actors.

I’d argue this requires far more oversight of generated code than we have now. AI is already capable of programming malware into your app without you ever knowing.

But security in software is far broader than just the code being executed. Security engineering is about designing and improving software in a way that keeps users protected from bad code and bad people. Most malware, after all, comes from phishing and social scams.

Threats to security and privacy are growing broader as computers advance. Wi-Fi signals can be used to identify individuals through gait detection. Phone calls generate minute accelerometer changes that can let an attacker reconstruct the conversation.

Nobody cares whether an AI or a person wrote the code that allowed a remote-execution vulnerability to expose customer data. If that were to happen, your company is still liable. You’ll need engineers to ensure that doesn’t happen.

This is even more important in national security contexts. You can’t just blindly trust what is being created. You need professionals to regularly check that things are doing what you expect, and only what you expect.

Humans can reduce their own workloads

Do we need a hundred different people writing the same code in their own way? No, that’s silly. And while tools like Copilot can write that code for the hundred-and-first time, maybe that time is better spent writing shared libraries and frameworks that can be reused across projects. AI-generated code should be minimal, because boilerplate code should be minimal.

But while we can debate React vs. Angular vs. Vue, there are bigger areas where engineers are needed, namely standards bodies. All three web frameworks generate standards-compliant webpages, which are transmitted over standards-compliant HTTP.

Standards bodies are made up of engineers who understand the area at a deep level and discuss where to take the standard next. You can’t evolve a standard without knowing where it is now. And you can’t just rely on AI to transform data back and forth between incompatible formats, especially if the cost of errors is significant.

This is far more important in hardware, where a variety of companies need to agree on common protocols and connectors. And while there are a lot of technical details to agree upon, getting hundreds of different companies and thousands of stakeholders to agree is also an exercise in communication and diplomacy. AIs are simply ineffective here.

AI tools can’t trust their own training data

PMs will never be able to build software on their own because they don’t know how to hack a solution together.

Here’s a great example that happened to me recently: I’ve been building a Node app and needed to make an API call using node-fetch. This is a trivial thing and the code took me a minute to write. AI could do that even faster.
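The call itself was nothing special. A minimal sketch of it might look like the snippet below; the endpoint is a placeholder rather than the actual service I was calling.

```typescript
import fetch from 'node-fetch';

// Roughly the shape of the call: hit an endpoint, check the status, parse the JSON body.
// The URL is a placeholder, not the real API from this story.
async function fetchStatus(): Promise<unknown> {
  const res = await fetch('https://api.example.com/status');
  if (!res.ok) {
    throw new Error(`Request failed with status ${res.status}`);
  }
  return res.json();
}
```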

Right now an AI tool would write the function, but it wouldn’t change my dependencies. Still, it is very easy to go into my package.json and add the latest version of node-fetch.

So I deployed my code and expected it to work. It didn’t; instead it gave me an unusual error that I couldn’t understand at first. It turned out that a bug had been introduced in the latest version of node-fetch. I was able to quickly downgrade the version and redeploy, because I understand not just how to write code but how to debug it and handle cases like this.
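The fix amounted to pinning an exact, known-good version in package.json rather than a caret range that floats to whatever npm considers latest. The version number below is illustrative, not the actual release involved.

```json
{
  "dependencies": {
    "node-fetch": "3.2.6"
  }
}
```

Pinning the exact version, instead of a range like ^3.3.0, keeps a later install from silently drifting back onto the broken release.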

I don’t mean to call out node-fetch in particular, as I really do like the library. It reduces a lot of boilerplate, and these kinds of errors can happen all the time. Engineering is about designing processes to minimize errors, and figuring out how to get around them.

Again, in the hardware space this kind of engineering requires more institutional knowledge, the kind that is rarely documented in text for AI training.

When writing hardware drivers, it is not uncommon to open a 100-page PDF with all kinds of deep technical details. AI could be a useful tool here to summarize and jump to critical sections. However, it cannot actually be trusted to write the driver itself.

Why? Because the datasheet is wrong! It is common for hardware to have small quirks that are undocumented or buried in errata. You can’t know what’s true without testing and verifying it yourself.

An AI program won’t get the same tolerance for imperfection that you can accept in artwork or even a website. When the stakes are high, engineering prioritizes testing over the quantity of code produced.

What is the art of programming?

Matt concludes his essay by suggesting that programming is a skill that will become unnecessary, as we will find better ways to talk to computers.

Yet I disagree with his conclusion. When you take a step back, you see that the role of a developer is hardly about writing code at all. It’s about designing a system that meets the customer’s needs, ensuring that the system protects customer data, interoperates with other systems as needed, and is regularly maintained.

In a decade, will we still need to pay someone to write Python? I think that’s the wrong question. Programming is, at its core, the skill of getting a computer to do what we want and not do what we don’t want. Whether that is through Python or a paragraph matters little. You’ll still need a professional to ensure everything is above board, and that’s what you’ll have to pay someone to do.

AI is only as good as what it already knows. Will it be able to develop new coding paradigms, different UX modalities, or new application concepts? Engineering is an inherently creative job.

Matt says he would take an AI that generates code roughly on par with a human engineer, costs 10,000x less, and produces results roughly 10,000x faster (a cost and efficiency savings of 100 million times) any day of the week.

When it comes to writing code, GPT-3 can arguably be 100 million times more productive. That can save a company a lot of money. But the savings don’t matter if they lead to greater liability.
