Software Engineering Is Still Writing Letters By Hand

Every now and then a new technology arrives that completely breaks the assumptions of the world around it. Not gently. Not politely. Not in a way that leaves existing workflows intact. It shows up like an unexpected guest who kicks the furniture out of the way and sits down wherever it likes.

That’s what GenAI is doing to software engineering.

We now have tools that can write code faster than any human who has ever touched a keyboard. Tools that can generate an entire service layer while you’re still finishing your coffee. Tools that make the very idea of “typing out logic by hand” feel like sharpening a quill pen.

And yet the rest of the software process still behaves as if code is the scarce part of the system. As if writing those lines is the expensive, delicate act that must be protected with layers of ceremony and ritual. We still operate like code is a hand-written letter being mailed across an ocean for review. We treat pull requests like envelopes that need a wax seal of approval from some principal engineer who has appointed themselves the guardian of The One True Abstraction.

It’s absurd.

You can now generate features in minutes, but your CI pipeline still takes 18 minutes to run tests that check whether a string matches another string. You can write code ten times faster, but you are still waiting three days for a code review because the reviewer is busy tweaking architecture diagrams for a future system you’ll probably never ship. We’ve sped up the wrong thing. We’ve strapped a rocket engine to one wheel of the car and then acted surprised when it doesn’t go any faster.

The problem isn’t the coding. The problem is everything wrapped around it.

The reviews. The pipelines. The brittle tests. The committees. The approvals. The workflows that read like a Kafka novel. The deeply ingrained belief that the job of engineering is to police each other’s pull requests rather than deliver working software to actual users.

There’s a particularly tragic comedy in watching teams celebrate “AI-assisted coding” only to funnel that output into a process designed to limit velocity at every turn. It’s like inventing a printing press and then insisting every book be copied out by hand afterwards, just to be safe.

Meanwhile, out in the real world, the tech stack has changed under our feet. GenAI tools are already building, testing, refactoring, and deploying in ways legacy processes simply cannot accommodate. Compute has become so cheap and so plentiful that the idea of waiting thirty minutes for a pipeline feels like deliberately choosing to live without electricity. We’re sitting on infrastructure that could run build steps in parallel on hardware that makes supercomputers from the 90s look like pocket calculators, and we’re still behaving like we only get one build a day.
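The parallelism point is not exotic. Here is a toy sketch of the shape of it, with stand-in "build stages" (the stage names and timings are invented for illustration): independent stages finish in roughly the time of the slowest one, not the sum of all of them.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_stage(name: str, seconds: float) -> str:
    # Stand-in for a real build/test stage (lint, compile, test shard, ...).
    time.sleep(seconds)
    return name

# Hypothetical independent pipeline stages and their durations.
stages = [("lint", 0.1), ("unit-tests", 0.2), ("build", 0.2), ("integration", 0.3)]

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda s: run_stage(*s), stages))
elapsed = time.perf_counter() - start

# Run serially these stages would take ~0.8s; run in parallel the wall-clock
# time is roughly the longest single stage (~0.3s).
print(results, round(elapsed, 2))
```

The same arithmetic applies whether the stages take tenths of a second or tens of minutes: a pipeline of independent thirty-minute stages only costs thirty minutes of wall-clock time if you stop running it as a queue.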

Why? Because the process hasn’t caught up.

We created development pipelines for an era when code was slow and humans were the only generators of complexity. But now the bottleneck isn’t the developer, or the keyboard, or even the repo. It’s the system around it. It’s the superstition that every change must be reviewed by someone whose primary contribution is a preference for different indentation. It’s the belief that unit tests checking HTML strings are somehow a bulwark against disaster.

The truth is simple: if you only speed up code writing, you break everything downstream.

Your review queues collapse.

Your tests buckle under the volume.

Your build pipelines become traffic jams.

Your architecture decisions become the new drag coefficient.

And this is the part many organisations still refuse to face: you cannot cling to the old ways and still expect to survive in the new world. Teams that restructure everything (code, infra, review, release, architecture) around the assumption that code is effectively free will run circles around teams that don’t. Small teams, especially, are about to discover they can outperform entire departments simply because they aren’t carrying the cultural baggage of slow development.

In a world where writing code is basically instantaneous, the competitive advantage shifts to those who can ship code instantaneously. Those who can validate ideas quickly. Those who can refactor continuously without drowning. Those who treat software as a living system, not a museum exhibit curated by senior engineers who think elegance matters more than utility.

This is the part that’s hard to swallow for traditionalists: the new ways are not optional. They’re not “nice to have.” They aren’t some trend for startups in hoodies. They will win because they are better. Faster. Cheaper. More aligned with how software is actually consumed and changed and corrected.

Software engineering as we know it is fundamentally incompatible with the next decade. You cannot have Victorian-era workflows in a world with warp-speed tooling. You cannot have code reviews designed for handwritten scripts when machines generate thousands of lines without blinking. You cannot have pipelines designed for slow humans when machines operate at machine speeds.

This is the shift: software practice has to evolve or collapse under its own rituals.

Shipping has to match the speed of production, otherwise you’re just stockpiling code like unsold inventory. Faster development only matters if it leads to faster validation, faster learning, and faster course correction. GenAI gives us the ability to produce features at an incredible pace, but if shipping still takes months, nothing has improved — you’ve just created a larger pile of untested assumptions. Software doesn’t generate value sitting in a repo; it generates value when real users touch it, react to it, break it, ignore it, or ask for more. Without that loop, the acceleration at the front of the process becomes a liability instead of an advantage.

And then there’s the scale delusion. Most internal tools don’t need Netflix-grade failover or £2,000-a-month Kubernetes clusters to serve three people filling out a form once a week. Yet companies routinely throw huge engineering teams and enterprise infra at problems small enough to solve with a spreadsheet and a shared inbox. Spending six months with eight engineers to automate a process handled today by three people over two days is not good business — it’s big-tech cosplay. The next era of software will punish this waste. The teams that win will be the ones who ship at the speed they build, validate early, scale only when reality demands it, and stop mistaking complexity for competence.

There’s also a more pragmatic lens we rarely apply: how much should we even spend on this problem in the first place? We obsess over speeding up workflows without doing the basic maths. It’s the XKCD “Is It Worth the Time?” chart for engineering decisions: if shaving 30 minutes off a pipeline saves the team a collective 200 hours a year, great — automate it yesterday. But if we’re about to sink weeks into optimising something that only inconveniences one person twice a quarter, maybe the grown-up answer is: don’t. Speed isn’t just about going faster; it’s about knowing which business problems are worth fixing at all.
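The back-of-the-envelope check above is just arithmetic. A sketch, using the illustrative numbers from the paragraph (run counts are assumptions):

```python
def hours_saved_per_year(minutes_saved_per_run: float, runs_per_year: int) -> float:
    """Convert a per-run saving into an annual figure in hours."""
    return minutes_saved_per_run * runs_per_year / 60

# Case 1: shave 30 minutes off a pipeline the team runs ~400 times a year.
pipeline = hours_saved_per_year(30, 400)   # 200 hours a year: automate it yesterday

# Case 2: a chore that inconveniences one person twice a quarter for 30 minutes.
chore = hours_saved_per_year(30, 8)        # 4 hours a year: weeks of optimisation won't pay back

print(pipeline, chore)
```

If the engineering effort to automate exceeds the hours saved over the lifetime of the process, the grown-up answer really is: don’t.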

We need pipelines that adapt.

We need tests that measure behaviour, not strings.

We need review systems that validate intent, not syntax.

We need architectures that are continuously refactored, not debated endlessly.

We need to stop treating code like sacred scripture and start treating it like what it is: a byproduct, not the product.
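To make the testing point concrete, here is a hypothetical contrast (the function, class name, and markup are invented for illustration): a string-matching test breaks whenever the markup changes, even when users see exactly the same thing; a behavioural test asserts only on what matters.

```python
def render_price(pence: int) -> str:
    # Hypothetical function under test: renders a price for display.
    pounds = pence / 100
    return f'<span class="price">£{pounds:.2f}</span>'

def test_string_match():
    # Brittle: fails the moment the tag, class name, or whitespace changes,
    # even though the user-visible price is identical.
    assert render_price(1999) == '<span class="price">£19.99</span>'

def test_behaviour():
    # Behavioural: asserts the thing users actually care about --
    # the correct amount is shown.
    assert "£19.99" in render_price(1999)

test_string_match()
test_behaviour()
```

Rename the CSS class tomorrow and the first test goes red for no reason; the second keeps guarding the only fact that matters.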

The teams that embrace this will move at a pace nobody has ever seen before. The ones that don’t will look around in a few years and wonder why the world passed them by.

The future of software isn’t faster typing.

It’s a different universe entirely.
