The Fear.

I like AI and have been working with the technology for over a decade. It’s been exciting, interesting, and inspiring — until recently.

As I write this in mid-February of 2026, software stocks are in freefall and social media is inundated with videos of AI company CEOs predicting the looming collapse of the software industry.

They tell us the story we’ve heard on loop for two years:

  • “Most white-collar work will be fully automated within a year.”

  • “AI will be writing all of the code within 12 months.”

  • “AGI is just around the corner.”

The narrative is clear: the end is nigh. Fear-mongering, panic, chaos.

If software is a commodity that AI can generate for free, then the trillion-dollar software industry is effectively zeroed out, along with everyone in it.

On the other hand, I suppose it’s great for the average consumer — no more paying for software!

But are we really paying for software that much?

I lean into that thought and pull out my phone to figure out which apps I can replace so I don’t have to pay for them.

OK — I pay for Spotify, but the app is free. I’m paying a subscription fee for a music streaming service, not the app on my iPhone. I can even cancel the subscription if I don’t mind the ads. That’s how most of the world consumes YouTube anyway.

Fine. What else? I pay for Netflix — but again, not for the app itself, for the service. Next?

iCloud. And again, the app is free; it’s the storage and sync that costs money.

After a moment of going through my phone, I realize that all of the software is free, including the operating system and all the updates. The fact of the matter is that we don’t pay for email, maps, GPS, route planning, weather, video calls, news, or texting. For decades, most of the software used by billions of people has been free.

So I’m genuinely surprised no one asks the obvious question: what exactly is the general public going to vibe-code en masse, and why would they?

And how is the next GPT going to make a Spotify subscription cheaper or force Apple to lower iCloud costs?

People are glued to their screens — that’s indisputable. But they’re still mostly scrolling through social media, playing games, checking email, reading news, texting, working, or watching a movie. It’s also true that millions are now using LLM chatbots to ask questions, brainstorm, or generate videos of a talking cat. But the time spent with a chatbot is dwarfed by the time spent in Excel, Minecraft, or TikTok — especially if you’re trying to justify charging money for it on a global scale.

So who spends hours every day in back-and-forth conversations with AI? Who is that single largest cohort of heavy, daily AI users?

Software developers.

Which brings us to the paradox.

The Paradox.

For anyone inside the software industry, two things are impossible to miss — the fearmongering, and the fierce competition between tech giants to build the next best IDE — or rather, the next fork of VS Code. Billions of dollars and countless hours of engineering time are being spent on coding tools.

Cursor, Windsurf, VS Code, and now Codex from OpenAI — they’re all competing for one customer: the enterprise software developer. That allegedly soon-to-be-extinct creature, destined to be replaced by AI.

Another obvious question nobody seems to be asking: if software developers are going extinct, why spend billions building tools for them?

Here’s another data point worth sitting with — the Claude Cowork timeline.

While mass media obsesses over the narrative that Claude Cowork is killing the software industry, one important fact gets omitted entirely.

Anthropic — a company widely considered to have the best coding model in the world — took nearly half a year to port their own app from Mac to Windows.

Think about that.

This is a company with billions in funding, the best engineering talent in the valley, and unlimited access to their own AI models. And yet they couldn’t simply prompt a Windows version into existence.

They had to build it. They had to debug it. They had to deal with the reality of the Windows API, the registry, permissions, and the rest of the software development routine.

The release of Cowork has been labeled a harbinger of software’s collapse. But if it’s indeed a canary in the coal mine, it signals something entirely different.

If anything, it’s a clear indication of an emerging trend: companies that were laser-focused on building the next best LLM are now actively seeking ways to break into the SaaS business.

It’s no longer a secret that no matter how many billions a company spends training a frontier model, a competitor will distill it for a fraction of the cost in a fraction of the time. Which means selling model access is unlikely to yield a healthy ROI for whoever trained it — the basic arithmetic works against it. Hence the foray into new revenue streams — and hence Cursor, a SaaS company built entirely on third-party models, now has a model of its own.

SaaS is dead, long live SaaS.

The Arithmetic.

To be clear — this is not an attack on Cowork or any other product. Cowork just happens to be all over the news recently, and I simply decided to give it a go. I had a one-page DOCX file I needed converted to PDF and merged with another PDF. A trivial task. I downloaded the Cowork app and asked it to do it for me, and it did — in about a minute, with exactly the result I wanted. I was impressed!

But then it got me thinking: a rough estimate of the tokens burned for this simple task is around 15,000 — approximately $0.08 at current API rates. Scale that to 10,000 documents per month and you’re looking at roughly $800/month. That’s not cheap, especially considering the alternative.
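The back-of-the-envelope math is easy to sanity-check. The per-document figures below are my rough estimates from that one run, not published pricing:

```python
# Rough cost model for the document task above.
# Assumed numbers: ~15,000 tokens per document, ~$0.08 per run.
tokens_per_doc = 15_000
cost_per_doc = 0.08          # USD, rough estimate at current API rates
docs_per_month = 10_000

monthly_cost = cost_per_doc * docs_per_month
implied_rate_per_m = cost_per_doc / tokens_per_doc * 1_000_000

print(f"monthly cost: ${monthly_cost:,.0f}")                 # → monthly cost: $800
print(f"implied rate: ${implied_rate_per_m:.2f} per 1M tokens")  # → implied rate: $5.33 per 1M tokens
```

Even if the real per-token price is off by half in either direction, the order of magnitude is the point: a recurring, usage-proportional bill for a task that used to cost nothing marginal.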

A standalone "classical" binary application does ten times more work for practically nothing. Zero dollars. It won’t generate a photo of a cat, but it will churn through millions of documents — faster, cheaper, and 100% correct, every single time. That’s process automation at scale.

The Process.

When software is built to automate a business process, it must generally meet five requirements: reliability, cost-efficiency, performance, security, and maintainability.

There are countless payment terminals in the world running purpose-built editions of Windows with a single app that authorizes a transaction, sends it to the payment processor, and prints a receipt. That’s it. That’s all it does. And it works like clockwork, as it should.

A deterministic, rule-based process that must be 100% reliable and auditable cannot be replaced by an LLM. Any GPT-type AI is probabilistic, computationally expensive, requires significant memory and processing power, and occasionally gets things wrong.

LLMs are a genuinely useful and valuable component of the modern software stack, but they’re not the stack — by design.

The Work.

The work keeps getting more sophisticated, and it doesn’t slow down. The cumulative effect is real.

Building production-grade software means simultaneously reasoning about distributed state across multiple services, API rate limits, latency and failure modes, cloud cost optimization, security at every layer — from data transmission and persistence to third-party integrations — eventual consistency in NoSQL systems, component reusability, and endpoint design. Not to mention file colocation choices and naming conventions. And you have to hold all of it in your head in the split second before you give instructions to the AI.

The cruel irony is that while it’s so much faster to build and deploy a web app, the tools that enable it — React, Redux, GraphQL, Kubernetes, serverless functions, and now LangGraph and LangChain — each introduce their own steep learning curves, failure modes, and abstractions that eventually leak. We no longer need, or have the mental capacity, to memorize the syntax of the shell command that kills a process on a specific port. We know it exists and what it does, and the AI can just go ahead and run it.

Creating a clickable app is no longer difficult, but risk and complexity are more distributed, more expensive to get wrong, and harder to hold in one’s head, because they span infrastructure, economics, security, and architecture all at the same time. A bug used to crash a program. Now someone with no coding experience can vibe-code a change and push it with a single click, and it will go unnoticed — until the AWS bill arrives with a 120% increase as the only error message.

And yet — none of that is the AI’s fault.

As a tool, an LLM is worth the money being paid for it. Sometimes it makes mistakes and poor choices, but it is still remarkably good, keeps getting better, and helps ease the burden of ever-increasing work. Like many others, I hardly write any code anymore; instead, I write a lot of instructions for the AI, and I read a lot of code too. I get a lot more work done and ship more code than before the GPT era. Nevertheless, the amount of work that still needs to be done continues to increase. Productivity has leaped, but so has the to-do list.

True story.

A startup needed a marketing website. One of the developers used AI to generate and deploy it within hours. Naturally, it was a Next.js app — a questionable choice for a static marketing site, but that’s a separate conversation.

Literally the following day, they needed to update the site: legal documents uploaded, copy changed, the usual. Within a week it became clear that nobody on the team had time to keep prompting the AI for updates. A position opened up. For what they were willing to pay, they couldn’t hire anyone with coding experience, so they hired without it. The new hire had never written a line of code in his life. The company bought him a Cursor Pro subscription and tasked him with implementing the updates. After learning the basics of Git, he now spends most of his day in Cursor, talking to the AI.

The code quality is not good by any standard, but it doesn’t matter — the static site works, and for the startup it makes economic sense. Everybody’s happy, and someone who was previously unemployed now has a full-time vibe-coding job.

So it goes.

Don’t panic.

People will continue to pay for services they want.

AI tools increase productivity and, like any tool, require skill and experience to use.

LLMs are now part of the stack, a valuable component of the final product.

Running a non-deterministic probabilistic model to add 2+2 is thermodynamically insane and does not scale.

The Jevons Paradox stands — as code becomes cheaper, we don’t use less of it; we use more.

There is now more work, not less.
