
Product Management

Learning how to manage a product


What "AI-first" actually means when you have 5 years of legacy code

Every software company wants to be AI-first now.

For teams starting fresh, this sounds logical and doable.

The more interesting question is different. What does it mean for a product that's 5 years old, has 200 microservices, thousands of pages of docs, and decisions made by people who left two years ago?

Why greenfield teams have it easier

Teams starting from scratch don't have accumulated context they need to put somewhere. They can agree from day one on how they write specs, how they keep ADRs, what the rules are for agents.

Legacy teams have a different reality. Services nobody wants to touch because nobody remembers why they're built that way. Processes that work "because that's how it ended up." Integrations held together by one person. Decisions made on calls in 2022 with no written trace.

This accumulated context is exactly what AI tools are missing the most.

Cursor and Claude Code don't solve this on their own

You can plug Cursor or Claude Code into a legacy repo and they will work. They will suggest reasonable changes, generate tests, refactor functions.

The problem isn't the tool. Without context, the tool doesn't know that this endpoint can't be changed because of an external contract, that this seemingly duplicated logic actually does different things, that this service is being deprecated, that this database structure looks odd because of an old migration.

None of this is written in the code. It lives in people's heads, in old Jira tickets, in Confluence pages last updated in 2023. The result is code that compiles fine but breaks something three services away.
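Some of this context can be made legible to the tools. Both Cursor and Claude Code read per-repo instruction files (such as `.cursorrules` or `CLAUDE.md`); a hedged sketch of what encoding these constraints might look like, with every service, endpoint, and column name invented purely for illustration:

```markdown
# Agent rules (illustrative; all names are made up)

- Do NOT change the response shape of `/v1/orders/export`:
  it is pinned by a contract with an external partner.
- `billing-sync` and `billing-replay` look like duplicated logic
  but are not: one is idempotent, the other intentionally
  re-applies events.
- `legacy-notifications` is being deprecated. Do not add new
  callers; route new work through `notification-hub`.
- The `orders.shipping_addr_v2` column exists because of a 2022
  migration. Do not "clean it up" without checking the backfill.
```

A file like this is exactly the accumulated context described above, written down once instead of held in someone's head.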

Where the context went

If you look at a typical legacy team honestly, the context is scattered across maybe ten places. Calls nobody transcribes. Slack threads you can't find a month later. Tech leads' personal notes. PR comments that aren't indexed. Old docs that are partially accurate, partially not.

Someone who's been on the team for three years holds this picture in their head. A new developer assembles it over months. An AI agent doesn't assemble it at all.

What usually doesn't work

The standard response is "let's write good documentation for AI." Six months in, it's clear that keeping docs current at the actual rate of product change is essentially impossible by hand, that whatever is written is always a couple of sprints behind reality, and that developers don't enjoy writing docs.

Documentation as a separate activity almost never catches up with the product.

What seems to work better

Treat context not as an artifact you maintain separately, but as a byproduct of what the team already does.

The team discusses a feature, and the discussion turns into a draft spec automatically. An architectural decision gets made, and an ADR gets captured without a separate ritual. What used to require a disciplined tech writer can now be partially automated, because LLMs are finally good enough to do this work at acceptable quality.
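As a toy illustration of "context as a byproduct," here is a minimal sketch that turns discussion notes into an ADR draft. It assumes a made-up convention where decisions are flagged with `DECISION:` and rationale with `WHY:`; in practice an LLM would extract these from an unstructured transcript, which is the part that only recently became feasible at acceptable quality.

```python
from datetime import date

def draft_adr(title: str, notes: str) -> str:
    """Assemble an ADR draft from flagged lines in discussion notes.

    Hypothetical convention: lines starting with "DECISION:" become
    the decision section, lines starting with "WHY:" become context.
    """
    decisions = [line.removeprefix("DECISION:").strip()
                 for line in notes.splitlines()
                 if line.startswith("DECISION:")]
    context = [line.removeprefix("WHY:").strip()
               for line in notes.splitlines()
               if line.startswith("WHY:")]
    lines = [f"# ADR: {title}",
             f"Date: {date.today().isoformat()}",
             "",
             "## Decision"]
    lines += [f"- {d}" for d in decisions]
    lines += ["", "## Context"]
    lines += [f"- {c}" for c in context]
    return "\n".join(lines)

notes = """\
DECISION: Deprecate the legacy billing service by Q3.
WHY: Both payment providers now go through the new gateway.
"""
print(draft_adr("Billing service deprecation", notes))
```

The point isn't the parsing; it's that the ADR falls out of the discussion itself, with no separate documentation ritual.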

That's the bet behind what we're building at replys.ai. Product discussions get turned into specs, updated tasks, and synced docs automatically, so context lives where the work happens, not in someone's head.

Fix this part, and any AI tool starts working much better on the same legacy codebase. Don't fix it, and you can buy ten different tools and each one will run into the same wall.


Does your team actually use PRDs, or has something else taken over?

Anthropic recently said they don’t really rely on classic PRDs. They build a prototype, ship it internally, and let people use it. In that world, the prototype becomes the main reference.

A lot of people heard that and thought, “PRDs are dead.” I don’t think that’s quite right.

It works at Anthropic because everyone is technical, they use the product themselves, and they trust AI‑generated code enough to ship it early. The product evolves through real use, not documents.

Most teams are not like that. There’s a short call, a loose agreement, and then a ticket that misses half the conversation. By the time something ships, it’s working code, but not what was really meant.

So to me, Anthropic is not killing PRDs. It’s replacing them with heavy internal usage and fast feedback. If you remove PRDs and don’t have that, you’re not being like Anthropic. You’re just losing context.

For me, the key question is not “do we need PRDs.” It’s “what makes sure the team actually builds what it agreed on.”


Replys beta is open. Join today at replys.ai

You discuss. We update the specs and create the tasks.

In AI-first teams, the center of gravity is shifting.

Code gets written faster than ever. The real work moves upstream, into discussions. Sprint planning sessions. Stakeholder calls. Architecture debates. The thread where half the decisions actually get made.
That's where the product lives now. And that's where things fall through the cracks.

Replys makes sure they don't.

It listens to your team discussions, understands your existing documentation and tasks, and turns what was said into structured specs and tickets. Synced to Jira, Confluence, and Notion. Context-aware, so it updates what's already there instead of duplicating. Human review before anything gets pushed.
Every decision makes it into the docs. Every requirement makes it to the devs. Nothing gets lost between the meeting and the codebase.
Built for teams working on existing, complex products, not greenfield vibe coding.

If you're a PM, BA, or tech lead, we'd love to have you in the beta.
Link in the first comment.

#ProductManagement #BusinessAnalysis #AI
