
The Developer Role Is Changing — Here's What I Learned Building a Product with AI

483K lines of code, 349 PRs, 2.5 months, two people. What building a full product with AI tools taught me about the future of development.

Over the past 2.5 months, as a side project, we shipped:

  • 483,803 lines of code — Elixir · HEEx · JavaScript · CSS
  • 349 merged pull requests — ~5 per day
  • 4,819 commits — in 2.5 months
  • 5 sub-applications — full-stack Phoenix

How AI Is Changing the Developer Role: Before vs. After

Based on 2.5 months of full-product development with AI tools, here’s how the day-to-day reality of the developer role has shifted:

| Dimension | Before AI Tooling | With AI Tooling |
| --- | --- | --- |
| Primary time spend | Writing code | Reviewing code, making decisions, giving feedback |
| Context switching | High cost — killed productivity | Low cost — parallel sessions run simultaneously |
| Work rhythm | Sequential: one task at a time | Concurrent: feature development, bug fixing, review happening at once |
| Speed bottleneck | Writing implementation | Knowing what to build and catching what AI gets wrong |
| Key skill | Technical execution | Product sense + engineering judgment |
| Team size needed | Larger teams for full-stack products | 2 people can ship what used to require 8–10 |
| Roadmap model | Plan weeks ahead, execute | Decide in real time, test immediately, iterate |
| Code review role | Quality check | Primary safeguard against AI drift and bad patterns |

The result is a production system with auth, role-based access, agent orchestration, tool integration, eval infrastructure, observability, custom tools, and a design system — deployed and running.

The core of our workflow was mostly Claude Code, paired with a few companion apps that let us run multiple sessions at once in a simple way. I say “a few” because we tried many. In the end, I stuck with Conductor. It does the job. And that part — running many sessions in parallel — is crucial.

Before AI tooling, I would spend most of my day writing code, and then the next day thinking about what to do next. One day bug fixing, another delivering features. The context switching was killing productivity. We all know how frustrating it was to get pulled away from building a feature just to debug and fix something ASAP.

With AI, the cost of context switching becomes much, much lower. And that changes a lot.

Before, a typical day was split into blocks like this:

  • Feature development
  • Bug fixing
  • Reviewing code
  • Releasing
  • Demo
  • Brainstorming
  • Refinement
  • Testing

It all happened in long cycles, even though every team at the time would say those cycles were short and fast :) Now I have the feeling that all of this happens at once.

Of course, it depends on product size, product stage, and most of all team size. It is much easier for me and Kasia to stay in sync, talk to our three users, and decide ourselves what to change or build next. This is an extreme example of autonomy and small scale, but I honestly believe this is the model we are moving toward if we want to build great products.

Feature development happens in parallel with bug fixing. Delivered features get tested while I review code. Brainstorming can happen while I am writing prompts. Architecture decisions can be made on the fly and tested against different scenarios. Code exploration happens instantly, which gives better input for decisions.

It is all wired together.

How AI Is Changing the Developer Role in Practice

There are two absolutely crucial skills here. And I am not even mentioning AI fluency, because at this point that is just a must-have.

Here’s what each skill actually means in an AI-first workflow — and why neither can be replaced by the tools themselves:

| Skill | What It Means in Practice | Why AI Can't Replace It |
| --- | --- | --- |
| Product Sense | Deciding what to build, in what order, at what level of detail — and being able to turn that into clear direction for parallel AI sessions | AI generates what you describe. If you don't know what the product needs, you'll generate the wrong things faster. |
| Engineering Judgment | Reading AI-generated code and knowing when it's subtly wrong — bad architecture, unnecessary workarounds, patterns that will break under growth | LLMs drift. One bad pattern gets amplified by every subsequent change. Only experience catches it early. |

1. Product Sense: Deciding What to Build, Not Just How

At some point while building Towar, I noticed that I was moving faster than Kasia. Not because she has less skill or knowledge, but because of one specific factor.

At the beginning, the idea of how the product should take shape was mostly in my head. Because of that, I was able to spawn 10, sometimes even 15 sessions working on different features and orchestrate them individually. Meanwhile, Kasia was working more in the areas we had already discussed, in a way closer to how teams used to work — creating a backlog for a week and executing on it.

I cannot express how important this is.

Because I could imagine both the small things and the big things we needed, I could throw those ideas into a session — even in a rough form — and get a draft back quickly. That gave me a huge advantage. There was no waiting for one agent to finish before moving forward. There was always another piece of code to review, another decision to make, another round of feedback to give.

I think this is the shift many developers are still not ready for.

The idea of product sense has been around in the industry for a long time. But from what I have seen, many developers understood it mainly as the ability to build features well, simplify them, and ship them faster or more reliably.

What I mean here is something different.

It is much closer to stepping into the product manager role — actually deciding what to build, experimenting with it, and shaping the product in real time.

It is no surprise that solo founders and small teams are making huge progress with coding tools. They can throw ideas into the system, build a prototype, test it, improve it, and do it all very cheaply. What makes them fast is not just the tool. It is their understanding of the problem the product needs to solve.

That is the leverage.

Now I spend much more of my time deciding what to build, reviewing what was built, and giving feedback.

It is literally a different kind of job.

In my opinion, this makes the gap between holistic product builders and developers who mainly execute a roadmap even bigger.

Honestly, I cannot imagine working the way we did 3, 5, or 10 years ago, where a roadmap appears and teams spend the next weeks or months just working through it. If you cannot contribute to the shape of the product, you are already falling behind.

2. Engineering Judgment: Knowing When AI Is Wrong

This touches a bit on where the boundary between vibe coding and AI-assisted engineering really is.

In our case, what helped us move fast was relatively broad experience with the tech stack and a deep understanding of the ecosystem we were building in.

I think with current coding models, almost anyone can one-shot even a whole application. But the friction grows with the size of the codebase and the maturity of the product.

Keeping the codebase maintainable still requires the same skills as traditional development — or maybe I should now say the old way of development. The code still needs good architecture. It still needs solid tests. And above all, it needs to stay easy to change.

The more sophisticated a system gets, the harder it becomes to change. The same rule applies to coding agents.

Opus is great — I use it most of the time. Elixir is great for code generation. Phoenix makes features easy to test end to end. And when patterns are followed, it all comes together into a very smooth experience.

Yeah — when patterns are followed.

A lot of the time, they are not.

Sometimes the LLM drifts away from well-known concepts and suggests a workaround instead. That is exactly where engineering experience matters most. For a data-backed look at the security consequences of shipping AI code without this review layer, see The Hidden Cost of AI Code.

The judgment of what is simple, what is good architecture, and what is unnecessarily complicated becomes a huge factor. Because those small workarounds add up. One bad pattern gets amplified by the next change, and then the next one after that.

So being able to read the code, understand the plan the LLM is creating, and give strong feedback on it — that is what makes the codebase maintainable in the future.

What the Future Developer Looks Like

So my prediction is this: the developer role will move further away from being mainly about writing code, and much closer to shaping systems, products, and decisions.

Code will still matter a lot. Good engineering will still matter a lot. But the biggest leverage will come from being able to understand the problem deeply, turn that into clear direction, orchestrate execution across many parallel threads, and keep quality high while things move fast.

I think the best developers will look more like product-minded engineers with strong technical taste — what some would call a founding engineer. People who can decide what is worth building, break it into the right pieces, guide agents well, review critically, and keep the system easy to evolve. Not just people who can implement tasks from a roadmap.

In that world, coding itself does not disappear. But it becomes only one part of the job, and probably not even the dominant one.

The future developer is part engineer, part product thinker, part editor, part systems designer.

And honestly, I think this shift is already happening.

Frequently Asked Questions: AI and the Changing Developer Role

How is AI changing the role of software developers in startups?

The developer role is shifting from primarily writing code to primarily making decisions about what to build, reviewing what AI generates, and maintaining quality as the codebase grows. Developers who adapt are spending more time on product judgment and less time on implementation. Those who don’t adapt are generating more code faster — without the oversight to catch what AI gets wrong.

What skills matter most for developers working with AI tools?

Two skills stand out above raw coding ability. First, product sense: the ability to decide what to build, break it into the right pieces, and translate that into direction for AI sessions — not just execute a predefined roadmap. Second, engineering judgment: the ability to read AI-generated code and catch when it’s subtly wrong — bad architecture, unnecessary workarounds, security gaps, patterns that will compound into maintenance problems.

Is vibe coding the same as AI-assisted engineering?

No. Vibe coding means describing what you want and shipping whatever AI generates. AI-assisted engineering means using AI to accelerate implementation while applying senior judgment to everything it produces — architecture decisions, security review, code quality, and long-term maintainability. The output can look similar in a demo. The difference shows up in production. For a deeper look at where vibe coding breaks down, see The Hidden Cost of AI Code.

Can a small team really ship a full production product with AI tools?

Yes — with the right conditions. In 2.5 months, two people shipped 483K lines of code, 349 merged PRs, and a full production system with auth, agent orchestration, observability, and a design system. What made that possible wasn’t just the tooling. It was the combination of deep product understanding, the ability to run many parallel AI sessions simultaneously, and the engineering experience to catch and fix what AI got wrong before it compounded.

What does a founding engineer do differently with AI compared to a standard developer?

A founding engineer operates at the intersection of product thinking and engineering execution — which is exactly where AI amplification is most powerful. They can decide what to build (not just how), spawn multiple parallel AI sessions, and maintain quality through critical review rather than just prompt and ship. The result is output that used to require a team of 8–10, delivered by two people. That’s the model described in this article — and what founding engineer partnerships are built around.


Want to see this approach in action? Read about the stack we chose to build our AI agent platform and how engineering judgment shaped every decision. Or if you’re a founder looking for this kind of partnership — that’s exactly what a founding engineer does. Let’s talk.