First, for those who don’t know me, a bit of context. My name is Flavien, and I’ve been a developer for 16 years now. I started as a fullstack developer, then I specialized in frontend, and I currently work as a Lead Frontend Developer.
Just to reassure you: this article was not written to sell you a dream that doesn’t exist. I’m not selling anything, I don’t work for a company that sells AI products, and I don’t sell training. This is simply a raw, honest feedback post about how I work today. If you feel excitement in this article, it’s because that’s genuinely how I feel.
AI in my development process
I’ve been using AI to build software for a while now. Like many people, I started with ChatGPT in chat mode, with no project context. It helped with quick, isolated blockers, but it became painful when the problem was deeper.
Then we moved to VS Code extensions to improve autocomplete and run prompts with a bit of context (GitHub Copilot, Supermaven, etc.).
And finally, we moved to full “AI IDEs.” I tested Windsurf and Cursor, and I ended up choosing Cursor.
Why Cursor?
I didn’t choose Cursor recently; it was about a year ago. Back then (which is ancient history in AI time), models were far from what they can do today. Getting high-quality code on the first try was hard, so having a visual editor mattered a lot: I could jump in easily and write parts myself, without AI.
But more importantly, after more than 10 years of coding, going from “I code” to “I supervise” felt almost impossible. The need to see, touch, and write code was still visceral. So choosing Cursor felt natural, because to me it’s still a very good product.
Of course, we customized it to be more efficient (adding .cursorrules, Jira MCP, Browser, DB, Context7, Figma, etc.). In the end, I saw a productivity gain, but nothing truly game-changing. I even think that at times, I wasted more time fighting with the Sonnet model than I would have spent moving forward without it.
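To give a concrete idea, a rules file of that kind is just a short list of plain-language conventions the assistant should respect. Here is a purely illustrative sketch; the stack, paths, and commands are placeholders, not my actual setup:

```markdown
# .cursorrules (illustrative sketch, placeholder stack and commands)

- This is a React + TypeScript project: no `any`, prefer explicit types.
- UI components live in `src/components/`, one folder per component with its test next to it.
- Follow existing naming conventions: PascalCase components, camelCase hooks prefixed with `use`.
- Before declaring a task done, run `pnpm lint` and `pnpm test` and report the results.
- Never touch files under `src/legacy/` without asking first.
```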
Friction points and the turning point
That’s when the limits showed up. First, Cursor’s business model isn’t ideal for teams. The Opus model (the best one) is expensive, and “out of plan” (the usage-based pricing beyond individual quotas) is shared at the organization level.
Then there’s the heaviness. Cursor is a fork of VS Code with its own layers, and it consumes resources. Running agents in parallel across multiple projects (a client project + an in-house MCP + a POC) is very difficult: you have to multiply IDE instances, and RAM gets saturated quickly.
But the real turning point happened last summer. After a long run of intense weeks, I stepped back and tested coding agents seriously. For the first time, I shipped a complete personal project to production… without writing a single line of code myself.
That’s when I realized something fundamental: what I loved most wasn’t writing code, it was creating things and finding solutions to a problem. Code was just a tool. I was ready for the next step: a CLI code generator.
The power of Claude Code
Since the release of the Opus model, it’s clear that a huge gap has been crossed. The reasoning quality and the code quality are on another level compared to earlier models (Gemini, GPT, or even Sonnet, which hits its limits quickly). We came to the conclusion that the only model we truly wanted to use was Opus.
Getting started with Claude Code was very easy. For me, the Developer Experience (DX) is now light-years ahead of what I had before. Instead of a small side panel for prompting in Cursor, I have my terminal full screen, with multiple tabs.
That’s where the “superpower” lives. It’s extremely fast and lightweight. Where Cursor was struggling, Claude Code feels natural: you stack instances, it runs fast even with “only” 16 GB of RAM. I literally feel like a superhero who can work on 2 or 3 projects at the same time.
The downside is that fatigue sets in fast. Once you’re in flow, you don’t stop; you just switch from tab to tab. Small breaks become vital, but my productivity exploded to a level I never reached with Cursor.
My 2026 tech stack
Today, my environment is hybrid but clear:
- Brain (Claude Code + Opus): It does the work. I launch tasks, it handles complexity.
- Hands (Google Antigravity): I use the free version of Antigravity as a lightweight editor. Why? Because I barely write code now, but I still need to review diffs or do a quick touch-up. The free plan of Antigravity offers autocomplete (Gemini-based) that is often better than the default elsewhere, and the tool is far lighter than Cursor.
It’s not magic (you have to tame it)
Be careful though: Claude Code out of the box is not the ultimate solution, especially on large existing projects. You need to configure it and guide it.
The more you customize Claude Code for your project, the better your results will be. But keep in mind it won’t do everything. It will make mistakes, and sometimes it will loop. Since it’s not deterministic, two identical prompts run in parallel won’t produce the same result.
My advice: when things “go bad,” stop everything. Kill the process and restart, instead of stubbornly pushing forward with a polluted context.
To make it work, you need configuration (which I’ll detail in future articles, because it’s a topic on its own), but here are the pillars:
- Coding rules (AGENTS.md / CLAUDE.md): Required so it understands your architecture (see the sketch after this list)
- MCPs (Model Context Protocol): I use Figma, Jira, GitLab, and especially Context7 (truly required). Be careful what you install, because of data security
- Skills: For example, I created a skill so it can update dependencies intelligently (read changelogs, migration guides, run TS checks, etc.); see the skill sketch after this list
- Plan mode: At the start of every task, we ask Claude to prepare a plan. This helps you iterate and validate the best approach before touching the code
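To make the first pillar concrete, here is the kind of CLAUDE.md I mean. It’s a deliberately minimal sketch: the stack, paths, and commands below are illustrative assumptions, and yours should describe your real architecture and workflow:

```markdown
# CLAUDE.md (minimal sketch, illustrative stack and commands)

## Project
- Next.js + TypeScript front end; API calls go through `src/api/`.
- State management with React Query; no new global stores without discussion.

## Conventions
- Strict TypeScript: no `any`, no `@ts-ignore`.
- Components in `src/components/<Feature>/`, colocated tests in `*.test.tsx`.

## Workflow
- Start every non-trivial task in plan mode and wait for validation before editing files.
- After changes, run `pnpm typecheck` and `pnpm test`, and summarize the output.
- Never commit or push unless explicitly asked.
```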
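And for the skills pillar, here is roughly what my dependency-update skill boils down to, stripped to the essentials. Claude Code skills are markdown files with a short frontmatter (typically a SKILL.md under .claude/skills/<skill-name>/); treat the exact layout below as an assumption to check against the current docs:

```markdown
---
name: update-dependencies
description: Upgrade a dependency safely by reading its changelog and migration guide first
---

When asked to update a dependency:

1. Check the installed version and the latest published version.
2. Read the changelog and any migration guide between the two versions.
3. List the breaking changes that affect this codebase before touching anything.
4. Apply the upgrade, then run the TypeScript checks and the test suite.
5. Report what changed, what was migrated, and anything that still needs a human decision.
```

The point is not the exact wording: it’s that the skill encodes the checklist you would otherwise repeat in every prompt.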
Conclusion
This migration is not just a tool change, it’s a job change. I spend less time typing code and more time designing, orchestrating, and validating. If you also feel your real value is in the solution more than the syntax, it’s time to try the terminal.