Almost three months ago, I discovered vibe coding after meeting YouWare in China, and to say that it’s changed my life since would be an understatement. Over the course of 12 weeks, I’ve gone from dreaming of building certain platforms and apps to using AI tools to build them.
At first, I was slow and focused on one prompt and project at a time, but now I’m running up to five projects at once, with multiple parallel agents in each workspace. My vibe coding process and journey have changed considerably since I started, and here are the three things I wish I’d known then.
Feed the AI context
The more you tell it, the better some models are
This is my golden rule: one AI agent should know everything about your business. Its sole job is to spot the vines, both good and bad, that bind your business together. For me, this is where Claude Opus 4.6 proves to be both a blessing and a curse.
It’s expensive, and that’s by design: every prompt is processed against the full context of your conversation, so you want to treat each agent window like a sprint session. Gemini is excellent at retaining large amounts of information, but I’ve found that even with a 1M-token context window, it can still get sidetracked or lose context.
The secret to Claude is organizing things into Claude Projects and ensuring strong handoff documentation across different chats. If your goal is a fun hobby project, you probably don’t need to worry about this as much. If you want to make money from it, or if vibe coding is new to you, it’s something to plan for. The solution is to have your AI agent create handoff documents that carry context between sessions.
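A handoff document can be as simple as a generated Markdown file that the next chat session receives as context. Here is a minimal sketch; the section names and filename convention are my own placeholders, not the author’s exact format:

```python
from datetime import date
from pathlib import Path

# Hypothetical handoff template -- the section names are illustrative,
# not the author's exact governance format.
HANDOFF_TEMPLATE = """# Handoff: {project}
Date: {today}

## Current state
{state}

## Decisions made this session
{decisions}

## Next steps for the next agent
{next_steps}
"""

def write_handoff(project: str, state: str, decisions: str,
                  next_steps: str, out_dir: str = "handoffs") -> Path:
    """Write a dated handoff file to paste into the next session."""
    Path(out_dir).mkdir(exist_ok=True)
    path = Path(out_dir) / f"{date.today().isoformat()}-{project}.md"
    path.write_text(HANDOFF_TEMPLATE.format(
        project=project, today=date.today().isoformat(),
        state=state, decisions=decisions, next_steps=next_steps))
    return path

handoff = write_handoff("pricing-platform", "Checkout flow scaffolded.",
                        "Chose Stripe over Paddle.", "Wire up webhooks.")
print(handoff.read_text().splitlines()[0])  # prints "# Handoff: pricing-platform"
```

The point is less the format than the discipline: every session ends by producing a file the next session can start from.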
Not all models are the same
The secret lies in getting them to work together
If you want to be truly model-agnostic and use the best model for each job, there’s one thing you need: a single shared knowledge base. It’s taken me three months, but all of my models and access points now draw from the same Google Drive knowledge base.
The goal has always been to get Gemini, Claude, ChatGPT, and even my Cursor to share knowledge, mostly so any AI can access the latest updates within 90 seconds of a file being touched. I also built a governance process that has AI create and maintain 14 different files for every project, meaning any AI I plug into my Google Drive can instantly accomplish whatever analysis or task that I want it to.
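A governance process like this is easy to audit automatically. Here is a minimal sketch, assuming each project keeps its 14 documents in one folder; the filenames are placeholders, since the article doesn’t list them:

```python
from pathlib import Path

# Hypothetical manifest: the article mentions 14 governance documents
# per project but doesn't name them, so these are placeholder filenames.
GOVERNANCE_FILES = [f"{i:02d}-governance.md" for i in range(1, 15)]

def missing_governance_files(project_dir: str) -> list[str]:
    """Return the governance documents a project folder is still missing."""
    root = Path(project_dir)
    return [name for name in GOVERNANCE_FILES if not (root / name).exists()]
```

A check like this can run before any agent session starts, so no AI ever begins work against an incomplete knowledge base.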
This is the key piece that makes it all work. I’m using a Python sync script so they all read from the same set of documents, and it now requires roughly 50 times less human-in-the-loop (HITL) time. Now, an agent in Cursor can automatically output a file to a specific folder, and a background script mirrors it to a Google Drive folder within a few minutes.
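The sync script itself can be very small. A minimal sketch of the idea, assuming the agent writes Markdown files into a local folder and a Drive desktop client picks up anything copied into the synced folder (the polling approach and file patterns are my assumptions, not the author’s actual script):

```python
import shutil
import time
from pathlib import Path

def sync_once(src: Path, dst: Path) -> list[str]:
    """Copy any Markdown file from src that is new, or newer than its copy in dst."""
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in src.glob("*.md"):
        target = dst / f.name
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            shutil.copy2(f, target)  # copy2 preserves mtime for the next comparison
            copied.append(f.name)
    return copied

def watch(src: Path, dst: Path, interval: int = 60) -> None:
    """Poll every `interval` seconds -- good enough for 'within a few minutes'."""
    while True:
        sync_once(src, dst)
        time.sleep(interval)
```

Polling on mtimes is crude but dependency-free; a filesystem-watcher library would cut the latency further if the 90-second target matters.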
There are still improvements to make, but I’ve also structured the outputs so they can be easily converted to a lightweight PDF. I was doing most of this in Claude in the browser, but my new approach is simpler and more streamlined: a Claude Opus agent in Cursor plays Architect, writing the prompts for, and verifying the work of, the Claude Sonnet or Cursor Composer 2 agent that does the building.
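The Architect/builder split is really just a plan-build-verify loop. A minimal sketch of the control flow, with plain callables standing in for the model API calls (the function names and retry logic are illustrative assumptions):

```python
from typing import Callable, Optional

def architect_builder_loop(plan: Callable[[str], str],
                           build: Callable[[str], str],
                           verify: Callable[[str], bool],
                           task: str, max_rounds: int = 3) -> Optional[str]:
    """Architect drafts a prompt, builder executes it, architect verifies.

    On a failed review, the architect re-plans with the rejected attempt
    included as feedback. Returns None if every round fails.
    """
    prompt = plan(task)
    for _ in range(max_rounds):
        result = build(prompt)
        if verify(result):
            return result
        prompt = plan(f"{task}\nPrevious attempt failed review:\n{result}")
    return None
```

In practice `plan` and `verify` would be backed by the expensive model (Opus) and `build` by the cheaper one (Sonnet or Composer), which is exactly the cost asymmetry the split exploits.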
Vibe coding still requires working
To charge money, it has to be worth paying for
This is my biggest takeaway: it’s not easy to get people to pay for things, especially when you are vibe-coding something that someone else could vibe-code. The secret, therefore, is in the execution and attention to detail, as well as how unique your idea is.
When I first started vibe coding, it was simply something to pass the time during a few weeks of downtime. That was January; now I have broader goals, and I’m working on four major projects at once, all intertwined and working together. It’s the definition of chaos, but it somehow fits the way I like to work.
That’s by design: AI will build anything you tell it to, literally. Certain models, such as Claude Opus 4.6, are extremely good at understanding the nuances of different projects and remembering your long-term goals, but that comes at a high cost: I’ve spent almost $500 in a month on Claude Max 20x and overage credits, and still hit a three-day timeout.
No AI can truly replace your decision-making when it comes to the core non-code things that matter. Most important, and key to every one of my projects: a Pricing Strategy. It’s easy to build 100 things into a platform; it’s much harder to make them a cohesive thing worth paying for.
P.S. You’re never ‘done’ building
Vibe coding is amazing, but it’s also very easy to lose sight of your final goals. What should have been my release date by now is still weeks away, as my vision keeps getting bigger and bolder. Claude Opus 4.6 is at the heart of everything I build; while it somewhat keeps me on track, it also lets me go bigger with each request. How it stays on track is one of the things I’m most proud of: what started as Claude Skills in the browser has grown into a 14-document governance and syncing system with many potential applications.
Thankfully, Claude Opus 4.6 and Gemini 3.1 Pro have kept me on track, with the former proving my current favorite for bridging projects. Opus 4.6 is why I was able to get all my AI models talking to each other, but Claude’s rate limits and pricing make it too expensive to use at every step, which forced me to develop new ways to work across agents without context drift.
The proof is in the pudding: what started as a single refactor of something vibe-coded on YouWare has turned into major platforms for publishing, events, and benchmarking. There are also mobile apps and games, and many more ideas on the roadmap to come.