Building This Portfolio With Antigravity: What We Tried, What Failed, and What We Learned


Building a personal portfolio is a web developer’s favorite never-finished project. Recently, I decided to completely revamp mine into a Linux terminal-style interface, complete with a floating AI chat assistant. I built the whole thing pair-programming with Antigravity, an advanced agentic coding assistant.

This isn’t a polished “here’s the perfect stack” post. It’s a more honest account: the wrong turns we took, the walls we hit, and the lessons that came from them.

Where It Started: A Simple Static Site

The first version of this portfolio was humble. A static site, hosted for free on Firebase, generated with Astro. No backend. No secrets to hide. It worked great, and for a while, that was enough.

Then I wanted to add something more interesting: a floating AI chat assistant, always available, styled to match the terminal aesthetic of the rest of the site. That single feature decision is what cascaded into every architectural choice that followed.

The Stack We Landed On

Before getting into the journey, here’s what the site runs on today:

  • Astro: The core framework. Astro’s island architecture is a natural fit for a mostly-static site with a few dynamic pockets. By default it ships zero JavaScript, which keeps the terminal theme feeling fast and lightweight. Interactive islands like the chat widget drop in exactly where needed.
  • Vanilla CSS: No utility framework. Pure, semantic CSS with custom variables for the green-on-black palette, typewriter animations, and JetBrains Mono as the monospace font. The control this gave us over the retro aesthetic was worth it, though it required more deliberate thinking than reaching for Tailwind.
  • Vercel: Hosting and backend. Spoiler: this wasn’t the original plan. More on that below.
  • OpenRouter API: The engine behind the AI chat widget.

The First Big Mistake: Ignoring Infrastructure Until It Mattered

Here’s the thing about adding a chatbot: you can’t expose API keys to the client. That means you need a server, or at least a secure backend function, to proxy the requests. With Astro, that means enabling Server-Side Rendering (SSR) via its server output mode.

Setting up SSR in Astro wasn’t the problem. The problem was that I was still on Firebase, and Firebase’s free tier doesn’t support the cloud functions needed to host an Astro Node server. I only discovered this at deployment time, after the feature was already built and working locally.

It’s a classic lesson: infrastructure constraints don’t announce themselves early. I’d gotten deep into building before asking “wait, can our host actually run this?” The answer was no, not without upgrading to a paid Firebase plan for a feature I’d built in an afternoon.

Rather than pay for something I didn’t otherwise need, I evaluated alternatives. Vercel had seamless, out-of-the-box support for Astro SSR on its free tier. The migration was smooth, maybe two hours of work, but it was entirely avoidable. Now I ask the infrastructure question before writing the first line of a new feature.
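For reference, pointing Astro’s SSR output at Vercel comes down to a small config change. This is a minimal sketch, assuming the official @astrojs/vercel adapter; the adapter’s entry-point name has shifted between Astro versions, so treat it as illustrative rather than the project’s actual config:

```javascript
// astro.config.mjs — minimal SSR-on-Vercel sketch (illustrative)
import { defineConfig } from 'astro/config';
import vercel from '@astrojs/vercel'; // older Astro versions used '@astrojs/vercel/serverless'

export default defineConfig({
  output: 'server', // render on demand; individual pages can still opt in to prerendering
  adapter: vercel(),
});
```

With a setup like this, API routes run as serverless functions while prerendered pages are still served statically.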

The Architecture: Static and Dynamic, Side by Side

Once on Vercel, the architecture clicked into place:

Hybrid Rendering (SSG + SSR): Most of the portfolio (the home page, the about section, the blog) is statically generated at build time. Fast, cacheable, great for SEO. The AI chat feature uses SSR: requests flow through a dedicated Astro API route (/api/chat) that holds the OpenRouter key server-side, away from the client.
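The proxy route itself can stay small. Here’s a sketch of what an Astro API route like /api/chat can look like; the OpenRouter chat-completions endpoint is the documented one, but the model name, error handling, and helper function are illustrative assumptions, not the site’s actual code:

```typescript
// Hypothetical sketch of src/pages/api/chat.ts — illustrative, not the site's code.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// Pure helper: build the upstream request so the key only ever exists server-side.
export function buildChatRequest(messages: ChatMessage[], apiKey: string) {
  return {
    url: 'https://openrouter.ai/api/v1/chat/completions',
    init: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'openrouter/auto', // illustrative model choice
        messages,
      }),
    },
  };
}

// Astro invokes the exported POST function for POST requests to /api/chat.
export async function POST({ request }: { request: Request }) {
  const { messages } = await request.json();
  // In Astro you'd read import.meta.env.OPENROUTER_API_KEY; process.env is
  // used here so the sketch also runs outside Astro.
  const apiKey = process.env.OPENROUTER_API_KEY ?? '';
  const { url, init } = buildChatRequest(messages, apiKey);
  const upstream = await fetch(url, init);
  // Relay only the model's reply — the key and upstream headers never reach the client.
  const data = await upstream.json();
  return new Response(JSON.stringify(data), {
    headers: { 'Content-Type': 'application/json' },
  });
}
```

The useful property is structural: the browser only ever talks to /api/chat, and the secret lives in an environment variable on the server.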

The Chat Widget: The floating assistant was the hardest part to get right, and not because of the AI integration itself. The tricky bits were:

  • Rate limiting that survives page refreshes. An early version reset limits on every load, which was easy to bypass. Getting persistent rate-limiting state right took a few iterations.
  • Rendering Markdown in the chat UI. The model returns bold text using **asterisks**, and the initial implementation just rendered that literally. Fixing how the chat UI parsed and displayed formatted responses was a small thing that took longer than expected, one of those bugs that seems trivial until you’re in it.
  • Dependency conflicts at deployment. We hit ERESOLVE npm errors during the first few Vercel deploys. Nothing catastrophic, but a good reminder that local environments can mask version conflicts that surface in CI.
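On the first bullet: the fix is to keep the counting state server-side, keyed by something like the client IP, so a page refresh can’t reset it. This is a sketch of one such sliding-window limiter, not the site’s actual implementation; the limits are made up, and on a serverless host like Vercel the in-memory Map is per-instance and short-lived, so a durable store (e.g. a hosted KV) would be the robust version:

```typescript
// Sliding-window rate limiter keyed by client id — illustrative sketch.
// State lives on the server, so refreshing the page doesn't reset it.
const WINDOW_MS = 60_000; // 1-minute window (assumed value)
const MAX_REQUESTS = 10;  // assumed limit per window

const hits = new Map<string, number[]>(); // client id -> request timestamps

export function allowRequest(clientId: string, now: number = Date.now()): boolean {
  // Keep only the timestamps that fall inside the current window.
  const recent = (hits.get(clientId) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= MAX_REQUESTS) {
    hits.set(clientId, recent);
    return false; // over the limit: the API route would respond 429
  }
  recent.push(now);
  hits.set(clientId, recent);
  return true;
}
```

An API route would call allowRequest with the caller’s IP before forwarding anything to the model, and reject with a 429 when it returns false.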

What Pair-Programming With Antigravity Actually Felt Like

Antigravity wasn’t just a code generator; it was a genuine collaborator that operated at the level of the whole project. It initialized the Git repo, bootstrapped base content from my CV, executed complex CSS reworks for the terminal theme, and handled mobile responsiveness.

But more than the individual tasks, what made it useful was the debugging loop. When we hit the Firebase wall, Antigravity fetched documentation, reasoned through the options, and helped me weigh the tradeoff between paying for Firebase functions versus migrating to Vercel. When the ERESOLVE errors appeared in deployment logs, it pulled the relevant context and worked through the resolution without me needing to context-switch into npm documentation.

The rhythm it created was different from solo development. Instead of stopping to read docs every time I hit an unfamiliar API or config, I could stay focused on the high-level decisions. That’s not nothing: the context-switching cost in software development is real, and reducing it kept the project moving.

Debugging: The Hardest Part of Agentic Development and How Antigravity Solved It

Here’s something nobody talks about enough when it comes to agentic coding tools: debugging is the hard problem, not code generation.

Getting an AI to write a component or wire up an API route is relatively straightforward. But when something breaks (a layout shifts on mobile, a widget renders incorrectly, a CSS animation fires at the wrong time), the traditional debugging loop falls apart. You can paste error messages into a chat window, but you’re now playing telephone. The agent writes a fix based on your description of the visual, you check it, paste back what’s still wrong, and round you go. It’s slow, and a lot gets lost in translation.

Antigravity handles this differently. It can take control of the browser directly: opening the live app, interacting with it, and capturing a screenshot of exactly what’s on screen. That screenshot becomes part of its context. It’s not working from your description of the bug; it’s looking at the bug itself.

This changed the debugging experience completely. When the chat widget’s Markdown rendering was broken, with bold text showing as raw **asterisks** in the UI, I didn’t have to explain what it looked like. Antigravity opened the browser, saw it, and had everything it needed to fix it. Same with the mobile layout issues that only appeared at certain viewport widths. Rather than asking me “can you describe what’s misaligned?”, it checked.

The loop became: make a change → Antigravity opens the browser → takes a screenshot → sees the result → iterates if needed. That’s not a fundamentally different process from what a developer does manually, but having the agent close the loop autonomously is what makes it feel qualitatively different. Bugs that would have taken 30 minutes of back-and-forth description got resolved in one or two cycles.

It also caught things I wouldn’t have thought to check. After a CSS refactor for the terminal theme, Antigravity did a visual sweep: it opened the site, scrolled through pages, took screenshots, and flagged a section where the green accent color had broken contrast on a particular heading. I hadn’t noticed it. It had.

For anyone evaluating agentic development tools, I’d put browser control and visual feedback near the top of the feature checklist. Code generation is table stakes now. The ability to actually see what it built and close the feedback loop without a human relay is what separates a useful tool from a genuinely productive one.

What I’d Do Differently

If I were starting this project over:

  1. Lock in the deployment platform before writing SSR code. The Firebase-to-Vercel pivot was avoidable.
  2. Build the rate limiting correctly from the start. It’s easier to design state management upfront than retrofit it after the rest of the feature is working.
  3. Test deployments earlier. Local dev can mask dependency issues. A quick deploy to a staging environment on day one would have caught the ERESOLVE errors before they mattered.

None of these are profound lessons; they’re the same lessons developers relearn on every project. But hitting them in a real build, rather than reading about them, is what makes them stick.

Final Thought

The site you’re reading now exists because of a chain of constraints: a chatbot needed a server, the server couldn’t run on a free Firebase plan, and that forced a migration that made everything else work better. The best version of this portfolio came from the failures, not the plan.

That’s usually how it goes.