Why I Stopped Building for Humans
Sometime soon, an AI agent is going to evaluate your product — its API spec, its documentation, its pricing, its reliability — and decide whether to use it or a competitor's. Not a developer. Not a procurement team. A language model, making a vendor decision on behalf of a human who will never see your marketing site.
Maybe. I could be wrong about the timeline. But if agent autonomy keeps expanding at anything like the current pace, this seems plausible enough to be worth building for. The question is what that infrastructure looks like.
Let me back up.
Try this: ask ChatGPT, Claude, or any AI chatbot to make you a personal website. You'll get:
< HTML in a code block >
Maybe a nice one. Some chatbots can even give you a shareable preview link — Claude's published Artifacts, ChatGPT's Canvas shares. But these are sandboxed previews on the chatbot's own domain. There's no custom URL, no ownership, no way to edit it later through an API. The conversation produced a demo that lives inside the platform's walled garden.
Now try getting from that to a real website you own. You need to copy the code, find a hosting provider, set up a domain, configure DNS, deploy. You just went from talking to an AI — which felt like the future — to doing the same manual work you would have done in 2015.
There's a gap here, and I think it's more interesting than it looks.
The conversation-to-artifact gap
If you're a developer, this gap is closing fast. Claude Code, Codex, Cursor — these tools take a conversation and produce real files and committed code, and they can run your build and deploy commands. The pipeline works because it assumes a technical user with a terminal and a hosting setup.
For everyone else, the wall is still there. AI agents have gotten remarkably capable inside the conversation. They reason, search the web, write code, analyze data, generate images. But for the non-technical person asking ChatGPT on their phone to "make me a bio page," almost nothing the AI produces persists in the world after they close the tab.
The direction is clear: agents are moving from answering questions to doing things. Most major model providers are shipping tool use, function calling, and MCP integration. The agent doesn't just tell you how to book a flight — it books the flight. It doesn't just draft the email — it sends it. But "create something on the web that other humans can visit"? For a non-technical user, there's almost no default path for an agent to do that yet.
To be specific about what's missing: hosting APIs exist. Vercel, Netlify, Cloudflare Pages, GitHub Pages — all have APIs that can deploy a site programmatically. Partial substitutes exist too — temporary hosting, deploy hooks, pastebin-style services. But none of these combine credentialless creation with an ownership handoff. The gap isn't deployment. The gap is: publish now with no end-user account, then transfer ownership to a human who may not even know your platform exists.
There's no cross-platform standard for that.
Why existing AI website builders don't solve this
Bolt, v0, Replit, Lovable — these are impressive tools. They use AI to help humans build websites and apps through natural language. Some have even shipped programmatic APIs — v0 has a Platform API, Lovable has a REST API for creating apps from prompts. But these APIs still require developer credentials. They're designed for developers building on top of the platform, not for arbitrary agents acting on behalf of non-technical users.
The shared assumption, whether through a UI or an authenticated API, is: a known, credentialed entity is driving the process.
The workflow is: human (or developer) authenticates → types prompt → AI generates code → output deploys to authenticated account. The AI accelerates the process. But there's always a credentialed entity in the seat.
This doesn't work when the "user" is an AI agent operating without platform credentials — a GPT App responding to someone in a chat, a Claude instance connected via MCP, or an autonomous agent running a workflow. These agents may not have accounts on the target platform. They interact through APIs, and the human behind the request may not know (or care) which infrastructure the agent chose.
Yes, computer-use agents can click through UIs — but that's a brittle workaround, not infrastructure. It breaks when the UI changes, it's slow, and it doesn't solve the ownership handoff problem.
Putting an uncredentialed AI in the seat sounds simple — just build an API without auth. But it surfaces a set of design tensions that don't exist in human-driven or developer-credentialed tools:
- Lower authentication friction vs. larger attack surface — asking an end user to leave the conversation, sign up for an unfamiliar platform, and come back breaks the flow that makes this valuable in the first place. But removing that gate opens the door to abuse at scale.
- Creator ≠ owner — the entity building the site (agent) is not the entity who should own it (human). There's no standard pattern for when and how to hand off ownership.
- Right-first-time vs. iterative creative process — agents can loop and refine, but each round-trip costs tokens, latency, and context. The economics push toward getting it right in fewer calls.
What agent-native web infrastructure looks like
I've been building in this space (more on that below), and here are a few principles I've landed on — though I'd be curious how others are thinking about them:
API-first, not UI-first. The interface isn't a dashboard — it's an OpenAPI spec or an MCP server. The "documentation" isn't written for developers; it's written for LLMs. This is a weird shift. You start optimising your API descriptions for how a language model interprets them, not how a human reads them. Endpoint naming, parameter descriptions, error messages — all of it gets reconsidered through the lens of "will GPT or Claude understand what to do with this?"
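As a concrete illustration of what "documentation written for LLMs" means, compare two descriptions of the same parameter. This is a hypothetical fragment, not Unulu's actual spec; the parameter name and allowed values are invented for the example.

```python
# Two ways to describe the same OpenAPI parameter. An LLM reading the
# ambiguous version has to guess; the explicit version removes the guess.
# (Hypothetical names and values -- not the platform's actual spec.)

ambiguous = {
    "name": "theme",
    "description": "Site theme.",  # which values? what if omitted?
}

explicit = {
    "name": "theme",
    "description": (
        "Visual style for the generated site. One of: 'minimal', 'bold', "
        "'classic'. Optional; defaults to 'minimal' if omitted. Do not "
        "invent other values: unknown themes are rejected with a 422."
    ),
}
```

The second version costs a few more tokens per spec load, but it converts a silent guess into a deterministic choice — which, as it turns out, is the whole game.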
Ephemeral-to-persistent lifecycle. The agent creates a site instantly — no account required, no sign-up flow. The site is live immediately. But it's ephemeral: it expires after a set period unless the human claims it. This pattern is rare in traditional web infrastructure. It exists here because of a gap in the current ecosystem: there's no standard way for an AI platform to vouch for its user's identity to a third-party service. If ChatGPT could pass a verified signal saying "this request comes from an authenticated user," ownership could be assigned at creation and the claiming step disappears. Until that plumbing exists, ephemeral-to-persistent is the best compromise — the agent gets zero-friction creation, the human gets a path to ownership, and the platform limits its exposure on unclaimed artifacts.
Some prior art exists. Vercel has a "Claim Deployments" flow that lets users claim ownership of AI-generated deploys. Netlify has experimented with build-and-claim patterns. But these are vendor-specific, require an operator account for the initial deploy, and don't compose into a cross-platform standard. The primitives are emerging, but they're not portable yet.
In concrete terms, the API call looks something like this:
```
POST /api/sites
{
  "content": { "name": "Jane Smith", "bio": "Designer in Brooklyn", ... },
  "theme": "minimal"
}

→ 201 Created
{
  "site_url": "https://janesmith.unu.lu",
  "claim_url": "https://unu.lu/claim/ak29x",
  "expires_at": "2026-02-27T12:00:00Z",
  "status": "ephemeral"
}
```
The site is live immediately. It expires in three hours unless claimed. And yes — I'm aware that right now this is closer to a smart onboarding flow than infrastructure. That's fine. It's enough to test whether the model works.
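Behind that response, the lifecycle is a small state machine: ephemeral on creation, claimed if the human uses the token in time, expired otherwise. A minimal sketch, assuming the fixed three-hour window and single claim token from the example above (field names are illustrative, not the actual schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class Site:
    claim_token: str
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    status: str = "ephemeral"            # ephemeral -> claimed | expired
    ttl: timedelta = timedelta(hours=3)  # assumed expiry window

    def expires_at(self) -> datetime:
        return self.created_at + self.ttl

    def claim(self, token: str, now: datetime) -> bool:
        """A human claims the site before it expires; ownership transfers."""
        if (self.status == "ephemeral"
                and token == self.claim_token
                and now < self.expires_at()):
            self.status = "claimed"
            return True
        return False

    def sweep(self, now: datetime) -> None:
        """Background job: expire unclaimed sites past their window."""
        if self.status == "ephemeral" and now >= self.expires_at():
            self.status = "expired"
```

The important property is that every state transition is one-way: a claimed site never reverts to ephemeral, and an expired site can't be claimed, which keeps the platform's exposure on unclaimed artifacts bounded.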
Abuse is not optional
A publish endpoint with no authentication gate is, on its face, a spam cannon.
This is a deliberate design choice. Requiring API keys means requiring a developer integration step — which kills the scenario where an agent discovers and uses the platform on its own. Zero-friction creation for agents means zero-friction creation for bad actors. That's the core tension — and it persists as long as there's no platform-level identity signal between the AI provider and the tools agents use. Until that exists, you can't have one without accepting some of the other.
Here's what's in place today:
- Ephemeral by default. Unclaimed sites expire. This limits the window for abuse and reduces the incentive for persistent spam.
- Rate limiting by fingerprint. Volume patterns, content similarity, and creation frequency are all signals.
- Unlisted until claimed. The site exists at a URL but isn't indexed or discoverable until a human takes ownership. This changes the economics of spam significantly.
- A reporting mechanism on every generated site. At current volume, anything flagged gets manually reviewed.
Here's what's not in place yet: automated content scanning, impersonation detection, proactive moderation. These are necessary — particularly if the no-auth model survives contact with real scale. Right now the volume is small enough that manual review works. That won't last, and I know it.
If you've navigated this tension in other contexts — zero-friction creation vs. abuse prevention — I'd genuinely like to hear how you approached it.
What I built to test this
I've been building a platform called unulu to test whether this model works in practice. It exposes an API and MCP server that lets any AI agent — a GPT App, a Claude MCP integration, any agent framework — create and publish a website on behalf of its user in a single interaction.
The agent sends structured content to the API, Unulu generates and hosts the site, and returns a live URL. The human gets a link to claim and own the site. No account creation upfront. No UI in the loop.
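From the agent's side, the whole interaction reduces to one POST and a response parse. A sketch with the HTTP transport injected as a callable, so the shape of the exchange (illustrative, modeled on the endpoint shown earlier) can be exercised without a network:

```python
from typing import Callable

def create_site(
    post: Callable[[str, dict], dict],
    name: str,
    bio: str,
    theme: str = "minimal",
) -> tuple[str, str]:
    """Build the request an agent would send, and pull out the two
    URLs the human needs: the live site and the claim link."""
    resp = post("/api/sites", {
        "content": {"name": name, "bio": bio},
        "theme": theme,
    })
    return resp["site_url"], resp["claim_url"]

# Stub transport standing in for a real HTTP client:
def fake_post(path: str, body: dict) -> dict:
    assert path == "/api/sites"
    return {
        "site_url": "https://janesmith.unu.lu",
        "claim_url": "https://unu.lu/claim/ak29x",
        "status": "ephemeral",
    }

site, claim = create_site(fake_post, "Jane Smith", "Designer in Brooklyn")
```

No token exchange, no OAuth dance, no account: the only thing the agent has to relay back to the human is the claim link.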
It's early. The scope is deliberately narrow right now — personal bio pages. Not because that's the grand vision, but because bio pages are low-stakes: it's a bio page, not a storefront. This gives us a useful sandbox for testing the harder problems of handoff, trust, agent-to-human delegation and abuse prevention before the stakes get higher.
What I've learned so far
The handoff is half the product. I assumed nearly all the work would be generating good websites from agent input. In reality, I've spent more time on what happens after the site is created — the moment the artifact leaves the conversation and enters the real world. How does the human find out a site was created for them? How do they claim it without friction? What if they're on mobile? What if they don't have an account and don't want one? I've had to innovate in areas I didn't expect to be building in at all.
Building for LLMs rhymes with building for non-technical humans. Before this, I was building website tools for people who struggle with Squarespace — users who will abandon the moment something is unclear. It turns out an LLM and a non-technical user fail in the same way — silently, at the first point of confusion. The LLM won't ask for clarification. If your API parameter name is ambiguous, it guesses. If your error message is vague, it can't recover. The medium shifts from UI design to API spec writing, but the discipline is the same — radical clarity, simplicity, zero assumptions.
There's a missing "middle" in platform authentication. Today I can choose between no auth at all or requiring every end user to authenticate with my platform. What I actually want is simpler: a platform-level signal that tells me "this request is coming from a signed-in user on ChatGPT's side" — without my platform needing to know who they are. A shared secret, not a shared identity. This primitive doesn't exist yet.
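One hypothetical shape for that missing primitive is an HMAC-signed attestation header: the AI platform signs the request body with a secret shared out-of-band, asserting "a signed-in user is behind this" without revealing who. To be clear, no such header exists today; this is a sketch of what the plumbing could look like.

```python
import hashlib
import hmac

# Assumed: a secret provisioned between the AI platform and the tool
# provider ahead of time. Nothing user-identifying ever crosses the wire.
SHARED_SECRET = b"provisioned-out-of-band"

def sign_attestation(body: bytes, secret: bytes = SHARED_SECRET) -> str:
    """AI platform's side: attach proof that an authenticated (but
    unidentified) user is behind this request body."""
    msg = b"user-is-authenticated:" + body
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify_attestation(body: bytes, signature: str,
                       secret: bytes = SHARED_SECRET) -> bool:
    """Receiving platform's side: check the proof in constant time,
    without ever learning who the user is."""
    expected = sign_attestation(body, secret)
    return hmac.compare_digest(expected, signature)
```

A production version would need timestamps and nonces to prevent replay, and a key-distribution story — which is exactly why this wants to be a cross-platform standard rather than a per-vendor hack.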
The agent ecosystem is more fragmented than it looks from the outside. MCP has an official registry and over ten thousand public servers. OpenAI has an app directory inside ChatGPT. The building blocks exist. But authentication between platforms has no middle ground, specs written for one platform don't always port cleanly to another, and discovery is still immature. None of this is fatal — it's just the reality of building at this stage. The infrastructure is being laid while people are already trying to build on top of it.
Open questions I don't have answers to
I'm sharing these because I genuinely don't know the answers and I think they matter:
What happens when agents start choosing platforms? Right now, humans choose the tool chain. But as tool discovery matures — MCP directories, capability-based routing — agents may start selecting platforms on their own. The next customer doesn't care about your landing page — it's reading your API spec instead.
Who pays for unclaimed artifacts? If an agent creates a site and the human never claims it, someone bore the cost of hosting, generation, and moderation. Are publish credits tied to the agent developer, the user, or the platform? The economics of ephemeral infrastructure are different from anything in traditional hosting.
What's the edit loop? Publishing once is relatively straightforward. Updating later is harder. The original builder was an agent in a conversation that's now closed. The owner is a human who may not be technical. Do they ask another agent to edit it? Do they use a UI? Does the original agent retain some relationship with the artifact? The maintenance story is unsolved.
Is "the agent builds it" actually what people want? There's an assumption baked into all of this — that people want AI to create things autonomously on their behalf. Maybe they don't. Maybe most people want AI to help them build, not to build for them. The copilot model might win simply because humans want to feel ownership over the creative process, even when the AI is doing 90% of the work. I'm betting against this, but I might be wrong.
I'd be curious what this community thinks — especially from anyone building agent tooling or thinking about the infrastructure layer for autonomous AI. What am I missing? If this problem is already solved in a way I haven't seen, please point me to it. I'm testing this at unulu.