From vibe coding to production


You opened a chat with Claude Code, Cursor, or Windsurf, described a product idea, and an hour later you had something that ran on localhost:3000. Andrej Karpathy called this style of building "vibe coding": you say what you want, the model writes most of the code, and you steer with prompts rather than keystrokes. The hard part is no longer typing the code. The hard part is everything that happens after the demo works on your laptop. This guide is the playbook for closing that gap. It walks through what production actually requires, why most vibe-coded projects stall before launch, and how the Model Context Protocol (MCP) collapses the deploy step into the same chat where you wrote the app.

What people mean by vibe coding

Vibe coding is building software by describing intent to an AI model and letting the model produce most of the implementation. You write less code yourself and spend more time evaluating what the model gives back, asking for changes, and iterating. The output can be anything from a one-off Python script to a full-stack Laravel or Next.js application. There is no single tool that defines the practice. Claude Code, Cursor, Windsurf, ChatGPT's code interpreter, Codex CLI, and Gemini Code Assist all belong to the same family. What they share is that the unit of work shifts from lines of code to prompts and reviews.

Why deployment is the wall

The build phase is fast because models are good at generating code from natural language. Deployment is slow because it is not a code problem. It is a coordination problem across half a dozen separate concerns: provisioning a database, configuring environment variables, picking a runtime version, getting a domain, terminating SSL, wiring logs, sizing memory and CPU, and rotating secrets when something leaks.

Each of those concerns has its own console, its own quirks, and its own learning curve. A two-day vibe coding sprint can dead-end into two weeks of YAML, terminal commands, and "why won't this build" loops. Most projects never make it past that wall. They stay on localhost, get a screenshot in a portfolio, and never go live. The promise of vibe coding stays half-met until the deploy step gets the same compression treatment that the build step already received.

What you actually need for production

The minimum viable production stack for a typical web app looks like this:

  • A managed database. Self-hosting Postgres or MySQL on the same server that runs your app is the fastest way to lose data when something crashes.
  • Environment variables and secrets. API keys, database URLs, signing keys. They cannot live in your repo.
  • A real domain. Customers do not type xxx.preview.ploi.it. You need to attach myapp.com and have it resolve.
  • Automatic SSL. Browsers refuse to load anything without HTTPS. Manual certificate management is a separate full-time job.
  • Log access. When something breaks at 2am, you need to read what the application said before it died.
  • Scaling headroom. The app might handle 10 users on day one and 1000 on day three. The infrastructure should not need a rewrite to keep up.
  • Secret rotation. Tokens leak. Keys get committed by accident. You need a way to swap them without a redeploy.

A vibe-coded app is not really shipped until all seven are in place. Skipping any of them is a clock running down to an outage.

The five-minute path with MCP

The Model Context Protocol gives your AI tool a structured way to call external services as tools. Once Ploi Cloud is connected over MCP, the entire production checklist above is reachable from the same chat where you wrote the app. The flow looks like this.

You finish the build, push your code to GitHub, and type into Claude Code (or Cursor, or Windsurf):

Deploy my Laravel app from github.com/myorg/notebook to Ploi Cloud. Add Postgres and Redis. Use the domain notebook.example.com.

Your assistant calls applications_store to create the application, detecting Laravel from your composer.json. It calls v1_applications_services_store twice, once for Postgres and once for Redis. The platform auto-injects the database URL and Redis credentials into your application's environment, so you do not edit a .env file. It calls applications_domains_store for the custom domain, which kicks off automatic SSL provisioning. It calls applications_deploy and the build runs.
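As a mental model, the sequence above can be sketched as an ordered list of tool calls. The tool names are the ones from this guide; the argument fields are illustrative assumptions, not the platform's actual schema:

```python
# Hypothetical sketch of the tool-call sequence an assistant issues over MCP.
# Tool names match the guide; payload shapes are assumptions for illustration.

def plan_deploy(repo: str, framework: str, services: list[str], domain: str) -> list[dict]:
    """Return the ordered tool calls for a first deploy."""
    calls = [{"tool": "applications_store",
              "args": {"repository": repo, "framework": framework}}]
    for service in services:  # one call per managed service (e.g. Postgres, Redis)
        calls.append({"tool": "v1_applications_services_store",
                      "args": {"type": service}})
    calls.append({"tool": "applications_domains_store",
                  "args": {"domain": domain}})  # kicks off SSL provisioning
    calls.append({"tool": "applications_deploy", "args": {}})
    return calls

plan = plan_deploy("github.com/myorg/notebook", "laravel",
                   ["postgresql", "redis"], "notebook.example.com")
print([c["tool"] for c in plan])
```

The ordering matters: the application must exist before services attach to it, and the domain should be in place before the first deploy so the certificate is ready when traffic arrives.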

The first time it does any of this, your browser opens for OAuth. You sign in, approve, and the assistant has a token. Subsequent calls run silently. Two minutes after the deploy starts, you have a live URL and a working application with a real database, real cache, and a real certificate. The five-minute claim is conservative. Most apps deploy in two.

Frameworks supported out of the box

Ploi Cloud detects the framework from the files you push and configures the build, runtime, and start command for you:

  • Laravel: detected from composer.json containing laravel/framework. Builds with composer install plus npm run build for assets.
  • Statamic: detected from composer.json containing statamic/cms. Same build path as Laravel.
  • Craft CMS: detected from composer.json containing craftcms/cms. Composer-only build.
  • WordPress: detected from wp-config.php. Pre-built, mounts a persistent volume on wp-content/uploads.
  • Generic PHP: detected from any composer.json without a framework signature. Composer-only build.
  • Node.js: detected from package.json. Builds with npm install && npm run build.
  • Next.js: a Node.js project with the next dependency. Same build, with the Next.js production server as the start command.

You can override any of these by editing the build configuration after the application is created, and you can ask your AI assistant to do the override for you.
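The detection rules above amount to a short decision tree. This is a simplified sketch of that logic, not the platform's actual detector; the check order (WordPress before generic PHP, Next.js before generic Node.js) is an assumption that follows from the table:

```python
import json

def detect_framework(files: dict[str, str]) -> str:
    """Guess the framework from pushed files.

    `files` maps filename -> raw file contents. Simplified sketch of the
    detection table in this guide; check order is an assumption.
    """
    if "wp-config.php" in files:
        return "wordpress"
    if "composer.json" in files:
        deps = json.loads(files["composer.json"]).get("require", {})
        if "laravel/framework" in deps:
            return "laravel"
        if "statamic/cms" in deps:
            return "statamic"
        if "craftcms/cms" in deps:
            return "craftcms"
        return "php"  # composer.json without a framework signature
    if "package.json" in files:
        deps = json.loads(files["package.json"]).get("dependencies", {})
        return "nextjs" if "next" in deps else "nodejs"
    raise ValueError("no recognizable project files")

print(detect_framework({"composer.json": '{"require": {"laravel/framework": "^11.0"}}'}))  # → laravel
```

Note that the more specific checks run first: a Next.js app also has a package.json, and a Statamic site also has a composer.json, so the generic matches are fallbacks.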

When auto-fix saves you

Deploys fail. The interesting question is what happens after they fail. With the pc-deploy skill for Claude Code installed, the assistant reads the deployment logs, identifies the cause, applies a fix, and retries. Here are three examples that come up often.

A Laravel app boots, runs migrations, then gets killed mid-request because the worker process tipped over the 1Gi memory limit. The skill spots OOMKilled in the deployment status, calls applications_resources_update to bump memory to 2Gi, and redeploys. If the new size still falls over, it bumps to 4Gi.

A PHP app fails with Class "Imagick" not found during the first request. The skill recognizes the missing extension, calls applications_php-config_update to add imagick to the PHP extensions list, and redeploys. The image rebuilds with the extension installed.

A Next.js build fails because npm run build is missing from package.json. The skill calls applications_build-config_update to set the right build command, commits the fix in your application's build config, and retries the deploy.

The skill retries up to five times, escalating between attempts. Most production deploys succeed on the first or second try.
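The retry-with-escalation behavior can be sketched as a loop. This is a simplification that only models the memory-bump case; the real skill also fixes PHP extensions and build commands, and the deploy callback here stands in for the actual applications_deploy and applications_resources_update tool calls:

```python
MEMORY_STEPS = ["1Gi", "2Gi", "4Gi"]  # escalation ladder from the examples above

def deploy_with_autofix(deploy, max_attempts=5):
    """Retry a failing deploy, bumping memory on OOMKilled.

    `deploy(memory)` is a stand-in callback returning (ok, failure_reason);
    in reality the skill would call applications_resources_update and redeploy.
    """
    step = 0  # index into MEMORY_STEPS
    for attempt in range(1, max_attempts + 1):
        ok, reason = deploy(MEMORY_STEPS[step])
        if ok:
            return attempt, MEMORY_STEPS[step]
        if reason == "OOMKilled" and step + 1 < len(MEMORY_STEPS):
            step += 1  # escalate memory before the next attempt
    raise RuntimeError("deploy failed after retries")

# Demo: a deploy that OOMs at 1Gi but fits in 2Gi.
attempt, memory = deploy_with_autofix(lambda mem: (mem == "2Gi", "OOMKilled"))
print(attempt, memory)  # → 2 2Gi
```

The key property is that each retry changes something: a loop that replays the identical deploy five times would just fail five times.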

What it costs

Ploi Cloud is usage-based. You pay for the resources your application actually consumes (CPU minutes, memory, storage) and the managed services you attach (database, cache, queue). There is no flat fee per app and no charge for sitting idle. Small apps cost cents per day. Bigger apps cost what bigger apps cost. See the pricing page for current rates and the calculator.

Pick your tool

Claude Code, Cursor, and Windsurf all connect to the same MCP endpoint, expose the same tool set, and produce the same results. Pick the one closest to your existing workflow.

Skip the boilerplate with the deploy skill

Once your AI tool is connected, the pc-deploy skill compresses the typical first-deploy conversation into a single command. Type /pc-deploy from your project directory and the skill detects the framework, creates the app, attaches services, configures the build, deploys, monitors, and auto-fixes. It is the closest thing to a one-key shortcut from "the demo works" to "the URL is live."

Next steps