localgcp

{ the unified GCP emulator }

One binary, nine services, zero cloud bills. Develop against GCP locally, including Vertex AI with real local inference.

New: Vertex AI emulator. Your google.golang.org/genai code talks to Gemma, Llama, or any Ollama model. Same SDK, zero code changes, no API keys.
$ brew install slokam-ai/tap/localgcp
$ localgcp up
Starting localgcp...
  Cloud Storage    :4443
  Pub/Sub          :8085
  Secret Manager   :8086
  Firestore        :8088
  Cloud Tasks      :8089
  Vertex AI        :8090   ollama: gemma3
localgcp is ready. Press Ctrl+C to stop.

Nine services. One process.

Your GCP client libraries already support emulator host env vars. Point them at localhost and your existing code works with zero changes.

Vertex AI
REST · :8090

generateContent and embeddings via Ollama. Proxy to Gemma, Llama, or any local model. Stub mode for CI/CD.

Cloud Storage
REST · :4443

Bucket and object CRUD. Simple, multipart, and resumable uploads. Signed URLs. JSON and XML API paths.

Pub/Sub
gRPC · :8085

Topics, subscriptions, publish, pull, StreamingPull. Push subscriptions and dead letter topics.

Firestore
gRPC · :8088

Document CRUD, real-time listeners (onSnapshot), queries with in/array-contains. Transactions.

Secret Manager
gRPC · :8086

Secrets with versioning. Enable, disable, destroy states. Access by version number or "latest" alias.

Cloud Tasks
gRPC · :8089

Queue and task CRUD. HTTP target dispatch. Scheduling, retry with exponential backoff.

Cloud KMS
gRPC · :8091

Encrypt/decrypt, asymmetric sign, HMAC. KeyRing and CryptoKey management. In-memory keys.

Cloud Logging
gRPC · :8092

Write and query log entries. Filter by severity, text payload, log name. Bounded in-memory store.

Cloud Run
gRPC · :8093

Service CRUD with immediate operations. Auto-generated URIs. No polling required.


Three commands to local GCP

01

Install

Single binary. No Docker, no JVM, no runtime dependencies.

$ brew install slokam-ai/tap/localgcp
# or: go install github.com/slokam-ai/localgcp/cmd/localgcp@latest
02

Start

All nine services in the foreground. For Vertex AI with local models, start Ollama first.

$ ollama pull gemma3    # optional: for Vertex AI
$ localgcp up --vertex-model-map="gemini-2.5-flash=gemma3"
03

Connect

One command sets the emulator host env vars. Your GCP client libraries do the rest.

$ eval $(localgcp env)
$ go run ./your-app

Before and after

Without localgcp

  • Spin up real GCP dev projects
  • Pay cloud bills for test environments
  • Burn Vertex AI credits on every prompt test
  • Manage individual emulators per service
  • No Cloud Storage or Cloud Tasks emulator from Google
  • Leak API keys in CI/CD for AI tests
  • Requires internet connection

With localgcp

  • One binary, starts in milliseconds
  • Zero cloud bills for dev and CI
  • Run Vertex AI against local Gemma/Llama
  • All services in one process
  • Every service included, even the missing ones
  • No API keys needed, ever
  • Works fully offline
AWS has LocalStack. GCP had nothing equivalent. Fragmented emulators, inconsistent APIs, missing services. Until now.

Start building locally

Nine services. 160+ tests. MIT licensed. Works with Go, Python, Java, Node.js.

View on GitHub or: read the docs · download a binary