localgcp

{ the unified GCP emulator }

One binary, thirteen services, zero cloud bills. Nine native Go services start in under 100ms. Four more via Docker, on demand.

Spanner, Bigtable, Vertex AI, and 10 more. Native services need zero dependencies. Orchestrated services start lazily via Docker on first connection. One command runs everything.
$ brew install slokam-ai/tap/localgcp
$ localgcp up
Starting localgcp...
  Cloud Storage    :4443
  Pub/Sub          :8085
  Secret Manager   :8086
  Firestore        :8088
  Cloud Tasks      :8089
  Vertex AI        :8090  ollama: gemma3
localgcp is ready. Press Ctrl+C to stop.

Thirteen services. One command.

Your GCP client libraries already support emulator host env vars. Point them at localhost and your existing code works with zero changes.

AI
Vertex AI
REST · :8090

generateContent and embeddings via Ollama. Proxy to Gemma, Llama, or any local model. Stub mode for CI/CD.

Cloud Storage
REST · :4443

Bucket and object CRUD. Simple, multipart, and resumable uploads. Signed URLs. JSON and XML API paths.

Pub/Sub
gRPC · :8085

Topics, subscriptions, publish, pull, StreamingPull. Push subscriptions and dead letter topics.

Firestore
gRPC · :8088

Document CRUD, real-time listeners (onSnapshot), queries with in/array-contains. Transactions.

Secret Manager
gRPC · :8086

Secrets with versioning. Enable, disable, destroy states. Access by version number or "latest" alias.

Cloud Tasks
gRPC · :8089

Queue and task CRUD. HTTP target dispatch. Scheduling, retry with exponential backoff.

Cloud KMS
gRPC · :8091

Encrypt/decrypt, asymmetric sign, HMAC. KeyRing and CryptoKey management. In-memory keys.

Cloud Logging
gRPC · :8092

Write and query log entries. Filter by severity, text payload, log name. Bounded in-memory store.

Cloud Run
gRPC · :8093

Service CRUD with immediate operations. Auto-generated URIs. No polling required.

Docker
Spanner
gRPC · :9010

Official Google emulator via Docker. Lazy start on first connection. Full API.

Docker
Bigtable
gRPC · :9094

Official Google emulator via Docker. Lazy start on first connection. Full API.

Docker
Cloud SQL
TCP · :5432

PostgreSQL 16 via Docker. Standard connection. Works with any Postgres client.

Docker
Memorystore
TCP · :6379

Redis 7 via Docker. Standard connection. Works with any Redis client.

More coming

IAM, Datastore, and more are on the roadmap. See the roadmap for details.


Three commands to local GCP

01

Install

Single binary. No JVM, no runtime dependencies, and no Docker unless you use the four orchestrated services.

$ brew install slokam-ai/tap/localgcp
# or: go install github.com/slokam-ai/localgcp/cmd/localgcp@latest
02

Start

All native services in the foreground. For Vertex AI with local models, start Ollama first.

$ ollama pull gemma3   # optional: for Vertex AI
$ localgcp up --vertex-model-map="gemini-2.5-flash=gemma3"
03

Connect

Sets emulator host env vars. Your GCP client libraries do the rest.

$ eval $(localgcp env)
$ go run ./your-app

Before and after

Without localgcp

  • Spin up real GCP dev projects
  • Pay cloud bills for test environments
  • Burn Vertex AI credits on every prompt test
  • Manage individual emulators per service
  • No Cloud Storage or Cloud Tasks emulator from Google
  • Leak API keys in CI/CD for AI tests
  • Requires internet connection

With localgcp

  • One binary, starts in milliseconds
  • Zero cloud bills for dev and CI
  • Run Vertex AI against local Gemma/Llama
  • All services in one process
  • Every service included, even the ones Google never shipped
  • No API keys needed, ever
  • Works fully offline
AWS has LocalStack. GCP had nothing equivalent. Fragmented emulators, inconsistent APIs, missing services. Until now.

Start building locally

Thirteen services. 170+ tests. MIT licensed. Works with Go, Python, Java, Node.js.

View on GitHub or: read the docs · download a binary