localgcp
One binary, thirteen services, zero cloud bills. Nine native Go services start in under 100ms. Four more via Docker, on demand.
Thirteen services. One command.
Your GCP client libraries already support emulator host env vars. Point them at localhost and your existing code works with zero changes.
generateContent and embeddings via Ollama. Proxy to Gemma, Llama, or any local model. Stub mode for CI/CD.
Bucket and object CRUD. Simple, multipart, and resumable uploads. Signed URLs. JSON and XML API paths.
Topics, subscriptions, publish, pull, StreamingPull. Push subscriptions and dead letter topics.
Document CRUD, real-time listeners (onSnapshot), queries with in/array-contains. Transactions.
Secrets with versioning. Enable, disable, destroy states. Access by version number or "latest" alias.
Queue and task CRUD. HTTP target dispatch. Scheduling, retry with exponential backoff.
Encrypt/decrypt, asymmetric sign, HMAC. KeyRing and CryptoKey management. In-memory keys.
Write and query log entries. Filter by severity, text payload, log name. Bounded in-memory store.
Service CRUD with immediate operations. Auto-generated URIs. No polling required.
Official Google emulator via Docker. Lazy start on first connection. Full API.
Official Google emulator via Docker. Lazy start on first connection. Full API.
PostgreSQL 16 via Docker. Standard connection. Works with any Postgres client.
Redis 7 via Docker. Standard connection. Works with any Redis client.
IAM, Datastore, and more are on the roadmap. See the roadmap.
Three commands to local GCP
Install
Single binary. No JVM, no runtime dependencies. Docker is needed only for the four Docker-backed services.
$ brew install slokam-ai/tap/localgcp
# or: go install github.com/slokam-ai/localgcp/cmd/localgcp@latest
Start
All native services in the foreground. For Vertex AI with local models, start Ollama first.
$ ollama pull gemma3 # optional: for Vertex AI
$ localgcp up --vertex-model-map="gemini-2.5-flash=gemma3"
Connect
Sets emulator host env vars. Your GCP client libraries do the rest.
$ eval $(localgcp env)
$ go run ./your-app
Before and after
Without localgcp
- Spin up real GCP dev projects
- Pay cloud bills for test environments
- Burn Vertex AI credits on every prompt test
- Manage individual emulators per service
- No Cloud Storage or Cloud Tasks emulator from Google
- Leak API keys in CI/CD for AI tests
- Depend on an internet connection
With localgcp
- One binary, starts in milliseconds
- Zero cloud bills for dev and CI
- Run Vertex AI against local Gemma/Llama
- All services in one process
- Every service included, even the missing ones
- No API keys needed, ever
- Works fully offline
Start building locally
Thirteen services. 170+ tests. MIT licensed. Works with Go, Python, Java, Node.js.
View on GitHub · read the docs · download a binary