Getting Started
Installation
Homebrew (recommended)
$ brew install slokam-ai/tap/localgcp
Docker
$ docker run --rm \
-p 4443:4443 -p 8085:8085 -p 8086:8086 \
-p 8088:8088 -p 8089:8089 -p 8090:8090 \
-p 8091:8091 -p 8092:8092 -p 8093:8093 \
ghcr.io/slokam-ai/localgcp
Pre-built binary
Download from GitHub Releases for Linux, macOS, or Windows.
From source
$ go install github.com/slokam-ai/localgcp/cmd/localgcp@latest
Starting the emulator
Start all native services in the foreground:
$ localgcp up
Adding orchestrated services
Four additional services run as Docker containers and are enabled with the --services flag. They require Docker (Docker Desktop, OrbStack, or Colima).
# Add Spanner and Bigtable
$ localgcp up --services=spanner,bigtable

# Add all four orchestrated services
$ localgcp up --services=spanner,bigtable,cloudsql,memorystore
Orchestrated services use lazy startup: the port binds instantly, but the Docker container only pulls and starts when your code first connects. If you never use Spanner, the container never starts.
With all available flags:
$ localgcp up \
    --data-dir=./.localgcp \
    --port-gcs=4443 \
    --port-pubsub=8085 \
    --port-secretmanager=8086 \
    --port-firestore=8088 \
    --port-cloudtasks=8089 \
    --port-vertexai=8090 \
    --port-kms=8091 \
    --port-logging=8092 \
    --port-cloudrun=8093 \
    --services=spanner,bigtable,cloudsql,memorystore \
    --ollama-host=http://localhost:11434 \
    --vertex-model-map="gemini-2.5-flash=gemma3" \
    --quiet
Press Ctrl+C to stop. By default, data lives in memory and vanishes when the process stops.
Connecting your app
Set the emulator host environment variables with a single command:
$ eval $(localgcp env)
$ go run ./your-app
This sets STORAGE_EMULATOR_HOST, PUBSUB_EMULATOR_HOST, and FIRESTORE_EMULATOR_HOST. Your GCP client libraries connect to localhost automatically.
Persistent storage
By default, all data lives in memory. To persist data across restarts, use the --data-dir flag:
$ localgcp up --data-dir=./.localgcp
Data is stored as JSON files in the specified directory. This is useful for development workflows where you want to keep test data between sessions.
Port configuration
| Service | Protocol | Default Port | Flag | Env Var |
|---|---|---|---|---|
| Cloud Storage | REST | 4443 | --port-gcs | STORAGE_EMULATOR_HOST |
| Pub/Sub | gRPC | 8085 | --port-pubsub | PUBSUB_EMULATOR_HOST |
| Secret Manager | gRPC | 8086 | --port-secretmanager | (manual config) |
| Firestore | gRPC | 8088 | --port-firestore | FIRESTORE_EMULATOR_HOST |
| Cloud Tasks | gRPC | 8089 | --port-cloudtasks | (manual config) |
| Vertex AI | REST | 8090 | --port-vertexai | (manual config) |
| Cloud KMS | gRPC | 8091 | --port-kms | (manual config) |
| Cloud Logging | gRPC | 8092 | --port-logging | (manual config) |
| Cloud Run | gRPC | 8093 | --port-cloudrun | (manual config) |
| Orchestrated (Docker required, opt-in via --services) | | | | |
| Spanner | gRPC | 9010 | --port-spanner | SPANNER_EMULATOR_HOST |
| Bigtable | gRPC | 9094 | --port-bigtable | BIGTABLE_EMULATOR_HOST |
| Cloud SQL | TCP | 5432 | --port-cloudsql | (standard Postgres) |
| Memorystore | TCP | 6379 | --port-memorystore | (standard Redis) |
Quick verification
After starting localgcp, run the built-in smoketest to verify all services are working:
$ localgcp up &
$ eval $(localgcp env)

# Run the smoketest binary
$ ./smoketest
Or write a quick Go program to verify Cloud Storage is responding:
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()

	// STORAGE_EMULATOR_HOST is set by `localgcp env`
	client, err := storage.NewClient(ctx)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Create a test bucket
	if err := client.Bucket("test-bucket").Create(ctx, "my-project", nil); err != nil {
		panic(err)
	}
	fmt.Println("localgcp is working!")
}