It is assumed that you have a working Python environment, a Google Cloud account, and the Cloud SDK (`gcloud`) installed and configured.
- Install dependencies using virtualenv:

  ```bash
  virtualenv -p python3 env
  source env/bin/activate
  pip install -r requirements.txt
  ```
- Test running the code (optional):

  ```bash
  # Run the server:
  python greeter_server.py

  # Open another command line tab and enter the virtual environment:
  source env/bin/activate

  # In the new command line tab, run the client:
  python greeter_client.py
  ```
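
  For reference, this is roughly what `greeter_client.py` does when run locally. The sketch below follows the standard gRPC `helloworld` example, so the details in this repository's client may differ slightly:

  ```python
  # Illustrative sketch of a local greeter client (based on the standard
  # gRPC helloworld example, not necessarily this repo's exact code).
  import grpc

  import helloworld_pb2
  import helloworld_pb2_grpc


  def run():
      # Connect to the server started in the previous command.
      with grpc.insecure_channel('localhost:50051') as channel:
          stub = helloworld_pb2_grpc.GreeterStub(channel)
          response = stub.SayHello(helloworld_pb2.HelloRequest(name='world'))
      print('Greeter client received: ' + response.message)


  if __name__ == '__main__':
      run()
  ```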
- The gRPC service files have already been generated. If you change the proto, or just wish to regenerate these files, run:

  ```bash
  python -m grpc_tools.protoc \
      --include_imports \
      --include_source_info \
      --proto_path=protos \
      --python_out=. \
      --grpc_python_out=. \
      --descriptor_set_out=api_descriptor.pb \
      helloworld.proto
  ```
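
  The generated `helloworld_pb2` and `helloworld_pb2_grpc` modules are what the server and client import. As an illustration only (a sketch based on the standard gRPC helloworld example, not necessarily this repository's exact code), a server built on them looks roughly like this:

  ```python
  # Illustrative sketch of a Greeter server using the generated modules.
  from concurrent import futures

  import grpc

  import helloworld_pb2
  import helloworld_pb2_grpc


  class Greeter(helloworld_pb2_grpc.GreeterServicer):
      def SayHello(self, request, context):
          # Build the reply from the incoming request's name field.
          return helloworld_pb2.HelloReply(message='Hello, %s!' % request.name)


  def serve():
      server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
      helloworld_pb2_grpc.add_GreeterServicer_to_server(Greeter(), server)
      # The ESP proxy used later points at the backend on port 50051.
      server.add_insecure_port('[::]:50051')
      server.start()
      server.wait_for_termination()


  if __name__ == '__main__':
      serve()
  ```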
- Edit `api_config.yaml`. Replace `MY_PROJECT_ID` with your project ID.
- Deploy your service config to Service Management:

  ```bash
  gcloud endpoints services deploy api_descriptor.pb api_config.yaml

  # Set your project ID as a variable to make commands easier:
  GCLOUD_PROJECT=<Your Project ID>
  ```
- Also get an API key from the Console's API Manager for use in the client later ([Get API Key](https://console.cloud.google.com/apis/credentials)).
- Enable the Cloud Build API:

  ```bash
  gcloud services enable cloudbuild.googleapis.com
  ```
- Build a Docker image for your gRPC server, and store it in your Registry:

  ```bash
  gcloud builds submit --tag gcr.io/${GCLOUD_PROJECT}/python-grpc-hello:1.0 .
  ```
- Either deploy to GCE (below) or GKE (further down).
- Enable the Compute Engine API:

  ```bash
  gcloud services enable compute-component.googleapis.com
  ```
- Create your instance and ssh in:

  ```bash
  gcloud compute instances create grpc-host --image-family gci-stable --image-project google-containers --tags=http-server
  gcloud compute ssh grpc-host
  ```
- Set some variables to make commands easier:

  ```bash
  GCLOUD_PROJECT=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")
  SERVICE_NAME=hellogrpc.endpoints.${GCLOUD_PROJECT}.cloud.goog
  ```
- Pull your credentials to access Container Registry, and run your gRPC server container:

  ```bash
  /usr/share/google/dockercfg_update.sh
  docker run --detach --name=grpc-hello gcr.io/${GCLOUD_PROJECT}/python-grpc-hello:1.0
  ```
- Run the Endpoints proxy:

  ```bash
  docker run --detach --name=esp \
      --publish=80:9000 \
      --link=grpc-hello:grpc-hello \
      gcr.io/endpoints-release/endpoints-runtime:1 \
      --service=${SERVICE_NAME} \
      --rollout_strategy=managed \
      --http2_port=9000 \
      --backend=grpc://grpc-hello:50051
  ```
- Back on your local machine, get the external IP of your GCE instance:

  ```bash
  gcloud compute instances list
  ```
- Run the client:

  ```bash
  python greeter_client.py --host=<IP of GCE Instance>:80 --api_key=<API Key from Console>
  ```
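
  Under the hood, the client has to present the API key to the Endpoints proxy on each call; for gRPC this is typically attached as `x-api-key` metadata on the RPC. A minimal sketch, assuming the standard helloworld stubs (this repository's client may differ in details):

  ```python
  # Illustrative: call the Greeter service through ESP, passing the API key
  # as x-api-key metadata on the RPC.
  import grpc

  import helloworld_pb2
  import helloworld_pb2_grpc


  def greet(host, api_key):
      channel = grpc.insecure_channel(host)  # e.g. '<IP of GCE Instance>:80'
      stub = helloworld_pb2_grpc.GreeterStub(channel)
      metadata = [('x-api-key', api_key)]
      response = stub.SayHello(
          helloworld_pb2.HelloRequest(name='world'), metadata=metadata)
      print('Greeter client received: ' + response.message)
  ```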
- Cleanup:

  ```bash
  gcloud compute instances delete grpc-host
  ```
- Create a cluster. You can specify a different zone than us-central1-a if you want:

  ```bash
  gcloud container clusters create my-cluster --zone=us-central1-a
  ```
- Edit `deployment.yaml`. Replace `SERVICE_NAME` and `GCLOUD_PROJECT` with your values: `SERVICE_NAME` is `hellogrpc.endpoints.GCLOUD_PROJECT.cloud.goog`, with `GCLOUD_PROJECT` replaced by your project ID.
- Deploy to GKE:

  ```bash
  kubectl create -f ./deployment.yaml
  ```
- Get the IP of the load balancer; re-run this command until you see an external IP:

  ```bash
  kubectl get svc grpc-hello
  ```
- Run the client:

  ```bash
  python greeter_client.py --host=<IP of GKE LoadBalancer>:80 --api_key=<API Key from Console>
  ```
- Cleanup:

  ```bash
  gcloud container clusters delete my-cluster --zone=us-central1-a
  ```