diff --git a/README.md b/README.md
index 5a7750a9..ff03f128 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # Kip, the Kubernetes Cloud Instance Provider

-Kip is a [Virtual Kubelet](https://github.com/virtual-kubelet/virtual-kubelet) provider that allows a Kubernetes cluster to transparently launch pods onto their own cloud instances. Kip's virtual-kubelet pod is run on a cluster and will create a virtual Kubernetes node in the cluster. When a pod is scheduled onto the Virtual Kubelet, Kip starts a right-sized cloud instance for the pod’s workload and dispatches the pod onto the instance. When the pod is finished running, the cloud instance is terminated. We call these cloud instances “cells”.
+Kip is a [Virtual Kubelet](https://github.com/virtual-kubelet/virtual-kubelet) provider that allows a Kubernetes cluster to transparently launch pods onto their own cloud instances. The kip pod runs on a cluster and creates a virtual Kubernetes node in the cluster. When a pod is scheduled onto the Virtual Kubelet, Kip starts a right-sized cloud instance for the pod’s workload and dispatches the pod onto the instance. When the pod is finished running, the cloud instance is terminated. We call these cloud instances “cells”.

 When workloads run on Kip, your cluster size naturally scales with the cluster workload, pods are strongly isolated from each other and the user is freed from managing worker nodes and strategically packing pods onto nodes. This results in lower cloud costs, improved security and simpler operational overhead.

@@ -42,7 +42,7 @@ terraform apply -var-file myenv.tfvars

 ### Installation Option 2: Using an Existing Cluster

-To deploy Kip into an existing cluster, you'll need to setup cloud credentials that allow the Kip provider to manipulate cloud instances, security groups and other cloud resources. Once credentials are setup, apply [deploy/virtual-kubelet.yaml](deploy/virtual-kubelet.yaml) to create the necessary kubernetes resources to support and run the provider.
+To deploy Kip into an existing cluster, you'll need to set up cloud credentials that allow the Kip provider to manipulate cloud instances, security groups and other cloud resources. Once credentials are set up, apply [deploy/kip.yaml](deploy/kip.yaml) to create the necessary Kubernetes resources to support and run the provider.

 **Step 1: Credentials**

@@ -50,19 +50,19 @@ In AWS, Kip can either use API keys supplied in the Kip provider configuration f

 **Credentials Option 1 - Configuring AWS API keys:**

-Open [deploy/virtual-kubelet.yaml](deploy/virtual-kubelet.yaml) in an editor, find the virtual-kubelet-config ConfigMap and fill in the values for `accessKeyID` and `secretAccessKey` under `data.provider.yaml.cloud.aws`.
+Open [deploy/kip.yaml](deploy/kip.yaml) in an editor, find the kip-config ConfigMap and fill in the values for `accessKeyID` and `secretAccessKey` under `data.provider.yaml.cloud.aws`.
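+
+For illustration, the filled-in section of the kip-config ConfigMap might look like the sketch below. This is only a sketch: the exact layout of deploy/kip.yaml may differ, the `region` value is just an example, and the key values shown are AWS's documented placeholder credentials.
+
+    apiVersion: v1
+    kind: ConfigMap
+    metadata:
+      name: kip-config
+      namespace: kube-system
+    data:
+      provider.yaml: |
+        cloud:
+          aws:
+            region: us-east-1                                        # example region
+            accessKeyID: AKIAIOSFODNN7EXAMPLE                        # your access key ID
+            secretAccessKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY # your secret access key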

 **Credentials Option 2 - Instance Profile Credentials:**

 In AWS, Kip can use credentials supplied by the [instance profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html) attached to the node the pod is dispatched to. To use an instance profile, create an IAM policy with the [minimum Kip permissions](docs/kip-iam-permissions.md) then apply the instance profile to the node that will run the Kip provider pod. The Kip pod must run on the cloud instance that the instance profile is attached to.

-**Step 2: Apply virtual-kubelet.yaml**
+**Step 2: Apply kip.yaml**

-The resources in [deploy/manifests/virtual-kubelet](deploy/manifests/virtual-kubelet) create ServiceAccounts, Roles and a virtual-kubelet Deployment to run the provider. [Kip is not stateless](docs/state.md), the manifest will also create a PersistentVolumeClaim to store the provider data.
+The resources in [deploy/manifests/kip](deploy/manifests/kip) create ServiceAccounts, Roles and a kip Deployment to run the provider. Since [Kip is not stateless](docs/state.md), the manifest also creates a PersistentVolumeClaim to store the provider data.

-    kubectl apply -k deploy/manifests/virtual-kubelet/base
+    kubectl apply -k deploy/manifests/kip/base

-After applying, you should see a new virtual-kubelet pod in the kube-system namespace and a new node named virtual-kubelet in the cluster.
+After applying, you should see a new kip pod in the kube-system namespace and a new node named virtual-kubelet in the cluster.

 ## Running Pods on Virtual Kubelet

@@ -81,9 +81,9 @@ If you used the provided terraform config for creating your cluster, you can rem

     terraform destroy -var-file myenv.tfvars

-If you deployed Kip in an existing cluster, make sure that you first remove all the pods and deployments that have been created by Kip. Then remove the virtual-kubelet deployment via:
+If you deployed Kip in an existing cluster, make sure that you first remove all pods and deployments created by Kip. Then remove the kip deployment via:

-    kubectl delete -n kube-system deployment virtual-kubelet
+    kubectl delete -n kube-system deployment kip

 ## Current Status
diff --git a/deploy/terraform-gcp/README.md b/deploy/terraform-gcp/README.md
index 99014ec2..52c1b1c9 100644
--- a/deploy/terraform-gcp/README.md
+++ b/deploy/terraform-gcp/README.md
@@ -39,5 +39,5 @@ If you decide to enable the taint on the virtual node (via removing the `--disab
   nodeSelector:
     type: kip
   tolerations:
-  - key: kip.io/provider
+  - key: virtual-kubelet.io/provider
     operator: Exists
diff --git a/deploy/terraform/README.md b/deploy/terraform/README.md
index 0e740d18..25593c8e 100644
--- a/deploy/terraform/README.md
+++ b/deploy/terraform/README.md
@@ -50,5 +50,5 @@ If you decide to enable the taint on the virtual node (via removing the `--disab
   nodeSelector:
     type: kip
   tolerations:
-  - key: kip.io/provider
+  - key: virtual-kubelet.io/provider
     operator: Exists
diff --git a/docs/cells.md b/docs/cells.md
index 9b1016b5..5fa0378f 100644
--- a/docs/cells.md
+++ b/docs/cells.md
@@ -13,7 +13,7 @@ We maintain images that are optimized for cells and come with our tools pre-inst
 Update your provider config:

     cells:
-      cloudInitFile: /etc/virtual-kubelet/cloudinit.yaml
+      cloudInitFile: /etc/kip/cloudinit.yaml
       bootImageSpec:
         owners: "099720109477"
         filters: name=ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*
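+
+To sanity-check which images a `bootImageSpec` like the one above will match, a query along these lines with the AWS CLI can help (a sketch; it assumes the CLI is configured with credentials for the target account and region):
+
+    aws ec2 describe-images \
+        --owners 099720109477 \
+        --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*" \
+        --query "Images[].Name"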
@@ -32,4 +32,4 @@ Add a cloud-init section to the provider configmap, e.g. for ubuntu or debian:
 Finally, restart the provider:

-    $ kubectl delete pod -n kube-system -l app=virtual-kubelet
+    $ kubectl delete pod -n kube-system -l app=kip
diff --git a/docs/cloud-init.md b/docs/cloud-init.md
index 22b96e22..87426229 100644
--- a/docs/cloud-init.md
+++ b/docs/cloud-init.md
@@ -12,10 +12,10 @@ Kip's cloud-init system provides the following initialization functions:

 ### Cloud-init Example

-In provider.yaml specify the location for the cloud-init file in the virtual-kubelet pod:
+In provider.yaml, specify the location for the cloud-init file in the kip pod:

 ```yaml
 cells:
-  cloudInitFile: /etc/virtual-kubelet/cloudinit.yaml
+  cloudInitFile: /etc/kip/cloudinit.yaml
 ```

 cloudinit.yaml contents:
diff --git a/docs/networking.md b/docs/networking.md
index d9a0d925..3e267e70 100644
--- a/docs/networking.md
+++ b/docs/networking.md
@@ -2,7 +2,7 @@

 Kip allocates two IP addresses for each cell: one for management communication (between the provider and a small agent running on the instance), and one for the pod. Unless the pod has hostNetwork enabled, a new Linux network namespace is created for the pod with the second IP. Both IP addresses come from the VPC address space — fortunately, even the tiniest cloud instances are allowed to allocate at least two IP addresses. This design ensures that the pod can’t interfere with management communications.

-As for network interoperability between regular pods and virtual-kubelet pods, we recommend the native CNI plugin that integrates with the cloud provider VPC, i.e. the aws-vpc-cni plugin on AWS. That way both virtual-kubelet pods and regular pods will get their IP addresses from the VPC address space, and the VPC network will take care of routing.
+As for network interoperability between regular pods and kip pods, we recommend the native CNI plugin that integrates with the cloud provider VPC, e.g. the aws-vpc-cni plugin on AWS. That way both kip pods and regular pods will get their IP addresses from the VPC address space, and the VPC network will take care of routing.

 If you would like to use another CNI plugin for some reason, that will also work as long as the cloud controller is configured to create cloud routes with the PodCIDR allocated to nodes and the CNI plugin used in the cluster is able to use the PodCIDR (most CNI plugins can do this). Currently, Kip needs to run in host network mode. Since NodePorts are managed by the service proxy running on Kubernetes nodes, they also work seamlessly. Iptables rules for HostPort mappings are created and maintained by Kip.
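+
+As a sketch, a pod that relies on a HostPort mapping on a cell could look like the example below. The pod and container names are hypothetical, and the nodeSelector/toleration assume the tainted virtual node setup from the terraform READMEs; with an untainted virtual node they can be dropped.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: hostport-example          # hypothetical name
+spec:
+  nodeSelector:
+    type: kip                     # label used in the terraform examples
+  tolerations:
+  - key: virtual-kubelet.io/provider
+    operator: Exists
+  containers:
+  - name: web
+    image: nginx
+    ports:
+    - containerPort: 80
+      hostPort: 8080              # Kip maintains the iptables rules for this mapping
+```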
diff --git a/docs/provider-config.md b/docs/provider-config.md
index 4c2d290d..718d1ee6 100644
--- a/docs/provider-config.md
+++ b/docs/provider-config.md
@@ -118,7 +118,7 @@ cells:
   # modifications to this file. Cells started after a modification is
   # made will get the updated cloudInit file.
   #
-  # cloudInitFile: /etc/virtual-kubelet/cloudinit.yml
+  # cloudInitFile: /etc/kip/cloudinit.yml

   # standbyCells is used to specify pools of standby cells kip will
   # keep so pods created can be dispatched to cells quickly.
diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md
index 91557a36..213ad2ae 100644
--- a/docs/troubleshooting.md
+++ b/docs/troubleshooting.md
@@ -6,10 +6,10 @@ The output of `kubectl describe` is helpful to see why a pod is stuck in Pending

 ### Virtual Kubelet Logs

-The a good place to look for more answers is the output of the virtual-kubelet provider pod.
+A good place to look for more answers is the output of the kip pod.

 ```bash
-./kubectl -nkube-system logs virtual-kubelet -f
+./kubectl -nkube-system logs kip kip -f
 ```

 ### Logging into Cells via SSH

@@ -20,9 +20,9 @@ As an extreme measure, it might be necessary to enable ssh access to a Cell in o

 ```yaml
-# snippet of /etc/virtual-kubelet/provider.yml
+# snippet of /etc/kip/provider.yml
 cells:
-  cloudInitFile: /etc/virtual-kubelet/cloudinit.yaml
+  cloudInitFile: /etc/kip/cloudinit.yaml
 ```

 2. Create provider.yaml and cloud-init.yaml in a ConfigMap:

 ```bash
 kubectl create configmap kip-config --from-file=./provider.yaml --from-file=./cloudinit.yaml
 ```

-3. Add cloudinit.yaml as an item in the kip-config ConfigMap volume for virtual-kubelet:
+3. Add cloudinit.yaml as an item in the kip-config ConfigMap volume for kip:

 ```yaml
 spec:
@@ -60,7 +60,7 @@ $ curl www.myhost.com:80

 ### Viewing Kip's Internal State

-If you're doing development on the kip provider, it's helpful to see the state of resources inside the virtual-kubelet process. The virtual-kubelet image packages an executable `kipctl` alongside the virtual-kubelet executable to communicate with the virtual-kubelet provider. To enable kipctl to talk to virtual-kubelet, add the `--debug-server` flag to the virtual-kubelet's command line arguments and restart the virtual-kubelet pod. After execing into the pod you can inspect the internal state of virtual-kubelet.
+If you're doing development on the kip provider, it's helpful to see the state of resources inside the kip process. The kip image packages a `kipctl` executable alongside the kip executable to communicate with kip. To enable kipctl to talk to kip, add the `--debug-server` flag to kip's command-line arguments and restart the kip pod. After execing into the pod, you can inspect the internal state of kip.

 ```bash
 ./kipctl get pods
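+
+To run `kipctl`, first exec into the kip pod. A minimal sketch (it assumes the Deployment is named kip in kube-system, per the install steps, and that the image ships a shell):
+
+```bash
+# open a shell in a pod picked from the kip Deployment
+kubectl exec -it -n kube-system deployment/kip -- /bin/sh
+# then, inside the pod:
+./kipctl get pods
+```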