Commit 303ce23

updating docs to refer to kip instead of virtual-kubelet

justnoise committed Jun 4, 2020
1 parent 52de5fc

Showing 8 changed files with 23 additions and 23 deletions.
18 changes: 9 additions & 9 deletions README.md
@@ -1,6 +1,6 @@
# Kip, the Kubernetes Cloud Instance Provider

-Kip is a [Virtual Kubelet](https://github.com/virtual-kubelet/virtual-kubelet) provider that allows a Kubernetes cluster to transparently launch pods onto their own cloud instances. Kip's virtual-kubelet pod is run on a cluster and will create a virtual Kubernetes node in the cluster. When a pod is scheduled onto the Virtual Kubelet, Kip starts a right-sized cloud instance for the pod’s workload and dispatches the pod onto the instance. When the pod is finished running, the cloud instance is terminated. We call these cloud instances “cells”.
+Kip is a [Virtual Kubelet](https://github.com/virtual-kubelet/virtual-kubelet) provider that allows a Kubernetes cluster to transparently launch pods onto their own cloud instances. The kip pod is run on a cluster and will create a virtual Kubernetes node in the cluster. When a pod is scheduled onto the Virtual Kubelet, Kip starts a right-sized cloud instance for the pod’s workload and dispatches the pod onto the instance. When the pod is finished running, the cloud instance is terminated. We call these cloud instances “cells”.

When workloads run on Kip, your cluster size naturally scales with the cluster workload, pods are strongly isolated from each other and the user is freed from managing worker nodes and strategically packing pods onto nodes. This results in lower cloud costs, improved security and simpler operational overhead.

@@ -42,27 +42,27 @@ terraform apply -var-file myenv.tfvars

### Installation Option 2: Using an Existing Cluster

-To deploy Kip into an existing cluster, you'll need to setup cloud credentials that allow the Kip provider to manipulate cloud instances, security groups and other cloud resources. Once credentials are setup, apply [deploy/virtual-kubelet.yaml](deploy/virtual-kubelet.yaml) to create the necessary kubernetes resources to support and run the provider.
+To deploy Kip into an existing cluster, you'll need to set up cloud credentials that allow the Kip provider to manipulate cloud instances, security groups and other cloud resources. Once credentials are set up, apply [deploy/kip.yaml](deploy/kip.yaml) to create the necessary Kubernetes resources to support and run the provider.

**Step 1: Credentials**

In AWS, Kip can either use API keys supplied in the Kip provider configuration file (`provider.yaml`) or use the instance profile of the machine the Kip virtual-kubelet pod is running on.

**Credentials Option 1 - Configuring AWS API keys:**

-Open [deploy/virtual-kubelet.yaml](deploy/virtual-kubelet.yaml) in an editor, find the virtual-kubelet-config ConfigMap and fill in the values for `accessKeyID` and `secretAccessKey` under `data.provider.yaml.cloud.aws`.
+Open [deploy/kip.yaml](deploy/kip.yaml) in an editor, find the kip-config ConfigMap and fill in the values for `accessKeyID` and `secretAccessKey` under `data.provider.yaml.cloud.aws`.
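
For orientation, the filled-in ConfigMap might look like the sketch below. Only the `cloud.aws.accessKeyID`/`secretAccessKey` path is documented above; the metadata and the region value are illustrative assumptions:

```yaml
# Sketch of the kip-config ConfigMap with AWS API keys filled in.
# Only the cloud.aws key path comes from the docs above; the name,
# namespace and region values are illustrative assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kip-config
  namespace: kube-system
data:
  provider.yaml: |
    cloud:
      aws:
        region: us-east-1
        accessKeyID: YOUR_ACCESS_KEY_ID
        secretAccessKey: YOUR_SECRET_ACCESS_KEY
```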

**Credentials Option 2 - Instance Profile Credentials:**

In AWS, Kip can use credentials supplied by the [instance profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html) attached to the node the pod is dispatched to. To use an instance profile, create an IAM policy with the [minimum Kip permissions](docs/kip-iam-permissions.md) then apply the instance profile to the node that will run the Kip provider pod. The Kip pod must run on the cloud instance that the instance profile is attached to.

-**Step 2: Apply virtual-kubelet.yaml**
+**Step 2: Apply kip.yaml**

-The resources in [deploy/manifests/virtual-kubelet](deploy/manifests/virtual-kubelet) create ServiceAccounts, Roles and a virtual-kubelet Deployment to run the provider. [Kip is not stateless](docs/state.md), the manifest will also create a PersistentVolumeClaim to store the provider data.
+The resources in [deploy/manifests/kip](deploy/manifests/kip) create ServiceAccounts, Roles and a kip Deployment to run the provider. [Kip is not stateless](docs/state.md); the manifest will also create a PersistentVolumeClaim to store the provider data.

-kubectl apply -k deploy/manifests/virtual-kubelet/base
+kubectl apply -k deploy/manifests/kip/base
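
Since the command uses `-k`, the base directory is a kustomize base; if you want to layer your own changes on top of it rather than applying it directly, a minimal overlay might look like this sketch (the base path comes from the command above; the overlay layout is an assumption):

```yaml
# kustomization.yaml -- hypothetical overlay on the kip base.
# The relative path is a placeholder; point it at the
# deploy/manifests/kip/base directory in your checkout.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../kip/deploy/manifests/kip/base
```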

-After applying, you should see a new virtual-kubelet pod in the kube-system namespace and a new node named virtual-kubelet in the cluster.
+After applying, you should see a new kip pod in the kube-system namespace and a new node named virtual-kubelet in the cluster.

## Running Pods on Virtual Kubelet

@@ -81,9 +81,9 @@ If you used the provided terraform config for creating your cluster, you can rem

terraform destroy -var-file <env.tfvars>.

-If you deployed Kip in an existing cluster, make sure that you first remove all the pods and deployments that have been created by Kip. Then remove the virtual-kubelet deployment via:
+If you deployed Kip in an existing cluster, make sure that you first remove all the pods and deployments that have been created by Kip. Then remove the kip deployment via:

-kubectl delete -n kube-system deployment virtual-kubelet
+kubectl delete -n kube-system deployment kip

## Current Status

2 changes: 1 addition & 1 deletion deploy/terraform-gcp/README.md
@@ -39,5 +39,5 @@ If you decide to enable the taint on the virtual node (via removing the `--disab
nodeSelector:
  type: kip
tolerations:
-  - key: kip.io/provider
+  - key: virtual-kubelet.io/provider
    operator: Exists
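
To make the placement concrete, here is a sketch of a complete pod that targets the virtual node; only the `nodeSelector` and `tolerations` fields come from the snippet above, and the pod name and image are placeholders:

```yaml
# Hypothetical pod that schedules onto the kip virtual node.
# nodeSelector/tolerations match the snippet above; the name
# and container image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: kip-example
spec:
  containers:
    - name: web
      image: nginx:1.19
  nodeSelector:
    type: kip
  tolerations:
    - key: virtual-kubelet.io/provider
      operator: Exists
```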
2 changes: 1 addition & 1 deletion deploy/terraform/README.md
@@ -50,5 +50,5 @@ If you decide to enable the taint on the virtual node (via removing the `--disab
nodeSelector:
  type: kip
tolerations:
-  - key: kip.io/provider
+  - key: virtual-kubelet.io/provider
    operator: Exists
4 changes: 2 additions & 2 deletions docs/cells.md
@@ -13,7 +13,7 @@ We maintain images that are optimized for cells and come with our tools pre-inst
Update your provider config:

cells:
-  cloudInitFile: /etc/virtual-kubelet/cloudinit.yaml
+  cloudInitFile: /etc/kip/cloudinit.yaml
  bootImageSpec:
    owners: "099720109477"
    filters: name=ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*
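
The `owners`/`filters` pair above reads like an EC2 DescribeImages query; assuming that mapping holds, pinning one exact image rather than a name pattern might look like the sketch below (the `image-id` filter key and the AMI value are assumptions, not documented behavior):

```yaml
# Hypothetical variant: pin one specific AMI instead of a name glob.
# Assumes bootImageSpec filters pass through to EC2 DescribeImages;
# the AMI ID is a placeholder.
cells:
  bootImageSpec:
    filters: image-id=ami-0123456789abcdef0
```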
@@ -32,4 +32,4 @@ Add a cloud-init section to the provider configmap, e.g. for ubuntu or debian:

Finally, restart the provider:

-$ kubectl delete pod -n kube-system -l app=virtual-kubelet
+$ kubectl delete pod -n kube-system -l app=kip
4 changes: 2 additions & 2 deletions docs/cloud-init.md
@@ -12,10 +12,10 @@ Kip's cloud-init system provides the following initialization functions:

### Cloud-init Example

-In provider.yaml specify the location for the cloud-init file in the virtual-kubelet pod:
+In provider.yaml specify the location for the cloud-init file in the kip pod:
```yaml
cells:
-  cloudInitFile: /etc/virtual-kubelet/cloudinit.yaml
+  cloudInitFile: /etc/kip/cloudinit.yaml
```
cloudinit.yaml contents:
2 changes: 1 addition & 1 deletion docs/networking.md
@@ -2,7 +2,7 @@

Kip allocates two IP addresses for each cell: one for management communication (between the provider and a small agent running on the instance), and one for the pod. Unless the pod has hostNetwork enabled, a new Linux network namespace is created for the pod with the second IP. Both IP addresses come from the VPC address space — fortunately, even the tiniest cloud instances are allowed to allocate at least two IP addresses. This design ensures that the pod can’t interfere with management communications.

-As for network interoperability between regular pods and virtual-kubelet pods, we recommend the native CNI plugin that integrates with the cloud provider VPC, i.e. the aws-vpc-cni plugin on AWS. That way both virtual-kubelet pods and regular pods will get their IP addresses from the VPC address space, and the VPC network will take care of routing.
+As for network interoperability between regular pods and kip pods, we recommend the native CNI plugin that integrates with the cloud provider VPC, i.e. the aws-vpc-cni plugin on AWS. That way both kip pods and regular pods will get their IP addresses from the VPC address space, and the VPC network will take care of routing.

If you would like to use another CNI plugin for some reason, that will also work as long as the cloud controller is configured to create cloud routes with the PodCIDR allocated to nodes and the CNI plugin used in the cluster is able to use the PodCIDR (most CNI plugins can do this).
Currently, Kip needs to run in host network mode. Since NodePorts are managed by the service proxy running on Kubernetes nodes, they also work seamlessly. Iptables rules for HostPort mappings are created and maintained by Kip.
2 changes: 1 addition & 1 deletion docs/provider-config.md
@@ -118,7 +118,7 @@ cells:
# modifications to this file. Cells started after a modification is
# made will get the updated cloudInit file.
#
-# cloudInitFile: /etc/virtual-kubelet/cloudinit.yml
+# cloudInitFile: /etc/kip/cloudinit.yml

# standbyCells is used to specify pools of standby cells kip will
# keep so pods created can be dispatched to cells quickly.
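
As an illustration of the idea, a standby pool entry might look like the sketch below; the field names (`instanceType`, `count`) and values are assumptions about the shape of this setting, not documented syntax:

```yaml
# Hypothetical standbyCells pool: keep three small instances warm so
# incoming pods can be dispatched without waiting for an instance boot.
# Field names are assumptions -- verify against provider-config.md.
standbyCells:
  - instanceType: "t3.nano"
    count: 3
```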
12 changes: 6 additions & 6 deletions docs/troubleshooting.md
@@ -6,10 +6,10 @@ The output of `kubectl describe` is helpful to see why a pod is stuck in Pending

### Virtual Kubelet Logs

-The a good place to look for more answers is the output of the virtual-kubelet provider pod.
+A good place to look for more answers is the output of the kip pod.

```bash
-./kubectl -nkube-system logs virtual-kubelet -f
+./kubectl -nkube-system logs kip kip -f
```

### Logging into Cells via SSH
@@ -20,9 +20,9 @@ As an extreme measure, it might be necessary to enable ssh access to a Cell in o

```yaml

-# snippet of /etc/virtual-kubelet/provider.yml
+# snippet of /etc/kip/provider.yml
cells:
-  cloudInitFile: /etc/virtual-kubelet/cloudinit.yaml
+  cloudInitFile: /etc/kip/cloudinit.yaml
```
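
For step 1, the referenced cloudinit.yaml would carry whatever enables ssh access, for example an authorized key. Whether Kip's cloud-init accepts this exact directive is an assumption; check docs/cloud-init.md for the supported functions:

```yaml
# Hypothetical cloudinit.yaml enabling ssh access to cells.
# The ssh_authorized_keys directive is an assumption borrowed from
# standard cloud-config; confirm against Kip's cloud-init docs.
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2E... ops@example.com
```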
2. Create provider.yaml and cloud-init.yaml in a ConfigMap:
@@ -31,7 +31,7 @@
kubectl create configmap kip-config --from-file=./provider.yaml --from-file=./cloudinit.yaml
```

-3. Add cloudinit.yaml as an item in the kip-config ConfigMap volume for virtual-kubelet:
+3. Add cloudinit.yaml as an item in the kip-config ConfigMap volume for kip:

```yaml
spec:
@@ -60,7 +60,7 @@ $ curl www.myhost.com:80

### Viewing Kip's Internal State

-If you're doing development on the kip provider, it's helpful to see the state of resources inside the virtual-kubelet process. The virtual-kubelet image packages an executable `kipctl` alongside the virtual-kubelet executable to communicate with the virtual-kubelet provider. To enable kipctl to talk to virtual-kubelet, add the `--debug-server` flag to the virtual-kubelet's command line arguments and restart the virtual-kubelet pod. After execing into the pod you can inspect the internal state of virtual-kubelet.
+If you're doing development on the kip provider, it's helpful to see the state of resources inside the kip process. The kip image packages an executable `kipctl` alongside the kip executable to communicate with kip. To enable kipctl to talk to kip, add the `--debug-server` flag to kip's command-line arguments and restart the kip pod. After execing into the pod you can inspect the internal state of kip.

```bash
./kipctl get pods