Merge pull request kubernetes#11438 from davidopp/doc3
Various minor edits/clarifications to docs/admin/ docs.
davidopp committed Jul 17, 2015
2 parents 9d06b37 + d64250c commit 341f3a8
Showing 14 changed files with 89 additions and 135 deletions.
3 changes: 0 additions & 3 deletions docs/admin/README.md
@@ -87,9 +87,6 @@ project.](salt.md).

## Multi-tenant support

* **Namespaces** ([namespaces.md](namespaces.md)): Namespaces help different
projects, teams, or customers to share a kubernetes cluster.

* **Resource Quota** ([resource-quota.md](resource-quota.md))

## Security
14 changes: 5 additions & 9 deletions docs/admin/accessing-the-api.md
@@ -59,7 +59,7 @@ By default the Kubernetes APIserver serves HTTP on 2 ports:
- uses token-file or client-certificate based [authentication](authentication.md).
- uses policy-based [authorization](authorization.md).
3. Removed: ReadOnly Port
- For security reasons, this had to be removed. Use the service account feature instead.
- For security reasons, this had to be removed. Use the [service account](../user-guide/service-accounts.md) feature instead.

## Proxies and Firewall rules

@@ -80,26 +80,22 @@ variety of use cases:
1. Clients outside of a Kubernetes cluster, such as a human running `kubectl`
on a desktop machine. Currently, these access the Localhost Port via a proxy (nginx)
running on the `kubernetes-master` machine. The proxy uses bearer token authentication.
2. Processes running in Containers on Kubernetes that need to do read from
the apiserver. Currently, these can use a service account.
2. Processes running in Containers on Kubernetes that need to read from
the apiserver. Currently, these can use a [service account](../user-guide/service-accounts.md).
3. Scheduler and Controller-manager processes, which need to do read-write
API operations. Currently, these have to run on the operations on the
apiserver. Currently, these have to run on the same host as the
API operations. Currently, these have to run on the same host as the
apiserver and use the Localhost Port. In the future, these will be
switched to using service accounts to avoid the need to be co-located.
4. Kubelets, which need to do read-write API operations and are necessarily
on different machines than the apiserver. Kubelets use the Secure Port
to get their pods, to find the services that a pod can see, and to
write events. Credentials are distributed to kubelets at cluster
setup time.
setup time. Kubelets use cert-based auth, while kube-proxy uses token-based auth.
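
For use case 2 above, a minimal sketch of an in-cluster read, assuming the default service account token is mounted at the usual path and the `kubernetes` service environment variables are present; `-k` skips server certificate verification only to keep the sketch short:

```sh
# Run inside a container on the cluster (sketch; paths and env vars assumed).
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# List namespaces, presenting the mounted service account token as a bearer token.
curl -sSk -H "Authorization: Bearer ${TOKEN}" \
  "https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/api/v1/namespaces"
```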

## Expected changes
- Policy will limit the actions kubelets can do via the authed port.
- Kubelets will change from token-based authentication to cert-based-auth.
- Scheduler and Controller-manager will use the Secure Port too. They
will then be able to run on different machines than the apiserver.
- A general mechanism will be provided for [giving credentials to
pods](https://github.com/GoogleCloudPlatform/kubernetes/issues/1907).
- Clients, like kubectl, will all support token-based auth, and the
Localhost will no longer be needed, and will not be the default.
However, the localhost port may continue to be an option for
12 changes: 7 additions & 5 deletions docs/admin/admission-controllers.md
@@ -112,7 +112,7 @@ This plug-in will observe the incoming request and ensure that it does not viola
enumerated in the ```ResourceQuota``` object in a ```Namespace```. If you are using ```ResourceQuota```
objects in your Kubernetes deployment, you MUST use this plug-in to enforce quota constraints.

See the [resourceQuota design doc](../design/admission_control_resource_quota.md) and the [example of Resource Quota](../user-guide/resourcequota/).
See the [resourceQuota design doc](../design/admission_control_resource_quota.md) and the [example of Resource Quota](../user-guide/resourcequota/) for more details.

It is strongly encouraged that this plug-in is configured last in the sequence of admission control plug-ins. This is
so that quota is not prematurely incremented only for the request to be rejected later in admission control.
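
As a sketch of that ordering, assuming the kube-apiserver `--admission-control` flag takes a comma-separated, ordered list of plug-in names (the exact list below is illustrative):

```sh
# Illustrative kube-apiserver flags (other required flags omitted): admission
# plug-ins run in the listed order, so ResourceQuota goes last.
kube-apiserver \
  --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ServiceAccount,ResourceQuota
```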
@@ -121,9 +121,11 @@ so that quota is not prematurely incremented only for the request to be rejected

This plug-in will observe the incoming request and ensure that it does not violate any of the constraints
enumerated in the ```LimitRange``` object in a ```Namespace```. If you are using ```LimitRange``` objects in
your Kubernetes deployment, you MUST use this plug-in to enforce those constraints.
your Kubernetes deployment, you MUST use this plug-in to enforce those constraints. LimitRanger can also
be used to apply default resource requests to Pods that don't specify any; currently, the default LimitRanger
applies a 0.1 CPU requirement to all Pods in the ```default``` namespace.

See the [limitRange design doc](../design/admission_control_limit_range.md) and the [example of Limit Range](../user-guide/limitrange/).
See the [limitRange design doc](../design/admission_control_limit_range.md) and the [example of Limit Range](../user-guide/limitrange/) for more details.
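
A minimal sketch of a ```LimitRange``` that supplies such a default, assuming the v1 schema's `default` field on a `Container`-type limit; the object name and CPU value are illustrative:

```sh
# Create a LimitRange in the "default" namespace that gives containers a
# default CPU of 100m (0.1 CPU) when their Pod spec does not set one.
cat <<EOF | kubectl create --namespace=default -f -
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-defaults
spec:
  limits:
  - type: Container
    default:
      cpu: 100m
EOF
```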

### NamespaceExists

Expand All @@ -140,9 +142,9 @@ We strongly recommend ```NamespaceExists``` over ```NamespaceAutoProvision```.

### NamespaceLifecycle

This plug-in enforces that a ```Namespace``` that is undergoing termination cannot have new content created in it.
This plug-in enforces that a ```Namespace``` that is undergoing termination cannot have new objects created in it.

A ```Namespace``` deletion kicks off a sequence of operations that remove all content (pods, services, etc.) in that
A ```Namespace``` deletion kicks off a sequence of operations that remove all objects (pods, services, etc.) in that
namespace. In order to enforce integrity of that process, we strongly recommend running this plug-in.

Once ```NamespaceAutoProvision``` is deprecated, we anticipate ```NamespaceLifecycle``` and ```NamespaceExists``` will
8 changes: 4 additions & 4 deletions docs/admin/authentication.md
@@ -34,13 +34,13 @@ Documentation for other releases can be found at

Kubernetes uses client certificates, tokens, or http basic auth to authenticate users for API calls.

Client certificate authentication is enabled by passing the `--client_ca_file=SOMEFILE`
**Client certificate authentication** is enabled by passing the `--client_ca_file=SOMEFILE`
option to apiserver. The referenced file must contain one or more certificate authorities
to use to validate client certificates presented to the apiserver. If a client certificate
is presented and verified, the common name of the subject is used as the user name for the
request.

Token authentication is enabled by passing the `--token_auth_file=SOMEFILE` option
**Token authentication** is enabled by passing the `--token_auth_file=SOMEFILE` option
to apiserver. Currently, tokens last indefinitely, and the token list cannot
be changed without restarting apiserver. We plan in the future for tokens to
be short-lived, and to be generated as needed rather than stored in a file.
@@ -51,7 +51,7 @@ and is a csv file with 3 columns: token, user name, user uid.
When using token authentication from an http client, the apiserver expects an `Authorization`
header with a value of `Bearer SOMETOKEN`.
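
A sketch tying the two together, with a made-up token, user, apiserver address, and CA file:

```sh
# One row of the token file handed to --token_auth_file (token, user name, user uid).
echo '31ada4fd-adec-460c-809a-9e56ceb75269,alice,alice-uid' >> known_tokens.csv

# Present the same token as a bearer token on an API request.
curl --cacert ca.crt \
  -H "Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269" \
  https://kubernetes-master/api/v1/nodes
```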

Basic authentication is enabled by passing the `--basic_auth_file=SOMEFILE`
**Basic authentication** is enabled by passing the `--basic_auth_file=SOMEFILE`
option to apiserver. Currently, the basic auth credentials last indefinitely,
and the password cannot be changed without restarting apiserver. Note that basic
authentication is currently supported for convenience while we finish making the
@@ -60,7 +60,7 @@ more secure modes described above easier to use.
The basic auth file format is implemented in `plugin/pkg/auth/authenticator/password/passwordfile/...`
and is a csv file with 3 columns: password, user name, user id.

When using basic authentication from an http client the apiserver expects an `Authorization` header
When using basic authentication from an http client, the apiserver expects an `Authorization` header
with a value of `Basic BASE64ENCODEDUSER:PASSWORD`.
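
A sketch with made-up credentials and apiserver address, showing how the header value is built:

```sh
# Base64-encode "user:password" and send it as an HTTP Basic Authorization header.
curl --cacert ca.crt \
  -H "Authorization: Basic $(echo -n 'alice:secretpassword' | base64)" \
  https://kubernetes-master/api/v1/namespaces
```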

## Plugin Development
4 changes: 1 addition & 3 deletions docs/admin/authorization.md
@@ -37,9 +37,7 @@ In Kubernetes, authorization happens as a separate step from authentication.
See the [authentication documentation](authentication.md) for an
overview of authentication.

Authorization applies to all HTTP accesses on the main apiserver port. (The
readonly port is not currently subject to authorization, but is planned to be
removed soon.)
Authorization applies to all HTTP accesses on the main (secure) apiserver port.

The authorization check for any request compares attributes of the context of
the request (such as user, resource, and namespace) with access
6 changes: 3 additions & 3 deletions docs/admin/cluster-large.md
@@ -33,15 +33,15 @@ Documentation for other releases can be found at
# Kubernetes Large Cluster

## Support
At v1.0, Kubernetes supports clusters up to 100 nodes with 30-50 pods per node and 1-2 containers per pod (as defined in the [1.0 roadmap](../../docs/roadmap.md#reliability-and-performance)).
At v1.0, Kubernetes supports clusters up to 100 nodes with 30 pods per node and 1-2 containers per pod (as defined in the [1.0 roadmap](../../docs/roadmap.md#reliability-and-performance)).

## Setup

Normally the number of nodes in a cluster is controlled by the value `NUM_MINIONS` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](../../cluster/gce/config-default.sh)).

Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run into quota issues and fail to bring the cluster up.
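
As a sketch, the value can usually be overridden from the environment before running the cluster turn-up script, assuming the platform's `config-default.sh` honors an existing `NUM_MINIONS`:

```sh
# Bring up a larger-than-default cluster; 90 is an illustrative node count.
export NUM_MINIONS=90
cluster/kube-up.sh
```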

When setting up a large Kubernetes cluster, the following must be taken into consideration.
When setting up a large Kubernetes cluster, the following issues must be considered.

### Quota Issues

@@ -56,7 +56,7 @@ To avoid running into cloud provider quota issues, when creating a cluster with
* Forwarding rules
* Routes
* Target pools
* Gating the setup script so that it brings up new node VMs in smaller batches with waits in between, because some cloud providers limit the number of VMs you can create during a given period.
* Gating the setup script so that it brings up new node VMs in smaller batches with waits in between, because some cloud providers rate limit the creation of VMs.

### Addon Resources
To prevent memory leaks or other resource issues in [cluster addons](../../cluster/addons/) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](https://github.com/GoogleCloudPlatform/kubernetes/pull/10653/files) and [#10778](https://github.com/GoogleCloudPlatform/kubernetes/pull/10778/files)).
20 changes: 11 additions & 9 deletions docs/admin/cluster-troubleshooting.md
@@ -31,8 +31,10 @@ Documentation for other releases can be found at

<!-- END MUNGE: UNVERSIONED_WARNING -->
# Cluster Troubleshooting
Most of the time, if you encounter problems, it is your application that is having problems. For application
problems please see the [application troubleshooting guide](../user-guide/application-troubleshooting.md). You may also visit [troubleshooting document](../troubleshooting.md) for more information.
This doc is about cluster troubleshooting; we assume you have already ruled out your application as the root cause of the
problem you are experiencing. See
the [application troubleshooting guide](../user-guide/application-troubleshooting.md) for tips on application debugging.
You may also visit the [troubleshooting document](../troubleshooting.md) for more information.

## Listing your cluster
The first thing to debug in your cluster is if your nodes are all registered correctly.
@@ -46,7 +48,7 @@ And verify that all of the nodes you expect to see are present and that they are

## Looking at logs
For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations
of the relevant log files. (note that on systemd based systems, you may need to use ```journalctl``` instead)
of the relevant log files. (note that on systemd-based systems, you may need to use ```journalctl``` instead)
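
For example, on the master (file and unit names below are assumptions; they depend on how the cluster was provisioned):

```sh
# Follow the API server log on a machine that writes plain log files...
tail -f /var/log/kube-apiserver.log
# ...or query the journal on a systemd-based machine.
journalctl -u kube-apiserver
```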

### Master
* /var/log/kube-apiserver.log - API Server, responsible for serving the API
@@ -59,7 +61,7 @@ of the relevant log files. (note that on systemd based systems, you may need to

## A general overview of cluster failure modes

This is an incomplete list of things that could go wrong, and how to deal with them.
This is an incomplete list of things that could go wrong, and how to adjust your cluster setup to mitigate the problems.

Root causes:
- VM(s) shutdown
@@ -102,18 +104,18 @@ Specific scenarios:
- etc.

Mitigations:
- Action: Use IaaS providers automatic VM restarting feature for IaaS VMs
- Action: Use IaaS provider's automatic VM restarting feature for IaaS VMs
- Mitigates: Apiserver VM shutdown or apiserver crashing
- Mitigates: Supporting services VM shutdown or crashes

- Action: use IaaS provider's reliable storage (e.g. GCE PD or AWS EBS volume) for VMs with apiserver+etcd
- Mitigates: Apiserver backing storage lost

- Action: Use [replicated APIserver](high-availability.md) feature
- Mitigates: Apiserver VM shutdown or apiserver crashing
- Will tolerate one or more simultaneous apiserver failures
- Action: Use (experimental) [high-availability](high-availability.md) configuration
- Mitigates: Master VM shutdown or master components (scheduler, API server, controller-manager) crashing
- Will tolerate one or more simultaneous node or component failures
- Mitigates: Apiserver backing storage (i.e., etcd's data directory) lost
- Each apiserver has independent storage. Etcd will recover from loss of one member. Risk of total data loss greatly reduced.
- Assuming you used clustered etcd.

- Action: Snapshot apiserver PDs/EBS-volumes periodically
- Mitigates: Apiserver backing storage lost
4 changes: 2 additions & 2 deletions docs/admin/dns.md
@@ -34,7 +34,7 @@ Documentation for other releases can be found at

As of kubernetes 0.8, DNS is offered as a [cluster add-on](../../cluster/addons/README.md).
If enabled, a DNS Pod and Service will be scheduled on the cluster, and the kubelets will be
configured to tell individual containers to use the DNS Service's IP.
configured to tell individual containers to use the DNS Service's IP to resolve DNS names.

Every Service defined in the cluster (including the DNS server itself) will be
assigned a DNS name. By default, a client Pod's DNS search list will
@@ -51,7 +51,7 @@ supports forward lookups (A records) and service lookups (SRV records).
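
As a quick sketch, run from a shell inside any Pod (the Service name is hypothetical):

```sh
# The kubelet should have pointed resolv.conf at the DNS Service's IP...
cat /etc/resolv.conf
# ...so a Service in the same namespace resolves by its short name.
nslookup my-service
```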

## How it Works

The running DNS pod holds 3 containers - skydns, etcd (which skydns uses),
The running DNS pod holds 3 containers - skydns, etcd (a private instance which skydns uses),
and a kubernetes-to-skydns bridge called kube2sky. The kube2sky process
watches the kubernetes master for changes in Services, and then writes the
information to etcd, which skydns reads. This etcd instance is not linked to
12 changes: 6 additions & 6 deletions docs/admin/multi-cluster.md
@@ -33,7 +33,7 @@ Documentation for other releases can be found at
# Considerations for running multiple Kubernetes clusters

You may want to set up multiple kubernetes clusters, both to
have clusters in different regions to be nearer to your users; and to tolerate failures and/or invasive maintenance.
have clusters in different regions to be nearer to your users, and to tolerate failures and/or invasive maintenance.
This document describes some of the issues to consider when making a decision about doing so.

Note that at present,
@@ -54,8 +54,8 @@ We suggest that all the VMs in a Kubernetes cluster should be in the same availa

It is okay to have multiple clusters per availability zone, though on balance we think fewer is better.
Reasons to prefer fewer clusters are:
- improved bin packing of Pods in some cases with more nodes in one cluster.
- reduced operational overhead (though the advantage is diminished as ops tooling and processes matures).
- improved bin packing of Pods in some cases with more nodes in one cluster (less resource fragmentation)
- reduced operational overhead (though the advantage is diminished as ops tooling and processes mature)
- reduced costs for per-cluster fixed resource costs, e.g. apiserver VMs (but small as a percentage
of overall cluster cost for medium to large clusters).

@@ -82,13 +82,13 @@ you need `R + U` clusters. If it is not (e.g you want to ensure low latency for
cluster failure), then you need to have `R * U` clusters (`U` in each of `R` regions). In any case, try to put each cluster in a different zone.

Finally, if any of your clusters would need more than the maximum recommended number of nodes for a Kubernetes cluster, then
you may need even more clusters. Our [roadmap](../roadmap.md)
calls for maximum 100 node clusters at v1.0 and maximum 1000 node clusters in the middle of 2015.
you may need even more clusters. Kubernetes v1.0 currently supports clusters up to 100 nodes in size, but we are targeting
1000-node clusters by early 2016.

## Working with multiple clusters

When you have multiple clusters, you would typically create services with the same config in each cluster and put each of those
service instances behind a load balancer (AWS Elastic Load Balancer, GCE Forwarding Rule or HTTP Load Balancer), so that
service instances behind a load balancer (AWS Elastic Load Balancer, GCE Forwarding Rule or HTTP Load Balancer) spanning all of them, so that
failures of a single cluster are not visible to end users.


49 changes: 0 additions & 49 deletions docs/admin/namespaces.md

This file was deleted.
