Merge pull request kubernetes#1735 from halfcrazy/typo
doc: fix some typo
k8s-ci-robot authored Feb 4, 2018
2 parents f2ad474 + ec3d22e commit f367271
Showing 26 changed files with 44 additions and 44 deletions.
@@ -130,7 +130,7 @@ GET /api/v1/pods?limit=500&continue=DEF...

Some clients may wish to follow a failed paged list with a full list attempt.

-The 5 minute default compaction interval for etcd3 bounds how long a list can run. Since clients may wish to perform processing over very large sets, increasing that timeout may make sense for large clusters. It should be possible to alter the interval at which compaction runs to accomodate larger clusters.
+The 5 minute default compaction interval for etcd3 bounds how long a list can run. Since clients may wish to perform processing over very large sets, increasing that timeout may make sense for large clusters. It should be possible to alter the interval at which compaction runs to accommodate larger clusters.
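
For context on this hunk: a client following the chunked-list pattern described above pages with `limit`/`continue` and may fall back to a full list when a token expires. A minimal sketch, assuming an insecure local apiserver endpoint and that an expired token surfaces as HTTP 410 Gone (both assumptions for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// podList captures just the fields this sketch needs from a List response.
type podList struct {
	Metadata struct {
		Continue string `json:"continue"`
	} `json:"metadata"`
	Items []json.RawMessage `json:"items"`
}

// list performs one GET and decodes the response; a 410 Gone is surfaced as
// an error so the caller can fall back to a full, unpaged list.
func list(endpoint string) (*podList, error) {
	resp, err := http.Get(endpoint)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode == http.StatusGone {
		return nil, fmt.Errorf("continue token expired: %s", resp.Status)
	}
	pl := &podList{}
	return pl, json.NewDecoder(resp.Body).Decode(pl)
}

func main() {
	base := "http://localhost:8080/api/v1/pods" // assumes an insecure local apiserver
	cont := ""
	for {
		endpoint := base + "?limit=500"
		if cont != "" {
			endpoint += "&continue=" + url.QueryEscape(cont)
		}
		page, err := list(endpoint)
		if err != nil {
			// A failed paged list followed by a full list attempt, as the text suggests.
			if page, err = list(base); err != nil {
				panic(err)
			}
			fmt.Println("full list items:", len(page.Items))
			return
		}
		fmt.Println("page items:", len(page.Items))
		if cont = page.Metadata.Continue; cont == "" {
			return
		}
	}
}
```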


#### Types of clients and impact
2 changes: 1 addition & 1 deletion contributors/design-proposals/auth/kubectl-exec-plugins.md
@@ -160,7 +160,7 @@ type ExecAuthProviderConfig struct {
// to pass argument to the plugin.
Env []ExecEnvVar `json:"env"`

-// Prefered input version of the ExecInfo. The returned ExecCredentials MUST use
+// Preferred input version of the ExecInfo. The returned ExecCredentials MUST use
// the same encoding version as the input.
APIVersion string `json:"apiVersion,omitempty"`

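As a rough illustration of the plugin side that this config drives: a credential plugin writes an `ExecCredential` to stdout, using the same version it was invoked with. The JSON shape and the env-var token source below are assumptions for the sketch, not the proposal's normative schema.

```go
package main

import (
	"encoding/json"
	"os"
)

// execCredential is a hand-rolled stand-in for the ExecCredential type the
// proposal describes; field names here are assumptions for illustration.
type execCredential struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Status     struct {
		Token string `json:"token"`
	} `json:"status"`
}

func main() {
	cred := execCredential{
		// MUST echo the input ExecInfo version, per the comment above.
		APIVersion: "client.authentication.k8s.io/v1alpha1",
		Kind:       "ExecCredential",
	}
	// Hypothetical credential source; a real plugin would talk to an identity provider.
	cred.Status.Token = os.Getenv("EXAMPLE_TOKEN")
	json.NewEncoder(os.Stdout).Encode(cred)
}
```
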
@@ -336,7 +336,7 @@ type VerticalPodAutoscalerStatus {
StatusMessage string
}

-// UpdateMode controls when autoscaler applies changes to the pod resoures.
+// UpdateMode controls when autoscaler applies changes to the pod resources.
type UpdateMode string
const (
// UpdateModeOff means that autoscaler never changes Pod resources.
@@ -354,7 +354,7 @@ const (

// PodUpdatePolicy describes the rules on how changes are applied to the pods.
type PodUpdatePolicy struct {
-// Controls when autoscaler applies changes to the pod resoures.
+// Controls when autoscaler applies changes to the pod resources.
// +optional
UpdateMode UpdateMode
}
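
A minimal sketch of these types in use, with local copies of the definitions (the `Auto` mode name is an assumption, since the full const block is truncated in this hunk):

```go
package main

import "fmt"

// Local copies of the proposal's types, reduced for illustration.
type UpdateMode string

const (
	// UpdateModeOff means that autoscaler never changes Pod resources.
	UpdateModeOff UpdateMode = "Off"
	// UpdateModeAuto is assumed here; the full const block is not shown above.
	UpdateModeAuto UpdateMode = "Auto"
)

type PodUpdatePolicy struct {
	UpdateMode UpdateMode
}

func main() {
	// With Off, recommendations are computed but never applied to running pods.
	policy := PodUpdatePolicy{UpdateMode: UpdateModeOff}
	fmt.Println(policy.UpdateMode)
}
```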
@@ -359,7 +359,7 @@ There's ongoing effort for adding Event deduplication and teeing to the server s
Another effort to protect API server from too many Events by dropping requests servers side in admission plugin is worked on by @staebler.
## Considered alternatives for API changes
### Leaving current dedup mechanism but improve backoff behavior
-As we're going to move all semantic informations to fields, instead of passing some of them in message, we could just call it a day, and leave the deduplication logic as is. When doing that we'd need to depend on the client-recorder library on protecting API server, by using number of techniques, like batching, aggressive backing off and allowing admin to reduce number of Events emitted by the system. This solution wouldn't drastically reduce number of API requests and we'd need to hope that small incremental changes would be enough.
+As we're going to move all semantic information to fields, instead of passing some of them in message, we could just call it a day, and leave the deduplication logic as is. When doing that we'd need to depend on the client-recorder library on protecting API server, by using number of techniques, like batching, aggressive backing off and allowing admin to reduce number of Events emitted by the system. This solution wouldn't drastically reduce number of API requests and we'd need to hope that small incremental changes would be enough.

### Timestamp list as a dedup mechanism
Another considered solution was to store timestamps of Events explicitly instead of only count. This gives users more information, as people complain that current dedup logic is too strong and it's hard to "decompress" Event if needed. This change has clearly worse performance characteristic, but fixes the problem of "decompressing" Events and generally making deduplication lossless. We believe that individual repeated events are not interesting per se, what's interesting is when given series started and when it finished, which is how we ended with the current proposal.
@@ -78,7 +78,7 @@ horizontally, though it’s rather complicated and is out of the scope of this d

Metrics server will be Kubernetes addon, create by kube-up script and managed by
[addon-manager](https://git.k8s.io/kubernetes/cluster/addons/addon-manager).
-Since there is a number of dependant components, it will be marked as a critical addon.
+Since there is a number of dependent components, it will be marked as a critical addon.
In the future when the priority/preemption feature is introduced we will migrate to use this
proper mechanism for marking it as a high-priority, system component.

@@ -77,5 +77,5 @@ The logic to determine if an object is sent to a Federated Cluster will have two

## Open Questions

-1. Should there be any special considerations for when dependant resources would not be forwarded together to a Federated Cluster.
+1. Should there be any special considerations for when dependent resources would not be forwarded together to a Federated Cluster.
1. How to improve usability of this feature long term. It will certainly help to give first class API support but easier ways to map labels or requirements to objects may be required.
@@ -335,7 +335,7 @@ only supports a simple list of acceptable clusters. Workloads will be
evenly distributed on these acceptable clusters in phase one. After
phase one we will define syntax to represent more advanced
constraints, like cluster preference ordering, desired number of
-splitted workloads, desired ratio of workloads spread on different
+split workloads, desired ratio of workloads spread on different
clusters, etc.

Besides this explicit “clusterSelector” filter, a workload may have
2 changes: 1 addition & 1 deletion contributors/design-proposals/node/cri-windows.md
@@ -5,7 +5,7 @@
**Status**: Proposed

## Background
-Container Runtime Interface (CRI) defines [APIs and configuration types](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/cri/v1alpha1/runtime/api.proto) for kubelet to integrate various container runtimes. The Open Container Initiative (OCI) Runtime Specification defines [platform specific configuration](https://github.com/opencontainers/runtime-spec/blob/master/config.md#platform-specific-configuration), including Linux, Windows, and Solaris. Currently CRI only suppports Linux container configuration. This proposal is to bring the Memory & CPU resource restrictions already specified in OCI for Windows to CRI.
+Container Runtime Interface (CRI) defines [APIs and configuration types](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/cri/v1alpha1/runtime/api.proto) for kubelet to integrate various container runtimes. The Open Container Initiative (OCI) Runtime Specification defines [platform specific configuration](https://github.com/opencontainers/runtime-spec/blob/master/config.md#platform-specific-configuration), including Linux, Windows, and Solaris. Currently CRI only supports Linux container configuration. This proposal is to bring the Memory & CPU resource restrictions already specified in OCI for Windows to CRI.

The Linux & Windows schedulers differ in design and the units used, but can accomplish the same goal of limiting resource consumption of individual containers.

4 changes: 2 additions & 2 deletions contributors/design-proposals/node/pod-resource-management.md
@@ -118,7 +118,7 @@ The following formula is used to convert CPU in millicores to cgroup values:
The `kubelet` will create a cgroup sandbox for each pod.

The naming convention for the cgroup sandbox is `pod<pod.UID>`. It enables
-the `kubelet` to associate a particular cgroup on the host filesytem
+the `kubelet` to associate a particular cgroup on the host filesystem
with a corresponding pod without managing any additional state. This is useful
when the `kubelet` restarts and needs to verify the cgroup filesystem.
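
The conversion formula referenced in the hunk header is not shown in this excerpt; below is a sketch of the naming convention plus a plausible millicore conversion, mirroring the kubelet's approach at the time (the constants and rounding here are assumptions):

```go
package main

import "fmt"

const (
	sharesPerCPU  = 1024   // cpu.shares granted per full core
	milliCPUToCPU = 1000   // millicores per core
	quotaPeriod   = 100000 // cpu.cfs_period_us of 100ms, a common default
	minShares     = 2      // kernel minimum for cpu.shares
)

// milliCPUToShares converts a CPU request in millicores to cpu.shares.
func milliCPUToShares(milliCPU int64) int64 {
	if milliCPU == 0 {
		return minShares
	}
	shares := milliCPU * sharesPerCPU / milliCPUToCPU
	if shares < minShares {
		return minShares
	}
	return shares
}

// milliCPUToQuota converts a CPU limit in millicores to cpu.cfs_quota_us.
func milliCPUToQuota(milliCPU int64) int64 {
	return milliCPU * quotaPeriod / milliCPUToCPU
}

// podCgroupName implements the `pod<pod.UID>` convention from this section.
func podCgroupName(uid string) string {
	return "pod" + uid
}

func main() {
	fmt.Println(milliCPUToShares(250), milliCPUToQuota(250)) // 256 25000
	fmt.Println(podCgroupName("a819d344"))                   // shortened UID, illustrative
}
```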

@@ -433,7 +433,7 @@ eviction decisions for the unbounded QoS tiers (Burstable, BestEffort).
The following describes the cgroup representation of a node with pods
across multiple QoS classes.

-### Cgroup Hierachy
+### Cgroup Hierarchy

The following identifies a sample hierarchy based on the described design.
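
(The sample itself is collapsed out of this diff view; based on the QoS tiers named in the hunk header above, it would look roughly like this. Paths and placement are a sketch, not the proposal's exact listing.)

```
/kubepods
├── pod<UID-guaranteed>     (Guaranteed pods get a pod cgroup at the top level)
├── burstable
│   └── pod<UID-burstable>
└── besteffort
    └── pod<UID-besteffort>
```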

4 changes: 2 additions & 2 deletions contributors/design-proposals/node/sysctl.md
@@ -115,7 +115,7 @@ supports setting a number of whitelisted sysctls during the container creation p
Some real-world examples for the use of sysctls:

- PostgreSQL requires `kernel.shmmax` and `kernel.shmall` (among others) to be
-set to reasonable high values (compare [PostgresSQL Manual 17.4.1. Shared Memory
+set to reasonable high values (compare [PostgreSQL Manual 17.4.1. Shared Memory
and Semaphores](http://www.postgresql.org/docs/9.1/static/kernel-resources.html)).
The default of 32 MB for shared memory is not reasonable for a database.
- RabbitMQ proposes a number of sysctl settings to optimize networking: https://www.rabbitmq.com/networking.html.
@@ -342,7 +342,7 @@ Issues:
* [x] **namespaced** in net ns
* [ ] **might have application influence** for high values as it limits the socket queue length
* [?] **No real evidence found until now for accounting**. The limit is checked by `sk_acceptq_is_full` at http://lxr.free-electrons.com/source/net/ipv4/tcp_ipv4.c#L1276. After that a new socket is created. Probably, the tcp socket buffer sysctls apply then, with their accounting, see below.
-* [ ] **very unreliable** tcp memory accounting. There have a been a number of attemps to drop that from the kernel completely, e.g. https://lkml.org/lkml/2014/9/12/401. On Fedora 24 (4.6.3) tcp accounting did not work at all, on Ubuntu 16.06 (4.4) it kind of worked in the root-cg, but in containers only values copied from the root-cg appeared.
+* [ ] **very unreliable** tcp memory accounting. There have a been a number of attempts to drop that from the kernel completely, e.g. https://lkml.org/lkml/2014/9/12/401. On Fedora 24 (4.6.3) tcp accounting did not work at all, on Ubuntu 16.06 (4.4) it kind of worked in the root-cg, but in containers only values copied from the root-cg appeared.
- `net.ipv4.tcp_wmem`/`net.ipv4.tcp_wmem`/`net.core.rmem_max`/`net.core.wmem_max`: socket buffer sizes
* [ ] **not namespaced in net ns**, and they are not even available under `/sys/net`
- `net.ipv4.ip_local_port_range`: local tcp/udp port range
@@ -38,13 +38,13 @@ to Kubelet and monitor them without writing custom Kubernetes code.
We also want to provide a consistent and portable solution for users to
consume hardware devices across k8s clusters.

-This document describes a vendor independant solution to:
+This document describes a vendor independent solution to:
* Discovering and representing external devices
* Making these devices available to the containers, using these devices,
scrubbing and securely sharing these devices.
* Health Check of these devices

-Because devices are vendor dependant and have their own sets of problems
+Because devices are vendor dependent and have their own sets of problems
and mechanisms, the solution we describe is a plugin mechanism that may run
in a container deployed through the DaemonSets mechanism or in bare metal mode.

@@ -187,7 +187,7 @@ sockets and follow this simple pattern:
gRPC request)
2. Kubelet answers to the `RegisterRequest` with a `RegisterResponse`
containing any error Kubelet might have encountered
-3. The device plugin start it's gRPC server if it did not recieve an
+3. The device plugin start it's gRPC server if it did not receive an
error
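
A hedged sketch of steps 1-3, assuming Go bindings generated from the `Registration` service shown later in this file (the `pluginapi` import path, the `RegisterRequest` field names, and the kubelet socket path are all assumptions based on protoc-gen-go conventions and the surrounding text):

```go
package main

import (
	"context"
	"log"
	"net"
	"time"

	"google.golang.org/grpc"

	pluginapi "example.com/deviceplugin/v1alpha" // hypothetical package generated from the proto in this proposal
)

func main() {
	// Step 1: dial the Kubelet registration socket (path is an assumption).
	conn, err := grpc.Dial("/var/lib/kubelet/device-plugins/kubelet.sock",
		grpc.WithInsecure(),
		grpc.WithDialer(func(addr string, timeout time.Duration) (net.Conn, error) {
			return net.DialTimeout("unix", addr, timeout)
		}))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Step 1 (cont.): send the RegisterRequest; field names here are assumed.
	client := pluginapi.NewRegistrationClient(conn)
	_, err = client.Register(context.Background(), &pluginapi.RegisterRequest{
		Version:      "v1alpha1",                    // exact version match required, per the versioning section below
		Endpoint:     "vendor.sock",                 // the plugin's own socket, under the device-plugin directory
		ResourceName: "vendor-domain/vendor-device", // hypothetical resource name
	})
	// Steps 2-3: if Kubelet's RegisterResponse carried no error, the plugin
	// would now start serving the DevicePlugin service on its own socket.
	if err != nil {
		log.Fatal(err)
	}
}
```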

## Unix Socket
@@ -242,7 +242,7 @@ service Registration {
// DevicePlugin is the service advertised by Device Plugins
service DevicePlugin {
// ListAndWatch returns a stream of List of Devices
-// Whenever a Device state change or a Device disapears, ListAndWatch
+// Whenever a Device state change or a Device disappears, ListAndWatch
// returns the new list
rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {}

@@ -282,7 +282,7 @@ message AllocateResponse {
}

// ListAndWatch returns a stream of List of Devices
-// Whenever a Device state change or a Device disapears, ListAndWatch
+// Whenever a Device state change or a Device disappears, ListAndWatch
// returns the new list
message ListAndWatchResponse {
repeated Device devices = 1;
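
Continuing the assumed `pluginapi` bindings from the registration sketch above, a plugin-side implementation of this contract (send the full list first, then resend it on every change) might look like:

```go
// stubPlugin implements the DevicePlugin service's ListAndWatch contract.
type stubPlugin struct {
	devices []*pluginapi.Device      // current view of the hardware
	updates chan []*pluginapi.Device // receives a fresh list on any state change
}

// ListAndWatch streams the device list, then a new list whenever a device's
// state changes or a device disappears, as the message comment describes.
func (p *stubPlugin) ListAndWatch(_ *pluginapi.Empty, s pluginapi.DevicePlugin_ListAndWatchServer) error {
	if err := s.Send(&pluginapi.ListAndWatchResponse{Devices: p.devices}); err != nil {
		return err
	}
	for devs := range p.updates {
		if err := s.Send(&pluginapi.ListAndWatchResponse{Devices: devs}); err != nil {
			return err
		}
	}
	return nil
}
```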
@@ -485,7 +485,7 @@ spec:
Currently we require exact version match between Kubelet and Device Plugin.
API version is expected to be increased only upon incompatible API changes.
-Follow protobuf guidelines on versionning:
+Follow protobuf guidelines on versioning:
* Do not change ordering
* Do not remove fields or change types
* Add optional fields
@@ -165,7 +165,7 @@ type PriorityClass struct {
metav1.ObjectMeta
// The value of this priority class. This is the actual priority that pods
-// recieve when they have the above name in their pod spec.
+// receive when they have the above name in their pod spec.
Value int32
GlobalDefault bool
Description string
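
For illustration, a class built from these fields might look as follows; the name and value are hypothetical, and `metav1.ObjectMeta` is reduced to a bare name:

```go
package main

import "fmt"

// Reduced local stand-in for the PriorityClass type in this hunk.
type PriorityClass struct {
	Name          string // stands in for metav1.ObjectMeta
	Value         int32
	GlobalDefault bool
	Description   string
}

func main() {
	critical := PriorityClass{
		Name:          "cluster-critical", // hypothetical class name
		Value:         1000000,            // pods naming this class receive this priority
		GlobalDefault: false,
		Description:   "For components the cluster cannot run without.",
	}
	fmt.Printf("%+v\n", critical)
}
```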
@@ -5,7 +5,7 @@
In addition to Kubernetes core components like api-server, scheduler, controller-manager running on a master machine
there is a bunch of addons which due to various reasons have to run on a regular cluster node, not the master.
Some of them are critical to have fully functional cluster: Heapster, DNS, UI. Users can break their cluster
-by evicting a critical addon (either manually or as a side effect of an other operation like upgrade)
+by evicting a critical addon (either manually or as a side effect of another operation like upgrade)
which possibly can become pending (for example when the cluster is highly utilized).
To avoid such situation we want to have a mechanism which guarantees that
critical addons are scheduled assuming the cluster is big enough.
6 changes: 3 additions & 3 deletions contributors/design-proposals/storage/raw-block-pv.md
@@ -25,7 +25,7 @@ This document presents a proposal for managing raw block storage in Kubernetes u
# Value add to Kubernetes

By extending the API for volumes to specifically request a raw block device, we provide an explicit method for volume consumption,
-whereas previously any request for storage was always fulfilled with a formatted fileystem, even when the underlying storage was
+whereas previously any request for storage was always fulfilled with a formatted filesystem, even when the underlying storage was
block. In addition, the ability to use a raw block device without a filesystem will allow
Kubernetes better support of high performance applications that can utilize raw block devices directly for their storage.
Block volumes are critical to applications like databases (MongoDB, Cassandra) that require consistent I/O performance
@@ -113,7 +113,7 @@ spec:

## Persistent Volume API Changes:
For static provisioning the admin creates the volume and also is intentional about how the volume should be consumed. For backwards
-compatibility, the absence of volumeMode will default to filesystem which is how volumes work today, which are formatted with a filesystem depending on the plug-in chosen. Recycling will not be a supported reclaim policy as it has been deprecated. The path value in the local PV definition would be overloaded to define the path of the raw block device rather than the fileystem path.
+compatibility, the absence of volumeMode will default to filesystem which is how volumes work today, which are formatted with a filesystem depending on the plug-in chosen. Recycling will not be a supported reclaim policy as it has been deprecated. The path value in the local PV definition would be overloaded to define the path of the raw block device rather than the filesystem path.
```
kind: PersistentVolume
apiVersion: v1
@@ -841,4 +841,4 @@ Feature: Discovery of block devices

Milestone 1: Dynamically provisioned PVs to dynamically allocated devices

-Milestone 2: Plugin changes with dynamic provisioning support (RBD, iSCSI, GCE, AWS & GlusterFS)
+Milestone 2: Plugin changes with dynamic provisioning support (RBD, iSCSI, GCE, AWS & GlusterFS)
@@ -102,7 +102,7 @@ type VolumeNodeAffinity struct {
The `Required` field is a hard constraint and indicates that the PersistentVolume
can only be accessed from Nodes that satisfy the NodeSelector.

-In the future, a `Preferred` field can be added to handle soft node contraints with
+In the future, a `Preferred` field can be added to handle soft node constraints with
weights, but will not be included in the initial implementation.
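
A small sketch of the hard constraint described here, with the selector reduced to a label map for brevity (the real field uses the existing Kubernetes NodeSelector type; the zone label value below is hypothetical):

```go
package main

import "fmt"

// nodeSelector is a simplified stand-in for the Kubernetes NodeSelector type.
type nodeSelector struct {
	MatchLabels map[string]string
}

// VolumeNodeAffinity mirrors the type in this hunk: Required is a hard
// constraint, and a future Preferred field would carry soft, weighted terms.
type VolumeNodeAffinity struct {
	Required *nodeSelector
}

func main() {
	affinity := VolumeNodeAffinity{
		Required: &nodeSelector{MatchLabels: map[string]string{
			"failure-domain.beta.kubernetes.io/zone": "us-central1-a",
		}},
	}
	fmt.Printf("%+v\n", affinity)
}
```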

The advantages of this NodeAffinity field vs the existing method of using zone labels
@@ -492,7 +492,7 @@ if the API update fails, the cached updates need to be reverted and restored
with the actual API object. The cache will return either the cached-only
object, or the informer object, whichever one is latest. Informer updates
will always override the cached-only object. The new predicate and priority
-functions must get the objects from this cache intead of from the informer cache.
+functions must get the objects from this cache instead of from the informer cache.
This cache only stores pointers to objects and most of the time will only
point to the informer object, so the memory footprint per object is small.

2 changes: 1 addition & 1 deletion contributors/devel/gubernator.md
@@ -20,7 +20,7 @@ test results.
Gubernator simplifies the debugging process and makes it easier to track down failures by automating many
steps commonly taken in searching through logs, and by offering tools to filter through logs to find relevant lines.
Gubernator automates the steps of finding the failed tests, displaying relevant logs, and determining the
-failed pods and the corresponing pod UID, namespace, and container ID.
+failed pods and the corresponding pod UID, namespace, and container ID.
It also allows for filtering of the log files to display relevant lines based on selected keywords, and
allows for multiple logs to be woven together by timestamp.

2 changes: 1 addition & 1 deletion contributors/devel/kubemark-guide.md
@@ -124,7 +124,7 @@ and Scheduler talk with API server using insecure port 8080.</sub>
(We use gcr.io/ as our remote docker repository in GCE, should be different for other providers)
3. [One-off] Create and upload a Docker image for NodeProblemDetector (see kubernetes/node-problem-detector repo),
which is one of the containers in the HollowNode pod, besides HollowKubelet and HollowProxy. However we
-use it with a hollow config that esentially has an empty set of rules and conditions to be detected.
+use it with a hollow config that essentially has an empty set of rules and conditions to be detected.
This step is required only for other cloud providers, as the docker image for GCE already exists on GCR.
4. Create secret which stores kubeconfig for use by HollowKubelet/HollowProxy, addons, and configMaps
for the HollowNode and the HollowNodeProblemDetector.
2 changes: 1 addition & 1 deletion contributors/devel/staging.md
@@ -12,7 +12,7 @@ At the time of this writing, this includes the branches
- release-1.8 / release-5.0,
- and release-1.9 / release-6.0

-of the follwing staging repos in the k8s.io org:
+of the following staging repos in the k8s.io org:

- api
- apiextensions-apiserver
@@ -65,6 +65,6 @@ Call for topics and voting is now closed. You can view the complete list of prop

## Misc:

-A photographer and videographer will be onsite collecting b-roll and other shots for KubeCon. If you would rather not be involved, please reach out to an organizer on the day of so we may accomodate you.
+A photographer and videographer will be onsite collecting b-roll and other shots for KubeCon. If you would rather not be involved, please reach out to an organizer on the day of so we may accommodate you.

Further details to be updated on this doc. Please check back for a complete guide.