Merge pull request kubernetes#4227 from rifelpet/docs_update
Update state and cloudLabels docs, fix --target description
k8s-ci-robot authored Jan 9, 2018
2 parents eac162f + 9b3f0c1 commit a9a7aff
Showing 4 changed files with 13 additions and 3 deletions.
2 changes: 1 addition & 1 deletion cmd/kops/create_cluster.go
@@ -246,7 +246,7 @@ func NewCmdCreateCluster(f *util.Factory, out io.Writer) *cobra.Command {
}

cmd.Flags().BoolVarP(&options.Yes, "yes", "y", options.Yes, "Specify --yes to immediately create the cluster")
-	cmd.Flags().StringVar(&options.Target, "target", options.Target, fmt.Sprintf("Valid targets: %s, %s, %s. Set this flag to %s if you want kops to generate terraform", cloudup.TargetDirect, cloudup.TargetTerraform, cloudup.TargetDirect, cloudup.TargetTerraform))
+	cmd.Flags().StringVar(&options.Target, "target", options.Target, fmt.Sprintf("Valid targets: %s, %s, %s. Set this flag to %s if you want kops to generate terraform", cloudup.TargetDirect, cloudup.TargetTerraform, cloudup.TargetCloudformation, cloudup.TargetTerraform))
cmd.Flags().StringVar(&options.Models, "model", options.Models, "Models to apply (separate multiple models with commas)")

// Configuration / state location
2 changes: 1 addition & 1 deletion docs/cli/kops_create_cluster.md
@@ -100,7 +100,7 @@ kops create cluster
--ssh-access stringSlice Restrict SSH access to this CIDR. If not set, access will not be restricted by IP. (default [0.0.0.0/0])
--ssh-public-key string SSH public key to use (default "~/.ssh/id_rsa.pub")
--subnets stringSlice Set to use shared subnets
-      --target string            Valid targets: direct, terraform, direct. Set this flag to terraform if you want kops to generate terraform (default "direct")
+      --target string            Valid targets: direct, terraform, cloudformation. Set this flag to terraform if you want kops to generate terraform (default "direct")
-t, --topology string Controls network topology for the cluster. public|private. Default is 'public'. (default "public")
--utility-subnets stringSlice Set to use shared utility subnets
--vpc string Set to use a shared VPC
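For illustration, a minimal sketch of invoking the corrected `cloudformation` target (cluster name, zone, and output path are assumptions, not part of this commit):

```
# Hypothetical invocation: render a CloudFormation template locally
# instead of creating resources directly.
kops create cluster \
  --name=example.k8s.local \
  --zones=us-east-1a \
  --target=cloudformation \
  --out=.
```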
2 changes: 1 addition & 1 deletion docs/instance_groups.md
@@ -242,7 +242,7 @@ spec:

## Add Tags on AWS autoscaling groups and instances

-If you need to add tags on auto scaling groups or instnaces (propagate ASG tags), you can add it in the instance group specs with *cloudLabels*.
+If you need to add tags on auto scaling groups or instances (propagate ASG tags), you can add it in the instance group specs with *cloudLabels*. Cloud Labels defined at the cluster spec level will also be inherited.

```
# Example for nodes
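# The rest of this example is truncated in the diff view; what follows is
# a hypothetical sketch of an instance group carrying cloudLabels. All
# names and values are illustrative assumptions, not commit content.
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
spec:
  cloudLabels:
    billing: infra
    environment: staging
  machineType: t2.medium
  maxSize: 2
  minSize: 2
  role: Node
```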
10 changes: 10 additions & 0 deletions docs/state.md
@@ -29,3 +29,13 @@ may prefer to just edit the config file with `kops edit cluster`.

Because the configuration is merged, this is how you can just specify the changed arguments when
reconfiguring your cluster - for example just `kops create cluster` after a dry-run.
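As a hypothetical illustration of that flow (cluster name and zone are assumptions):

```
# Dry-run: writes the merged cluster configuration to the state store
# without creating any cloud resources.
kops create cluster --name=example.k8s.local --zones=us-east-1a

# Apply: reads the stored configuration back and creates the resources.
kops update cluster --name=example.k8s.local --yes
```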

## Moving state between S3 buckets

The state store can easily be moved to a different S3 bucket. The steps for a single cluster are as follows:
1. Recursively copy all files from `${OLD_KOPS_STATE_STORE}/${CLUSTER_NAME}` to `${NEW_KOPS_STATE_STORE}/${CLUSTER_NAME}` with `aws s3 sync` or a similar tool.
2. Update the `KOPS_STATE_STORE` environment variable to use the new S3 bucket.
3. Either run `kops edit cluster ${CLUSTER_NAME}` or edit the cluster manifest YAML file directly, updating `.spec.configBase` to reference the new S3 bucket.
4. Run `kops update cluster ${CLUSTER_NAME} --yes` to apply the changes to the cluster. Newly launched nodes will now retrieve their dependent files from the new S3 bucket, and the files in the old bucket are safe to delete.

Repeat these steps for each cluster that needs to be moved; a command-level sketch follows below.
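A hypothetical sketch of the steps above for a single cluster (bucket and cluster names are illustrative assumptions, not part of this commit):

```
# Illustrative names; substitute your own buckets and cluster.
OLD_KOPS_STATE_STORE=s3://old-kops-state
NEW_KOPS_STATE_STORE=s3://new-kops-state
CLUSTER_NAME=example.k8s.local

# 1. Recursively copy the cluster's state into the new bucket.
aws s3 sync "${OLD_KOPS_STATE_STORE}/${CLUSTER_NAME}" "${NEW_KOPS_STATE_STORE}/${CLUSTER_NAME}"

# 2. Point kops at the new bucket.
export KOPS_STATE_STORE="${NEW_KOPS_STATE_STORE}"

# 3. Update .spec.configBase to reference the new bucket.
kops edit cluster "${CLUSTER_NAME}"

# 4. Apply the change so newly launched nodes read from the new bucket.
kops update cluster "${CLUSTER_NAME}" --yes
```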
