To deploy Atlassian’s Data Center products, the following is required:

- An understanding of Kubernetes and Helm concepts
- A Kubernetes cluster running Kubernetes 1.19 or later
- kubectl 1.19 or later, compatible with your cluster
- Helm v3.3 or later

Before installing the Data Center Helm charts, you need to set up your environment:

- Install tools
- Create and connect to the Kubernetes cluster
  - See examples of provisioning Kubernetes clusters on cloud-based providers.
  - To install the charts to your Kubernetes cluster, your Kubernetes client configuration must be set up appropriately and you must have the necessary permissions (a minimal kubeconfig sketch follows this list).
  - It is up to you to set up security policies.
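
  As an illustration only, a client configuration (kubeconfig) entry for the cluster might look like the sketch below. The cluster name, server URL, user, namespace and credential mechanism are all placeholders; the real values depend on how your cluster is provisioned.

  ```yaml
  # Illustrative kubeconfig entry; every name, URL and credential below is a
  # placeholder to be replaced with the details of your own cluster.
  apiVersion: v1
  kind: Config
  clusters:
    - name: dc-cluster
      cluster:
        server: https://dc-cluster.example.com
        certificate-authority-data: <base64-encoded CA certificate>
  contexts:
    - name: dc-context
      context:
        cluster: dc-cluster
        user: dc-admin
        namespace: atlassian   # namespace the charts will be installed into
  users:
    - name: dc-admin
      user:
        token: <authentication token or cloud-provider auth plugin>
  current-context: dc-context
  ```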
- Provision an Ingress Controller
  - See an example of provisioning an NGINX Ingress Controller.
  - This step is necessary to make your Atlassian product available from outside the Kubernetes cluster after deployment.
  - The Kubernetes project supports and maintains ingress controllers for the major cloud providers, including AWS, GCE and nginx. A number of open-source third-party projects are also available.
  - Because different Kubernetes clusters use different ingress configurations/controllers, the Helm charts provide Ingress object templates only.
  - The Ingress resource provided as part of the Helm charts is geared toward the NGINX Ingress Controller and can be configured via the `ingress` stanza in the appropriate `values.yaml` (an alternative controller can be used); a sketch of such a stanza follows this list.
  - For more information about the Ingress controller, go to the Ingress section of the configuration guide.
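
  As an illustration only, an `ingress` stanza might look like the sketch below. The exact keys and defaults vary between chart versions and products, so treat the field names shown here as assumptions to be checked against the `values.yaml` of the chart you are installing.

  ```yaml
  # Illustrative ingress stanza for values.yaml; verify field names and
  # defaults against the chart version you are installing.
  ingress:
    create: true                    # have the chart create an Ingress resource
    nginx: true                     # set to false when using a non-NGINX controller
    host: confluence.example.com    # DNS name the product will be served from
    path: "/"
    https: true
    tlsSecretName: tls-certificate  # Secret holding the TLS certificate and key
    annotations: {}                 # extra annotations, e.g. for other controllers
  ```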
- Provision a database
  - See an example of provisioning databases on cloud-based providers.
  - The database must be of a type and version supported by the Data Center product you wish to install:
    i. Confluence supported databases
  - The database must be reachable from the product deployed within your Kubernetes cluster.
  - The database service may be deployed within the same Kubernetes cluster as the Data Center product or elsewhere.
  - The products need to be provided with the information they need to connect to the database service. Configuration for each product is mostly the same, with some small differences; a sketch follows this list. For more information, go to the Database connectivity section of the configuration guide.
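
  For illustration, connecting Confluence to a PostgreSQL service might involve a Kubernetes Secret for the credentials and a `database` stanza along the lines of the sketch below. The field names, Secret layout and JDBC URL are assumptions to verify against your chart's `values.yaml` and the Database connectivity guide.

  ```yaml
  # Illustrative credentials Secret; the name and keys are placeholders and
  # are referenced from the database stanza below.
  apiVersion: v1
  kind: Secret
  metadata:
    name: confluence-database-credentials
  type: Opaque
  stringData:
    username: confluence_user
    password: change-me
  ```

  ```yaml
  # Illustrative database stanza in values.yaml; adjust the type and JDBC URL
  # for your database, and verify the keys against your chart version.
  database:
    type: postgresql
    url: jdbc:postgresql://db.example.com:5432/confluence
    credentials:
      secretName: confluence-database-credentials
      usernameSecretKey: username
      passwordSecretKey: password
  ```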
- Configure a shared-home volume
  - See examples of creating shared storage.
  - All of the Data Center products require a shared network filesystem if they are to be operated in multi-node clusters. If no shared filesystem is available, the products can only be operated in single-node configuration.
  - The `shared-home` volume must be correctly configured as a read-write shared filesystem (e.g. NFS, AWS EFS, Azure Files).
  - The recommended setup is to use Kubernetes PersistentVolumes and PersistentVolumeClaims. The `local-home` volume requires a PersistentVolume with `ReadWriteOnce` (RWO) capability, and `shared-home` requires a PersistentVolume with `ReadWriteMany` (RWX) capability. Typically this will be an NFS volume provided as part of your infrastructure, but some public-cloud Kubernetes engines provide their own RWX volumes (e.g. AzureFile, Amazon Elastic File System); a sketch of an RWX claim follows this list.
  - For more information about volumes, go to the Volumes section of the configuration guide.
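
  As a sketch, a shared-home claim backed by an RWX-capable storage class might look like the following. The claim name, storage class and size are placeholders, and how the claim is wired into the chart depends on the volume settings in the chart's `values.yaml`.

  ```yaml
  # Illustrative PersistentVolumeClaim for shared-home; the name, storage
  # class and size are placeholders. The storage class must be backed by an
  # RWX-capable filesystem such as NFS, EFS or Azure Files.
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: atlassian-shared-home
  spec:
    accessModes:
      - ReadWriteMany             # required for shared-home in multi-node clusters
    storageClassName: nfs-client  # hypothetical RWX storage class
    resources:
      requests:
        storage: 10Gi
  ```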
- Continue to the installation guide
- Dive deeper into the configuration options
- Go back to README.md