[terraform] Add VULTR (vultr.com) terraform setup for a fullnode on k8s.
Ivan Morozov authored and aptos-bot committed May 4, 2022
1 parent 7025756 commit 2415fd6
Showing 6 changed files with 313 additions and 0 deletions.
83 changes: 83 additions & 0 deletions terraform/fullnode/vultr/.terraform.lock.hcl

Some generated files are not rendered by default.

67 changes: 67 additions & 0 deletions terraform/fullnode/vultr/README.md
Aptos Fullnode Deployment on VULTR (https://www.vultr.com/)
===========================================================

This directory contains Terraform configs to deploy a public fullnode on VULTR.

These instructions assume that you have a functioning VULTR account.
The default configuration creates a single-node cluster (4 CPUs/8 GB) and automatically allocates and binds persistent block storage (SSD) using VULTR-CSI (https://github.com/vultr/vultr-csi).


1. Install prerequisites if needed:

* Terraform 1.1.7: https://www.terraform.io/downloads.html
* Docker: https://www.docker.com/products/docker-desktop
* Kubernetes cli: https://kubernetes.io/docs/tasks/tools/

Once you have a VULTR account, log in to VULTR, go to ACCOUNT -> API and obtain your Personal Access Token.
In Access Control, whitelist the IP address of the machine from which you will run Terraform.


2. Clone the aptos-core repo and go to the terraform vultr folder.

$ git clone https://github.com/aptos-labs/aptos-core.git

$ cd aptos-core/terraform/fullnode/vultr

3. Change the cluster name (the `label`) in `cluster.tf`.

4. Configure cluster properties in `variables.tf`.

The most important variable is `api_key`; make sure you use the API key obtained in step 1. By default this creates one machine (4 CPUs/8 GB) in Frankfurt.
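
Terraform can pick these values up without editing `variables.tf`: it reads `TF_VAR_*` environment variables and any `terraform.tfvars` file in the working directory. A minimal sketch of both approaches (the API key below is a placeholder, not a real token):

```shell
# Pass the secret via the environment rather than committing it anywhere.
export TF_VAR_api_key="REPLACE_WITH_YOUR_VULTR_API_KEY"

# Non-secret overrides can live in terraform.tfvars next to the configs.
cat > terraform.tfvars <<'EOF'
machine_type    = "vc2-4c-8gb"  # 4 CPUs / 8 GB, the default plan
fullnode_region = "fra"         # Frankfurt, the default region
num_fullnodes   = 1
EOF
```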

5. Initialize Terraform and apply the configuration (this might take a while):

$ terraform init

$ terraform apply

6. Configure your Kubernetes client:

Log in to your VULTR account and go to Products -> Kubernetes. Click the three dots on the right side and choose "Manage".
Click "Download Configuration"; this downloads a YAML file containing the access configuration for your cluster.

$ export KUBECONFIG=~/vke...yaml

7. Check that your fullnode pods are now running (this may take a few minutes):

$ kubectl get pods -n aptos

8. Get your fullnode IP:

$ kubectl get svc -o custom-columns=IP:status.loadBalancer.ingress -n aptos

9. Check the REST API and make sure the ledger version is increasing.

$ curl http://<IP>
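
The check in step 9 can be scripted. The sketch below assumes the API root returns JSON with a `ledger_version` field, as on devnet; against a live node you would replace the `echo` of the sample with `curl -s http://<IP>`:

```shell
# Hypothetical sample of the node API root response, standing in for
# the output of: curl -s http://<IP>
sample='{"chain_id":32,"epoch":"17","ledger_version":"71000694"}'

# Extract the ledger version; run this twice a few seconds apart and
# confirm the number is increasing.
echo "$sample" | sed -n 's/.*"ledger_version":"\([0-9]*\)".*/\1/p'
```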

10. To verify the correctness of your fullnode, as outlined in the documentation (https://aptos.dev/tutorials/run-a-fullnode/#verify-the-correctness-of-your-fullnode), set up port-forwarding directly to the aptos pod in one SSH terminal and test it from another.

* Set up port-forwarding to the aptos-fullnode pod. Use `kubectl get pods -n aptos` to get the name of the pod:

$ kubectl port-forward -n aptos <pod-name> 9101:9101

* Open a new SSH terminal and execute the following curl calls to verify correctness:

$ curl -v http://0:9101/metrics 2> /dev/null | grep "aptos_state_sync_version{type=\"synced\"}"

$ curl -v http://0:9101/metrics 2> /dev/null | grep "aptos_connections{direction=\"outbound\""

* Exit port-forwarding when you are done by pressing Ctrl-C in the terminal.
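
To show what the greps in step 10 match: each metric is a Prometheus-style line with the value after the label set. A sketch against a sample line (the value here is made up):

```shell
# Hypothetical sample of one line of /metrics output from the fullnode,
# standing in for: curl -s http://0:9101/metrics
sample='aptos_state_sync_version{type="synced"} 71000694'

# The grep from step 10 selects the metric; awk pulls out the value,
# which should grow between successive checks.
echo "$sample" | grep 'aptos_state_sync_version{type="synced"}' | awk '{print $2}'
```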
16 changes: 16 additions & 0 deletions terraform/fullnode/vultr/cluster.tf
resource "vultr_kubernetes" "k8" {
  region  = var.fullnode_region
  label   = "aptos-devnet"
  version = "v1.23.5+3"

  node_pools {
    node_quantity = var.num_fullnodes
    plan          = var.machine_type
    label         = "aptos-fullnode"
  }
}

resource "local_file" "kube_config" {
  content  = base64decode(vultr_kubernetes.k8.kube_config)
  filename = "${path.module}/vultr_kube_config.yml"
}
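
The `local_file` resource writes the cluster's kubeconfig to disk; the Vultr provider hands it back base64-encoded, which is why `base64decode` is needed. A small shell sketch of the same transformation (the kubeconfig content here is a stand-in, not a real cluster config):

```shell
# Simulate the base64-encoded kube_config attribute the provider returns.
encoded=$(printf 'apiVersion: v1\nkind: Config\n' | base64)

# What local_file does: decode it and write it next to the module.
printf '%s\n' "$encoded" | base64 -d > vultr_kube_config.yml
head -n 1 vultr_kube_config.yml
```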
64 changes: 64 additions & 0 deletions terraform/fullnode/vultr/kubernetes.tf
provider "kubernetes" {
  config_path = local_file.kube_config.filename
}

resource "kubernetes_namespace" "aptos" {
  metadata {
    name = var.k8s_namespace
  }
}

resource "kubernetes_storage_class" "ssd" {
  metadata {
    name = "ssd"
  }
  storage_provisioner = "block.csi.vultr.com"
  volume_binding_mode = "WaitForFirstConsumer"
  parameters = {
    block_type = "high_perf"
  }
}

provider "helm" {
  kubernetes {
    config_path = local_file.kube_config.filename
  }
}

resource "helm_release" "fullnode" {
  count            = var.num_fullnodes
  name             = "${terraform.workspace}${count.index}"
  chart            = "${path.module}/../../helm/fullnode"
  max_history      = 100
  wait             = false
  namespace        = var.k8s_namespace
  create_namespace = true

  values = [
    jsonencode({
      chain = {
        era = var.era
      }
      image = {
        tag = var.image_tag
      }
      nodeSelector = {
        "vke.vultr.com/node-pool" = "node"
      }
      storage = {
        class = kubernetes_storage_class.ssd.metadata[0].name
      }
      service = {
        type = "LoadBalancer"
      }
    }),
    jsonencode(var.fullnode_helm_values),
    jsonencode(var.fullnode_helm_values_list == {} ? {} : var.fullnode_helm_values_list[count.index]),
  ]

  set {
    name  = "timestamp"
    value = var.helm_force_update ? timestamp() : ""
  }
}
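
With `count = var.num_fullnodes`, each Helm release is named after the Terraform workspace followed by the node index ("default" is the workspace name unless you created another one). A quick shell sketch of the resulting release names:

```shell
# Mirror the Terraform expression "${terraform.workspace}${count.index}"
# for a hypothetical three-node deployment in the default workspace.
workspace=default
num_fullnodes=3
for i in $(seq 0 $((num_fullnodes - 1))); do
  echo "${workspace}${i}"
done
```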

16 changes: 16 additions & 0 deletions terraform/fullnode/vultr/main.tf
terraform {
  required_providers {
    vultr = {
      source  = "vultr/vultr"
      version = "2.10.1"
    }
  }
}

provider "local" {}

provider "vultr" {
  api_key     = var.api_key
  rate_limit  = 700
  retry_limit = 3
}
67 changes: 67 additions & 0 deletions terraform/fullnode/vultr/variables.tf
variable "helm_values" {
  description = "Map of values to pass to Helm"
  type        = any
  default     = {}
}

variable "fullnode_helm_values" {
  description = "Map of values to pass to public fullnode Helm"
  type        = any
  default     = {}
}

variable "fullnode_helm_values_list" {
  description = "List of values to pass to public fullnode, for setting different values per node. length(fullnode_helm_values_list) must equal var.num_fullnodes"
  type        = any
  default     = {}
}

variable "helm_force_update" {
  description = "Force Terraform to update the Helm deployment"
  default     = false
}

variable "k8s_namespace" {
  description = "Kubernetes namespace that the fullnode will be deployed into"
  default     = "aptos"
}

variable "k8s_api_sources" {
  description = "List of CIDR subnets which can access the Kubernetes API endpoint"
  default     = ["0.0.0.0/0"]
}

variable "num_fullnodes" {
  description = "Number of fullnodes"
  default     = 1
}

variable "image_tag" {
  description = "Docker image tag to use for the fullnode"
  default     = "devnet"
}

variable "era" {
  description = "Chain era, used to start a clean chain"
  default     = 1
}

variable "chain_id" {
  description = "Aptos chain ID"
  default     = "DEVNET"
}

variable "machine_type" {
  description = "Machine type for running the fullnode. All configurations can be obtained at https://www.vultr.com/api/#tag/plans"
  default     = "vc2-4c-8gb"
}

variable "api_key" {
  description = "API key, can be obtained at https://my.vultr.com/settings/#settingsapi"
  default     = ""
}

variable "fullnode_region" {
  description = "Geographical region for the node location. All 25 regions can be obtained at https://api.vultr.com/v2/regions"
  default     = "fra"
}
