From 215ed6f7cd32241a7cd01d45dda0059cd3e87c85 Mon Sep 17 00:00:00 2001
From: Pearl Dsilva
Date: Wed, 6 Oct 2021 15:02:14 +0530
Subject: [PATCH] Update Documentation

---
 README.md     | 44 ++++++++++++++---------------
 files/aliasrc | 77 +++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 99 insertions(+), 22 deletions(-)
 create mode 100644 files/aliasrc

diff --git a/README.md b/README.md
index e293493..57381b9 100644
--- a/README.md
+++ b/README.md
@@ -32,7 +32,7 @@ Table of Contents

 ![mbx architecture](doc/images/arch.png)

-A `mbx` environment consists of VMs that runs the CloudStack management server
+An `mbx` environment consists of VMs that run the CloudStack management server
 and hypervisor hosts. These VMs are provisioned on a local host-only `monkeynet`
 network which is a /16 nat-ed RFC1918 IPv4 network. The diagram above shows how
 nested guest VMs and virtual router are plugged in nested-virtual networks that
@@ -51,8 +51,8 @@ https://github.com/shapeblue/hackerbook/blob/main/1-user.md
 `/export/testing` for environment-specific primary and secondary storages.

 A typical `mbx` environment deployment makes copy of a CloudStack
-version-specific gold-master directory that generally containing two empty primary
-storage directory (`primary1` and `primary2`) and one secondary storage
+version-specific gold-master directory that generally contains two empty primary
+storage directories (`primary1` and `primary2`) and one secondary storage
 directory (`secondary`). The secondary storage directory must be seeded with
 CloudStack version-specific `systemvmtemplates`. The `systemvmtemplate` is then
 used to create system VMs such as the Secondary-Storage VM, Console-Proxy
@@ -75,10 +75,10 @@ network. The `mbx init` command initialises this network.
                  | IP: 172.20.x.y |
                  +-----------------+

-The 172.20.0.0/16 RFC1918 private network is used as the other 192.168.x.x and
-10.x.x.x CIDRs may be already in by VPN, lab resources and office/home networks.
+The 172.20.0.0/16 RFC1918 private network is used, as the other 192.168.x.x and
+10.x.x.x CIDRs may already be in use by VPN, lab resources and office/home networks.

-To keep the setup simple all MonkeyBox VMs have a single nic which can be
+To keep the setup simple, all MonkeyBox VMs have a single NIC which can be
 used as a single physical network in CloudStack that has the public, private,
 management/control and storage networks. A complex setup is possible by adding
 multiple virtual networks and nics on them.
@@ -167,7 +167,7 @@ on your machine:
 The `mbx init` is idempotent and can be used to update templates and domain xml
 definitions.

-The `mbx init` command initialises this network. You can check and confirm the
+The `mbx init` command initialises the `monkeynet` network. You can check and confirm the
 network using:

     $ virsh net-list
@@ -186,8 +186,8 @@ like below:
 ![VM Manager Virt Network](doc/images/virt-net.png)

 This will create a virtual network with NAT and CIDR 172.20.0.0/16, the gateway
-`172.20.0.1` is also workstation/host's virtual bridge IP. The virtual network's
-bridge name `virbrX`may be different and it does not matter as long as you've a
+`172.20.0.1` is also the workstation/host's virtual bridge IP. The virtual network's
+bridge name `virbrX` may be different and it does not matter as long as you have a
 NAT-enabled virtual network in 172.20.0.0/16.

 Your workstation/host IP address is `172.20.0.1`.

@@ -196,11 +196,11 @@
 After setting up NFS on the workstation host, you need to create a
 CloudStack-version specific storage golden master directory that contains two
-primary storages and secondary storage folder with the systemvmtemplate for a
+primary storage folders and a secondary storage folder with the systemvmtemplate for the
 specific version of CloudStack seeded. The storage golden master is used as
-storage source of a mbx environment during `mbx deploy` command execution.
+the storage source of an mbx environment during `mbx deploy` command execution.

-Note: This is required one-time only for a specific version of CloudStack.
+Note: This is required to be done only once for a specific version of CloudStack.

 For example, the following is needed only one-time for creating a golden master
 storage directory for 4.15 version:
@@ -225,7 +225,7 @@ storage directory for 4.15 version:

 ## Using `mbx`

-`mbx` tool can be used to build CloudStack packages, deploy dev or QA
+The `mbx` tool can be used to build CloudStack packages, deploy dev or QA
 environments with KVM, VMware, XenServer and XCP-ng hypervisors, and run
 smoketests on them.
@@ -249,11 +249,11 @@ smoketests on them.

     mbx init

-1. To list available environments and `mbx` templates (mbxts) run:
+1. To list available environments and `mbx` templates (mbxts), run:

     mbx list

-2. To deploy an environment run:
+2. To deploy an environment, run:

     mbx deploy

@@ -284,7 +284,7 @@ More examples with specific repositories and custom storage source: (custom stor

 ## CloudStack Development

-This section cover how a developer can run management server and MySQL server
+This section covers how a developer can run management server and MySQL server
 locally to do local CloudStack development along side an IDE.

 For developer env, it is recommended that you run your favourite IDE such as
 IntelliJ IDEA, text-editors, your management server, MySQL server and NFS
 server (secondary and primary storages) on your workstation (not in a VM) where these
 services can be accessible to VMs, KVM hosts etc. at your host IP `172.20.0.1`.

-To ssh into deployed VMs (with NSS configured), you can login simply using:
+To ssh into deployed VMs (with NSS configured), you can log in simply by using:

     $ mbx ssh
@@ -315,8 +315,8 @@ Install pyenv, jenv as well.
 Setup `aliasrc` that defines some useful bash aliases, exports and utilities
 such as `agentscp`. Run the following while in the directory root:

-    $ echo "source $PWD/aliasrc" >> ~/.bashrc
-    $ echo "source $PWD/aliasrc" >> ~/.zshrc
+    $ echo "source $PWD/files/aliasrc" >> ~/.bashrc
+    $ echo "source $PWD/files/aliasrc" >> ~/.zshrc

 You may need to `source` your shell's rc/profile or relaunch shell/terminal to
 use `agentscp`.
@@ -349,7 +349,7 @@ cloned CloudStack git repository you can use the `cloud-install-sys-tmplt` to
 seed the systemvmtemplate.

 The following is an example to setup `4.15` systemvmtemplate which you should
-run after deploying CloudStack db: (please use CloudStack branch/version specific
+run after deploying the CloudStack db: (please use CloudStack branch/version specific
 systemvmtemplate)

     cd /path/to/cloudstack/git/repo
@@ -375,7 +375,7 @@ Noredist CloudStack builds requires additional jars that may be installed from:

 https://github.com/shapeblue/cloudstack-nonoss

 Clone the above repository and run the install.sh script, you'll need to do
-this only once or whenver the noredist jar dependencies are updated in above
+this only once or whenever the noredist jar dependencies are updated in the above
 repository.
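+
+For example, assuming the noredist repository is cloned next to your other
+source trees (the exact path is up to you), the one-time install might look
+like this:
+
+    $ git clone https://github.com/shapeblue/cloudstack-nonoss
+    $ cd cloudstack-nonoss
+    $ bash install.sh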

 Build using:
@@ -433,7 +433,7 @@ To remote-debug the KVM agent, put the following in

     JAVA=/usr/bin/java -Xdebug -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n

-The above will ensure that JVM with start with debugging enabled on port 8787.
+The above will ensure that the JVM starts with debugging enabled on port 8787.
 In IntelliJ, or your IDE/editor you can attach a remote debugger to this
 address:port and put breakpoints (and watches) as applicable.
diff --git a/files/aliasrc b/files/aliasrc
new file mode 100644
index 0000000..a264438
--- /dev/null
+++ b/files/aliasrc
@@ -0,0 +1,77 @@
+#
+# Source this file in your ~/.bashrc or ~/.zshrc using:
+# echo "source $PWD/files/aliasrc" >> ~/.bashrc
+# echo "source $PWD/files/aliasrc" >> ~/.zshrc
+#
+
+# Utf8 exports
+export LC_ALL=en_US.UTF-8
+export LANG=en_US.UTF-8
+
+# Local apps
+export PATH=$HOME/bin:$PATH
+
+# Maven
+export MAVEN_OPTS="-Xmx4096m -XX:MaxPermSize=500m -Xdebug -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n"
+
+# Jenv
+export PATH="$HOME/.jenv/bin:$PATH"
+eval "$(jenv init -)"
+
+# Pyenv
+#export PATH="$HOME/.pyenv/bin:$PATH"
+#eval "$(pyenv init -)"
+#eval "$(pyenv virtualenv-init -)"
+
+mgmtscp() {
+    MS=$1
+    ROOT=$PWD
+    echo "[acs server] Stopping MS: $MS"
+    sshpass -pP@ssword123 ssh -o StrictHostKeyChecking=no root@$MS "systemctl stop cloudstack-management"
+    echo "[acs server] Cleaning old jar on server: $MS"
+    sshpass -pP@ssword123 ssh -o StrictHostKeyChecking=no root@$MS "rm -f /usr/share/cloudstack-management/lib/cloudstack*jar"
+    sshpass -pP@ssword123 ssh -o StrictHostKeyChecking=no root@$MS "rm -f /usr/share/cloudstack-management/lib/cloud-client-ui*jar"
+    sshpass -pP@ssword123 ssh -o StrictHostKeyChecking=no root@$MS "mv /var/log/cloudstack/management/management-server.log /var/log/cloudstack/management/management-server.log-`date +%Y%m%dT%H%M%S`"
+    echo "[acs server] Copying jar to server: $MS"
+    sshpass -pP@ssword123 scp -Cv -o StrictHostKeyChecking=no $ROOT/client/target/cloud-client-ui-*.jar root@$MS:/usr/share/cloudstack-management/lib/
+    echo "[acs server] Copying systemvm.iso"
+    sshpass -pP@ssword123 scp -Cv -o StrictHostKeyChecking=no $ROOT/systemvm/dist/systemvm.iso root@$MS:/usr/share/cloudstack-common/vms/
+    echo "[acs server] Starting MS: $MS"
+    sshpass -pP@ssword123 ssh -o StrictHostKeyChecking=no root@$MS "systemctl start cloudstack-management"
+}
+
+agentscp() {
+    ROOT=$PWD
+    echo "[acs agent] Syncing changes to agent: $1"
+
+    echo "[acs agent] Copied systemvm.iso"
+    scp $ROOT/systemvm/dist/systemvm.iso root@$1:/usr/share/cloudstack-common/vms/
+
+    echo "[acs agent] Syncing python lib changes to agent: $1"
+    scp -r $ROOT/python/lib/* root@$1:/usr/lib64/python2.6/site-packages/ 2>/dev/null || true
+    scp -r $ROOT/python/lib/* root@$1:/usr/lib64/python2.7/site-packages/ 2>/dev/null || true
+
+    echo "[acs agent] Syncing scripts"
+    scp -r $ROOT/scripts/* root@$1:/usr/share/cloudstack-common/scripts/
+
+    echo "[acs agent] Syncing kvm hypervisor jars"
+    ssh root@$1 "rm -f /usr/share/cloudstack-agent/lib/*"
+    scp -r $ROOT/plugins/hypervisors/kvm/target/*jar root@$1:/usr/share/cloudstack-agent/lib/
+    scp -r $ROOT/plugins/hypervisors/kvm/target/dependencies/*jar root@$1:/usr/share/cloudstack-agent/lib/
+
+    echo "[acs agent] Syncing cloudstack-agent config and scripts"
+    scp $ROOT/agent/target/transformed/log4j-cloud.xml root@$1:/etc/cloudstack/agent/
+    ssh root@$1 "sed -i 's/INFO/DEBUG/g' /etc/cloudstack/agent/log4j-cloud.xml"
+    ssh root@$1 "sed -i 's/logs\/agent.log/\/var\/log\/cloudstack\/agent\/agent.log/g' /etc/cloudstack/agent/log4j-cloud.xml"
+    scp $ROOT/agent/target/transformed/libvirtqemuhook root@$1:/usr/share/cloudstack-agent/lib/
+
+    scp $ROOT/agent/target/transformed/cloud-setup-agent root@$1:/usr/bin/cloudstack-setup-agent
+    ssh root@$1 "sed -i 's/@AGENTSYSCONFDIR@/\/etc\/cloudstack\/agent/g' /usr/bin/cloudstack-setup-agent"
+    scp $ROOT/agent/target/transformed/cloud-ssh root@$1:/usr/bin/cloudstack-ssh
+    scp $ROOT/agent/target/transformed/cloudstack-agent-upgrade root@$1:/usr/bin/cloudstack-agent-upgrade
+    ssh root@$1 "chmod +x /usr/bin/cloudstack*"
+
+    ssh root@$1 "systemctl status cloudstack-agent && systemctl restart cloudstack-agent"
+
+    echo "[acs agent] Copied all files, start hacking!"
+}
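+
+# Example usage (a rough sketch; the IPs below are hypothetical addresses on
+# the 172.20.0.0/16 monkeynet network). Run these from the root of your built
+# CloudStack source tree, since both functions resolve build artifacts via $PWD:
+#
+#   mgmtscp 172.20.1.10    # push the freshly built cloud-client-ui jar and
+#                          # systemvm.iso to a management server VM, then
+#                          # restart cloudstack-management
+#   agentscp 172.20.1.20   # sync agent jars, scripts and config to a KVM
+#                          # host, then restart a running cloudstack-agent
+#
+# mgmtscp assumes root ssh access with the default P@ssword123 password used
+# above; agentscp assumes root ssh access to the KVM host (typically key-based).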