To get started quickly you need Docker and docker-compose.
To run the example with a local PostgreSQL DB in Docker, create a default-env.json
file with the following content:
{
  "VCAP_SERVICES": {
    "postgres": [
      {
        "name": "postgres",
        "label": "postgres",
        "tags": [
          "database"
        ],
        "credentials": {
          "host": "localhost",
          "port": "5432",
          "database": "beershop",
          "user": "postgres",
          "password": "postgres"
        }
      }
    ]
  }
}
Start the PostgreSQL database and Adminer using:
npm run docker:start:pg
This uses the latest available PostgreSQL Docker image. If you want to test with PostgreSQL 11, run:
npm run docker:start:pg:11
Now deploy the database schema using cds-dbm with the command:
npm run deploy:pg
Then open http://localhost:8080/ and log in by selecting System: PostgreSQL, Server: beershop-postgresql, Username: postgres and Password: postgres. The database beershop should already exist, as you've just deployed it. If you have issues with the deployment, you can run the SQL commands via Adminer; you find them in the file beershop.sql.
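If you prefer the command line over Adminer, you can apply the same SQL with psql (assuming the container publishes PostgreSQL on localhost:5432, as configured in default-env.json):
PGPASSWORD=postgres psql -h localhost -U postgres -d beershop -f beershop.sql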
Now you can start the CAP application by using:
cds run
Then open http://localhost:4004/beershop/Beers in the browser and you should see:
{
  "@odata.context": "$metadata#Beers",
  "value": [
    {
      "ID": "b8c3fc14-22e2-4f42-837a-e6134775a186",
      "name": "Lagerbier Hell",
      "abv": 5.2,
      "ibu": 12,
      "brewery_ID": "9c937100-d459-491f-a72d-81b2929af10f"
    },
    {
      "ID": "9e1704e3-6fd0-4a5d-bfb1-13ac47f7976b",
      "name": "Schönramer Hell",
      "abv": 5,
      "ibu": 20,
      "brewery_ID": "fa6b959e-3a01-40ef-872e-6030ee4de4e5"
    }
  ]
}
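You can fetch the same payload on the command line with curl:
curl http://localhost:4004/beershop/Beers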
To stop the Docker containers run either:
npm run docker:stop:pg
or:
npm run docker:stop:pg:11
We're using the mbt build tool to create an mtar that can be deployed to the SAP Cloud Platform Cloud Foundry environment. The build is started with:
npm run build:cf
then you can deploy with:
npm run deploy:cf
Until SAP provides a fully managed PostgreSQL DB you need to provide your own PostgreSQL DB. One way is to install an Open Service Broker. The page Compliant Service Brokers lists brokers supporting AWS, Azure and GCP. The SAP Developers tutorial mission Use Microsoft Azure Services in SAP Cloud Platform describes in great detail how to set up the Service Broker for Azure. When you have finished this setup you can run:
npm run create-service:pg:dbms
to instantiate a PostgreSQL DBMS. Then run:
npm run create-service:pg:db
to create the beershop database in the DBMS. With that preparation done you can build the MTA by running:
npm run build:mta
That MTA can be deployed using:
npm run deploy:cf
The created database is empty. As no deploy script is currently available, the tables and views needed by the CAP application must be created before you can run it. The easiest way to create them is to use Adminer, as for the local deployment. You can get the credentials by opening the pg-beershop-srv application via the SAP Cloud Platform Cockpit: navigate to the Service Bindings and click on "Show sensitive data". Enter the data in the corresponding fields of the Adminer login screen. Execute the SQL commands you find in beershop.sql. To fill the database with data, also execute the ones in beershop-data.sql. Now try out the URL you find in the Overview of the pg-beershop-srv application.
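As an alternative to the Cockpit, you can read the same credentials with the cf CLI, assuming you are logged in and targeting the right org and space:
cf env pg-beershop-srv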
If you want to build your own Docker image, replace gregorwolf in package.json and deployment/beershop.yaml with your own hub.docker.com account. Then run:
npm run build:docker
To test the image locally you have to create a .env file that provides the environment variable VCAP_SERVICES with the connection information. Fill it with the following content:
VCAP_SERVICES={"docker-postgres":[{"name":"postgres","label":"postgres","tags":["database"],"credentials":{"host":"beershop-postgresql","port":"5432","database":"beershop","user":"postgres","password":"postgres"}}]}
Then run:
npm run docker:start:cds
to start the image gregorwolf/pg-beershop:latest from hub.docker.com. If you want to run your own image, run the command you find in package.json with your image.
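For reference, a roughly equivalent docker run invocation is sketched below; the network name beershop is an assumption and must match the network the PostgreSQL container is attached to:
docker run --rm --env-file .env -p 4004:4004 --network beershop gregorwolf/pg-beershop:latest
Finally publish the created image with: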
npm run push:docker
Download the kubeconfig from your Kyma instance via the menu behind the account icon in the upper right corner. Save it as ~/.kube/kubeconfig-kyma.yml. Then run:
export KUBECONFIG=~/.kube/kubeconfig-kyma.yml
Please note that the token in the kubeconfig is only valid for 8 hours, so you might have to download it again whenever you want to run the commands later.
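You can verify that the connection works with:
kubectl cluster-info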
To keep this project separate from your other deployments I would suggest creating a namespace:
kubectl create namespace pg-beershop
Deploy the configuration:
kubectl -n pg-beershop apply -f deployment/beershop.yaml
To create the beershop database, a port forward must be started:
kubectl -n pg-beershop port-forward service/beershop-postgresql 5432:5432
Then you can connect with the psql client. The password is postgres:
psql -h localhost -U postgres --password
Run the SQL commands from db/init/beershop.sql.
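With the port forward still running, the script can also be applied non-interactively; connecting to the postgres maintenance database first is an assumption in case the beershop database does not exist yet:
PGPASSWORD=postgres psql -h localhost -U postgres -d postgres -f db/init/beershop.sql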
For troubleshooting you can open a shell in the CAP container:
kubectl -n pg-beershop exec -it $(kubectl -n pg-beershop get pods -l tier=frontend -o jsonpath='{.items[0].metadata.name}') -- /bin/bash
If you want to delete the deployment, then run:
kubectl -n pg-beershop delete -f deployment/beershop.yaml
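If you also want to remove the namespace created earlier, run:
kubectl delete namespace pg-beershop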
Install the Azure CLI for your respective OS. With the command:
az account list-locations -o table
you can retrieve the list of locations and select the one fitting your needs best. To deploy a PostgreSQL database, the extension db-up needs to be installed:
az extension add --name db-up
Set the environment variables:
export postgreservername=<yourServerName>
export adminpassword=<yourAdminPassword>
Then the PostgreSQL server and database can be created:
az postgres up --resource-group beershop --location germanywestcentral --sku-name B_Gen5_1 --server-name $postgreservername --database-name beershop --admin-user beershop --admin-password $adminpassword --ssl-enforcement Enabled --version 11
If you want to use this database from your own location or from SAP Cloud Platform Trial in eu10, then you have to add a firewall rule. Based on the information found in SAP Cloud Platform Connectivity - Network, I added the following rule:
az postgres server firewall-rule create -g beershop -s $postgreservername -n cfeu10 --start-ip-address 3.122.0.0 --end-ip-address 3.124.255.255
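You can verify the configured rules afterwards with:
az postgres server firewall-rule list -g beershop -s $postgreservername -o table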
Store the DB connection information in default-env.json. It must contain the certificate for the TLS connection as documented in Configure TLS connectivity in Azure Database for PostgreSQL - Single Server. The format must be the following:
{
  "VCAP_SERVICES": {
    "postgres": [
      {
        "label": "azure-postgresql-database",
        "provider": null,
        "plan": "database",
        "name": "beershop-database",
        "tags": ["PostgreSQL"],
        "instance_name": "beershop-database",
        "binding_name": null,
        "credentials": {
          "host": "<yourServerName>.postgres.database.azure.com",
          "port": 5432,
          "database": "beershop",
          "password": "<yourAdminPassword>",
          "username": "beershop@<yourServerName>",
          "ssl": {
            "rejectUnauthorized": false,
"ca": "-----BEGIN CERTIFICATE-----MIIDdzCCAl+gAwIBAgIEAgAAuTANBgkqhkiG9w0BAQUFADBaMQswCQYDVQQGEwJJRTESMBAGA1UEChMJQmFsdGltb3JlMRMwEQYDVQQLEwpDeWJlclRydXN0MSIwIAYDVQQDExlCYWx0aW1vcmUgQ3liZXJUcnVzdCBSb290MB4XDTAwMDUxMjE4NDYwMFoXDTI1MDUxMjIzNTkwMFowWjELMAkGA1UEBhMCSUUxEjAQBgNVBAoTCUJhbHRpbW9yZTETMBEGA1UECxMKQ3liZXJUcnVzdDEiMCAGA1UEAxMZQmFsdGltb3JlIEN5YmVyVHJ1c3QgUm9vdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKMEuyKrmD1X6CZymrV51Cni4eiVgLGw41uOKymaZN+hXe2wCQVt2yguzmKiYv60iNoS6zjrIZ3AQSsBUnuId9Mcj8e6uYi1agnnc+gRQKfRzMpijS3ljwumUNKoUMMo6vWrJYeKmpYcqWe4PwzV9/lSEy/CG9VwcPCPwBLKBsua4dnKM3p31vjsufFoREJIE9LAwqSuXmD+tqYF/LTdB1kC1FkYmGP1pWPgkAx9XbIGevOF6uvUA65ehD5f/xXtabz5OTZydc93Uk3zyZAsuT3lySNTPx8kmCFcB5kpvcY67Oduhjprl3RjM71oGDHweI12v/yejl0qhqdNkNwnGjkCAwEAAaNFMEMwHQYDVR0OBBYEFOWdWTCCR1jMrPoIVDaGezq1BE3wMBIGA1UdEwEB/wQIMAYBAf8CAQMwDgYDVR0PAQH/BAQDAgEGMA0GCSqGSIb3DQEBBQUAA4IBAQCFDF2O5G9RaEIFoN27TyclhAO992T9Ldcw46QQF+vaKSm2eT929hkTI7gQCvlYpNRhcL0EYWoSihfVCr3FvDB81ukMJY2GQE/szKN+OMY3EU/t3WgxjkzSswF07r51XgdIGn9w/xZchMB5hbgF/X++ZRGjD8ACtPhSNzkE1akxehi/oCr0Epn3o0WC4zxe9Z2etciefC7IpJ5OCBRLbf1wbWsaY71k5h+3zvDyny67G7fyUIhzksLi4xaNmjICq44Y3ekQEe5+NauQrz4wlHrQMz2nZQ/1/I6eYs9HRCwBXbsdtTLSR9I4LtD+gdwyah617jzV/OeBHRnDJELqYzmp-----END CERTIFICATE-----"
          },
          "sslRequired": true,
          "tags": ["postgresql"]
        },
        "syslog_drain_url": null,
        "volume_mounts": []
      }
    ]
  }
}
Connect to the database as described in the last paragraph of Run on SAP Cloud Platform.
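As a sketch (with the same placeholders as above), a direct psql connection over TLS looks like this:
psql "host=<yourServerName>.postgres.database.azure.com port=5432 dbname=beershop user=beershop@<yourServerName> sslmode=require"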
Store the file content in the environment variable VCAP_SERVICES (jq must be installed):
export VCAP_SERVICES="$(cat default-env.json | jq .VCAP_SERVICES)"
Now create the app service plan:
az appservice plan create --name beershop --resource-group beershop --sku F1 --is-linux
Check out which Node.js runtimes are available:
az webapp list-runtimes --linux
Then create the web app:
az webapp create --resource-group beershop --plan beershop --name beershop --runtime "NODE|12.9"
Configure an environment variable with the value created before:
az webapp config appsettings set --name beershop --resource-group beershop --settings VCAP_SERVICES="$VCAP_SERVICES"
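To verify that the variable was stored, list the app settings:
az webapp config appsettings list --name beershop --resource-group beershop -o table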
Now you can publish the app using the Azure DevOps Pipeline.
To delete the database you can run:
az postgres server delete --resource-group beershop --name beershop
You have to confirm the execution with y.
Install the Google Cloud SDK for your respective OS. Then work through the Quickstart for Node.js in the standard environment. Create the App Engine application with:
gcloud app create
Store the environment variable in env_variables.yaml:
env_variables:
  VCAP_SERVICES: '{}'
This file is included in app.yaml.
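For illustration, a minimal app.yaml including the variables file could look like this; the runtime value is an assumption and depends on the Node.js version you target:
runtime: nodejs12
includes:
  - env_variables.yaml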
When you run:
npm run compile:tosql
the CDS model will be compiled to the beershop-cds.sql file. The script cdssql2pgsql.js then converts this SQL to support PostgreSQL. Currently only the datatype NVARCHAR must be replaced with VARCHAR.
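For illustration, the same substitution could be done with sed; the project itself uses cdssql2pgsql.js for this:
sed 's/NVARCHAR/VARCHAR/g' beershop-cds.sql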
The path db/data is mounted into the Docker container at /tmp/data. This allows running the COPY commands generated at the end of beershop.sql.
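For illustration, such a generated COPY statement could look roughly like this; the table and file names are hypothetical, the real ones come from the CDS model:
-- hypothetical example, actual names are generated from the CDS model
COPY beers FROM '/tmp/data/beers.csv' WITH (FORMAT csv, HEADER true);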
Running the Jest tests with:
npm test
currently fails with:
Failed: Object {
"code": -20005,
"message": "Failed to load DBCAPI.",
"sqlState": "HY000",
"stack": "Error: Failed to load DBCAPI.
When running the standalone script node test/test-db.js, which connects in the same way, everything works fine.
Right now the schema for the Kyma deployment must be updated manually. Using Liquibase in Kubernetes describes the use of init containers in Kubernetes. I think the Docker image timbru31/java-node:11-erbium should be a good basis.
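A rough sketch of what such an init container could look like in deployment/beershop.yaml; the image and the placeholder command are assumptions, not a tested setup:
initContainers:
  - name: schema-update
    image: timbru31/java-node:11-erbium
    # placeholder: run the Liquibase-based schema migration here
    command: ["sh", "-c", "echo 'run schema migration here'"]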