diff --git a/README.md b/README.md index 74fcd7b..7299651 100644 --- a/README.md +++ b/README.md @@ -29,7 +29,7 @@ Hope, everything went well in product provisioning . Lets now get started to exp Hurrah!!!!.. Now that the product is up and running and let's get into the real action of using it. First let us create an API and add some enforcement to it. -1. Login to the webapp ([http://localhost:9072/)](http://localhost:9072/)) using the default credentials (Administrator/manage). If you are using API cloud, make sure to launch the "API Gateway" app.  +1. Login to the webapp ([http://localhost:9072/)](http://localhost:9072/)) using the default credentials. If you are using API cloud, make sure to launch the "API Gateway" app.  2. Create a "DateAPI" by importing [the attached archive](attachments/651659260/651661834.zip) through the import option under the user menu. By default, this API is protected with the API Key security enforcement. @@ -87,7 +87,7 @@ References * [API Gateway DevOps Repo](https://github.com/SoftwareAG/webmethods-api-gateway-devops) * [DevOps Templates](https://github.com/SoftwareAG/sagdevops-templates)  * [Product Documentation](https://docs.webmethods.io/)  -* [Tech community](http://techcommunity.softwareag.com/pwiki/-/wiki/tag/api-gateway) +* [Tech community](https://tech.forums.softwareag.com/tags/c/knowledge-base/6/API-Gateway) * [Software AG Devcast Videos](https://www.youtube.com/results?search_query=software+ag+devcast)  ______________________ diff --git a/docs/articles/architecture/Elasticsearch Best Practices (v7.2.0)/README.md b/docs/articles/architecture/Elasticsearch Best Practices (v7.2.0)/README.md index a8119a7..519618f 100644 --- a/docs/articles/architecture/Elasticsearch Best Practices (v7.2.0)/README.md +++ b/docs/articles/architecture/Elasticsearch Best Practices (v7.2.0)/README.md @@ -486,8 +486,7 @@ For Elasticsearch monitoring, no plugins are shipped along with API Gateway but 1. [Elastic-HQ](http://www.elastichq.org/) 2. [Elasticsearch-kopf](http://elasticsearch-kopf/) -3. [Bigdesk](http://bigdesk.org/) -4. [Elasticsearch-head](https://github.com/mobz/elasticsearch-head) +3. [Elasticsearch-head](https://github.com/mobz/elasticsearch-head) ### Across Data Centre deployment  diff --git a/docs/articles/features/README.md b/docs/articles/features/README.md index 062fc37..662af56 100644 --- a/docs/articles/features/README.md +++ b/docs/articles/features/README.md @@ -16,12 +16,12 @@ API Mocking is nothing but the imitation of real API. It simulates the behavior SOAP to REST Transformation --------------------------- -SOAP web services are commonly used to expose data within enterprises. With the rapid adoption of the REST APIs, it is now a necessity for API providers to have the ability to provide RESTful interfaces to their existing SOAP web services instead of creating new REST APIs. Using the API Gateway SOAP to REST transformation feature, the API provider can either expose the parts of the SOAP API or expose the complete SOAP API with RESTful interface. API Gateway allows you to customize the way the SOAP operations are exposed as REST resources. Additionally, the Swagger or RAML definitions can be generated for these REST interfaces. **[Read on...](http://techcommunity.softwareag.com/pwiki/-/wiki/Main/SOAP%20to%20REST%20Transformation)** +SOAP web services are commonly used to expose data within enterprises. 
With the rapid adoption of the REST APIs, it is now a necessity for API providers to have the ability to provide RESTful interfaces to their existing SOAP web services instead of creating new REST APIs. Using the API Gateway SOAP to REST transformation feature, the API provider can either expose the parts of the SOAP API or expose the complete SOAP API with RESTful interface. API Gateway allows you to customize the way the SOAP operations are exposed as REST resources. Additionally, the Swagger or RAML definitions can be generated for these REST interfaces. **[Read on...](https://tech.forums.softwareag.com/t/soap-to-rest-transformation/236956)** Teams in API Gateway -------------------- -Team support feature allows you to group the users who work in a project, or users with similar roles, as a team. Using this feature, you can assign assets for each team and specify the access level of team members based on the team members' project requirements. This feature is helpful for organizations that have multiple teams, who work on different projects. Users can access only the assets that are assigned to them. For example, consider an organization with different teams such as Development, Configuration Management, Product Analytics, and Quality Assurance. Each of these teams needs access to different assets at different levels. That is, developers would require APIs to develop applications and they require the necessary privileges to manage APIs and applications. Similarly, analysts would want the necessary privileges to view performance dashboards of assets. In such scenarios, you can group users based on their roles as a team and assign them the necessary privileges based on their responsibility. **[Read on...](http://techcommunity.softwareag.com/pwiki/-/wiki/Main/Teams%20in%20APIGateway)** +Team support feature allows you to group the users who work in a project, or users with similar roles, as a team. Using this feature, you can assign assets for each team and specify the access level of team members based on the team members' project requirements. This feature is helpful for organizations that have multiple teams, who work on different projects. Users can access only the assets that are assigned to them. For example, consider an organization with different teams such as Development, Configuration Management, Product Analytics, and Quality Assurance. Each of these teams needs access to different assets at different levels. That is, developers would require APIs to develop applications and they require the necessary privileges to manage APIs and applications. Similarly, analysts would want the necessary privileges to view performance dashboards of assets. In such scenarios, you can group users based on their roles as a team and assign them the necessary privileges based on their responsibility. **[Read on...](https://tech.forums.softwareag.com/t/teams-in-api-gateway/237355)** API Mashups ----------- diff --git a/docs/articles/operations/Configure and Operate API Gateway for handling large data volume/README.md b/docs/articles/operations/Configure and Operate API Gateway for handling large data volume/README.md index 317f63c..3fef41e 100644 --- a/docs/articles/operations/Configure and Operate API Gateway for handling large data volume/README.md +++ b/docs/articles/operations/Configure and Operate API Gateway for handling large data volume/README.md @@ -90,12 +90,12 @@ OOTB, Elasticsearch, or internal data store will have a default configuration. 
|Minimum number of nodes|Minimum number of nodes required is 3| |Set all three nodes as master.|By default, all nodes will be master unless explicitly set node.master as false| |Set minimum heap space as 2gb|Follow below steps to increase or decrease heap space of Elasticsearch node
1. Go to -> \\\InternalDataStore\\config\\jvm.options
2. Change the value of property -Xmx\g.ex: to increase from 2g to 4g, customer can set the value as -Xmx4g| -|node.name|Set a human readable node name by setting [node.name](http://node.name) in elasticsearch.yml in all nodes| +|node.name|Set a human readable node name by setting "node.name" property in elasticsearch.yml in all nodes| |Initial master nodes|Add all the three node names in [initial.master\_nodes](https://www.elastic.co/guide/en/elasticsearch/reference/master/modules-discovery-bootstrap-cluster.html) in elasticsearch.yml. These are the nodes that are responsible for forming a single cluster for the very first when we start Elasticsearch cluster. As per Elasticsearch recommendation add at least three master eligible nodes in cluster.initial.master\_nodes| |Discovery seed hosts|Add the three nodes host:httpport as discovery.seed\_hosts. Elasticsearch will discover the cluster nodes using the hosts specified in this property.| |Path Repo|Configure the **repo** to common location that is accessible for all Elasticsearch nodes. All the backups taken using either Elasticsearch snapshot or API Gateway backup utility will be stored here. Refer this article [https://techcommunity.softwareag.com/pwiki/-/wiki/Main/Periodical%20Data%20backup](https://techcommunity.softwareag.com/pwiki/-/wiki/Main/Periodical%20Data%20backup)
1. Backup to [AWS S3](https://www.elastic.co/guide/en/elasticsearch/plugins/current/repository-s3.html) bucket or shared file system options are available, so that the local disk space will not be occupied.| |Refresh Interval|After starting the API Gateway, set the [refresh](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-refresh.html) interval for events index type as below

1. Go to API Gateway UI -> Administration -> Extended settings -> **eventRefreshInterval** to 60s and save it. In Elasticsearch, the operation that makes any updates to the data visible to search is called a refresh . It is costly operation when there are large volumes of data and calling it often while there is ongoing indexing activity can impact indexing speed. The below queries will make the index refresh every 1 minute| -|[Disk based shard allocation settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/disk-allocator.html)|If node disk spaces are equal, then provide the percentage
Elasticsearch uses the below settings to consider the available disk space on a node before deciding whether to allocate new shards to that node or to actively relocate shards away from that node. 

1. cluster.routing.allocation.disk.watermark.low
**Default: 85%** which means Elasticsearch will stop allocating new shards to nodes that have more than 85% disk used
2. cluster.routing.allocation.disk.watermark.high
**Default: 90%** which means Elasticsearch will attempt to relocate shards away from a node whose disk usage is above 90%
3.    cluster.routing.allocation.disk.watermark.flood\_stage
**Default: 95%** which means Elasticsearch enforces a read-only index block (index.blocks.read\_only\_allow\_delete) on every index that has one or more shards allocated on the node that has at least one disk exceeding the flood stage. This is the last resort to prevent nodes from running out of disk space.

The values can be set in percentage and absolute. If the nodes have equal space, then the customer can configure the values in percentage
curl -X PUT "[http://localhost:9240/\_cluster/settings?pretty](http://localhost:9240/_cluster/settings?pretty)" -H 'Content-Type: application/json' -d'
{
    "persistent" : {
    "cluster.routing.allocation.disk.watermark.low": "75%",
    "cluster.routing.allocation.disk.watermark.high": "85%",
    "cluster.routing.allocation.disk.watermark.flood\_stage": "95%",
    "[cluster.info](http://cluster.info).update.interval": "1m"
  }
}

If the node disk spaces are not equal, then provide in absolute value. Set the absolute value based on disk size available. Ex:
curl -X PUT "[http://localhost:9240/\_cluster/settings?pretty](http://localhost:9240/_cluster/settings?pretty)" " -H 'Content-Type: application/json' -d'
{
  " persistent" : {
    "cluster.routing.allocation.disk.watermark.low": "100gb ",
    "cluster.routing.allocation.disk.watermark.high": "50gb",
    "cluster.routing.allocation.disk.watermark.flood\_stage": "10gb",
    "[cluster.info](http://cluster.info).update.interval": "1m"
  }
}'| +|[Disk based shard allocation settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/disk-allocator.html)|If node disk spaces are equal, then configure it in percentage.

Elasticsearch uses the following settings to take the available disk space on a node into account before deciding whether to allocate new shards to that node or to actively relocate shards away from it. 

1. cluster.routing.allocation.disk.watermark.low
**Default: 85%**, which means Elasticsearch will stop allocating new shards to nodes that have more than 85% disk used
2. cluster.routing.allocation.disk.watermark.high
**Default: 90%**, which means Elasticsearch will attempt to relocate shards away from a node whose disk usage is above 90%
3. cluster.routing.allocation.disk.watermark.flood\_stage
**Default: 95%**, which means Elasticsearch enforces a read-only index block (index.blocks.read\_only\_allow\_delete) on every index that has one or more shards allocated on a node with at least one disk exceeding the flood stage. This is the last resort to prevent nodes from running out of disk space.

The values can be set either as percentages or as absolute values. If the nodes have equal disk space, configure the values as percentages:
curl -X PUT "http://localhost:9240/\_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
    "persistent" : {
    "cluster.routing.allocation.disk.watermark.low": "75%",
    "cluster.routing.allocation.disk.watermark.high": "85%",
    "cluster.routing.allocation.disk.watermark.flood\_stage": "95%",
    "cluster.info.update.interval": "1m"
  }
}'

If the node disk sizes are not equal, provide absolute values based on the available disk size. For example:
curl -X PUT "http://localhost:9240/\_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent" : {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "50gb",
    "cluster.routing.allocation.disk.watermark.flood\_stage": "10gb",
    "cluster.info.update.interval": "1m"
  }
}'| Kibana Configuration -------------------- @@ -353,7 +353,7 @@ Any one of the below actions can be taken to recover disk space * [Roll over the index](https://www.elastic.co/guide/en/elasticsearch/reference/7.2/indices-rollover-index.html) - By default, the API gateway has created an alias for all events. Below are the aliases ([http://localhost:9240/\_cat/aliases?v](http://localhost:9240/_cat/aliases?v)) and you can find the corresponding index by checking [http://localhost:9240/. It will display the current write index and below is the list of aliases in API Gateway 10.5 + By default, the API gateway has created an alias for all events. Below are the aliases (you can list them using this URL http://localhost:9240/\_cat/aliases?v) and you can find the corresponding index by checking http://localhost:9240/. It will display the current write index. Below is the list of aliases in API Gateway 10.5 * gateway\_\\_analytics\_transactionalevents * gateway\_\\_analytics\_performancemetrics @@ -497,7 +497,7 @@ curl -X GET "http://localhost:5555/rest/apigateway/health/engine" curl -X GET "http://localhost:5555/rest/apigateway/health/admin" ``` -Additionally API Gateway also provides endpoints for metrics - [http://localhost:5555/](http://localhost:5555/rest/apigateway/health/engine)metrics  +Additionally API Gateway also provides endpoints for metrics - http://localhost:5555/metrics  #### Actions diff --git a/samples/kubernetes/helm/cluster-deployment/README.md b/samples/kubernetes/helm/cluster-deployment/README.md index 6876a11..c16a565 100644 --- a/samples/kubernetes/helm/cluster-deployment/README.md +++ b/samples/kubernetes/helm/cluster-deployment/README.md @@ -8,12 +8,10 @@ In order to setup an API Gateway cluster: * follow the instructions in the [chart readme](apigateway/README.md) to provide Helm values, * finally run `helm install my-helm-release apigateway -f my-values.yaml`. -At the time of writing this article there is no prepared Docker image for the Terracotta BigMemory product. -Users need to create their own Docker image from an on-premise installation of Terracotta BigMemory by following the -product documentation. The Terracotta installation comes with a Dockerfile for this purpose. - -Apart from that, the chart by default pulls the API Gateway trial image from [Dockerhub](https://hub.docker.com/_/softwareag-apigateway), and it pulls -open source images for ElasticSearch and Kibana from the [ElasticSearch Docker repository](https://www.docker.elastic.co/). +By default the chart pulls these images: +* the API Gateway trial image from [Dockerhub](https://hub.docker.com/_/softwareag-apigateway), +* the Terracotta Bigmemory Max trial image from [Dockerhub](https://hub.docker.com/_/software-ag-bigmemory-max), +* the open source images for ElasticSearch and Kibana from the [ElasticSearch Docker repository](https://www.docker.elastic.co/). ## Technical details diff --git a/samples/kubernetes/helm/cluster-deployment/apigateway/README.md b/samples/kubernetes/helm/cluster-deployment/apigateway/README.md index e4438fc..a88d214 100644 --- a/samples/kubernetes/helm/cluster-deployment/apigateway/README.md +++ b/samples/kubernetes/helm/cluster-deployment/apigateway/README.md @@ -28,7 +28,7 @@ provided as configmaps. 
Hence before running `helm install` create the configmaps: ``` -kubectl create configmap tc-licencse-config --from-file=terracotta-license.key= +kubectl create configmap tc-license-config --from-file=terracotta-license.key= kubectl create configmap apigw-license-config --from-file=licenseKey.xml= ``` @@ -51,13 +51,10 @@ explicitly when running `helm install`, for example in a separate file `my-value ``` # my-values.yaml k8sClusterSuffix: " .. cluster suffix .. " - -terracotta: - terracottaImage: " .. the image location .. " - terracottaTag: " .. the tag .. " ``` The cluster suffix must match the suffix of the URLs published by the Kubernetes cluster's Ingress Controller. +The suffix needs to start with a dot `.` character. ## Trial Usage diff --git a/samples/kubernetes/helm/cluster-deployment/apigateway/charts/terracotta/templates/terracotta-statefulset.yaml b/samples/kubernetes/helm/cluster-deployment/apigateway/charts/terracotta/templates/terracotta-statefulset.yaml index 36c6892..784b29c 100644 --- a/samples/kubernetes/helm/cluster-deployment/apigateway/charts/terracotta/templates/terracotta-statefulset.yaml +++ b/samples/kubernetes/helm/cluster-deployment/apigateway/charts/terracotta/templates/terracotta-statefulset.yaml @@ -61,7 +61,7 @@ spec: subPath: tc-config.xml readOnly: false - name: license-volume - mountPath: /license/license.key + mountPath: /licenses/license.key subPath: terracotta-license.key readOnly: false imagePullSecrets: diff --git a/samples/kubernetes/helm/cluster-deployment/apigateway/charts/terracotta/values.yaml b/samples/kubernetes/helm/cluster-deployment/apigateway/charts/terracotta/values.yaml index 6b0e2b6..e9d590a 100644 --- a/samples/kubernetes/helm/cluster-deployment/apigateway/charts/terracotta/values.yaml +++ b/samples/kubernetes/helm/cluster-deployment/apigateway/charts/terracotta/values.yaml @@ -24,12 +24,12 @@ port: 9410 syncPort: 9430 -terracottaImage: "specify-a-terracotta-image" -terracottaTag: "specify-a-terracotta-tag" +terracottaImage: "store/softwareag/bigmemorymax-server" +terracottaTag: "4.3.8" tcConfig: "tc-config" -tcLicenseConfig: "tc-licencse-config" +tcLicenseConfig: "tc-license-config" tcLicenseFilename: "terracotta-license.key"
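
For quick reference, below is a minimal sketch of the cluster deployment flow described by the updated Helm chart documentation above. The local license file paths and the `.apps.example.com` suffix are illustrative assumptions only; substitute your own license locations and the suffix published by your Kubernetes cluster's Ingress Controller.

```
# Provide the Terracotta and API Gateway licenses as configmaps
# (the local file paths below are hypothetical examples).
kubectl create configmap tc-license-config --from-file=terracotta-license.key=./terracotta-license.key
kubectl create configmap apigw-license-config --from-file=licenseKey.xml=./licenseKey.xml

# Supply the mandatory cluster suffix; it must start with a dot.
cat > my-values.yaml <<'EOF'
k8sClusterSuffix: ".apps.example.com"
EOF

# Install the chart. The API Gateway trial, BigMemory Max trial, Elasticsearch
# and Kibana images are pulled from the chart's default repositories.
helm install my-helm-release apigateway -f my-values.yaml
```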