GITBOOK-123: No subject
doppleware authored and gitbook-bot committed Jul 18, 2024
1 parent 2479bc4 commit 801f6e8
Showing 7 changed files with 13 additions and 13 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -11,7 +11,7 @@ description: >-

### What you can find here

-* How to install Digma, [locally](<README (1).md>) or in your [K8s Cluster](installation/central-on-prem-install/)
+* How to install Digma, [locally](<README (1).md>) or in your [K8s Cluster](installation/central-on-prem-install.md)
* How to [send observability data](broken-reference) from your code for Digma to Analyze
* Understanding the [core concepts](broken-reference) and terminology
* A deeper dive into the different [features and functionality](broken-reference)
2 changes: 1 addition & 1 deletion SUMMARY.md
@@ -9,7 +9,7 @@
* [Local Install](<README (1).md>)
* [Local Install Architecture](installation/local-install/local-install-architecture.md)
* [Installation Troubleshooting](installation/local-install/installation-troubleshooting.md)
-* [Central (on-prem) Install](installation/central-on-prem-install/README.md)
+* [Central (on-prem) Install](installation/central-on-prem-install.md)
* [Resource Requirements](installation/central-on-prem-install/resource-requirements.md)

## Instrumentation
2 changes: 1 addition & 1 deletion digma-core-concepts/performance-impact.md
@@ -1,6 +1,6 @@
# Performance Impact

-Performance impact is a unique Digma feature that is only measured in real-world-like scenarios such as end-to-end testing and staging environments. Therefore, this feature is only activated once you install Digma in a [central environment.](../installation/central-on-prem-install/)&#x20;
+Performance impact is a unique Digma feature that is only measured in real-world-like scenarios such as end-to-end testing and staging environments. Therefore, this feature is only activated once you install Digma in a [central environment.](../installation/central-on-prem-install.md)&#x20;

### What is a performance impact?

installation/central-on-prem-install/README.md → installation/central-on-prem-install.md
@@ -11,7 +11,7 @@ description: >-

Digma is deployed into the K8s cluster into its own namespace. Depending on your application deployment architecture you may want to deploy Digma with different parameters to enable the right connectivity.

-<figure><img src="../../.gitbook/assets/deployment_arch.png" alt="" width="375"><figcaption></figcaption></figure>
+<figure><img src="../.gitbook/assets/deployment_arch.png" alt="" width="375"><figcaption></figcaption></figure>

You should pay attention to the following regarding the deployment architecture:

@@ -50,7 +50,7 @@ helm install digma digma/digma --set digma.licenseKey=[DIGMA_LICENSE] --namespac

**Other optional parameters:**

-* `size` (small | medium | large) - The cluster can be deployed in multiple scales, depending on the expected load. The default sizing is `medium`. If you select a size that is too small to handle the number of spans per second, you'll get a message from the Digma plugin prompting you to upgrade to a bigger size. Please consult the [resource-requirements.md](resource-requirements.md "mention") page for allocating the relevant nodes.
+* `size` (small | medium | large) - The cluster can be deployed in multiple scales, depending on the expected load. The default sizing is `medium`. If you select a size that is too small to handle the number of spans per second, you'll get a message from the Digma plugin prompting you to upgrade to a bigger size. Please consult the [resource-requirements.md](central-on-prem-install/resource-requirements.md "mention") page for allocating the relevant nodes.
* `digmaAnalytics.accesstoken`(any string): This is a unique key you’ll need to provide any IDE that connects to this Digma instance, you can choose any token you'd like.
* `embeddedJaeger.enabled` (true/false) – Setting this to False will not expose the port for the Jaeger instance included with Digma. If you’re using your own APM and want to link to that instead, you can leave that at the default value (false)
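
For illustration, the optional values above can be combined with the base install command shown in this section; a minimal sketch, assuming a `digma` namespace and a placeholder access token (neither is prescribed here):

```bash
# Sketch only: base install command plus the optional parameters described above.
# [DIGMA_LICENSE], the namespace name, and the access token are placeholders.
helm install digma digma/digma \
  --set digma.licenseKey=[DIGMA_LICENSE] \
  --set size=large \
  --set digmaAnalytics.accesstoken=my-team-token \
  --set embeddedJaeger.enabled=true \
  --namespace digma --create-namespace
```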

@@ -64,9 +64,9 @@ helm install digma digma/digma --set digma.licenseKey=[DIGMA_LICENSE] --namespac

Digma can be set up to use either a public or an internal DNS. You should choose the option that better suits your requirements.

**Terraform**

**Deploying an EKS cluster**

If you'd like to create an EKS cluster from scratch, we created a simple Terraform file to help automate that process. You can find it in [this](https://github.com/digma-ai/digma/tree/main/dev/eks/terraform) repo.

**Internal DNS**

@@ -124,7 +124,7 @@ To check everything is working properly we can check the pod status and make sur

For example, this is the expected output:

-<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
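
That status check can be reproduced from the command line; a minimal sketch, assuming Digma was installed into a `digma` namespace:

```bash
# Sketch: list the Digma pods and wait until they all report Ready.
# The "digma" namespace is an assumption for illustration.
kubectl get pods -n digma
kubectl wait --for=condition=Ready pods --all -n digma --timeout=300s
```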

**Step 4: Get the IP/DNS value for the Digma deployment**
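
As a hedged sketch only (not necessarily the documented procedure for this step), the exposed IP/DNS values are typically read from the service list:

```bash
# Sketch: read the external IP/DNS of the exposed Digma services.
# The namespace name is an assumption for illustration.
kubectl get svc -n digma
```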

@@ -157,7 +157,7 @@ If you received a non-error response back you’re good to go for the next step!

Once Digma is up and running you can now set your IDE plugin to connect to it. To do that, open the plugin settings (Go to IntelliJ IDEA -> Settings/Preferences and search for ‘Digma’)

-<figure><img src="../../.gitbook/assets/image (25).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../.gitbook/assets/image (25).png" alt=""><figcaption></figcaption></figure>

* Set the `Digma API URL` parameter using the ANALYTICS-API value you’ve captured previously (By default this should be prefixed as ‘https’ and use port 5051)
* Set the `Runtime observability backend URL` parameter using the ‘COLLECTOR-API’ value you’ve captured previously
Original file line number Diff line number Diff line change
@@ -13,7 +13,7 @@ sudo chmod +x ./kustomize/kustomize_build.sh
sudo chmod +x ./kustomize/create_customization.sh
```

-4. Run the `create_customization.sh` script. You'll need to pass it some parameters based on the application and the Digma `collector-api` IP/DNS address (read more [here](../../installation/central-on-prem-install/)). Once you run this command, the helper scripts will generate a patch that can be applied with your Helm file to instrument the application. &#x20;
+4. Run the `create_customization.sh` script. You'll need to pass it some parameters based on the application and the Digma `collector-api` IP/DNS address (read more [here](../../installation/central-on-prem-install.md)). Once you run this command, the helper scripts will generate a patch that can be applied with your Helm file to instrument the application. &#x20;

{% code overflow="wrap" %}
```bash
4 changes: 2 additions & 2 deletions troubleshooting/digma-overload-warning.md
@@ -6,11 +6,11 @@

Since Digma is running locally on your development machine, it is set up to avoid consuming too much CPU resources. Therefore, instead of scaling up to handle traces concurrently, when Digma reaches a certain throughput limit, it will begin throttling incoming observability data. In essence, this means that some of the incoming traces will be dropped and Digma will continue to process traces at the same speed as before.

-If you've received the `Digma Overloaded` warning and you would like to scale up Digma to process the additional data - you can [install a Centralized Digma](../installation/central-on-prem-install/) on a local Kubernetes cluster.
+If you've received the `Digma Overloaded` warning and you would like to scale up Digma to process the additional data - you can [install a Centralized Digma](../installation/central-on-prem-install.md) on a local Kubernetes cluster.

### Overload when using Digma centrally

-When deploying Digma centrally, the Analytics Engine will scale based on the deployment configuration. Digma will monitor the size of its queues to detect if it is still unable to catch up with incoming data. In such a scenario, Digma must begin throttling or else risk running out of memory or accumulating too much lag. In such a scenario, you will receive a message that throttling is in place. One way to avoid this state is to deploy Digma using a larger deployment size. See the [central install documentation](../installation/central-on-prem-install/) for more details.
+When deploying Digma centrally, the Analytics Engine will scale based on the deployment configuration. Digma will monitor the size of its queues to detect if it is still unable to catch up with incoming data. In such a scenario, Digma must begin throttling or else risk running out of memory or accumulating too much lag. In such a scenario, you will receive a message that throttling is in place. One way to avoid this state is to deploy Digma using a larger deployment size. See the [central install documentation](../installation/central-on-prem-install.md) for more details.
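
For illustration, moving an existing central install to a larger size might look like the following sketch; the release, chart, and namespace names mirror the install example earlier on this page and are assumptions:

```bash
# Sketch: bump the deployment size on an existing release, keeping other values.
helm upgrade digma digma/digma --reuse-values --set size=large --namespace digma
```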



2 changes: 1 addition & 1 deletion use-cases-wip/prioritize-technical-debt.md
@@ -18,7 +18,7 @@ Digma assesses each asset (query, code location, endpoint etc.) to determine its

For example, a slow query that is rarely used or has a marginal effect on the overall request would be ranked lower than a slow query that is heavily used and critically affects multiple flows.

-Using this feature requires installing [Digma Centrally ](../installation/central-on-prem-install/)and collecting data from a shared environment such as CI, Staging, Testing, or Production.
+Using this feature requires installing [Digma Centrally ](../installation/central-on-prem-install.md)and collecting data from a shared environment such as CI, Staging, Testing, or Production.



