Splunk Observability Suite GDI: 101

Goals:

  • Learn how to deploy the Splunk OpenTelemetry Connector on Linux via UI wizard
  • Learn how to modify the configuration of OTel Collector
  • Learn how to configure a smartagent receiver (for Splunk Infrastructure Monitoring)
  • Learn how to modify the configuration of Fluentd (for Splunk Log Observer)
  • Learn how to instrument an application to emit spans (for Splunk APM)
  • Learn how to instrument a JS page to emit data (for Splunk RUM)

Prerequisites

Please ensure you have followed the repository README for initial configuration.

WARNING: To make things easier in the workshop we will be using the latest tag of Docker images. This is not recommended in production for a variety of reasons including potential changes to behavior on container restart.

Because you may already have some of these images locally and this lab requires very recent versions of them, let's clean up the ones we care about:

docker rmi -f quay.io/signalfx/splunk-otel-collector \
    jrei/systemd-ubuntu okhttp workshop-app >/dev/null 2>/dev/null

For the instrumentation parts of this workshop we need to prepare some items locally, which takes several minutes to complete. Let's kick off that process now: please open a separate terminal, cd into the o11y-gdi-workshop directory, and run:

cd demo-app && ./bootstrap.sh && cd ..

Continue with this lab while the above command runs.

Deploy Splunk OpenTelemetry Connector for Linux via UI wizard

WARNING: Using the Linux installer script is NOT supported or recommended for containerized Linux environments (e.g. Docker), because the installer script requires systemd and, for security reasons, containers do not have permission to run systemd. We use it here purely for demonstration purposes, to see how the installer script works. Directions on the supported way to deploy on Docker can be found here.

Run (ensure ${USER} returns a unique value):

sudo docker run --name docker-${USER} --rm -d --privileged \
    -p 4317:4317 -p 55679:55679 -p 9411:9411 \
    -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
    jrei/systemd-ubuntu
docker exec -it docker-${USER} /bin/bash

IMPORTANT: If you stop the container (docker stop <containerName>), which you should do at the end of this workshop, the state in the container will be lost.

Within the container run:

apt-get update && apt-get install -y curl lynx sudo vim wget

Note: For customer environments the above command is not required. It is meant for the workshop only.

Go to the Splunk Observability Suite UI > Data Sources > Linux and follow the wizard, pasting the commands into the container terminal window.

Note: For Access Token select Workshop.

That's it! You have successfully deployed the splunk-otel-collector and td-agent services. Metrics and logs should automatically start being collected and sent to the suite. Please confirm you see data on both the Dashboards and Log Observer pages for your container by adding a filter on host.name; the value can be found by running the hostname command within your container. Note there may be a delay of up to a minute before initial data starts appearing.

Remember that trace data is pushed to the collector by instrumented applications rather than pulled. You can simulate a trace by following the directions here or continue with the workshop below.
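For example, one quick way to push a synthetic trace is to POST a span to the collector's Zipkin receiver on port 9411, which the docker run command above published. This is only a sketch with arbitrary span values; the linked directions may use a different method. Run it from inside the container (or from the host, since the port is published):

curl -s -X POST http://localhost:9411/api/v2/spans \
  -H 'Content-Type: application/json' \
  -d '[{
        "traceId": "5af7183fb1d4cf5f352bff9a74ca9ad2",
        "id": "352bff9a74ca9ad2",
        "name": "workshop-synthetic-span",
        "timestamp": '"$(date +%s%6N)"',
        "duration": 100000,
        "kind": "SERVER",
        "localEndpoint": { "serviceName": "workshop-synthetic" }
      }]'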

Bonus: Run the following command:

sh /tmp/splunk-otel-collector.sh --help

Notice there are more configuration options available including the ability to not install Fluentd (e.g. if you are not using Log Observer).

Bonus: You can generate a support bundle on Linux by running:

/etc/otel/collector/splunk-support-bundle.sh

Check the contents of the resulting tarball in the /tmp directory.
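For example, something along these lines lists the bundle contents (the exact file name is generated by the script, so substitute the real name):

ls -lh /tmp/*.tar.gz                     # locate the generated support bundle
tar tzf /tmp/<bundleName>.tar.gz | head  # peek at its contents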

Change the default configuration

Why might you want to change the default configuration? Many reasons:

  • Some receivers are not needed (remove for security reasons)
  • Need to configure one or more processors (e.g. the attributes processor to add an environment attribute)
  • Need to configure an additional log source
  • Need to configure multiline parsing of a log source

If you use the Linux installer, where is the configuration for splunk-otel-collector and td-agent stored?

(Hint: the installer output tells you...but otherwise check the installer getting started documentation.)

Let's modify some configs!
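Remember that configuration changes only take effect after the corresponding service is restarted. Assuming the systemd units created by the installer, that looks roughly like:

sudo systemctl restart splunk-otel-collector   # after changing the collector config
sudo systemctl restart td-agent                # after changing the Fluentd config
sudo systemctl status splunk-otel-collector    # optionally verify it came back up cleanly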

Enable the logging exporter with debug logs

WARNING: Debug logging is very verbose and will result in increased resource utilization. Only enable debug logging when you are actively troubleshooting an issue.

Why? Because the logging exporter is great for troubleshooting and enabling debug logs can help get you comfortable changing the collector configuration.

How? The collector troubleshooting guide contains this information.

IMPORTANT: Enable the logging exporter, NOT debug logging for the collector binary itself!

Only enable the logging exporter for the traces pipeline for now. (Reminder: you need to both configure the logging exporter and enable it in at least one pipeline!)
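A minimal sketch of the change in the collector config; the pipeline contents below are illustrative, so keep whatever receivers, processors, and exporters your file already has and only add the logging entry:

exporters:
  logging:
    loglevel: debug   # older collector versions; newer ones use verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [jaeger, otlp, zipkin]      # keep your existing entries
      processors: [memory_limiter, batch]    # keep your existing entries
      exporters: [sapm, signalfx, logging]   # only logging is new here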

Once the logging exporter is enabled and the collector restarted, you can send some synthetic trace data (for example, the Zipkin span shown earlier) to confirm it is working.

Disable the Jaeger receiver

Why? By default the Splunk OpenTelemetry Connector enables a variety of receivers to make it easy to collect data from both greenfield and brownfield environments. Depending on the environment, some receivers may not be needed. Let's assume Jaeger is not used in this environment, so we want to remove the Jaeger receiver.

How? Check the documentation. Look for Jaeger in the configuration file and try to figure it out. Questions to think about:

  • Do you need to remove it from the receivers: section? Why or why not?
  • Is removing it from the receivers: section enough? Why or why not?
  • In addition to the collector configuration, what other configuration may need to change?
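Once you have worked through the questions above, the edit is roughly the following (a sketch; your pipeline entries may differ):

service:
  pipelines:
    traces:
      # remove jaeger from the receivers list (and delete the corresponding
      # jaeger: block under the top-level receivers: section as well)
      receivers: [otlp, zipkin]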

Enable the ntp smartagent receiver

Why? To see how Smart Agent monitors can be used with the Splunk OpenTelemetry Connector.

How? See the Smart Agent receiver documentation and the ntp monitor documentation.
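A sketch of what that might look like; the NTP host, interval, and the existing pipeline entries are assumptions to adapt to your config:

receivers:
  smartagent/ntp:
    type: ntp
    host: pool.ntp.org      # any reachable NTP server
    intervalSeconds: 600    # NTP servers may rate-limit frequent queries

service:
  pipelines:
    metrics:
      receivers: [hostmetrics, otlp, signalfx, smartagent/ntp]   # only smartagent/ntp is new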

Once enabled and the collector restarted, confirm that the ntp.offset_seconds metric is being collected. Consider enabling the logging exporter in the collector's metrics pipeline to test this; if you do, disable it once you are done confirming.

Add an environment tag to spans for Splunk APM

Why? Environments provide a way to segment your data. For example, you may have the same services in your staging and production environments but want to view the data from the two environments separately.

How? The collector provides processors that can be configured to perform CRUD operations on metadata. Span tags are called attributes in OpenTelemetry, and the collector offers an attributes processor. Configure it to add environment=<yourUsername>. (Hint: the relevant block is already commented out in the config and just needs to be enabled!)
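For reference, a sketch of the attributes processor and pipeline change; the commented-out block shipped in the default config may use a slightly different name, so adapt to what you find in the file:

processors:
  attributes/add_environment:
    actions:
      - key: environment
        value: <yourUsername>    # replace with your username
        action: insert

service:
  pipelines:
    traces:
      processors: [memory_limiter, batch, attributes/add_environment]   # keep existing entries, append the new one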

Once enabled and the collector restarted, you can send some synthetic trace data to confirm it is working. Because the logging exporter was configured previously, you can check the collector logs to verify that the environment attribute is added.

Bonus: Adding environment information to logs for Splunk Log Observer can also be beneficial. Update the configuration to add this information to logs as well.

Add a custom source to the Fluentd configuration

Why? Because you may have logs writing to a location that Fluentd is not configured to monitor by default. Examples of this would be applications that do not log to standard log locations like journald, /var/log, or Windows Event Viewer.

How? Create a custom source configuration in the conf.d/ directory and have it monitor /var/log/test.log. (Hint: look at the other source configurations and remember the SPLUNK label is critical!)
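A sketch of what such a file (for example conf.d/test.conf) might contain; the tag, pos_file path, and parse section are assumptions to adapt from the existing source files:

<source>
  @type tail
  @label @SPLUNK                           # routes events into the Splunk output pipeline
  tag tail.test
  path /var/log/test.log
  pos_file /var/log/td-agent/test.log.pos
  read_from_head true
  <parse>
    @type none
  </parse>
</source>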

Once configured and Fluentd restarted, you can write some synthetic log data to /var/log/test.log to confirm it is working. Consider enabling the logging exporter in the collector's logs pipeline to test this.
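For example, appending a line to the monitored file from inside the container should show up in Log Observer (and in the collector's logging exporter output, if enabled):

echo "workshop test log line $(date)" >> /var/log/test.log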

Instrument an application to emit data

WARNING: Please keep the Docker container above running as it will be used to send data below.

We will instrument a Java application using the Splunk distribution of OpenTelemetry Java. Read the documentation to learn how to get started. The java command can be found in demo-app/tracing-examples/signalfx-tracing/runJava.sh. Modify this file as appropriate to enable auto-instrumentation (note that the location of the javaagent JAR file can be found in okhttp-and-jedis.Dockerfile). Be sure to set the service.name as well!
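A sketch of what the modified java invocation in runJava.sh might look like; the JAR path and service name below are placeholders, so take the real agent location from okhttp-and-jedis.Dockerfile and keep the rest of the existing command line unchanged:

# hypothetical path/name -- only the -javaagent flag and the service.name
# resource attribute are being added to the existing java command
java -javaagent:/opt/splunk-otel-javaagent-all.jar \
     -Dotel.resource.attributes=service.name=${USER}-workshop-app \
     ...   # existing arguments from runJava.sh stay as they are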

Now that everything is configured, let's start the application.

Note: This application will send data to the splunk-otel-collector running in your container.

cd demo-app/tracing-examples/signalfx-tracing/
docker build -f okhttp-and-jedis.Dockerfile -t workshop-app .
docker run -e OTEL_EXPORTER_OTLP_ENDPOINT=http://0.0.0.0:4317 --network host workshop-app ./runJava.sh

Note: We are changing the default endpoint because in containerized environments localhost refers to the container only.

The application will take a few seconds to come up. Once the application is up, you should start seeing logs from the splunk-otel-collector.

Note: Be sure to shut down all containers when you are done, otherwise they will continue to consume resources and battery life. You can confirm everything is stopped by running docker ps.

Instrument a JS page to emit data

WARNING: Manually adding JS code to a web application is for demonstration purposes only! In addition, be aware that ad blockers and other extensions can prevent the JS code from running properly so consider doing this in an incognito window.

The following directions are for Chrome, but equivalent capabilities are available in other browsers. Go to the website of your choice -- for example https://google.com. Right-click on the page and select Inspect. Configure overrides as described here.

On the Sources tab, select (index). Click immediately after <head> in index.html and paste in the code snippet from the getting started documentation. Ensure the beaconUrl, rumAuth, and app parameters are properly configured.
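The snippet will look roughly like the following; copy the real script URL and initialization code from the getting started documentation, since the values here are placeholders:

<script src="https://cdn.signalfx.com/o11y-gdi-rum/latest/splunk-otel-web.js" crossorigin="anonymous"></script>
<script>
  SplunkRum.init({
    beaconUrl: 'https://rum-ingest.<realm>.signalfx.com/v1/rum',  // substitute your realm
    rumAuth: '<RUM workshop token>',
    app: '<yourUsername>-rum-app'
  });
</script>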

IMPORTANT: rumAuth is a RUM-specific token. Admins can create one in the UI like other access tokens; a RUM workshop token has already been created that you can use.

Once done, right-click on (index.html) and select Save for overrides. Refresh the page and ensure there are no errors. Go to RUM in the UI and you should immediately start seeing data!