
feat: implement global lifecycle hooks and stats/failbot components #20


Merged: 28 commits, Jun 12, 2025

Commits:
2d73563 Initial plan for issue (Copilot, Jun 11, 2025)
9473701 Initial analysis and plan for global lifecycle hooks implementation (Copilot, Jun 11, 2025)
5340e15 Implement global lifecycle hooks and stats/failbot components (Copilot, Jun 11, 2025)
75f85d5 Fix linting issue in integration test (Copilot, Jun 11, 2025)
d0c99ed fix bundle (GrantBirki, Jun 12, 2025)
52fdb86 fix message (GrantBirki, Jun 12, 2025)
1c42a81 Fix acceptance test log output and integration test failure (Copilot, Jun 12, 2025)
1969571 revert (GrantBirki, Jun 12, 2025)
1b25383 revert (GrantBirki, Jun 12, 2025)
044472b Add logger accessor to Lifecycle and implement RequestMethodLogger ex… (GrantBirki, Jun 12, 2025)
0ffb0f1 Set log level to error to reduce noise in tests (GrantBirki, Jun 12, 2025)
d9e9d28 start the logger as early in the application call stack as possible (GrantBirki, Jun 12, 2025)
6fee9c5 Plan implementation of instrument plugins system (Copilot, Jun 12, 2025)
dd84c7a Implement pluggable instrument system and comprehensive documentation (Copilot, Jun 12, 2025)
dc727aa fix vendor (GrantBirki, Jun 12, 2025)
8e1379a lint (GrantBirki, Jun 12, 2025)
ba4076f add a custom stats instrument to the acceptance stack (GrantBirki, Jun 12, 2025)
bd1f331 add an on_response hook to the acceptance stack as an example (GrantBirki, Jun 12, 2025)
0627714 fix: ensure app is built before accessing global components in integr… (Copilot, Jun 12, 2025)
7f8b54f docs (GrantBirki, Jun 12, 2025)
cbb6689 add a custom failbot that works (GrantBirki, Jun 12, 2025)
50ddead fix: enhance logging for webhook event processing and add start time … (GrantBirki, Jun 12, 2025)
4a2b81e fix: remove grape-swagger dependency and update Gemfile.lock (GrantBirki, Jun 12, 2025)
762fb91 refactor: remove unused methods from Failbot and Stats instrument cla… (GrantBirki, Jun 12, 2025)
a0495d4 bump coverage (GrantBirki, Jun 12, 2025)
658107e refactor: improve comments and simplify request context in CatchallEn… (GrantBirki, Jun 12, 2025)
89d4f15 adding nocov blocks and 90% coverage (GrantBirki, Jun 12, 2025)
feefdd7 bump version (GrantBirki, Jun 12, 2025)
1 change: 1 addition & 0 deletions Gemfile
@@ -6,6 +6,7 @@ gemspec

group :development do
  gem "irb", "~> 1"
+  gem "rack-test", "~> 2.2"
  gem "rspec", "~> 3"
  gem "rubocop", "~> 1"
  gem "rubocop-github", "~> 0.26"
7 changes: 2 additions & 5 deletions Gemfile.lock
@@ -1,10 +1,9 @@
PATH
  remote: .
  specs:
-    hooks-ruby (0.0.2)
+    hooks-ruby (0.0.3)
      dry-schema (~> 1.14, >= 1.14.1)
      grape (~> 2.3)
-      grape-swagger (~> 2.1, >= 2.1.2)
      puma (~> 6.6)
      redacting-logger (~> 1.5)
      retryable (~> 3.0, >= 3.0.5)
@@ -76,9 +75,6 @@ GEM
      mustermann-grape (~> 1.1.0)
      rack (>= 2)
      zeitwerk
-    grape-swagger (2.1.2)
-      grape (>= 1.7, < 3.0)
-      rack-test (~> 2)
    hashdiff (1.2.0)
    i18n (1.14.7)
      concurrent-ruby (~> 1.0)
@@ -225,6 +221,7 @@ PLATFORMS
DEPENDENCIES
  hooks-ruby!
  irb (~> 1)
+  rack-test (~> 2.2)
  rspec (~> 3)
  rubocop (~> 1)
  rubocop-github (~> 0.26)
8 changes: 8 additions & 0 deletions README.md
@@ -294,6 +294,14 @@ See the [Auth Plugins](docs/auth_plugins.md) documentation for even more information…

See the [Handler Plugins](docs/handler_plugins.md) documentation for in-depth information about handler plugins and how you can create your own to extend the functionality of the Hooks framework for your own deployment.

+### Lifecycle Plugins
+
+See the [Lifecycle Plugins](docs/lifecycle_plugins.md) documentation for information on how to create lifecycle plugins that can hook into the request/response/error lifecycle of the Hooks framework, allowing you to add custom behavior at various stages of processing webhook requests.
+
+### Instrument Plugins
+
+See the [Instrument Plugins](docs/instrument_plugins.md) documentation for information on how to create instrument plugins that can be used to collect metrics or report exceptions during webhook processing. These plugins can be used to integrate with monitoring and alerting systems.
+
## Contributing 🤝

See the [Contributing](CONTRIBUTING.md) document for information on how to contribute to the Hooks project, including setting up your development environment, running tests, and releasing new versions.
330 changes: 330 additions & 0 deletions docs/instrument_plugins.md
@@ -0,0 +1,330 @@
# Instrument Plugins

Instrument plugins provide global components for cross-cutting concerns like metrics collection and error reporting. The Hooks framework includes two built-in instrument types: `stats` for metrics and `failbot` for error reporting. By default, these instruments are no-op implementations that require no external dependencies. You can create custom implementations to integrate with your preferred monitoring and error reporting services.

## Overview

The default no-op stubs let you call instrument methods anywhere in your code without pulling in external dependencies; replace them with real implementations when you want to integrate with your monitoring and error reporting services.

The instrument plugins are accessible throughout the entire application:

- In handlers via `stats` and `failbot` methods
- In auth plugins via `stats` and `failbot` class methods
- In lifecycle plugins via `stats` and `failbot` methods
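
For example, even before you configure any custom plugins, code like the following is safe anywhere instruments are available, because the default stubs silently discard the calls (the metric name and context below are arbitrary examples):

```ruby
# Inside a handler, auth plugin, or lifecycle plugin:
stats.increment("webhook.received", { source: "github" })    # no-op by default
failbot.report("something odd happened", { path: "/hooks" }) # no-op by default
```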

## Creating Custom Instruments

To create a custom instrument, inherit from the appropriate base class and implement the required methods. The examples below show a custom stats plugin and a custom failbot plugin.

You then set the following option in your `hooks.yml` configuration file so the framework loads your custom instrument plugins:

```yaml
# hooks.yml
instruments_plugin_dir: ./plugins/instruments
```

### Custom Stats Implementation

```ruby
# plugins/instruments/stats.rb
class Stats < Hooks::Plugins::Instruments::StatsBase
  def initialize
    # Initialize your metrics client
    @client = MyMetricsService.new(
      api_key: ENV["METRICS_API_KEY"],
      namespace: "webhooks"
    )
  end

  def record(metric_name, value, tags = {})
    @client.gauge(metric_name, value, tags: tags)
  rescue => e
    log.error("Failed to record metric: #{e.message}")
  end

  def increment(metric_name, tags = {})
    @client.increment(metric_name, tags: tags)
  rescue => e
    log.error("Failed to increment metric: #{e.message}")
  end

  def timing(metric_name, duration, tags = {})
    # Convert to milliseconds if your service expects that
    duration_ms = (duration * 1000).round
    @client.timing(metric_name, duration_ms, tags: tags)
  rescue => e
    log.error("Failed to record timing: #{e.message}")
  end

  # Optional: Add custom methods specific to your service
  def histogram(metric_name, value, tags = {})
    @client.histogram(metric_name, value, tags: tags)
  rescue => e
    log.error("Failed to record histogram: #{e.message}")
  end
end
```

### Custom Failbot Implementation

```ruby
# plugins/instruments/failbot.rb
class Failbot < Hooks::Plugins::Instruments::FailbotBase
  def initialize
    # Initialize your error reporting client
    @client = MyErrorService.new(
      api_key: ENV["ERROR_REPORTING_API_KEY"],
      environment: ENV["RAILS_ENV"] || "production"
    )
  end

  def report(error_or_message, context = {})
    if error_or_message.is_a?(Exception)
      @client.report_exception(error_or_message, context)
    else
      @client.report_message(error_or_message, context)
    end
  rescue => e
    log.error("Failed to report error: #{e.message}")
  end

  def critical(error_or_message, context = {})
    enhanced_context = context.merge(severity: "critical")
    report(error_or_message, enhanced_context)
  end

  def warning(message, context = {})
    enhanced_context = context.merge(severity: "warning")
    @client.report_message(message, enhanced_context)
  rescue => e
    log.error("Failed to report warning: #{e.message}")
  end

  # Optional: Add custom methods specific to your service
  def set_user_context(user_id:, email: nil)
    @client.set_user_context(user_id: user_id, email: email)
  rescue => e
    log.error("Failed to set user context: #{e.message}")
  end

  def add_breadcrumb(message, category: "webhook", data: {})
    @client.add_breadcrumb(message, category: category, data: data)
  rescue => e
    log.error("Failed to add breadcrumb: #{e.message}")
  end
end
```

## Configuration

To use custom instrument plugins, specify the `instruments_plugin_dir` in your configuration:

```yaml
# hooks.yml
instruments_plugin_dir: ./plugins/instruments
handler_plugin_dir: ./plugins/handlers
auth_plugin_dir: ./plugins/auth
lifecycle_plugin_dir: ./plugins/lifecycle
```

Place your instrument plugin files in the specified directory:

```text
plugins/
└── instruments/
    ├── stats.rb
    └── failbot.rb
```

## File Naming and Class Detection

The framework automatically detects which type of instrument you're creating based on inheritance:

- Classes inheriting from `StatsBase` become the `stats` instrument
- Classes inheriting from `FailbotBase` become the `failbot` instrument

File naming follows snake_case to PascalCase conversion (see the sketch below):

- `stats.rb` → `Stats`
- `sentry_failbot.rb` → `SentryFailbot`

You can only have one `stats` plugin and one `failbot` plugin loaded. If multiple plugins of the same type are found, the last one loaded will be used.
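
As a rough illustration, the file-name-to-class-name conversion is equivalent to something like the following (a sketch for intuition, not the framework's actual loader code):

```ruby
# Hypothetical illustration of the snake_case -> PascalCase mapping
def class_name_for(file_name)
  File.basename(file_name, ".rb").split("_").map(&:capitalize).join
end

class_name_for("stats.rb")          # => "Stats"
class_name_for("sentry_failbot.rb") # => "SentryFailbot"
```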

## Usage in Your Code

Once configured, your custom instruments are available throughout the application:

### In Handlers

```ruby
class MyHandler < Hooks::Plugins::Handlers::Base
  def call(payload:, headers:, config:)
    # Use your custom stats methods
    stats.increment("handler.calls", { handler: "MyHandler" })

    # Use custom methods if you added them
    stats.histogram("payload.size", payload.to_s.length) if stats.respond_to?(:histogram)

    result = stats.measure("handler.processing", { handler: "MyHandler" }) do
      process_webhook(payload, headers, config)
    end

    # Use your custom failbot methods
    failbot.add_breadcrumb("Handler completed successfully") if failbot.respond_to?(:add_breadcrumb)

    result
  rescue => e
    failbot.report(e, { handler: "MyHandler", event: headers["x-github-event"] })
    raise
  end
end
```
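
The example above uses `stats.measure` to time a block. If `measure` is not already provided by `StatsBase`, a minimal version built on the `timing` method from the base interface could look like this (a sketch; everything here other than `timing` is an assumption):

```ruby
# Hypothetical block-timing helper, defined inside your custom Stats class
def measure(metric_name, tags = {})
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  result = yield
  timing(metric_name, Process.clock_gettime(Process::CLOCK_MONOTONIC) - start, tags)
  result
end
```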

### In Lifecycle Plugins

```ruby
class MetricsLifecycle < Hooks::Plugins::Lifecycle
  def on_request(env)
    # Your custom stats implementation will be used
    stats.increment("requests.total", {
      path: env["PATH_INFO"],
      method: env["REQUEST_METHOD"]
    })
  end

  def on_error(exception, env)
    # Your custom failbot implementation will be used
    failbot.report(exception, {
      path: env["PATH_INFO"],
      handler: env["hooks.handler"]
    })
  end
end
```

## Popular Integrations

### Datadog Stats

```ruby
class DatadogStats < Hooks::Plugins::Instruments::StatsBase
  def initialize
    require "datadog/statsd"
    @statsd = Datadog::Statsd.new("localhost", 8125, namespace: "webhooks")
  end

  def record(metric_name, value, tags = {})
    @statsd.gauge(metric_name, value, tags: format_tags(tags))
  end

  def increment(metric_name, tags = {})
    @statsd.increment(metric_name, tags: format_tags(tags))
  end

  def timing(metric_name, duration, tags = {})
    @statsd.timing(metric_name, duration, tags: format_tags(tags))
  end

  private

  # Convert a tags hash into Datadog's "key:value" tag strings
  def format_tags(tags)
    tags.map { |k, v| "#{k}:#{v}" }
  end
end
```

### Sentry Failbot

```ruby
class SentryFailbot < Hooks::Plugins::Instruments::FailbotBase
  def initialize
    require "sentry-ruby"
    Sentry.init do |config|
      config.dsn = ENV["SENTRY_DSN"]
      config.environment = ENV["RAILS_ENV"] || "production"
    end
  end

  def report(error_or_message, context = {})
    Sentry.with_scope do |scope|
      apply_context(scope, context)
      capture(error_or_message)
    end
  end

  def critical(error_or_message, context = {})
    Sentry.with_scope do |scope|
      scope.set_level(:fatal)
      apply_context(scope, context)
      capture(error_or_message)
    end
  end

  def warning(message, context = {})
    Sentry.with_scope do |scope|
      scope.set_level(:warning)
      apply_context(scope, context)
      Sentry.capture_message(message)
    end
  end

  private

  # Sentry's set_context expects a Hash value, so wrap scalar values
  def apply_context(scope, context)
    context.each do |key, value|
      scope.set_context(key.to_s, value.is_a?(Hash) ? value : { value: value })
    end
  end

  def capture(error_or_message)
    if error_or_message.is_a?(Exception)
      Sentry.capture_exception(error_or_message)
    else
      Sentry.capture_message(error_or_message)
    end
  end
end
```

## Testing Your Instruments

When testing, you may want to use test doubles or capture calls:

```ruby
# In your test setup
class TestStats < Hooks::Plugins::Instruments::StatsBase
  attr_reader :recorded_metrics

  def initialize
    @recorded_metrics = []
  end

  def record(metric_name, value, tags = {})
    @recorded_metrics << { type: :record, name: metric_name, value: value, tags: tags }
  end

  def increment(metric_name, tags = {})
    @recorded_metrics << { type: :increment, name: metric_name, tags: tags }
  end

  def timing(metric_name, duration, tags = {})
    @recorded_metrics << { type: :timing, name: metric_name, duration: duration, tags: tags }
  end
end

# Use in tests
test_stats = TestStats.new
Hooks::Core::GlobalComponents.stats = test_stats

# Your test code here

expect(test_stats.recorded_metrics).to include(
  { type: :increment, name: "webhook.processed", tags: { handler: "MyHandler" } }
)
```
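
A matching test double for the `failbot` instrument can follow the same pattern (a sketch; it assumes a `failbot=` setter on `GlobalComponents` mirroring the `stats=` setter used above):

```ruby
class TestFailbot < Hooks::Plugins::Instruments::FailbotBase
  attr_reader :reports

  def initialize
    @reports = []
  end

  def report(error_or_message, context = {})
    @reports << { severity: :error, error: error_or_message, context: context }
  end

  def critical(error_or_message, context = {})
    @reports << { severity: :critical, error: error_or_message, context: context }
  end

  def warning(message, context = {})
    @reports << { severity: :warning, error: message, context: context }
  end
end

# Hypothetical setter, assumed to mirror GlobalComponents.stats=
Hooks::Core::GlobalComponents.failbot = TestFailbot.new
```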

## Best Practices

1. **Handle errors gracefully**: Instrument failures should not break webhook processing
2. **Use appropriate log levels**: Log instrument failures at error level
3. **Add timeouts**: Network calls to external services should have reasonable timeouts (see the sketch after this list)
4. **Validate configuration**: Check for required environment variables in `initialize`
5. **Document custom methods**: If you add methods beyond the base interface, document them
6. **Consider performance**: Instruments are called frequently, so keep operations fast
7. **Use connection pooling**: For high-throughput scenarios, use connection pooling for external services
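
As an example of practices 1 and 3 together, here is a sketch of an instrument method that bounds its network call with a timeout and swallows failures so they never reach webhook processing (`@client` and its `#increment` method are hypothetical):

```ruby
require "timeout"

# Defined inside your custom Stats class
def increment(metric_name, tags = {})
  # Keep instrument calls bounded; never let a metrics outage block webhooks
  Timeout.timeout(2) do
    @client.increment(metric_name, tags: tags)
  end
rescue StandardError => e
  log.error("stats increment failed: #{e.message}")
end
```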