feat: docs #3

Merged: 1 commit, Jul 13, 2025
44 changes: 44 additions & 0 deletions ebpf/analogy.md
@@ -0,0 +1,44 @@
# eBPF concepts: what exactly we are using

## Inodes

* In ext4 filesystems, every file is identified by an inode

* Here is a Reddit post if you want to read more about it.

* They are unique IDs for a single file, which can be used for filtering

* For the NTFS file system, the unique ID is instead an 8-byte file reference number (there is no inode)

## BPF Maps

* These are used for storing the inodes of the files provided via the CLI; the eBPF program uses the map lookup function to decide whether an event belongs to one of those exact files (see the sketch after this list)

* Events and structures are transferred between kernel and user space through maps
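
A minimal sketch of what such a map could look like on the kernel side, assuming libbpf-style C; the map name `watched_inodes` and the key/value layout are illustrative, not the project's actual definitions:

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Hash map keyed by inode number, filled from user space with the
 * inodes of the files passed on the CLI. */
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, __u64);   /* inode number */
    __type(value, __u8);  /* presence flag */
} watched_inodes SEC(".maps");

/* Inside a probe, events are kept only for watched inodes:
 *
 *     __u64 ino = ...; // taken from the probe arguments
 *     if (!bpf_map_lookup_elem(&watched_inodes, &ino))
 *         return 0;    // not a file we care about, drop the event
 */
```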

## BPF RING BUFFER

* This is a kind of map

* It is used to buffer logs in memory in case the machine gets slower, so that logs are not dropped while the consumer catches up; a sketch follows this list

* Memory usage grows with `max_entries`; typical sizes are 4096, 8192, 16384, or 32768 bytes

* More on Map Type 'BPF_MAP_TYPE_RINGBUF' - eBPF Docs
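
A sketch of how the ring buffer could be declared and filled from a probe, assuming libbpf-style C; `struct event` and the probe point `vfs_write` are illustrative:

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct event {
    __u64 inode;
    __u32 pid;
    char  comm[16];
};

struct {
    __uint(type, BPF_MAP_TYPE_RINGBUF);
    __uint(max_entries, 16384); /* size in bytes, power of two */
} events SEC(".maps");

SEC("kprobe/vfs_write")
int trace_write(void *ctx)
{
    /* Reserve space in the ring buffer; if it is full, the event is dropped. */
    struct event *e = bpf_ringbuf_reserve(&events, sizeof(*e), 0);
    if (!e)
        return 0;

    e->pid   = bpf_get_current_pid_tgid() >> 32;
    e->inode = 0; /* filled from the probe arguments in the real program */
    bpf_get_current_comm(&e->comm, sizeof(e->comm));

    bpf_ringbuf_submit(e, 0);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

In user space the buffer is typically drained with libbpf's `ring_buffer__new()` and `ring_buffer__poll()`.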

## BPF Per CPU Array

* A kind of map used to store a struct in memory

* To tackle the storage issue (eBPF only gives us 512 bytes of stack), we preferred `BPF_PERCPU_ARRAY`, which lets us use up to 32 KB of storage (see the sketch after this list)

* This per-CPU version has a separate array for each logical CPU.

* More on ebpf.io
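
A sketch of how a per-CPU array can be used as scratch space bigger than the 512-byte stack, assuming libbpf-style C; the 8 KB size is illustrative:

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define SCRATCH_SIZE 8192 /* far larger than the 512-byte BPF stack */

struct scratch {
    char buf[SCRATCH_SIZE];
};

struct {
    __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, struct scratch);
} scratch_map SEC(".maps");

static __always_inline struct scratch *get_scratch(void)
{
    __u32 zero = 0;
    /* Each logical CPU gets its own copy, so no locking is needed. */
    return bpf_map_lookup_elem(&scratch_map, &zero);
}
```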

## EBPF Debug logs

* These are located in /sys/kernel/debug/tracing/trace_pipe

* Run `cat` on the path above to read them; a sketch of how entries end up there follows
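
The entries in `trace_pipe` come from `bpf_printk()` calls in the eBPF program; a minimal sketch, with the probe point `vfs_open` chosen only for illustration:

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("kprobe/vfs_open")
int debug_open(void *ctx)
{
    int pid = bpf_get_current_pid_tgid() >> 32;

    /* Shows up in /sys/kernel/debug/tracing/trace_pipe. */
    bpf_printk("vfs_open hit by pid %d", pid);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```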

39 changes: 39 additions & 0 deletions ebpf/architecture.md
@@ -0,0 +1,39 @@
# Architecture

![1752410338305](../image/architecture/1752410338305.png)

## Daemonset

* This will be Golang code that queries the cgroups through the Kubernetes API and brings in the actual file paths instead of a regex
* These paths will be added into the CRDs themselves, piece by piece
* Whenever there is a rename or delete event, these paths will be updated accordingly by the StatefulSet

## StatefulSet

* This is where we insert kprobes into the kernel (a minimal attach sketch follows this list)
* It will listen to the eBPF logs and print them
* These can eventually be collected by Loki and viewed in Grafana
* It can also modify the CRDs whenever there is a delete or rename event
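
A sketch of how the kprobe program could be loaded and attached with libbpf from user space; the object file `monitor.bpf.o` and program name `trace_unlink` are illustrative, not the project's actual artifacts:

```c
#include <stdio.h>
#include <bpf/libbpf.h>

int main(void)
{
    /* Error handling is simplified for brevity. */
    struct bpf_object *obj = bpf_object__open_file("monitor.bpf.o", NULL);
    if (!obj)
        return 1;
    if (bpf_object__load(obj))
        return 1;

    struct bpf_program *prog =
        bpf_object__find_program_by_name(obj, "trace_unlink");
    if (!prog)
        return 1;

    /* Attaches to the kprobe declared in the program's SEC() annotation. */
    struct bpf_link *link = bpf_program__attach(prog);
    if (!link)
        return 1;

    printf("kprobe attached, waiting for events...\n");
    /* ... drain the ring buffer here with ring_buffer__new()/__poll() ... */
    return 0;
}
```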

## CRDs

* These CRDs can look like the example below
* These CRDs will be used by the DaemonSet to send information to the eBPF program in the Linux kernel
* Regex is to be used exclusively for the file paths
* They can live in a different namespace, since a policy is applied to the pods that are there
* We can have a global monitoring policy as well

```yaml
apiVersion: sentinelfs.io/v2
kind: FileMonitoringPolicy
metadata:
  name: monitor
spec:
  podSelector:
    labels:
      - key: app
        value: pipeline
  files:
    - path: /app/*
    - path: /etc/secrets/*
```
Binary file added image/architecture/1752410338305.png
21 changes: 21 additions & 0 deletions product/ebpf_based_file_monitor.md
@@ -0,0 +1,21 @@
# What is EBPF?

Essentially a debugger for the Linux kernel: eBPF lets us attach small sandboxed programs to kernel events and observe what the kernel is doing.

## What are we building?

A stealthy debugger that records unexpected file operations in a project directory, which in turn can detect vulnerabilities where a container process is able to access certain files in a non-intended way.

So it is basically a container testing tool that detects file access that was not supposed to happen.

## Why a container testing tool?

There is a single tool that can do everything from file monitoring to network monitoring: Falco.

We do not get a tool specifically for file-access testing, or we get tools that do not work inside containers; fanotify and auditctl are some examples.

Testing is usually preferred on low-end hardware, so building specifically for k3s may be a good fit, as k3s is basically lightweight Kubernetes.

This would help the user of this product actually see how well their container hardening is working.

Here is a rough architecture of the project