Beyond the Kernel
eBPF-Driven Security Observability in Cloud-Native Environments

Security no longer lives outside the system. It lives within the kernel.
In a world of microservices, containers, and ephemeral workloads, traditional observability tools see only what applications expose, not what's actually happening beneath.
To detect, understand, and stop threats in real time, DevSecOps teams need a lens that looks beyond the user space.
That lens is eBPF.
Part 1: The Vision - From Blind Spots to Kernel Clarity
Cloud-native architectures have fragmented visibility. Each microservice, container, and node is its own ephemeral universe.
By the time logs reach your SIEM, the process that caused them is already gone.
eBPF (Extended Berkeley Packet Filter) rewrites this story.
It allows developers to inject small, sandboxed programs directly into the Linux kernel, observing syscalls, network traffic, and process behaviour as they happen, without modifying kernel source or significantly impacting performance.
Think of eBPF as a programmable security microscope inside your nodes.
Why It Matters
Unlike traditional agents:
- eBPF observes every process, syscall, and packet in real time.
- It enriches events with Kubernetes metadata, including pod name, namespace, and service account.
- It operates in-kernel, avoiding user-space latency and the privilege-escalation risks of traditional agents.
This is observability at the source, before anything can hide or mutate.
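To make the metadata enrichment concrete, here is a minimal Python sketch of how a kernel event might be joined with Kubernetes context. The event and index shapes are illustrative assumptions, not Falco's actual internal schema:

```python
# Sketch: enriching a raw kernel event with Kubernetes metadata.
# The dict shapes here are illustrative, not Falco's real schema.

def enrich_event(raw_event: dict, pod_index: dict) -> dict:
    """Attach pod name, namespace, and service account to a raw
    syscall event, keyed by the container ID seen in the kernel."""
    meta = pod_index.get(raw_event.get("container_id"), {})
    return {
        **raw_event,
        "k8s.pod.name": meta.get("pod"),
        "k8s.ns.name": meta.get("namespace"),
        "k8s.sa.name": meta.get("service_account"),
    }

# Hypothetical index built from the Kubernetes API (container ID -> pod info)
pod_index = {
    "abc123": {"pod": "payments-7d9f", "namespace": "payments",
               "service_account": "payments-sa"},
}
event = enrich_event({"syscall": "connect", "container_id": "abc123"}, pod_index)
print(event["k8s.ns.name"])  # payments
```

The key design point is that the join happens at event time, so every alert already carries the workload context an analyst would otherwise reconstruct by hand.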
The Stack: eBPF, Falco, and Automated Defence
To make kernel-level visibility actionable, we combine three pillars:
| Layer | Purpose | Tool |
|---|---|---|
| Kernel Hooks | Captures real-time events | eBPF |
| Rule Engine | Analyzes behaviour and triggers alerts | Falco |
| Response Automation | Reacts and isolates threats | Remediator + Cilium + Operator |
Each layer complements the next, creating a feedback loop of detection, context, and prevention.
Architecture Overview
Let’s visualize the architecture that powers this approach.
Minimalistic Schematic Diagram
This shows the logical flow:
- eBPF probes capture kernel-level events (syscalls, sockets).
- Falco consumes these events and applies security rules.
- Detected anomalies trigger alerts: Sidekick → Remediator → isolation logic.
In AWS EKS:
- Pods (frontend, backend, payments) run in distinct namespaces.
- eBPF DaemonSets monitor every node.
- Falco analyzes syscall streams.
- Alerts flow to Slack, Prometheus, and AWS GuardDuty.
- The Remediator patches pods in real time.
- Cilium applies zero-trust policies to isolate compromised containers.
Part 2: Implementation - Turning Theory into Defence
All implementation resources live in the repository:
👉 SubhanshuMG/eBPF-Driven-Security-Observability
This open repository demonstrates a full end-to-end, production-ready pipeline built on:
- Falco Helm deployment
- Python BCC DaemonSet
- Falco Sidekick with Slack + webhook output
- Automated Remediator service
- Cilium runtime blocking
- Optional Go-based Operator and eBPF-LSM for advanced scenarios
Step 1: Deploy Falco (eBPF Mode)
Falco listens directly to kernel events via eBPF probes:
```bash
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco -n falco --create-namespace \
  -f charts/falco-values.yaml
```
Key Helm settings:
```yaml
backend:
  enable_bpf: true
mountHost:
  sys: true
  dev: true
  proc: true
  libModules: true
```
This ensures Falco runs in eBPF mode without kernel modules.
Step 2: Add a Python BCC Agent
The repository includes a DaemonSet that runs a Python eBPF (BCC) script to detect outbound connections to suspicious ports.
```python
from bcc import BPF

program = r"""
#include <net/sock.h>
#include <bcc/proto.h>

int trace_connect(struct pt_regs *ctx, struct sock *sk) {
    u16 dport = sk->__sk_common.skc_dport;
    if (dport == bpf_htons(4444)) {
        bpf_trace_printk("ALERT: Pod attempted connect to port 4444\\n");
    }
    return 0;
}
"""

b = BPF(text=program)
b.attach_kprobe(event="tcp_connect", fn_name="trace_connect")
b.trace_print()
```
Deployed as a privileged DaemonSet, this agent continuously monitors the kernel's tcp_connect path and streams alerts to Falco or your logs.
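Before the alert reaches a sink, the agent has to turn a raw trace line into something structured. A minimal sketch of that step, with illustrative field names (the real repository may shape its alerts differently):

```python
import json

# Sketch: turning one bpf_trace_printk line from the BCC agent into a
# structured JSON alert for downstream sinks. Field names are assumptions.

def to_alert(pid: int, comm: str, message: str) -> str:
    """Serialize a single kernel trace line as a JSON alert."""
    return json.dumps({
        "source": "python-bcc-agent",
        "pid": pid,
        "process": comm,
        "output": message,
        "priority": "Warning",
    })

alert = to_alert(4321, "nc", "ALERT: Pod attempted connect to port 4444")
```

Emitting structured JSON rather than raw trace text is what lets Sidekick, Slack, and the Remediator all consume the same event without per-sink parsing.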
Step 3: Falco Sidekick + Automated Remediation
Falco Sidekick forwards alerts to multiple sinks, including Slack and a Remediator webhook.
```bash
helm install falcosidekick falcosecurity/falcosidekick \
  --set config.slack.webhookurl="https://hooks.slack.com/services/..." \
  --set config.outputs.webhook.url="http://remediator.remediator.svc.cluster.local:8080/webhook"
```
The Remediator (Python Flask service) parses these events and patches pods with:
"metadata": {
"labels": {
"compromised": "true"
}
}
Once patched, the Cilium NetworkPolicy isolates the pod instantly.
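The core of the Remediator's logic can be sketched in a few lines of Python: extract the pod identity from the alert and build the strategic-merge patch above. The `output_fields` keys follow Falco's alert format; the function names are illustrative, not the repository's actual code:

```python
import json

# Sketch of the Remediator's decision logic: given a Falco-style alert,
# identify the pod and build the label patch. Function names are assumptions.

def build_patch() -> dict:
    """Strategic-merge patch marking a pod as compromised; the Cilium
    policy selects on this label to isolate it."""
    return {"metadata": {"labels": {"compromised": "true"}}}

def pod_from_alert(alert: dict) -> tuple:
    """Extract (namespace, pod) from an alert's output fields."""
    fields = alert.get("output_fields", {})
    return fields.get("k8s.ns.name"), fields.get("k8s.pod.name")

alert = {"output_fields": {"k8s.ns.name": "payments",
                           "k8s.pod.name": "payments-7d9f"}}
ns, pod = pod_from_alert(alert)
body = json.dumps(build_patch())  # would be sent to the Kubernetes API
```

In the real service this patch would be applied via the Kubernetes API (e.g. a patch call against `/api/v1/namespaces/{ns}/pods/{pod}`); the point here is that remediation reduces to a single idempotent label write.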
Step 4: Auto-Isolation with Cilium
Cilium enforces kernel-native network segmentation via eBPF.
```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: isolate-compromised
spec:
  endpointSelector:
    matchLabels:
      compromised: "true"
  ingress:
    - {}
  egress:
    - {}
```
When a pod receives the compromised=true label, it loses all network connectivity — preventing lateral movement or exfiltration.
Step 5: CI/CD and Testing
GitHub Actions (in .github/workflows/ci-build-push.yml) automates:
- Building and pushing Remediator and Python-BCC images to ECR
- Validating manifests
- Deploying to test clusters
Test the system:
```bash
kubectl run test-shell --image=ubuntu -- sleep 3600
kubectl exec -it test-shell -- bash
nc 1.2.3.4 4444
```
Expected outcomes:
- Falco detects the connect() syscall.
- Sidekick forwards the alert to Slack and the Remediator.
- The pod is labeled compromised=true.
- Cilium isolates it automatically.
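The port comparison the kernel probe performs can be mirrored in user space for quick sanity checks. A minimal sketch: `skc_dport` is stored in network byte order, so the probe compares it against `bpf_htons(4444)`, which Python's `socket.htons` reproduces exactly:

```python
import socket

# Sketch: user-space mirror of the in-kernel check. The probe compares
# skc_dport (network byte order) against bpf_htons(4444).

SUSPICIOUS_PORT_BE = socket.htons(4444)  # network byte order, like bpf_htons

def is_suspicious(dport_be: int) -> bool:
    """Return True if a big-endian destination port matches the watched port."""
    return dport_be == SUSPICIOUS_PORT_BE

assert is_suspicious(socket.htons(4444))
assert not is_suspicious(socket.htons(80))
```

Keeping a user-space twin of the in-kernel predicate makes the detection rule unit-testable in CI without loading a BPF program.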
Beyond Automation: The Operator & eBPF-LSM Frontier
🧩 The Go Operator
In operator-remediator/, a hardened Go controller extends remediation logic to Kubernetes-native workflows.
It watches ConfigMaps (or future CRDs) and automatically patches targeted pods — a scalable foundation for multi-tenant clusters.
Built with controller-runtime, it exemplifies operator-style automation: secure, event-driven, and declarative.
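The reconcile loop's core decision can be sketched language-agnostically; here it is in Python for illustration (the real controller is Go with controller-runtime, and the ConfigMap key name `targets` is an assumption):

```python
# Sketch of the operator's reconcile decision, expressed in Python for
# illustration only; the actual controller is Go/controller-runtime.
# The ConfigMap key "targets" is a hypothetical name.

def reconcile(configmap_data: dict, labeled_pods: set) -> list:
    """Return the pods that still need the compromised=true patch:
    those listed in the ConfigMap but not yet labeled."""
    targets = set(configmap_data.get("targets", "").split())
    return sorted(targets - labeled_pods)

todo = reconcile({"targets": "payments-7d9f backend-x1"}, {"backend-x1"})
# todo == ["payments-7d9f"]
```

This captures what makes the operator pattern scale: reconcile is a pure function of desired state (the ConfigMap) and observed state (current labels), so it is safe to re-run on every event.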
🧬 eBPF-LSM (Linux Security Module)
For kernels ≥5.7, eBPF can attach directly to LSM hooks, allowing in-kernel blocking.
This enables actions like denying socket creation or file writes based on live context.
Example snippet (from /eBPF-LSM/lsm_sample.c):
```c
SEC("lsm/socket_create")
int BPF_PROG(socket_create_lsm, int family, int type, int protocol) {
    if (family == AF_INET) {
        return -1; // deny
    }
    return 0; // allow
}
```
This brings preventive enforcement right into the kernel itself.
Observability Meets Prevention
eBPF changes how DevSecOps teams think.
Instead of relying on logs, you rely on kernel telemetry.
Instead of waiting for alerts, your systems respond autonomously.
Falco + eBPF gives you real-time behavioural insight.
Cilium + Operator + LSM turns that insight into action.
This is what we call Kernel-Native Security Observability — where the infrastructure defends itself.
Repository & Resources
All implementation assets are available in the repository:
👉 GitHub: SubhanshuMG/eBPF-Driven-Security-Observability
Includes:
- Helm values
- Falco rules
- Remediator service
- Cilium NetworkPolicy
- Go Operator
- CI/CD pipeline
- eBPF-LSM sample
- Deployment scripts (scripts/deploy-all.sh, scripts/cleanup-all.sh)
Each component forms part of a living DevSecOps pipeline built around real-time kernel visibility.
The Future: eBPF as the DNA of DevSecOps
In the coming years, observability and defence will merge.
We won’t monitor systems; we’ll listen to their kernels.
We won’t react to breaches; we’ll intercept them mid-syscall.
eBPF makes this possible. By uniting runtime telemetry, policy enforcement, and automated remediation, you’re not just securing workloads; you’re building a self-healing & self-observing cloud.







