
DeepDiveInfra – Self-Hosted CI/CD & Kubernetes Delivery Lab
A self-hosted, cloud-native-style learning project focused on the delivery chain around a deliberately simple ASP.NET Core backend. The project demonstrates the full path: code → test → build → Docker image → self-hosted deployment → Kubernetes runtime → Prometheus/Grafana observability.
Tech Stack
C# · .NET 10 · ASP.NET Core · Minimal API · SQLite · EF Core · xUnit · GitHub Actions · Docker · Kubernetes · kind · kubectl · Linux · WSL · Prometheus · Grafana · Scalar / OpenAPI
Problem / Context
The main problem was not advanced domain logic. It was understanding what happens after code is written: how a backend moves from source code to a tested build, to a container, to a self-hosted deployment, to a Kubernetes runtime, to an observable workload. This project was built to close that gap by working through each layer hands-on.
Solution / Architecture
The solution is a self-hosted local lab: a Windows host with WSL/Linux tooling, Docker Desktop, a kind cluster, GitHub Actions for build and test, a self-hosted runner for deployment, and Prometheus + Grafana for monitoring. The ASP.NET Core API uses SQLite and EF Core with automatic migrations on startup. The app itself remains intentionally small so the infrastructure and delivery chain stay in focus.
How It Works – Architecture Flow
1. Build a simple ASP.NET Core API with SQLite and EF Core
2. Push changes to GitHub
3. GitHub Actions restores, builds, and tests the solution
4. A self-hosted Linux runner picks up the deployment job
5. Docker packages the API as a deployable image
6. The image is deployed to a local kind Kubernetes cluster
7. Kubernetes runs the workload as pods and services
8. Prometheus scrapes app and cluster metrics
9. Grafana visualizes system health and runtime behavior
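The CI/CD part of the flow above (build/test on a GitHub-hosted runner, deployment on the self-hosted one) could look roughly like this workflow sketch. The file path, image name, and `k8s/` manifest directory are assumptions for illustration, not the project's actual values:

```yaml
# Hypothetical .github/workflows/ci.yml sketch
name: ci
on:
  push:
    branches: [main]

jobs:
  build-test:
    runs-on: ubuntu-latest           # GitHub-hosted runner
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
      - run: dotnet restore
      - run: dotnet build --no-restore
      - run: dotnet test --no-build

  deploy:
    needs: build-test
    runs-on: self-hosted             # the Linux/WSL runner on the lab machine
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t deepdive-api:latest .
      - run: kind load docker-image deepdive-api:latest   # push image into the kind nodes, no registry needed
      - run: kubectl apply -f k8s/
```

`kind load docker-image` is what lets a purely local setup skip a container registry: the image is copied straight into the cluster's node containers.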
Goals
- Understand the full delivery chain end to end
- Compare GitHub-hosted and self-hosted runners
- Deploy a real containerized backend to Kubernetes
- Add monitoring and make the system observable
- Document architecture decisions and trade-offs
Challenges
- Understanding the boundaries between development, build, deploy, and runtime environments
- Configuring the self-hosted runner correctly on Linux/WSL
- Handling the Windows + WSL + Docker Desktop + kind stack
- Structuring Kubernetes manifests and deployment flow cleanly
- Exposing useful Prometheus and Grafana metrics without overcomplicating the project
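One way the manifest structuring mentioned above could come together is a single Deployment plus Service per workload. Names, labels, and ports here are hypothetical; the container port assumes the .NET 8+ default of 8080 inside the image:

```yaml
# Hypothetical k8s/deployment.yaml sketch
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deepdive-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deepdive-api
  template:
    metadata:
      labels:
        app: deepdive-api
    spec:
      containers:
        - name: api
          image: deepdive-api:latest
          imagePullPolicy: Never     # image is loaded into kind, never pulled from a registry
          ports:
            - containerPort: 8080    # ASP.NET Core listens on 8080 in recent .NET container images
---
apiVersion: v1
kind: Service
metadata:
  name: deepdive-api
spec:
  selector:
    app: deepdive-api
  ports:
    - port: 80
      targetPort: 8080
```

`imagePullPolicy: Never` is the easy trap in a kind setup: with the default policy, Kubernetes tries to pull the locally built tag from a registry and the pod fails with `ErrImagePull`.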
Key Technical Decisions
- Keep the backend intentionally simple – the value is the delivery chain, not the domain
- Use SQLite because the goal is delivery-chain learning, not distributed data design
- Use kind for a local Kubernetes learning environment
- Use Prometheus/Grafana for practical observability
- Keep the project self-hosted and local instead of managed cloud
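The kind decision above amounts to very little configuration, which is part of its appeal as a learning environment. A minimal sketch (node layout assumed, not taken from the project):

```yaml
# Hypothetical kind-config.yaml sketch
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
```

Created with `kind create cluster --config kind-config.yaml`, this yields a multi-node cluster running entirely in local Docker containers.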
Results / Impact
- Working CI/CD flow from GitHub push to deployed Kubernetes workload
- Self-hosted deployment runner in a Linux environment
- Running ASP.NET Core API inside a kind Kubernetes cluster
- Visible metrics in Prometheus and Grafana dashboards
- A portfolio-ready infrastructure case study demonstrating delivery, deployment, and observability understanding
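The metrics visibility above depends on Prometheus knowing where to scrape. A minimal sketch of the relevant fragment, assuming the API exposes a `/metrics` endpoint and an in-cluster Service name (both hypothetical here):

```yaml
# Hypothetical prometheus.yml fragment
scrape_configs:
  - job_name: deepdive-api
    metrics_path: /metrics
    static_configs:
      - targets: ["deepdive-api:80"]   # the API's Kubernetes Service
```

Grafana then uses Prometheus as a data source, so dashboards need no app-specific wiring beyond the scrape job.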
What I Learned
This project clarified the difference between development, build, deploy, and runtime environments — and where problems happen between those layers. I learned why containers matter as deployable artifacts, what Kubernetes actually manages at the pod and service level, and how observability with Prometheus and Grafana makes even a simple system much easier to reason about. The biggest takeaway was that infrastructure understanding is a skill in itself, separate from writing application code.
What I Would Improve Next
- Add CPU/memory usage metrics and alerting rules
- Introduce ingress for external access to the cluster
- Move from SQLite to a more production-like data setup
- Package manifests more cleanly with Helm or a similar approach