---
title: March 2026
weight: -202603
outline: 2
---

# Hack The Garden Sofia Edition 03/2026 Wrap Up

- 🗓️ **Date:** 16.03.2026 – 20.03.2026
- 📍 **Location:** [SAP Center Sofia](https://maps.app.goo.gl/SPdvQ4F2p7Qqfx4p9)
- 👤 **Organizer:** [SAP](https://www.sap.com/)
- 📘 **Topics:** [hackathon/discussions#41](https://github.com/gardener/hackathon/discussions/41)

## 🐳 [GEP-28] Self-Hosted Shoot: Gardener-in-Docker (`gind`)

> [!TIP]
> You can find out more about [Self-Hosted Shoot Clusters in GEP-28](https://github.com/gardener/enhancements/blob/main/geps/0028-self-hosted-shoot-clusters).

> **Tracking:** [hackathon#8](https://github.com/gardener/hackathon/issues/8)

### Problem Statement

It should be possible to create self-hosted shoot clusters using `gardenadm` and run Gardener inside such a cluster.
Before introducing a tool like `gind` (which runs the self-hosted shoot directly in Docker), we first need to support hosting Gardener inside a self-hosted shoot cluster.

### Achievements

* Deployed `gardener-operator` into the self-hosted shoot.
* Deployed a `Garden` resource — the self-hosted shoot now serves as the runtime cluster for the virtual garden.
* Enabled the `ManagedSeed` controller in the shoot `gardenlet`, allowing the self-hosted `Shoot` itself to be referenced in the `ManagedSeed`.
* Adapted the local setup for direct API access to both the self-hosted shoot API server and the virtual garden API server from the host machine (no port-forwarding).

### Next Steps

* Clean up the code and commits; adapt documentation and `make` rules.
* Open individual PRs for the different features and get them merged.
* Introduce an e2e test for this scenario ("fully self-contained Gardener").
* Try spinning up workerless and regular hosted shoots on this seed.

### Code & Pull Requests

* [Don't create duplicate `ControllerInstallation`s when self-hosted `Shoot` is also a `Seed` – gardener/gardener#14282](https://github.com/gardener/gardener/pull/14282)
* [Ensure all system pods run on system component nodes – gardener/gardener#14367](https://github.com/gardener/gardener/pull/14367)
* [`gardenadm connect`: Enable `ManagedSeed` controller in `gardenlet` – gardener/gardener#14369](https://github.com/gardener/gardener/pull/14369)
* [Make self-hosted shoot API server accessible from host machine – gardener/gardener#14370](https://github.com/gardener/gardener/pull/14370)
* [Enable deployment of `gardener-operator` (and `Garden`) inside self-hosted `Shoot`s – gardener/gardener#14387](https://github.com/gardener/gardener/pull/14387)
* [Introduce GinD (Gardener-in-Docker) dev setup for self-hosted `Shoot`s w/ unmanaged infra – gardener/gardener#14700](https://github.com/gardener/gardener/issues/14700)
* [Create `ManagedSeed` for self-hosted `Shoot` to promote it to seed cluster – gardener/gardener#14747](https://github.com/gardener/gardener/pull/14747)

## 🕹️ [GEP-28] Ensure System Pods Run on Control Plane Nodes

> **Tracking:** [hackathon#16](https://github.com/gardener/hackathon/issues/16)

### Problem Statement

System components in a self-hosted shoot (pods in the `garden` namespace, extensions, system pods in `kube-system`) are not guaranteed to run exclusively on control plane nodes.
Over time, they might get rescheduled to worker nodes.

### Achievements

* Implemented placement enforcement so that system pods run exclusively on control plane nodes.

### Code & Pull Requests

* [Ensure all system pods run on system component nodes – gardener/gardener#14367](https://github.com/gardener/gardener/pull/14367)

## ❤️🩹 [GEP-28] Self-Hosted Shoot Control Plane Restoration

> **Tracking:** [hackathon#22](https://github.com/gardener/hackathon/issues/22)

### Problem Statement

When a self-hosted shoot cluster loses its control plane, it must be possible to restore the secrets and control plane state from the `ShootState` resource.

### Achievements

* PoC branch using the `ShootState` for restoring secrets when restoring the control plane of a self-hosted shoot: [poc/gep-28-dr](https://github.com/ialidzhikov/gardener/commits/poc/gep-28-dr/).
* Demo scripts: [demo/restore-from-shootstate](https://github.com/ialidzhikov/gardener/commits/demo/restore-from-shootstate/).
* Fixed a bug in computing the `ShootState` for a self-hosted shoot cluster.

### Next Steps

* Eliminate hacks/workarounds: enable etcd encryption, adapt the CSR approver for `gardener-node-agent` CSRs, fix the pod network availability check, sanitize etcd data, and eliminate the second-phase restore.
* Design and implement how to read/compute the `ShootState` (via `gardenadm discover` or from an etcd backup).
* Design and implement etcd backup restore.
* Add support for restoring a self-hosted shoot with worker nodes.

### Code & Pull Requests

* [Fix `ShootState` resource deployment for a self-hosted Shoot – gardener/gardener#14339](https://github.com/gardener/gardener/pull/14339)

## 🗝️ [GEP-28] Eliminate Static Admin Token After `gardenadm connect`

> **Tracking:** [hackathon#14](https://github.com/gardener/hackathon/issues/14)

### Problem Statement

After `gardenadm init`, the control plane components use a static token with cluster-admin privileges for bootstrapping.
Once the cluster is fully connected (`gardenadm connect`), this should be replaced with short-lived tokens issued by `gardener-resource-manager`.

### Outcome

Discussed and decided that this will be part of the `shoot/shoot` controller and should not be handled explicitly in `gardenadm init` or `gardenadm connect`.
Closed in favor of [Experiment with `shoot/shoot` controller in Self-Hosted Shoot Clusters – hackathon#45](https://github.com/gardener/hackathon/issues/45).

## 🔑 Functional Local Setup with Workload Identity

> **Tracking:** [hackathon#28](https://github.com/gardener/hackathon/issues/28)

### Problem Statement

The local development setup does not work with Workload Identity (WI), making it impossible to test WI-dependent scenarios locally.

### Achievements

* Initial implementation establishing trust between the local KinD cluster and the Gardener Workload Identity issuer.
* Identified that the Machine Controller Manager is not deployed with minimal permissions — opened a PR to address this.

### Next Steps

* Clean up the code in the linked branch.
* Verify that all scenarios work as expected and current tests pass.
* Implement new e2e tests leveraging Workload Identity, or enable it for existing ones.
* Add support for Workload Identity in other local scenarios (etcd backups, DNS, etc.).

### Code & Pull Requests

* Branch: [dimityrmirchev/gardener:wi-local-setup](https://github.com/dimityrmirchev/gardener/tree/wi-local-setup)
* [Reduce MCM permissions – gardener/gardener#14372](https://github.com/gardener/gardener/pull/14372)

## 🤖 AGENTS.md / SKILLS.md for Gardener Repos

> **Tracking:** [hackathon#31](https://github.com/gardener/hackathon/issues/31)

### Problem Statement

AI-native development tools (Claude Code, Codex CLI, Gemini CLI) benefit from repository-level context files (`AGENTS.md`, `SKILLS.md`).
The question is how to best leverage these for the Gardener ecosystem.

### Achievements

* Researched recent papers: *Evaluating AGENTS.md* (Feb 2026) and *SkillsBench* (Feb/Mar 2026).
* Key findings: curated skills provide a +16.2pp average improvement; LLM-generated context provides negligible or negative benefit; focused skills with 2–3 modules outperform comprehensive documentation.
* Proposed a minimal `AGENTS.md` template focused on "common mistakes and confusion points" rather than comprehensive documentation.
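
To make the direction concrete, a minimal template along these lines might contain entries like the following; the entries are illustrative examples in this spirit, not the proposed template itself:

```markdown
# AGENTS.md (illustrative sketch)

## Common mistakes
- Do not edit generated files by hand; run `make generate` instead.
- Run `make check` before committing.

## Confusion points
- A "Shoot" is an end-user Kubernetes cluster; a "Seed" hosts shoot control planes.
```

The point of keeping it this short is the research finding above: a handful of focused, curated hints appears to outperform comprehensive auto-generated documentation.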

### Next Steps

* Experiment with the proposed `AGENTS.md` file in gardener org repos (not `gardener/gardener`).
* If significant benefit is observed, present findings in a larger forum (Gardener Review Meeting).

## 📦 PoC: Repo Tools Integration with Extension Repositories

> **Tracking:** [hackathon#18](https://github.com/gardener/hackathon/issues/18)

### Problem Statement

Extension repositories share ~10 almost identical make targets and hack scripts with `gardener/gardener`.
Changes to these shared scripts result in copy-paste effort across all repositories (~20 PRs for a single fix).

### Achievements

* Explored a subtree approach for centralizing shared make targets and hack scripts in a separate repository.
* Adapted `gardener-extension-shoot-rsyslog-relp` and `pvc-autoscaler` as PoC repositories.

### Next Steps

* Adapt additional repositories to validate the approach and catch problems early.
* Gather feature requests based on newly discovered use cases.

## ✅ Diki as a Service

> **Tracking:** [hackathon#24](https://github.com/gardener/hackathon/issues/24)

### Problem Statement

[Diki](https://github.com/gardener/diki) compliance checks should be schedulable, exportable, and operable as a service rather than as one-off CLI runs.

### Achievements

* Merged previous work (the first PoC and `diki-exporter`) into a working operator: [hackathon-poc branch](https://github.com/georgibaltiev/diki-operator/tree/hackathon-poc).
* Implemented a Postgres exporter in `diki-exporter`.
* Made the operator capable of running in a different cluster than the `ComplianceScan`s (needed for the seed/shoot-namespace topology).
* Built a PoC for `ScheduledComplianceScan`s, which spawn scans based on a cron expression.
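
As a sketch, such a resource might look roughly like this; the API group, version, and field names are assumptions for illustration, not necessarily the PoC's actual schema:

```yaml
# Hypothetical ScheduledComplianceScan manifest (field names illustrative).
apiVersion: compliance.gardener.cloud/v1alpha1   # assumed API group/version
kind: ScheduledComplianceScan
metadata:
  name: nightly-scan
spec:
  schedule: "0 2 * * *"      # standard cron syntax: spawn a scan every night at 02:00
  scanTemplate:              # template for the ComplianceScan objects being spawned
    spec:
      ruleset: example-ruleset   # placeholder ruleset identifier
```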

### Next Steps

* More testing and cleanup.
* Package the work as a Gardener extension.

## 🧩 Extension: Generic Shoot Pack (CloudNativePG et al.)

> **Tracking:** [hackathon#19](https://github.com/gardener/hackathon/issues/19)

### Problem Statement

Installing upstream operators into shoot clusters requires repetitive per-operator extension development.
A generic packaging mechanism would reduce this overhead.

### Achievements

* Developed [gardener-extension-shoot-pack](https://github.com/dnaeon/gardener-extension-shoot-pack/tree/feat/initial) — a generic Gardener extension that uses package specifications to install operators as managed resources.
* Ships packages for cert-manager, CloudNativePG, Prometheus Operator, and Valkey Operator.
* Tooling is available to inspect, view, and create new package specs.

### Next Steps

* Clean up the code.
* Add more tests.

## 🪣 Fix Leaking ValidatingWebhookConfigurations in (Virtual-)Garden

> **Tracking:** [hackathon#21](https://github.com/gardener/hackathon/issues/21)

### Problem Statement

When deploying extensions via `gardener-operator` using `extensions.operator.gardener.cloud` resources, `ValidatingWebhookConfiguration`s remain in the virtual-garden cluster even after the extension is removed.
The root cause: the `--webhook-config-owner-namespace` option defaults to the `garden` namespace, preventing proper garbage collection.

### Achievements

* Identified the root cause — the missing `--webhook-config-owner-namespace` flag in extension admission deployments.
* A fix has existed for about two years ([Cleanup webhook configuration from virtual cluster when removing admission deployment – gardener/gardener#10585](https://github.com/gardener/gardener/pull/10585)) but was not adopted by all extensions.
* Opened an umbrella issue to track fixes across all affected extensions.

### Next Steps

* Apply the `--webhook-config-owner-namespace` option to each affected extension's admission deployment.
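
A sketch of what that adaptation could look like in an admission deployment's container spec; only the flag itself comes from the fix above, the surrounding names and values are placeholders:

```yaml
# Fragment of a hypothetical extension admission Deployment pod spec.
containers:
  - name: extension-admission            # placeholder container name
    args:
      # Point the webhook config's owner at the admission deployment's own
      # namespace (instead of the default `garden`), so the webhook config
      # is garbage-collected when the admission deployment is removed.
      - --webhook-config-owner-namespace=my-extension-namespace   # placeholder namespace
```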

### Code & Pull Requests

* [Fix leaking of `validatingwebhookconfiguration` resources in extensions – gardener/gardener#14334](https://github.com/gardener/gardener/issues/14334)

## 📡 Resolve the Istio Metrics Leak

> **Tracking:** [hackathon#12](https://github.com/gardener/hackathon/issues/12)

### Problem Statement

Istio sidecar metrics for terminated pods accumulate indefinitely, leading to unbounded cardinality in Prometheus.

### Achievements

* Configured Istio metric rotation via environment variables (`METRIC_ROTATION_INTERVAL`, `METRIC_GRACEFUL_DELETION_INTERVAL`).
* Verified correct behavior: after the rotation interval, old pod metrics disappear and new pod metrics appear; long-lived connection metrics reset correctly (counters restart from 0, compatible with PromQL `rate` functions).
* Fixed duplicate scraping caused by two Istio services (the `istio-gateway` LoadBalancer and the `istio-gateway-internal` ClusterIP) matching the same ServiceMonitor label selector.
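
One way to set such proxy environment variables mesh-wide is via `proxyMetadata` in Istio's mesh configuration; the interval values below are illustrative, not necessarily the ones used in the linked PR:

```yaml
# Sketch: mesh-wide proxy metadata enabling metric rotation (values illustrative).
meshConfig:
  defaultConfig:
    proxyMetadata:
      METRIC_ROTATION_INTERVAL: "30m"          # how often stale metrics are rotated out
      METRIC_GRACEFUL_DELETION_INTERVAL: "5m"  # grace period before rotated metrics are deleted
```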

### Next Steps

* The env-var approach is deprecated as of Istio v1.28.
* Migrate to annotation-based configuration (`SidecarStatsEvictionInterval`) when upgrading beyond v1.27.

### Code & Pull Requests

* [Evict stale Istio Gateway metrics and reenable scraping – gardener/gardener#14337](https://github.com/gardener/gardener/pull/14337)