
Release: Bump bundles to 0.0.39, core to 0.0.26, postgres 0.5.13#590

Open
umswmayj wants to merge 49 commits into release from main

Conversation

@umswmayj (Collaborator) commented Mar 17, 2026

  • Added hypervisor overcommit controller that assigns overcommit ratios to hypervisors with specific traits
  • Updated scheduling to use hypervisor effective capacity for filtering and weighing decisions
  • Refactored KVM resource capacity KPI to use effective capacity from hypervisor CRD
  • Added basic committed resource functionality with reservations, including AZ support, flavor group knowledge, and commitments API endpoints
  • Added server group support to workload spawner for anti-affinity/affinity rules
  • Escalated CortexNovaSchedulingDown alert to critical severity
  • Restored descheduler alerting with error label support
  • Bumped PostgreSQL base image and chart versions
  • Removed visualizer components
  • Added AGENTS.md for AI tooling context
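The first two bullets, the overcommit controller and effective-capacity scheduling, can be sketched roughly as follows. This is an illustrative assumption, not the actual controller code: the `Hypervisor` struct, the trait name, and the formula (effective = physical × ratio) are all invented for the example.

```go
package main

import "fmt"

// Hypothetical sketch of trait-based overcommit. The real controller
// assigns ratios via a CRD; here the ratio lives on the struct directly.
type Hypervisor struct {
	Traits        []string
	VCPUs         int64   // physical cores
	OvercommitCPU float64 // e.g. 4.0 means 4 schedulable vCPUs per core
}

// HasTrait reports whether the hypervisor advertises the given trait.
func (h Hypervisor) HasTrait(t string) bool {
	for _, have := range h.Traits {
		if have == t {
			return true
		}
	}
	return false
}

// EffectiveVCPUs returns the schedulable vCPU capacity after applying
// the overcommit ratio; hypervisors without the trait keep a 1.0 ratio.
func EffectiveVCPUs(h Hypervisor, trait string) int64 {
	ratio := 1.0
	if h.HasTrait(trait) && h.OvercommitCPU > 0 {
		ratio = h.OvercommitCPU
	}
	return int64(float64(h.VCPUs) * ratio)
}

func main() {
	hv := Hypervisor{Traits: []string{"CUSTOM_OVERCOMMIT"}, VCPUs: 64, OvercommitCPU: 4.0}
	fmt.Println(EffectiveVCPUs(hv, "CUSTOM_OVERCOMMIT")) // 256
}
```

Filtering and weighing then operate on this effective value rather than raw hardware capacity.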

PhilippMatthes and others added 30 commits March 10, 2026 13:11
It was discussed that we probably don't need this filter in the future. According to internal sources, the filter was introduced to support the transition from old images to new images requiring this trait. Since all hosts now provide this trait, the filter is no longer necessary.
This adds basic committed resource (CR) functionality:
 - update reservations with AZ
 - reservation label
 - flavor group knowledge
 - knowledge with contentChanged
 - commitments syncer works with flavor group CRs
 - commitments API endpoint for batch update of CRs
 - commitments API version endpoint
 - skeletons for capacity endpoint
With the new use case of the decision CRD (for now, the history CRD) we can't use the visualizer anymore. The goal is to make the explanation good enough on its own. :D
…tor digest to 05f22f6 (#573)

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [github.com/cobaltcore-dev/openstack-hypervisor-operator](https://redirect.github.com/cobaltcore-dev/openstack-hypervisor-operator) | require | digest | `733c59b` → `05f22f6` |

---

### Configuration

📅 **Schedule**: Branch creation - "before 8am on Friday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/cobaltcore-dev/cortex).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [docker/setup-qemu-action](https://redirect.github.com/docker/setup-qemu-action) | action | major | `v3` → `v4` |

---

### Release Notes

<details>
<summary>docker/setup-qemu-action (docker/setup-qemu-action)</summary>

###
[`v4`](https://redirect.github.com/docker/setup-qemu-action/compare/v3...v4)

[Compare
Source](https://redirect.github.com/docker/setup-qemu-action/compare/v3...v4)

</details>


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [docker/setup-buildx-action](https://redirect.github.com/docker/setup-buildx-action) | action | major | `v3` → `v4` |

---

### Release Notes

<details>
<summary>docker/setup-buildx-action
(docker/setup-buildx-action)</summary>

###
[`v4`](https://redirect.github.com/docker/setup-buildx-action/compare/v3...v4)

[Compare
Source](https://redirect.github.com/docker/setup-buildx-action/compare/v3...v4)

</details>


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [docker/login-action](https://redirect.github.com/docker/login-action) | action | major | `v3` → `v4` |

---

### Release Notes

<details>
<summary>docker/login-action (docker/login-action)</summary>

###
[`v4`](https://redirect.github.com/docker/login-action/compare/v3...v4)

[Compare
Source](https://redirect.github.com/docker/login-action/compare/v3...v4)

</details>


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [docker/build-push-action](https://redirect.github.com/docker/build-push-action) | action | major | `v6` → `v7` |

---

### Release Notes

<details>
<summary>docker/build-push-action (docker/build-push-action)</summary>

###
[`v7`](https://redirect.github.com/docker/build-push-action/compare/v6...v7)

[Compare
Source](https://redirect.github.com/docker/build-push-action/compare/v6...v7)

</details>


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
This PR contains the following updates:

| Package | Change | Type | Update |
|---|---|---|---|
| [github.com/gophercloud/gophercloud/v2](https://redirect.github.com/gophercloud/gophercloud) | `v2.10.0` → `v2.11.1` | require | minor |
| [k8s.io/api](https://redirect.github.com/kubernetes/api) | `v0.35.1` → `v0.35.2` | require | patch |
| [k8s.io/apimachinery](https://redirect.github.com/kubernetes/apimachinery) | `v0.35.1` → `v0.35.2` | require | patch |
| [k8s.io/client-go](https://redirect.github.com/kubernetes/client-go) | `v0.35.1` → `v0.35.2` | require | patch |
| [kube-prometheus-stack](https://redirect.github.com/prometheus-operator/kube-prometheus) ([source](https://redirect.github.com/prometheus-community/helm-charts)) | `82.4.1` → `82.10.3` | | minor |
| [sigs.k8s.io/controller-runtime](https://redirect.github.com/kubernetes-sigs/controller-runtime) | `v0.23.1` → `v0.23.3` | require | patch |

---

### Release Notes

<details>
<summary>gophercloud/gophercloud
(github.com/gophercloud/gophercloud/v2)</summary>

###
[`v2.11.1`](https://redirect.github.com/gophercloud/gophercloud/releases/tag/v2.11.1)

[Compare
Source](https://redirect.github.com/gophercloud/gophercloud/compare/v2.11.0...v2.11.1)

#### What's Changed

- \[v2] Do not specify go patch version by
[@&#8203;mandre](https://redirect.github.com/mandre) in
[#&#8203;3648](https://redirect.github.com/gophercloud/gophercloud/pull/3648)

**Full Changelog**:
<gophercloud/gophercloud@v2.11.0...v2.11.1>

###
[`v2.11.0`](https://redirect.github.com/gophercloud/gophercloud/releases/tag/v2.11.0)

[Compare
Source](https://redirect.github.com/gophercloud/gophercloud/compare/v2.10.0...v2.11.0)

#### What's Changed

- \[v2] Add PCIAddress field to baremetal InterfaceType by
[@&#8203;MahnoorAsghar](https://redirect.github.com/MahnoorAsghar) in
[#&#8203;3602](https://redirect.github.com/gophercloud/gophercloud/pull/3602)
- \[v2] Networking V2: Added support for ML2 extension
port\_trusted\_vif by
[@&#8203;dlaw4608](https://redirect.github.com/dlaw4608) in
[#&#8203;3610](https://redirect.github.com/gophercloud/gophercloud/pull/3610)
- \[v2] networking/v2/layer3/routers: Add external gateways management
by [@&#8203;aldokimi](https://redirect.github.com/aldokimi) in
[#&#8203;3611](https://redirect.github.com/gophercloud/gophercloud/pull/3611)
- \[v2] Use jimmy amphora in octavia job by
[@&#8203;eshulman2](https://redirect.github.com/eshulman2) in
[#&#8203;3625](https://redirect.github.com/gophercloud/gophercloud/pull/3625)
- \[v2] CI: Fix fwaas jobs by
[@&#8203;mandre](https://redirect.github.com/mandre) in
[#&#8203;3632](https://redirect.github.com/gophercloud/gophercloud/pull/3632)
- \[v2] Bump go by [@&#8203;mandre](https://redirect.github.com/mandre)
in
[#&#8203;3630](https://redirect.github.com/gophercloud/gophercloud/pull/3630)
- \[v2] Add a new Ironic field representing node health to Gophercloud
by [@&#8203;jacob-anders](https://redirect.github.com/jacob-anders) in
[#&#8203;3629](https://redirect.github.com/gophercloud/gophercloud/pull/3629)
- \[v2] Add TSIG key support for OpenStack DNS v2 API by
[@&#8203;omersch381](https://redirect.github.com/omersch381) in
[#&#8203;3633](https://redirect.github.com/gophercloud/gophercloud/pull/3633)
- \[v2] fix: networkipavailabilities: handle scientific notation in IP
counts by [@&#8203;ednxzu](https://redirect.github.com/ednxzu) in
[#&#8203;3640](https://redirect.github.com/gophercloud/gophercloud/pull/3640)
- Prepare v2.11.0 by
[@&#8203;mandre](https://redirect.github.com/mandre) in
[#&#8203;3641](https://redirect.github.com/gophercloud/gophercloud/pull/3641)

**Full Changelog**:
<gophercloud/gophercloud@v2.10.0...v2.11.0>

</details>

<details>
<summary>kubernetes/api (k8s.io/api)</summary>

###
[`v0.35.2`](https://redirect.github.com/kubernetes/api/compare/v0.35.1...v0.35.2)

[Compare
Source](https://redirect.github.com/kubernetes/api/compare/v0.35.1...v0.35.2)

</details>

<details>
<summary>kubernetes/apimachinery (k8s.io/apimachinery)</summary>

###
[`v0.35.2`](https://redirect.github.com/kubernetes/apimachinery/compare/v0.35.1...v0.35.2)

[Compare
Source](https://redirect.github.com/kubernetes/apimachinery/compare/v0.35.1...v0.35.2)

</details>

<details>
<summary>kubernetes/client-go (k8s.io/client-go)</summary>

###
[`v0.35.2`](https://redirect.github.com/kubernetes/client-go/compare/v0.35.1...v0.35.2)

[Compare
Source](https://redirect.github.com/kubernetes/client-go/compare/v0.35.1...v0.35.2)

</details>

<details>
<summary>prometheus-community/helm-charts
(kube-prometheus-stack)</summary>

###
[`v82.10.3`](https://redirect.github.com/prometheus-community/helm-charts/releases/tag/kube-prometheus-stack-82.10.3)

kube-prometheus-stack collects Kubernetes manifests, Grafana dashboards,
and Prometheus rules combined with documentation and scripts to provide
easy to operate end-to-end Kubernetes cluster monitoring with Prometheus
using the Prometheus Operator.

#### What's Changed

- \[kube-prometheus-stack] Update kube-prometheus-stack dependency
non-major updates by
[@&#8203;renovate](https://redirect.github.com/renovate)\[bot] in
[#&#8203;6726](https://redirect.github.com/prometheus-community/helm-charts/pull/6726)

**Full Changelog**:
<prometheus-community/helm-charts@kube-prometheus-stack-82.10.2...kube-prometheus-stack-82.10.3>

###
[`v82.10.2`](https://redirect.github.com/prometheus-community/helm-charts/releases/tag/kube-prometheus-stack-82.10.2)

#### What's Changed

- \[kube-prometheus-stack] Update kube-prometheus-stack dependency
non-major updates by
[@&#8203;renovate](https://redirect.github.com/renovate)\[bot] in
[#&#8203;6725](https://redirect.github.com/prometheus-community/helm-charts/pull/6725)

**Full Changelog**:
<prometheus-community/helm-charts@prometheus-pingdom-exporter-3.4.2...kube-prometheus-stack-82.10.2>

###
[`v82.10.1`](https://redirect.github.com/prometheus-community/helm-charts/releases/tag/kube-prometheus-stack-82.10.1)

[Compare
Source](https://redirect.github.com/prometheus-community/helm-charts/compare/kube-prometheus-stack-82.10.0...kube-prometheus-stack-82.10.1)

#### What's Changed

- \[kube-prometheus-stack] add configurable reloaderWebNodePort by
[@&#8203;firasmosbehi](https://redirect.github.com/firasmosbehi) in
[#&#8203;6717](https://redirect.github.com/prometheus-community/helm-charts/pull/6717)

**Full Changelog**:
<prometheus-community/helm-charts@prometheus-sql-exporter-0.5.0...kube-prometheus-stack-82.10.1>

###
[`v82.10.0`](https://redirect.github.com/prometheus-community/helm-charts/releases/tag/kube-prometheus-stack-82.10.0)

[Compare
Source](https://redirect.github.com/prometheus-community/helm-charts/compare/kube-prometheus-stack-82.9.0...kube-prometheus-stack-82.10.0)

#### What's Changed

- \[kube-prometheus-stack] Update kube-prometheus-stack dependency
non-major updates by
[@&#8203;renovate](https://redirect.github.com/renovate)\[bot] in
[#&#8203;6722](https://redirect.github.com/prometheus-community/helm-charts/pull/6722)

**Full Changelog**:
<prometheus-community/helm-charts@prometheus-nginx-exporter-1.20.1...kube-prometheus-stack-82.10.0>

###
[`v82.9.0`](https://redirect.github.com/prometheus-community/helm-charts/releases/tag/kube-prometheus-stack-82.9.0)

[Compare
Source](https://redirect.github.com/prometheus-community/helm-charts/compare/kube-prometheus-stack-82.8.0...kube-prometheus-stack-82.9.0)

#### What's Changed

- \[kube-prometheus-stack] Update kube-prometheus-stack dependency
non-major updates by
[@&#8203;renovate](https://redirect.github.com/renovate)\[bot] in
[#&#8203;6706](https://redirect.github.com/prometheus-community/helm-charts/pull/6706)

**Full Changelog**:
<prometheus-community/helm-charts@prometheus-nginx-exporter-1.20.0...kube-prometheus-stack-82.9.0>

###
[`v82.8.0`](https://redirect.github.com/prometheus-community/helm-charts/releases/tag/kube-prometheus-stack-82.8.0)

[Compare
Source](https://redirect.github.com/prometheus-community/helm-charts/compare/kube-prometheus-stack-82.7.0...kube-prometheus-stack-82.8.0)

#### What's Changed

- \[kube-prometheus-stack] Add VPA support for Prometheus by
[@&#8203;QuentinBisson](https://redirect.github.com/QuentinBisson) in
[#&#8203;6700](https://redirect.github.com/prometheus-community/helm-charts/pull/6700)

**Full Changelog**:
<prometheus-community/helm-charts@prometheus-nginx-exporter-1.19.5...kube-prometheus-stack-82.8.0>

###
[`v82.7.0`](https://redirect.github.com/prometheus-community/helm-charts/releases/tag/kube-prometheus-stack-82.7.0)

[Compare
Source](https://redirect.github.com/prometheus-community/helm-charts/compare/kube-prometheus-stack-82.6.1...kube-prometheus-stack-82.7.0)

#### What's Changed

- \[kube-prometheus-stack] Add VPA support for Alertmanager by
[@&#8203;QuentinBisson](https://redirect.github.com/QuentinBisson) in
[#&#8203;6699](https://redirect.github.com/prometheus-community/helm-charts/pull/6699)

**Full Changelog**:
<prometheus-community/helm-charts@kube-prometheus-stack-82.6.1...kube-prometheus-stack-82.7.0>

###
[`v82.6.1`](https://redirect.github.com/prometheus-community/helm-charts/releases/tag/kube-prometheus-stack-82.6.1)

[Compare
Source](https://redirect.github.com/prometheus-community/helm-charts/compare/kube-prometheus-stack-82.6.0...kube-prometheus-stack-82.6.1)

#### What's Changed

- \[kube-prometheus-stack] Update Helm release grafana to v11.2.3 by
[@&#8203;renovate](https://redirect.github.com/renovate)\[bot] in
[#&#8203;6701](https://redirect.github.com/prometheus-community/helm-charts/pull/6701)

**Full Changelog**:
<prometheus-community/helm-charts@kube-prometheus-stack-82.6.0...kube-prometheus-stack-82.6.1>

###
[`v82.6.0`](https://redirect.github.com/prometheus-community/helm-charts/releases/tag/kube-prometheus-stack-82.6.0)

[Compare
Source](https://redirect.github.com/prometheus-community/helm-charts/compare/kube-prometheus-stack-82.5.0...kube-prometheus-stack-82.6.0)

#### What's Changed

- \[kube-prometheus-stack] Update <https://github.com/etcd-io/etcd>
digest to
[`b9b15e1`](https://redirect.github.com/prometheus-community/helm-charts/commit/b9b15e1)
by [@&#8203;renovate](https://redirect.github.com/renovate)\[bot] in
[#&#8203;6697](https://redirect.github.com/prometheus-community/helm-charts/pull/6697)

**Full Changelog**:
<prometheus-community/helm-charts@kube-prometheus-stack-82.5.0...kube-prometheus-stack-82.6.0>

###
[`v82.5.0`](https://redirect.github.com/prometheus-community/helm-charts/releases/tag/kube-prometheus-stack-82.5.0)

[Compare
Source](https://redirect.github.com/prometheus-community/helm-charts/compare/kube-prometheus-stack-82.4.3...kube-prometheus-stack-82.5.0)

#### What's Changed

- \[kube-prometheus-stack] Update kube-prometheus-stack dependency
non-major updates by
[@&#8203;renovate](https://redirect.github.com/renovate)\[bot] in
[#&#8203;6696](https://redirect.github.com/prometheus-community/helm-charts/pull/6696)

**Full Changelog**:
<prometheus-community/helm-charts@prometheus-nginx-exporter-1.19.4...kube-prometheus-stack-82.5.0>

###
[`v82.4.3`](https://redirect.github.com/prometheus-community/helm-charts/releases/tag/kube-prometheus-stack-82.4.3)

[Compare
Source](https://redirect.github.com/prometheus-community/helm-charts/compare/kube-prometheus-stack-82.4.2...kube-prometheus-stack-82.4.3)

#### What's Changed

- \[kube-prometheus-stack] Update Helm release grafana to v11.2.2 by
[@&#8203;renovate](https://redirect.github.com/renovate)\[bot] in
[#&#8203;6689](https://redirect.github.com/prometheus-community/helm-charts/pull/6689)

**Full Changelog**:
<prometheus-community/helm-charts@prometheus-yet-another-cloudwatch-exporter-0.42.0...kube-prometheus-stack-82.4.3>

###
[`v82.4.2`](https://redirect.github.com/prometheus-community/helm-charts/compare/kube-prometheus-stack-82.4.1...kube-prometheus-stack-82.4.2)

[Compare
Source](https://redirect.github.com/prometheus-community/helm-charts/compare/kube-prometheus-stack-82.4.1...kube-prometheus-stack-82.4.2)

</details>

<details>
<summary>kubernetes-sigs/controller-runtime
(sigs.k8s.io/controller-runtime)</summary>

###
[`v0.23.3`](https://redirect.github.com/kubernetes-sigs/controller-runtime/releases/tag/v0.23.3)

[Compare
Source](https://redirect.github.com/kubernetes-sigs/controller-runtime/compare/v0.23.2...v0.23.3)

#### What's Changed

- 🐛 Ensure DefaulterRemoveUnknownOrOmitableFields is still working even
if objects are equal by
[@&#8203;k8s-infra-cherrypick-robot](https://redirect.github.com/k8s-infra-cherrypick-robot)
in
[#&#8203;3469](https://redirect.github.com/kubernetes-sigs/controller-runtime/pull/3469)

**Full Changelog**:
<kubernetes-sigs/controller-runtime@v0.23.2...v0.23.3>

###
[`v0.23.2`](https://redirect.github.com/kubernetes-sigs/controller-runtime/releases/tag/v0.23.2)

[Compare
Source](https://redirect.github.com/kubernetes-sigs/controller-runtime/compare/v0.23.1...v0.23.2)

#### What's Changed

- 🐛 Fix fake client's SSA status patch resource version check by
[@&#8203;k8s-infra-cherrypick-robot](https://redirect.github.com/k8s-infra-cherrypick-robot)
in
[#&#8203;3446](https://redirect.github.com/kubernetes-sigs/controller-runtime/pull/3446)
- ✨ Reduce memory usage of default webhooks by
[@&#8203;k8s-infra-cherrypick-robot](https://redirect.github.com/k8s-infra-cherrypick-robot)
in
[#&#8203;3467](https://redirect.github.com/kubernetes-sigs/controller-runtime/pull/3467)

**Full Changelog**:
<kubernetes-sigs/controller-runtime@v0.23.1...v0.23.2>

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 8am on Friday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Enabled.

♻ **Rebasing**: Whenever PR is behind base branch, or you tick the
rebase/retry checkbox.

👻 **Immortal**: This PR will be recreated if closed unmerged. Get
[config
help](https://redirect.github.com/renovatebot/renovate/discussions) if
that's undesired.

---

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [github.com/sapcc/go-bits](https://redirect.github.com/sapcc/go-bits) | require | digest | `c20f89b` → `034b497` |

---

### Configuration

📅 **Schedule**: Branch creation - "before 8am on Friday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Enabled.

♻ **Rebasing**: Whenever PR is behind base branch, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [docker/metadata-action](https://redirect.github.com/docker/metadata-action) | action | major | `v5` → `v6` |

---

### Release Notes

<details>
<summary>docker/metadata-action (docker/metadata-action)</summary>

###
[`v6`](https://redirect.github.com/docker/metadata-action/compare/v5...v6)

[Compare
Source](https://redirect.github.com/docker/metadata-action/compare/v5...v6)

</details>


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
This upgrade is necessary because we changed the allocation and capacity types from a generic `map[string]resource.Quantity` to `map[hv1.ResourceName]resource.Quantity`. In addition, this upgrade makes use of the `hv1.ResourceName` API inside our Reservation CRD.

See:
cobaltcore-dev/openstack-hypervisor-operator#257
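As a rough illustration of why the typed map key helps, here is a minimal sketch. `ResourceName`, the constants, and `Allocation` are local stand-ins invented for the example; the real types are `hv1.ResourceName` from the openstack-hypervisor-operator and `resource.Quantity` from k8s.io/apimachinery.

```go
package main

import "fmt"

// ResourceName stands in for hv1.ResourceName: a defined string type,
// so resource keys become reviewable constants instead of scattered
// string literals that can silently diverge.
type ResourceName string

const (
	ResourceVCPU   ResourceName = "vcpu"
	ResourceMemory ResourceName = "memory_mb"
)

// Allocation stands in for map[hv1.ResourceName]resource.Quantity,
// using int64 instead of resource.Quantity to stay self-contained.
type Allocation map[ResourceName]int64

// Remaining subtracts the allocated amounts from capacity per resource.
func Remaining(capacity, allocated Allocation) Allocation {
	out := Allocation{}
	for name, total := range capacity {
		out[name] = total - allocated[name]
	}
	return out
}

func main() {
	capacity := Allocation{ResourceVCPU: 256, ResourceMemory: 512000}
	allocated := Allocation{ResourceVCPU: 100, ResourceMemory: 128000}
	fmt.Println(Remaining(capacity, allocated)[ResourceVCPU]) // 156
}
```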

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Refactor**
* Standardized resource naming to a strongly-typed format across
scheduling and reservation logic, improving type safety and consistency
for capacity, allocation and reservation handling.

* **Chores**
* Bumped the OpenStack Hypervisor Operator dependency to the newer
release.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
See https://agents.md/#examples -- this file helps agents understand our
code, architecture and intentions.
Support server groups in the workload spawner, so we can spawn VMs with anti-affinity or affinity rules.
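The spawner commit above presumably goes through gophercloud; as a hedged sketch, these are the JSON bodies involved, assuming the Nova `POST /os-server-groups` API at microversion 2.64+ (single `policy` field) and the `os:scheduler_hints` boot hint. Function and variable names here are invented for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ServerGroup mirrors the Nova server group creation body (2.64+),
// where Policy is "affinity" or "anti-affinity".
type ServerGroup struct {
	Name   string `json:"name"`
	Policy string `json:"policy"`
}

// NewServerGroupBody builds the payload for POST /os-server-groups.
func NewServerGroupBody(name, policy string) ([]byte, error) {
	return json.Marshal(map[string]ServerGroup{"server_group": {Name: name, Policy: policy}})
}

// SchedulerHint builds the boot-request fragment that ties a new server
// to the group, so Nova places it according to the group's policy.
func SchedulerHint(groupID string) map[string]map[string]string {
	return map[string]map[string]string{"os:scheduler_hints": {"group": groupID}}
}

func main() {
	body, _ := NewServerGroupBody("spawner-anti", "anti-affinity")
	fmt.Println(string(body))
}
```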
umswmayj and others added 6 commits March 17, 2026 10:14
During the pipeline refactoring we stripped away the metric
`cortex_descheduler_pipeline_vm_descheduling_duration_seconds_count`. We
now only have the metric
`cortex_detector_pipeline_run_duration_seconds_count`, which doesn't
support the `error` label. This change adds the `error` label back and
adjusts the corresponding alert. We also add a unit test to check that
the label is propagated correctly.
@coderabbitai

coderabbitai bot commented Mar 17, 2026

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 9024c584-4083-4d65-beee-964be5d1ee20

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 16.22%, which is below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
| Description check | ❓ Inconclusive | No pull request description was provided by the author, making it impossible to understand the intent and scope of the changes. | Add a detailed pull request description explaining the objectives, key changes, and rationale for the extensive modifications included in this changeset. |

✅ Passed checks (1 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title accurately describes the primary change: version bumps for Postgres and Helm bundles/core components, which represent the main objective of this release PR. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 19

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
helm/dev/cortex-prometheus-operator/Chart.yaml (1)

9-13: ⚠️ Potential issue | 🟠 Major

Add the required local chart source comment in the dependency block.

Lines 9-13 define dependencies, but the mandatory # from: file://../../library/cortex-postgres comment is missing. Please add it near this dependency declaration to comply with chart policy.

As per coding guidelines, "**/Chart.yaml: Helm Chart.yaml files must include the # from: file://../../library/cortex-postgres comment pointing to the local chart path".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@helm/dev/cortex-prometheus-operator/Chart.yaml` around lines 9 - 13, The
Chart.yaml dependencies block is missing the required local chart source
comment; edit the dependencies section in Chart.yaml near the
kube-prometheus-stack entry (the dependency with name: kube-prometheus-stack,
repository: oci://ghcr.io/prometheus-community/charts, version: 82.10.3) and add
the mandatory comment line "# from: file://../../library/cortex-postgres"
directly above or next to that dependency declaration so the file includes the
required local chart path marker.
internal/scheduling/reservations/controller/controller.go (2)

134-152: ⚠️ Potential issue | 🟠 Major

Take the merge base before mutating res.Status.

Line 138 updates res.Status.Host before Line 142 snapshots old, so client.MergeFrom(old) is built from already-mutated state and can be a no-op. The host sync will then silently never persist.

Suggested fix
-	needsStatusUpdate := false
-	if res.Spec.TargetHost != "" && res.Status.Host != res.Spec.TargetHost {
-		res.Status.Host = res.Spec.TargetHost
-		needsStatusUpdate = true
-	}
-	if needsStatusUpdate {
-		old := res.DeepCopy()
+	needsStatusUpdate := false
+	old := res.DeepCopy()
+	if res.Spec.TargetHost != "" && res.Status.Host != res.Spec.TargetHost {
+		res.Status.Host = res.Spec.TargetHost
+		needsStatusUpdate = true
+	}
+	if needsStatusUpdate {
 		patch := client.MergeFrom(old)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/controller/controller.go` around lines 134 -
152, The merge base is taken after mutating res.Status which makes
client.MergeFrom(old) a no-op; call res.DeepCopy() to capture old before
modifying res.Status (use old := res.DeepCopy(); patch :=
client.MergeFrom(old)), then set res.Status.Host = res.Spec.TargetHost and call
r.Status().Patch(ctx, &res, patch) as before, preserving the existing error
handling for client.IgnoreNotFound(err).

183-199: ⚠️ Potential issue | 🔴 Critical

Use correct units when converting resources for Nova API.

Memory and CPU are converted with incorrect scaling factors, causing Nova to receive wrong values:

  • CPU: ScaledValue(resource.Milli) returns millicores (e.g., 4000), but Nova's VCPUs field expects whole vCPUs (e.g., 4)
  • Memory: ScaledValue(resource.Mega) uses decimal scaling (divides by 1,000,000), losing ~5% of memory for 8 GiB (8589 MB instead of 8192 MB)

Meanwhile, reservation_manager.go stores resources as bytes and whole CPUs via resource.NewQuantity(), creating a unit mismatch that will cause Nova to reject valid placements.

Suggested fix
 	var memoryMB uint64
 	if memory, ok := res.Spec.Resources["memory"]; ok {
-		memoryValue := memory.ScaledValue(resource.Mega)
+		memoryValue := memory.Value()
 		if memoryValue < 0 {
 			return ctrl.Result{}, fmt.Errorf("invalid memory value: %d", memoryValue)
 		}
-		memoryMB = uint64(memoryValue)
+		memoryMB = uint64((memoryValue + 1024*1024 - 1) / (1024 * 1024))
 	}
 
 	var cpu uint64
 	if cpuQuantity, ok := res.Spec.Resources["cpu"]; ok {
-		cpuValue := cpuQuantity.ScaledValue(resource.Milli)
+		cpuValue := cpuQuantity.MilliValue()
 		if cpuValue < 0 {
 			return ctrl.Result{}, fmt.Errorf("invalid cpu value: %d", cpuValue)
 		}
-		cpu = uint64(cpuValue)
+		if cpuValue%1000 != 0 {
+			return ctrl.Result{}, fmt.Errorf("invalid cpu value: %d", cpuValue)
+		}
+		cpu = uint64(cpuValue / 1000)
 	}

Also applies to: 251-259

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/controller/controller.go` around lines 183 -
199, The memory and CPU conversions use the wrong scaling: change memory
conversion in the block that sets memoryMB (reading from
res.Spec.Resources["memory"]) to use memory.ScaledValue(resource.Mebi) so you
get MiB (8192 for 8GiB), and change CPU conversion (reading from
res.Spec.Resources["cpu"]) to compute whole vCPUs by taking
cpuQuantity.ScaledValue(resource.Milli) and dividing by 1000 (e.g., vcpus :=
uint64(cpuValue / 1000)); apply the same fixes to the other conversion block
later (lines ~251-259) so values match how reservation_manager.go stores bytes
and whole CPUs and validate the resulting integers before returning.
🧹 Nitpick comments (16)
tools/plutono/provisioning/dashboards/cortex-status.json (1)

2321-2330: Both panels now query the same metric with different aggregations.

This panel ("Triggered migrations") and the previous panel ("Deschedulings") now both query cortex_detector_pipeline_run_duration_seconds_count, one using rate() and this one using delta(). While they show mathematically related data (per-second rate vs. absolute change), having two panels for the same underlying metric in the same dashboard section may be redundant.

Consider whether:

  1. The panel title "Triggered migrations" accurately reflects what cortex_detector_pipeline_run_duration_seconds_count measures
  2. Both panels are necessary or could be consolidated
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/plutono/provisioning/dashboards/cortex-status.json` around lines 2321 -
2330, The "Triggered migrations" panel is querying the same underlying metric
cortex_detector_pipeline_run_duration_seconds_count (using delta()) as the
"Deschedulings" panel (which uses rate()), which may be redundant and the title
may be inaccurate; inspect the two panels (the one titled "Triggered migrations"
with expr=sum by (error)
(delta(cortex_detector_pipeline_run_duration_seconds_count{}[2m])) and the
"Deschedulings" panel that uses
rate(cortex_detector_pipeline_run_duration_seconds_count)) and either rename
"Triggered migrations" to reflect it shows absolute count delta, change its
query to a different metric/aggregation that actually represents migrations, or
consolidate the two panels into a single panel that presents both rate and
absolute change (or remove one) so the dashboard section is not duplicating the
same underlying signal.
internal/scheduling/lib/detector_pipeline.go (1)

82-83: Narrow lock scope to reduce contention.

defer lock.Unlock() keeps the mutex held across logging. Lock only around the shared writes for better parallelism.

Suggested refactor
-			lock.Lock()
-			defer lock.Unlock()
 			if err != nil {
 				slog.Error("descheduler: failed to run step", "error", err)
+				lock.Lock()
 				metricErrLabel = "true"
+				lock.Unlock()
 				return
 			}
 			slog.Info("descheduler: finished step")
+			lock.Lock()
 			decisionsByStep[stepName] = decisions
+			lock.Unlock()

Also applies to: 85-90

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/lib/detector_pipeline.go` around lines 82 - 83, The mutex
currently held by lock.Lock() with defer lock.Unlock() spans too much work
(including logging); narrow the critical section by replacing defer unlock with
an explicit lock.Unlock() immediately after the shared-state writes so logging
happens outside the lock. Locate the lock usage (lock.Lock()/defer
lock.Unlock()) in detector_pipeline.go (the same pattern around lines 82–83 and
85–90), perform the shared-data updates while locked, call lock.Unlock(), then
do any logging or non-shared work outside the lock to reduce contention.
internal/scheduling/reservations/commitments/reservation_manager_test.go (1)

84-123: Consider adding Type field to test reservation specs.

The test reservations set the LabelReservationType label but don't set Spec.Type. While this may work for testing the manager, it's inconsistent with how real reservations would be created, where both should be set.

♻️ Suggested improvement
 		{
 			ObjectMeta: metav1.ObjectMeta{
 				Name: "commitment-abc123-0",
 				Labels: map[string]string{
 					v1alpha1.LabelReservationType: v1alpha1.ReservationTypeLabelCommittedResource,
 				},
 			},
 			Spec: v1alpha1.ReservationSpec{
+				Type: v1alpha1.ReservationTypeCommittedResource,
 				Resources: map[hv1.ResourceName]resource.Quantity{
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/reservation_manager_test.go`
around lines 84 - 123, The test reservations in existingReservations include the
LabelReservationType label but omit Spec.Type; add the Type field in each
v1alpha1.ReservationSpec (e.g. set Spec.Type =
v1alpha1.ReservationTypeCommittedResource or the matching enum used with
LabelReservationType/ReservationTypeLabelCommittedResource) so the test objects
mirror real reservations; update both reservation entries in the
existingReservations slice (referencing v1alpha1.ReservationSpec and Spec.Type)
to include the correct type value.
internal/knowledge/extractor/controller.go (1)

212-212: Use any instead of interface{} for JSON comparison variables.

Modern Go uses any as the type alias for interface{}. Update line 212 to align with repo conventions.

♻️ Suggested change
-		var oldData, newData interface{}
+		var oldData, newData any
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/knowledge/extractor/controller.go` at line 212, Replace the use of
the older interface{} type with the modern alias any for the JSON comparison
variables: update the declaration of oldData and newData (currently declared as
interface{}) to use any so the code conforms to repo conventions and Go idioms.
internal/scheduling/reservations/commitments/api_report_capacity_test.go (1)

205-206: Replace interface{} with any throughout the test file.

The coding guidelines require using any instead of interface{} in modern Go code. This applies to all occurrences in the test, including the test struct definition and feature payload declarations.

♻️ Proposed fixes
-	body           interface{}
+	body           any
@@
-	emptyFeatures := []map[string]interface{}{}
+	emptyFeatures := []map[string]any{}
@@
-	features := []map[string]interface{}{
+	features := []map[string]any{
@@
-		"flavors": []map[string]interface{}{
+		"flavors": []map[string]any{
@@
-		"largestFlavor": map[string]interface{}{
+		"largestFlavor": map[string]any{

Applies to lines: 43, 205, 238, 241, 249.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_report_capacity_test.go`
around lines 205 - 206, Replace all occurrences of the old empty interface type
with the Go alias any in this test file: update test struct field types and all
feature payload declarations (e.g. the emptyFeatures variable and other
map[string]interface{} uses passed into v1alpha1.BoxFeatureList and similar
calls) to use map[string]any and any instead of interface{}, and ensure function
calls like v1alpha1.BoxFeatureList receive the updated map[string]any types.
internal/scheduling/reservations/commitments/api_change_commitments_test.go (2)

113-117: Use strings.Contains for response-body string checks.

These assertions already operate on bodyStr (string), so strings.Contains is the guideline-compliant choice.

💡 Proposed fix
 import (
 	"bytes"
 	"encoding/json"
 	"net/http"
 	"net/http/httptest"
+	"strings"
 	"testing"
 	"time"
@@
-	if !bytes.Contains([]byte(bodyStr), []byte("Version mismatch")) {
+	if !strings.Contains(bodyStr, "Version mismatch") {
 		t.Errorf("expected response to contain 'Version mismatch', got: %s", bodyStr)
 	}
-	if !bytes.Contains([]byte(bodyStr), []byte("12345")) {
+	if !strings.Contains(bodyStr, "12345") {
 		t.Errorf("expected response to contain request version '12345', got: %s", bodyStr)
 	}
As per coding guidelines: "Use `strings.Contains` to check if a string is contained in another string".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`
around lines 113 - 117, Replace the bytes.Contains calls that check the response
text with strings.Contains since bodyStr is already a string: update the two
assertions that currently call bytes.Contains([]byte(bodyStr), []byte("...")) to
strings.Contains(bodyStr, "...") (referencing the bodyStr variable and the two
checks for "Version mismatch" and "12345"); also add "strings" to the test
imports and remove "bytes" if it becomes unused.

24-223: These endpoint scenarios are good candidates for one table-driven test.

The test coverage is relevant, but setup and request boilerplate repeat across cases and can be consolidated into a compact struct-based table test.

Based on learnings: "Applies to **/*_test.go : Use struct-based test cases when applicable, but limit yourself to the most relevant cases".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`
around lines 24 - 223, Tests TestHandleChangeCommitments_VersionMismatch,
TestHandleChangeCommitments_DryRun and
TestProcessCommitmentChanges_KnowledgeNotReady repeat setup and request
boilerplate; refactor them into a single table-driven test that iterates cases
(e.g., name, initial Knowledge object or nil, request JSON, expected HTTP
status, expected content-type, expected response fields) and calls
HTTPAPI.HandleChangeCommitments for each row; extract common setup into reusable
helpers (scheme creation with v1alpha1.AddToScheme, fake client builder,
building API struct) and reuse the same httptest request/recorder flow inside
the loop, using unique symbols TestHandleChangeCommitments_Table (or consolidate
into TestHandleChangeCommitments) and keep assertions per-case driven by fields
in the table to preserve the existing checks for version mismatch, dry-run
rejection and "caches not ready" responses.
internal/scheduling/reservations/commitments/state_test.go (1)

66-253: Consider consolidating the error-path cases into one table-driven test.

The current cases are valid, but several follow the same pattern and can be compacted to reduce maintenance overhead.

Based on learnings: "Applies to **/*_test.go : Use struct-based test cases when applicable, but limit yourself to the most relevant cases".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/state_test.go` around lines 66 -
253, Consolidate repetitive error-path tests into table-driven cases: replace
the separate TestFromCommitment_InvalidResourceName,
TestFromReservations_EmptyList, TestFromReservations_MixedCommitmentUUIDs,
TestFromReservations_NonCommittedResourceType, and
TestGetFlavorGroupNameFromResource_Invalid with one or two table-driven tests
that enumerate input name/payload and the function to call (FromCommitment,
FromReservations, getFlavorGroupNameFromResource) plus expected error; implement
a loop that runs each case, invokes the appropriate symbol (FromCommitment,
FromReservations, getFlavorGroupNameFromResource) with the given input, and
asserts an error was returned, keeping the existing helper inputs (e.g.,
commitment structs and reservation slices) as inline case fields for clarity.
internal/scheduling/reservations/commitments/syncer.go (1)

196-214: Consider handling NotFound errors during deletion.

When deleting reservations for expired commitments, a NotFound error could occur if the reservation was already deleted by another process. This would cause the sync to fail unnecessarily.

Proposed fix to handle NotFound gracefully
 		if !activeCommitments[commitmentUUID] {
 			// This commitment no longer exists, delete the reservation
 			if err := s.Delete(ctx, &existing); err != nil {
+				if apierrors.IsNotFound(err) {
+					log.Info("reservation already deleted", "name", existing.Name)
+					continue
+				}
 				log.Error(err, "failed to delete reservation", "name", existing.Name)
 				return err
 			}

You'll need to add the import:

apierrors "k8s.io/apimachinery/pkg/api/errors"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/syncer.go` around lines 196 -
214, The deletion loop in the syncer currently returns an error if s.Delete(ctx,
&existing) fails, which should ignore NotFound errors; update the block that
calls s.Delete (inside the loop iterating existingReservations.Items and using
extractCommitmentUUID and activeCommitments) to treat apierrors.IsNotFound(err)
as a non-fatal condition (log at info/debug and continue) and only return for
other errors, and add the import alias apierrors
"k8s.io/apimachinery/pkg/api/errors".
internal/scheduling/reservations/commitments/api_change_commitments.go (3)

225-225: Use camelCase for variable name.

Variable time_start should be timeStart to follow Go naming conventions.

Proposed fix
-		time_start := time.Now()
+		timeStart := time.Now()
 
 		if err := watchReservationsUntilReady(ctx, log, api.client, reservationsToWatch, watchTimeout); err != nil {
 			log.Info("reservations failed to become ready, initiating rollback",
 				"reason", err.Error())
 			resp.RejectionReason = fmt.Sprintf("Not all reservations can be fulfilled: %v", err)
 			requireRollback = true
 		}
 
-		log.Info("finished watching reservation", "totalSchedulingTimeSeconds", time.Since(time_start).Seconds())
+		log.Info("finished watching reservation", "totalSchedulingTimeSeconds", time.Since(timeStart).Seconds())
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments.go` at
line 225, The variable name time_start violates Go camelCase conventions; rename
the variable time_start to timeStart where it is declared (e.g., in the block
containing the time_start := time.Now() statement) and update every
use/reference in the same function/scope (search for time_start) to the new
identifier timeStart so the code follows Go naming style and compiles cleanly.

163-179: Use camelCase for variable names.

Variable names all_reservations and existing_reservations use snake_case which is not idiomatic Go. Use allReservations and existingReservations instead.

Proposed fix for naming conventions
 				// List all committed resource reservations, then filter by name prefix
-				var all_reservations v1alpha1.ReservationList
-				if err := api.client.List(ctx, &all_reservations, client.MatchingLabels{
+				var allReservations v1alpha1.ReservationList
+				if err := api.client.List(ctx, &allReservations, client.MatchingLabels{
 					v1alpha1.LabelReservationType: v1alpha1.ReservationTypeLabelCommittedResource,
 				}); err != nil {
 					resp.RejectionReason = fmt.Sprintf("failed to list reservations for commitment %s: %v", commitment.UUID, err)
 					requireRollback = true
 					break ProcessLoop
 				}

 				// Filter by name prefix to find reservations for this commitment
 				namePrefix := fmt.Sprintf("commitment-%s-", string(commitment.UUID))
-				var existing_reservations v1alpha1.ReservationList
-				for _, res := range all_reservations.Items {
+				var existingReservations v1alpha1.ReservationList
+				for _, res := range allReservations.Items {
 					if len(res.Name) >= len(namePrefix) && res.Name[:len(namePrefix)] == namePrefix {
-						existing_reservations.Items = append(existing_reservations.Items, res)
+						existingReservations.Items = append(existingReservations.Items, res)
 					}
 				}

Also update references on lines 182, 190, and 196.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments.go`
around lines 163 - 179, The variables all_reservations and existing_reservations
use snake_case; rename them to idiomatic Go camelCase (allReservations and
existingReservations) wherever referenced in this function/method (including the
List call, the loop over all_reservations.Items, append to
existing_reservations.Items, and any subsequent uses such as where namePrefix is
constructed and where existing_reservations is later read), update matching
declarations and usages (e.g., replace var all_reservations
v1alpha1.ReservationList and var existing_reservations v1alpha1.ReservationList
and their subsequent references) so the code compiles with consistent camelCase
identifiers.

172-179: Use strings.HasPrefix for prefix checking.

The manual prefix check can be simplified using strings.HasPrefix for better readability.

Proposed fix using strings.HasPrefix
 				// Filter by name prefix to find reservations for this commitment
 				namePrefix := fmt.Sprintf("commitment-%s-", string(commitment.UUID))
 				var existingReservations v1alpha1.ReservationList
 				for _, res := range allReservations.Items {
-					if len(res.Name) >= len(namePrefix) && res.Name[:len(namePrefix)] == namePrefix {
+					if strings.HasPrefix(res.Name, namePrefix) {
 						existingReservations.Items = append(existingReservations.Items, res)
 					}
 				}

As per coding guidelines, use strings.Contains to check if a string is contained in another string (and similarly strings.HasPrefix for prefix checks).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments.go`
around lines 172 - 179, Replace the manual slice-based prefix check with
strings.HasPrefix: in the loop that builds existing_reservations (variable
namePrefix created from commitment.UUID and iterating over
all_reservations.Items), call strings.HasPrefix(res.Name, namePrefix) instead of
checking lengths and slicing; add the "strings" import if missing and keep
appending matching reservations to existing_reservations.Items as before
(reference symbols: namePrefix, existing_reservations, all_reservations,
v1alpha1.ReservationList).
internal/scheduling/reservations/controller/controller_test.go (1)

159-218: Consider extracting flavor group test data helper.

The inline struct definition for flavor groups is verbose. While acceptable for test clarity, you could leverage the createFlavorGroupKnowledge helper pattern used in syncer_test.go for consistency across test files.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/controller/controller_test.go` around lines
159 - 218, Extract the inline flavor-groups test data in controller_test.go into
a reusable helper function (e.g., createFlavorGroupKnowledge) that constructs
the flavorGroups payload, marshals it to JSON, wraps it into
runtime.RawExtension and returns a *v1alpha1.Knowledge (or its Spec/Status)
ready for tests; replace the inline block that builds flavorGroupKnowledge with
a call to that helper, preserving fields like Spec.Extractor.Name,
Spec.SchedulingDomain, Status.Raw/RawLength and Conditions so other tests can
reuse the same helper as in syncer_test.go.
internal/scheduling/reservations/commitments/api_info.go (1)

57-117: Verify GetAllFlavorGroups behavior when called with nil CRD.

GetAllFlavorGroups is called with nil as the second parameter on line 61, but later, on line 105, knowledge.Get(ctx) is called separately to fetch the CRD for versioning. This potentially fetches the Knowledge CRD twice. Consider fetching it once and reusing it.

Proposed refactor to avoid duplicate CRD fetch
 func (api *HTTPAPI) buildServiceInfo(ctx context.Context, log logr.Logger) (liquid.ServiceInfo, error) {
-	// Get all flavor groups from Knowledge CRDs
 	knowledge := &reservations.FlavorGroupKnowledgeClient{Client: api.client}
-	flavorGroups, err := knowledge.GetAllFlavorGroups(ctx, nil)
+	
+	// Fetch knowledge CRD once for both flavor groups and versioning
+	knowledgeCRD, err := knowledge.Get(ctx)
+	if err != nil {
+		return liquid.ServiceInfo{
+			Version:   -1,
+			Resources: make(map[liquid.ResourceName]liquid.ResourceInfo),
+		}, err
+	}
+	
+	flavorGroups, err := knowledge.GetAllFlavorGroups(ctx, knowledgeCRD)
 	if err != nil {
-		// Return -1 as version when knowledge is not ready
 		return liquid.ServiceInfo{
 			Version:   -1,
 			Resources: make(map[liquid.ResourceName]liquid.ResourceInfo),
 		}, err
 	}
 
 	// ... build resources map ...
 
-	// Get last content changed from flavor group knowledge and treat it as version
 	var version int64 = -1
-	if knowledgeCRD, err := knowledge.Get(ctx); err == nil && knowledgeCRD != nil && !knowledgeCRD.Status.LastContentChange.IsZero() {
+	if knowledgeCRD != nil && !knowledgeCRD.Status.LastContentChange.IsZero() {
 		version = knowledgeCRD.Status.LastContentChange.Unix()
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_info.go` around lines 57 -
117, buildServiceInfo currently calls knowledge.GetAllFlavorGroups(ctx, nil)
then separately calls knowledge.Get(ctx), causing a duplicate CRD fetch; instead
call knowledge.Get(ctx) first, handle error/nil as before (returning version -1
if knowledge not ready), then pass the obtained CRD to
knowledge.GetAllFlavorGroups(ctx, crd) to build the resources and reuse the same
CRD for computing version; update references to knowledge.GetAllFlavorGroups and
knowledge.Get in buildServiceInfo accordingly and preserve existing error-return
semantics and logging.
internal/scheduling/reservations/commitments/capacity.go (2)

101-106: Consider logging when Knowledge CRD parsing fails.

Silently continuing when UnboxFeatureList fails could hide data issues. Adding a log statement would help with debugging.

Proposed fix to add logging
 		// Parse features from Raw data
 		features, err := v1alpha1.UnboxFeatureList[compute.HostDetails](knowledge.Status.Raw)
 		if err != nil {
-			// Skip if we can't parse this knowledge
+			// Log and skip if we can't parse this knowledge
+			// Note: Consider using a logger passed as parameter for better testability
 			continue
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/capacity.go` around lines 101 -
106, When UnboxFeatureList[compute.HostDetails] fails while parsing
knowledge.Status.Raw in capacity.go, add a log statement instead of silently
continuing: locate the block that calls
v1alpha1.UnboxFeatureList[compute.HostDetails] and on err log the error and
context (e.g., the CR name/UID or a snippet of knowledge.Status.Raw) via the
package's existing logger (e.g., logger, processLogger, or the surrounding
context's logger) before continue-ing so parsing failures are visible during
debugging.

60-84: Placeholder implementation noted - capacity and usage values are unset.

The TODO comments on lines 78-79 indicate that actual capacity and usage calculations are deferred. This is acceptable for an initial implementation, but ensure these TODOs are tracked for follow-up.

Do you want me to open a new issue to track the implementation of actual capacity and usage calculations?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/capacity.go` around lines 60 -
84, The calculateAZCapacity placeholder leaves AZResourceCapacityReport
Capacity/Usage unset; create a tracked issue that describes implementing
capacity calculation from Reservation CRDs/host resources and usage from VM
allocations, reference the calculateAZCapacity function and the TODOs inside it,
then update the TODO comments in calculateAZCapacity (and any mention of
AZResourceCapacityReport) to include the new issue ID and clear acceptance
criteria (how to compute capacity, sources like Reservation CRDs and host
details, expected optional semantics), so future work can link back and the code
documents the tracking ticket.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@AGENTS.md`:
- Line 45: Update the wording/typos in AGENTS.md by replacing the phrase "struct
based" with "struct-based", change any occurrence of "miscallaneous" to
"miscellaneous", and capitalize "markdown" to "Markdown" (apply these edits
where the comment notes lines around the instances such as the sentence "If
applicable, use struct based test cases, but limit yourself to the most relevant
cases" and the other occurrences referenced). Ensure hyphenation and
capitalization are consistent across the file for these three terms.

In `@helm/bundles/cortex-nova/values.yaml`:
- Line 117: The values.yaml currently allows enabling reservations-controller
independently even though it relies on novaExternalScheduler provided by
nova-pipeline-controllers; add a runtime validation in main.go that checks the
chart/values configuration and fails fast if reservations-controller is enabled
while nova-pipeline-controllers is disabled (check the flags/struct fields that
map to "reservations-controller" and "nova-pipeline-controllers" and the
novaExternalScheduler URL) OR, alternatively, update values.yaml to document the
dependency clearly by adding a comment next to reservations-controller and
novaExternalScheduler that nova-pipeline-controllers must be enabled together
and list valid controller combinations; prefer adding the validation in main.go
to prevent misconfiguration.

In `@internal/knowledge/extractor/plugins/compute/flavor_groups.sql`:
- Around line 15-16: The WHERE clause is using fragile string LIKE on
extra_specs JSON; replace it with proper jsonb extraction e.g. parse extra_specs
as jsonb and test the hypervisor type via jsonb operators: use
(extra_specs::jsonb -> 'capabilities' ->> 'hypervisor_type') IN ('qemu','ch')
(or compare LOWER(...) if values may vary in case) instead of the LIKE patterns
so the query reliably filters on hypervisor type.

In `@internal/scheduling/nova/plugins/weighers/kvm_binpack_test.go`:
- Around line 100-137: Remove the duplicated table test entry that repeats the
"zero weights should raise error" case; locate the duplicate by the test case
name "zero weights should raise error" in the KVMBinpackStepOpts table (the
second occurrence) and delete that entire test struct so the table contains only
one "zero weights should raise error" case and the other unique cases remain
unchanged.

In `@internal/scheduling/reservations/commitments/api_change_commitments.go`:
- Around line 98-99: The function processCommitmentChanges currently uses
context.Background(); change its signature to accept a context.Context (e.g.,
processCommitmentChanges(ctx context.Context, w http.ResponseWriter, log
logr.Logger, req liquid.CommitmentChangeRequest, resp
*liquid.CommitmentChangeResponse) error) and replace context.Background() with
the passed ctx so cancellations propagate; then update the caller
HandleChangeCommitments to pass r.Context() into processCommitmentChanges when
invoking it. Ensure any downstream calls inside processCommitmentChanges that
create derived contexts use the passed ctx as parent.

In `@internal/scheduling/reservations/commitments/api_report_capacity.go`:
- Around line 37-40: The JSON decoder currently swallows all errors when
decoding into req (liquid.ServiceCapacityRequest); change the error handling
after json.NewDecoder(r.Body).Decode(&req) to only tolerate io.EOF as an empty
body and treat any other error as a client error by returning HTTP 400 Bad
Request. Specifically, update the handler that decodes the request so that if
err == io.EOF you set req = liquid.ServiceCapacityRequest{}; otherwise write a
400 response (e.g., http.Error or w.WriteHeader with an explanatory message) and
stop processing; ensure you import io for io.EOF and reference the same
json.NewDecoder(...).Decode(&req) call and the liquid.ServiceCapacityRequest
type.

In `@internal/scheduling/reservations/commitments/reservation_manager_test.go`:
- Around line 31-34: The test helper testFlavorGroup() is defined in
state_test.go but used by reservation_manager_test.go; move (or copy) the
testFlavorGroup() function into reservation_manager_test.go so the helper lives
with the tests that use it, and update any imports or dependencies inside
testFlavorGroup() to match reservation_manager_test.go (ensuring the signature
and return type compute.FlavorGroupFeature remain unchanged).

In `@internal/scheduling/reservations/commitments/reservation_manager.go`:
- Around line 107-113: The code appends to
touchedReservations/removedReservations before the API calls complete, causing
callers to see phantom changes on API errors; update the logic in the
reservation loop so you only append to removedReservations or
touchedReservations after the corresponding call succeeds (i.e., move the
removedReservations = append(...) and touchedReservations = append(...) to
immediately after a successful m.Delete(ctx, &res) or successful m.Create(ctx,
newRes) return), and ensure error returns (from methods like m.Delete and
m.Create) happen before any mutation of those slices; adjust the blocks around
m.Delete and m.Create in the functions handling reservations so the append
happens only on success.
- Around line 96-117: The loop over existing reservations dereferences
res.Spec.CommittedResourceReservation without a nil check and will panic for
malformed objects; update the loop in reservation_manager.go to first check if
res.Spec.CommittedResourceReservation == nil and treat such reservations as
invalid: log a clear message (include res.Name and commitmentUUID), append the
reservation to removedReservations, add its memory to deltaMemoryBytes, attempt
m.Delete(ctx, &res) and handle/delete errors the same way as the current
mismatch branch; otherwise continue with the existing flavor-group and projectID
comparisons and append to validReservations when appropriate.
- Around line 208-229: The current patch logic only updates reservation.Spec
when state fields are non-empty/non-nil, so removals (clearing AZ or clearing
StartTime/EndTime) never propagate; update the comparison and patch application
to detect differences even when state.AvailabilityZone is "" or
state.StartTime/state.EndTime is nil and then write the desired value (including
empty string or nil) into reservation.Spec: set
reservation.Spec.AvailabilityZone = state.AvailabilityZone unconditionally when
it differs, set reservation.Spec.StartTime = nil when state.StartTime == nil
otherwise set to &metav1.Time{Time: *state.StartTime}, and likewise set
reservation.Spec.EndTime = nil when state.EndTime == nil otherwise set to
&metav1.Time{Time: *state.EndTime}; keep using the existing patch =
client.MergeFrom(reservation.DeepCopy()) and log context (reservation,
availabilityZone, startTime, endTime).

In `@internal/scheduling/reservations/commitments/state.go`:
- Around line 153-199: FromReservations currently seeds metadata from
reservations[0], which is unsafe; update FromReservations (the code building
CommitmentState/using extractCommitmentUUID and stateLog) to first validate and
collect the consistent subset of reservations: iterate all reservations, skip
those with nil Spec.CommittedResourceReservation or a different
extractCommitmentUUID(res.Name), and collect candidates keyed by ProjectID,
DomainID, ResourceGroup (FlavorGroupName), AvailabilityZone, StartTime/EndTime;
choose canonical metadata by selecting the most common/consistent values among
candidates (or fail if none), then construct CommitmentState from those
canonical values and sum TotalMemoryBytes only from reservations that match the
canonical metadata (logging and ignoring inconsistent ones via stateLog as
before); ensure you return an error if no valid reservations remain.

In `@internal/scheduling/reservations/commitments/utils_test.go`:
- Around line 79-84: Add a regression test to
TestExtractCommitmentUUID_NoSlotIndex that covers dashed UUIDs: call
extractCommitmentUUID with "commitment-550e8400-e29b-41d4-a716-446655440000" and
assert it returns the full dashed UUID unchanged
("550e8400-e29b-41d4-a716-446655440000"); update the test function (or add a new
test function) to include this assertion so extractCommitmentUUID is validated
for UUIDs containing dashes and no slot index.

In `@internal/scheduling/reservations/commitments/utils.go`:
- Around line 17-24: GetMaxSlotIndex is currently treating any resource name
with a numeric last token as a commitment slot; restrict parsing to commitment
names only by ensuring the name begins with the "commitment-" prefix (or that
the first token equals "commitment") before splitting/parsing the last token.
Update the block that uses parts := strings.Split(res.Name, "-") and the
subsequent strconv.Atoi call to first check strings.HasPrefix(res.Name,
"commitment-") (or parts[0] == "commitment") and only then parse the trailing
numeric index and compare to maxIndex.
- Around line 36-45: The extractCommitmentUUID function currently splits
withoutPrefix and always strips the last dash-delimited token; change it to only
remove the final token when that token is a numeric slot index: after computing
parts := strings.Split(withoutPrefix, "-"), check the last token
(parts[len(parts)-1]) for being an integer (e.g., via strconv.Atoi or
strconv.ParseInt) and only rejoin parts[:len(parts)-1] when that parse succeeds;
otherwise return withoutPrefix unchanged—update extractCommitmentUUID
accordingly.
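A sketch covering both utils.go findings. Note that the bare strconv.Atoi check suggested above would misfire on a UUID whose final group happens to be all digits (e.g. 446655440000), so this sketch assumes canonical 5-group UUIDs and only strips a sixth, purely numeric token:

```go
package main

import (
	"strconv"
	"strings"
)

// extractCommitmentUUID strips the "commitment-" prefix and removes the
// trailing token only when it is a numeric slot index. A canonical UUID has
// exactly 5 dash-separated groups, so only a sixth numeric token is treated
// as a slot index; this keeps dashed UUIDs without a slot index intact.
func extractCommitmentUUID(name string) string {
	withoutPrefix := strings.TrimPrefix(name, "commitment-")
	parts := strings.Split(withoutPrefix, "-")
	if len(parts) == 6 {
		if _, err := strconv.Atoi(parts[5]); err == nil {
			return strings.Join(parts[:5], "-")
		}
	}
	return withoutPrefix
}

// maxSlotIndex only considers names with the "commitment-" prefix, so other
// resources with a numeric last token are not mistaken for commitment slots.
func maxSlotIndex(names []string) int {
	maxIndex := -1
	for _, name := range names {
		if !strings.HasPrefix(name, "commitment-") {
			continue // not a commitment slot
		}
		parts := strings.Split(name, "-")
		if idx, err := strconv.Atoi(parts[len(parts)-1]); err == nil && idx > maxIndex {
			maxIndex = idx
		}
	}
	return maxIndex
}

func main() {
	if got := extractCommitmentUUID("commitment-550e8400-e29b-41d4-a716-446655440000"); got != "550e8400-e29b-41d4-a716-446655440000" {
		panic("dashed UUID without slot index must be returned unchanged: " + got)
	}
	if got := extractCommitmentUUID("commitment-550e8400-e29b-41d4-a716-446655440000-3"); got != "550e8400-e29b-41d4-a716-446655440000" {
		panic("numeric slot index must be stripped: " + got)
	}
	if maxSlotIndex([]string{"commitment-abc-2", "other-9"}) != 2 {
		panic("non-commitment names must be ignored")
	}
}
```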

In `@internal/scheduling/reservations/controller/controller.go`:
- Around line 357-360: The early return in reconcileAllocations() when
len(res.Spec.CommittedResourceReservation.Allocations) == 0 prevents the status
patch and leaves stale entries in
res.Status.CommittedResourceReservation.Allocations; modify
reconcileAllocations() to continue and explicitly clear or set
res.Status.CommittedResourceReservation.Allocations to an empty map/slice (or
nil) and invoke the existing status-patch path so the cleared status is
persisted when the Spec allocation set transitions from non-empty to empty;
ensure you reference and update the same
Status.CommittedResourceReservation.Allocations field and reuse the controller's
existing status patch/update logic rather than simply returning early.
- Around line 58-59: The controller currently treats a nil r.DB as an error and
calls reconcileAllocations() for every Ready reservation even when no
DatabaseSecretRef is provided; change the reconcile path so that if
DatabaseSecretRef is nil (i.e., DB integration disabled) you skip allocation
verification and do not error on a nil r.DB. Concretely: in the reconcile flow
that initializes r.DB (referencing DatabaseSecretRef and the DB init block),
only attempt DB initialization and return an error if DatabaseSecretRef is
present and DB init fails; likewise, guard calls to reconcileAllocations() so
they run only when DatabaseSecretRef != nil (or r.DB != nil after init), leaving
pre-allocated reservations untouched when DB creds are absent.

In `@internal/scheduling/reservations/flavor_groups.go`:
- Around line 32-34: The Get() implementation in flavor_groups.go currently
treats a Kubernetes NotFound error as a hard failure; change it to normalize
NotFound to the "not ready" semantics by returning nil, nil when
apierrors.IsNotFound(err) (or equivalent k8s NotFound check) is true, while
preserving the existing fmt.Errorf(...) return for other errors; this ensures
callers like GetAllFlavorGroups and syncer.go can rely on knowledgeCRD == nil to
indicate not-ready instead of receiving an error.
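A self-contained sketch of the normalization; errNotFound and the injected fetch stand in for the real Kubernetes NotFound error (apierrors.IsNotFound) and client Get call:

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for a Kubernetes NotFound error; real code would
// check apierrors.IsNotFound(err) instead.
var errNotFound = errors.New("not found")

type knowledgeCRD struct{ Name string }

// get normalizes NotFound to (nil, nil) so callers like GetAllFlavorGroups
// can treat a nil CRD as "not ready" rather than as a hard failure.
func get(fetch func() (*knowledgeCRD, error)) (*knowledgeCRD, error) {
	crd, err := fetch()
	if err != nil {
		if errors.Is(err, errNotFound) {
			return nil, nil // not ready yet, not an error
		}
		return nil, fmt.Errorf("getting knowledge CRD: %w", err)
	}
	return crd, nil
}

func main() {
	// NotFound normalizes to (nil, nil).
	if crd, err := get(func() (*knowledgeCRD, error) { return nil, errNotFound }); crd != nil || err != nil {
		panic("NotFound must normalize to nil, nil")
	}
	// Other errors are still surfaced.
	if _, err := get(func() (*knowledgeCRD, error) { return nil, errors.New("boom") }); err == nil {
		panic("other errors must still be returned")
	}
}
```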

In `@tools/spawner/cli/cli.go`:
- Around line 105-108: The formatting callback f inside ChooseServerGroup uses
sg.ID[:5] which can panic on short or empty IDs; modify f to safely truncate
sg.ID by checking its length (e.g., compute n := min(len(sg.ID), 5) or use a
shortID helper) and use sg.ID[:n] (or a conditional fallback like sg.ID) in the
fmt.Sprintf call so malformed IDs cannot cause a slice out-of-range panic.
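The shortID helper mentioned above could look like this minimal sketch:

```go
package main

// shortID safely truncates an ID for display, avoiding a slice
// out-of-range panic on short or empty IDs.
func shortID(id string, n int) string {
	if len(id) < n {
		return id
	}
	return id[:n]
}

func main() {
	if shortID("", 5) != "" {
		panic("empty ID must not panic and must pass through")
	}
	if shortID("abc", 5) != "abc" {
		panic("short ID must be returned as-is")
	}
	if shortID("abcdefgh", 5) != "abcde" {
		panic("long ID must be truncated to n characters")
	}
}
```

The formatting callback can then use shortID(sg.ID, 5) in its fmt.Sprintf call.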

In `@tools/spawner/main.go`:
- Around line 423-444: Pre-filter getServerGroupsResponse.ServerGroups to only
include entries where strings.HasPrefix(sg.Name, prefix), then use that filtered
slice for the prompt, the deletion loop (spawn goroutines via wg.Go) and for the
final log message; if the filtered slice is empty skip prompting and deletion
entirely or print a message stating no server groups matched the prefix; update
uses of sg.Name, sg.ID, wg, and projectCompute.Delete to operate on the filtered
list so the prompt and "Deleted ..." message accurately reflect what was
actually deleted.
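The pre-filter step can be sketched as below; serverGroup is a stand-in for the spawner's response type, and the same filtered slice would feed the prompt, the deletion loop, and the final log:

```go
package main

import "strings"

type serverGroup struct{ ID, Name string }

// filterByPrefix returns only the groups whose name has the given prefix,
// so prompting, deletion, and the final message all operate on one set.
func filterByPrefix(groups []serverGroup, prefix string) []serverGroup {
	filtered := make([]serverGroup, 0, len(groups))
	for _, sg := range groups {
		if strings.HasPrefix(sg.Name, prefix) {
			filtered = append(filtered, sg)
		}
	}
	return filtered
}

func main() {
	groups := []serverGroup{{"1", "spawner-a"}, {"2", "other"}, {"3", "spawner-b"}}
	if got := filterByPrefix(groups, "spawner-"); len(got) != 2 {
		panic("expected 2 matching groups")
	}
	// An empty result means the caller should skip prompting and deletion.
	if len(filterByPrefix(groups, "zzz")) != 0 {
		panic("no matches should yield an empty slice")
	}
}
```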

---

Outside diff comments:
In `@helm/dev/cortex-prometheus-operator/Chart.yaml`:
- Around line 9-13: The Chart.yaml dependencies block is missing the required
local chart source comment; edit the dependencies section in Chart.yaml near the
kube-prometheus-stack entry (the dependency with name: kube-prometheus-stack,
repository: oci://ghcr.io/prometheus-community/charts, version: 82.10.3) and add
the mandatory comment line "# from: file://../../library/cortex-postgres"
directly above or next to that dependency declaration so the file includes the
required local chart path marker.

In `@internal/scheduling/reservations/controller/controller.go`:
- Around line 134-152: The merge base is taken after mutating res.Status which
makes client.MergeFrom(old) a no-op; call res.DeepCopy() to capture old before
modifying res.Status (use old := res.DeepCopy(); patch :=
client.MergeFrom(old)), then set res.Status.Host = res.Spec.TargetHost and call
r.Status().Patch(ctx, &res, patch) as before, preserving the existing error
handling for client.IgnoreNotFound(err).
- Around line 183-199: The memory and CPU conversions use the wrong scaling:
change memory conversion in the block that sets memoryMB (reading from
res.Spec.Resources["memory"]) to use memory.ScaledValue(resource.Mebi) so you
get MiB (8192 for 8GiB), and change CPU conversion (reading from
res.Spec.Resources["cpu"]) to compute whole vCPUs by taking
cpuQuantity.ScaledValue(resource.Milli) and dividing by 1000 (e.g., vcpus :=
uint64(cpuValue / 1000)); apply the same fixes to the other conversion block
later (lines ~251-259) so values match how reservation_manager.go stores bytes
and whole CPUs and validate the resulting integers before returning.
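The intended scaling can be illustrated without the apimachinery dependency; these helpers mirror quantity.ScaledValue(resource.Mebi) and the milli-to-whole-vCPU division as plain arithmetic:

```go
package main

// bytesToMiB converts a byte count to whole MiB, matching
// memory.ScaledValue(resource.Mebi): 8 GiB -> 8192 MiB.
func bytesToMiB(b int64) int64 { return b / (1 << 20) }

// milliCPUToVCPUs converts milli-CPUs (cpuQuantity.ScaledValue(resource.Milli))
// to whole vCPUs: 4000m -> 4.
func milliCPUToVCPUs(milli int64) uint64 { return uint64(milli / 1000) }

func main() {
	if bytesToMiB(8<<30) != 8192 {
		panic("8 GiB should convert to 8192 MiB")
	}
	if milliCPUToVCPUs(4000) != 4 {
		panic("4000m should convert to 4 vCPUs")
	}
}
```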

---

Nitpick comments:
In `@internal/knowledge/extractor/controller.go`:
- Line 212: Replace the use of the older interface{} type with the modern alias
any for the JSON comparison variables: update the declaration of oldData and
newData (currently declared as interface{}) to use any so the code conforms to
repo conventions and Go idioms.

In `@internal/scheduling/lib/detector_pipeline.go`:
- Around line 82-83: The mutex currently held by lock.Lock() with defer
lock.Unlock() spans too much work (including logging); narrow the critical
section by replacing defer unlock with an explicit lock.Unlock() immediately
after the shared-state writes so logging happens outside the lock. Locate the
lock usage (lock.Lock()/defer lock.Unlock()) in detector_pipeline.go (the same
pattern around lines 82–83 and 85–90), perform the shared-data updates while
locked, call lock.Unlock(), then do any logging or non-shared work outside the
lock to reduce contention.
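The narrowed critical section looks like this sketch (the pipelineState type is illustrative, not the real detector pipeline struct):

```go
package main

import (
	"fmt"
	"sync"
)

type pipelineState struct {
	mu      sync.Mutex
	results map[string]int
}

// record narrows the critical section: the shared map is updated under the
// lock, then the lock is released explicitly before any logging happens,
// instead of holding it via defer for the whole function.
func (p *pipelineState) record(name string, weight int) {
	p.mu.Lock()
	p.results[name] = weight
	p.mu.Unlock() // release before logging to reduce contention

	fmt.Printf("recorded %s=%d\n", name, weight)
}

func main() {
	p := &pipelineState{results: map[string]int{}}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			p.record(fmt.Sprintf("host%d", i), i)
		}(i)
	}
	wg.Wait()
	if len(p.results) != 4 {
		panic("all concurrent writes should be recorded")
	}
}
```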

In `@internal/scheduling/reservations/commitments/api_change_commitments_test.go`:
- Around line 113-117: Replace the bytes.Contains calls that check the response
text with strings.Contains since bodyStr is already a string: update the two
assertions that currently call bytes.Contains([]byte(bodyStr), []byte("...")) to
strings.Contains(bodyStr, "...") (referencing the bodyStr variable and the two
checks for "Version mismatch" and "12345"); also add "strings" to the test
imports and remove "bytes" if it becomes unused.
- Around line 24-223: Tests TestHandleChangeCommitments_VersionMismatch,
TestHandleChangeCommitments_DryRun and
TestProcessCommitmentChanges_KnowledgeNotReady repeat setup and request
boilerplate; refactor them into a single table-driven test that iterates cases
(e.g., name, initial Knowledge object or nil, request JSON, expected HTTP
status, expected content-type, expected response fields) and calls
HTTPAPI.HandleChangeCommitments for each row; extract common setup into reusable
helpers (scheme creation with v1alpha1.AddToScheme, fake client builder,
building API struct) and reuse the same httptest request/recorder flow inside
the loop, using unique symbols TestHandleChangeCommitments_Table (or consolidate
into TestHandleChangeCommitments) and keep assertions per-case driven by fields
in the table to preserve the existing checks for version mismatch, dry-run
rejection and "caches not ready" responses.

In `@internal/scheduling/reservations/commitments/api_change_commitments.go`:
- Line 225: The variable name time_start violates Go camelCase conventions;
rename the variable time_start to timeStart where it is declared (e.g., in the
block containing the time_start := time.Now() statement) and update every
use/reference in the same function/scope (search for time_start) to the new
identifier timeStart so the code follows Go naming style and compiles cleanly.
- Around line 163-179: The variables all_reservations and existing_reservations
use snake_case; rename them to idiomatic Go camelCase (allReservations and
existingReservations) wherever referenced in this function/method (including the
List call, the loop over all_reservations.Items, append to
existing_reservations.Items, and any subsequent uses such as where namePrefix is
constructed and where existing_reservations is later read), update matching
declarations and usages (e.g., replace var all_reservations
v1alpha1.ReservationList and var existing_reservations v1alpha1.ReservationList
and their subsequent references) so the code compiles with consistent camelCase
identifiers.
- Around line 172-179: Replace the manual slice-based prefix check with
strings.HasPrefix: in the loop that builds existing_reservations (variable
namePrefix created from commitment.UUID and iterating over
all_reservations.Items), call strings.HasPrefix(res.Name, namePrefix) instead of
checking lengths and slicing; add the "strings" import if missing and keep
appending matching reservations to existing_reservations.Items as before
(reference symbols: namePrefix, existing_reservations, all_reservations,
v1alpha1.ReservationList).

In `@internal/scheduling/reservations/commitments/api_info.go`:
- Around line 57-117: buildServiceInfo currently calls
knowledge.GetAllFlavorGroups(ctx, nil) then separately calls knowledge.Get(ctx),
causing a duplicate CRD fetch; instead call knowledge.Get(ctx) first, handle
error/nil as before (returning version -1 if knowledge not ready), then pass the
obtained CRD to knowledge.GetAllFlavorGroups(ctx, crd) to build the resources
and reuse the same CRD for computing version; update references to
knowledge.GetAllFlavorGroups and knowledge.Get in buildServiceInfo accordingly
and preserve existing error-return semantics and logging.

In `@internal/scheduling/reservations/commitments/api_report_capacity_test.go`:
- Around line 205-206: Replace all occurrences of the old empty interface type
with the Go alias any in this test file: update test struct field types and all
feature payload declarations (e.g. the emptyFeatures variable and other
map[string]interface{} uses passed into v1alpha1.BoxFeatureList and similar
calls) to use map[string]any and any instead of interface{}, and ensure function
calls like v1alpha1.BoxFeatureList receive the updated map[string]any types.

In `@internal/scheduling/reservations/commitments/capacity.go`:
- Around line 101-106: When UnboxFeatureList[compute.HostDetails] fails while
parsing knowledge.Status.Raw in capacity.go, add a log statement instead of
silently continuing: locate the block that calls
v1alpha1.UnboxFeatureList[compute.HostDetails] and on err log the error and
context (e.g., the CR name/UID or a snippet of knowledge.Status.Raw) via the
package's existing logger (e.g., logger, processLogger, or the surrounding
context's logger) before continue-ing so parsing failures are visible during
debugging.
- Around line 60-84: The calculateAZCapacity placeholder leaves
AZResourceCapacityReport Capacity/Usage unset; create a tracked issue that
describes implementing capacity calculation from Reservation CRDs/host resources
and usage from VM allocations, reference the calculateAZCapacity function and
the TODOs inside it, then update the TODO comments in calculateAZCapacity (and
any mention of AZResourceCapacityReport) to include the new issue ID and clear
acceptance criteria (how to compute capacity, sources like Reservation CRDs and
host details, expected optional semantics), so future work can link back and the
code documents the tracking ticket.

In `@internal/scheduling/reservations/commitments/reservation_manager_test.go`:
- Around line 84-123: The test reservations in existingReservations include the
LabelReservationType label but omit Spec.Type; add the Type field in each
v1alpha1.ReservationSpec (e.g. set Spec.Type =
v1alpha1.ReservationTypeCommittedResource or the matching enum used with
LabelReservationType/ReservationTypeLabelCommittedResource) so the test objects
mirror real reservations; update both reservation entries in the
existingReservations slice (referencing v1alpha1.ReservationSpec and Spec.Type)
to include the correct type value.

In `@internal/scheduling/reservations/commitments/state_test.go`:
- Around line 66-253: Consolidate repetitive error-path tests into table-driven
cases: replace the separate TestFromCommitment_InvalidResourceName,
TestFromReservations_EmptyList, TestFromReservations_MixedCommitmentUUIDs,
TestFromReservations_NonCommittedResourceType, and
TestGetFlavorGroupNameFromResource_Invalid with one or two table-driven tests
that enumerate input name/payload and the function to call (FromCommitment,
FromReservations, getFlavorGroupNameFromResource) plus expected error; implement
a loop that runs each case, invokes the appropriate symbol (FromCommitment,
FromReservations, getFlavorGroupNameFromResource) with the given input, and
asserts an error was returned, keeping the existing helper inputs (e.g.,
commitment structs and reservation slices) as inline case fields for clarity.

In `@internal/scheduling/reservations/commitments/syncer.go`:
- Around line 196-214: The deletion loop in the syncer currently returns an
error if s.Delete(ctx, &existing) fails, which should ignore NotFound errors;
update the block that calls s.Delete (inside the loop iterating
existingReservations.Items and using extractCommitmentUUID and
activeCommitments) to treat apierrors.IsNotFound(err) as a non-fatal condition
(log at info/debug and continue) and only return for other errors, and add the
import alias apierrors "k8s.io/apimachinery/pkg/api/errors".

In `@internal/scheduling/reservations/controller/controller_test.go`:
- Around line 159-218: Extract the inline flavor-groups test data in
controller_test.go into a reusable helper function (e.g.,
createFlavorGroupKnowledge) that constructs the flavorGroups payload, marshals
it to JSON, wraps it into runtime.RawExtension and returns a *v1alpha1.Knowledge
(or its Spec/Status) ready for tests; replace the inline block that builds
flavorGroupKnowledge with a call to that helper, preserving fields like
Spec.Extractor.Name, Spec.SchedulingDomain, Status.Raw/RawLength and Conditions
so other tests can reuse the same helper as in syncer_test.go.

In `@tools/plutono/provisioning/dashboards/cortex-status.json`:
- Around line 2321-2330: The "Triggered migrations" panel is querying the same
underlying metric cortex_detector_pipeline_run_duration_seconds_count (using
delta()) as the "Deschedulings" panel (which uses rate()), which may be
redundant and the title may be inaccurate; inspect the two panels (the one
titled "Triggered migrations" with expr=sum by (error)
(delta(cortex_detector_pipeline_run_duration_seconds_count{}[2m])) and the
"Deschedulings" panel that uses
rate(cortex_detector_pipeline_run_duration_seconds_count)) and either rename
"Triggered migrations" to reflect it shows absolute count delta, change its
query to a different metric/aggregation that actually represents migrations, or
consolidate the two panels into a single panel that presents both rate and
absolute change (or remove one) so the dashboard section is not duplicating the
same underlying signal.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: be05fcb0-f1c3-4185-8275-25271ea923a2

📥 Commits

Reviewing files that changed from the base of the PR and between b460bf6 and 19ec6e6.

⛔ Files ignored due to path filters (2)
  • go.sum is excluded by !**/*.sum
  • tools/visualizer/favicon.ico is excluded by !**/*.ico
📒 Files selected for processing (85)
  • .coderabbit.yaml
  • .github/workflows/push-charts.yaml
  • .github/workflows/push-images.yaml
  • .gitignore
  • AGENTS.md
  • CODEOWNERS
  • Tiltfile
  • api/v1alpha1/knowledge_types.go
  • api/v1alpha1/reservation_types.go
  • api/v1alpha1/zz_generated.deepcopy.go
  • cmd/main.go
  • go.mod
  • helm/bundles/cortex-cinder/Chart.yaml
  • helm/bundles/cortex-cinder/values.yaml
  • helm/bundles/cortex-crds/values.yaml
  • helm/bundles/cortex-ironcore/values.yaml
  • helm/bundles/cortex-manila/Chart.yaml
  • helm/bundles/cortex-manila/values.yaml
  • helm/bundles/cortex-nova/Chart.yaml
  • helm/bundles/cortex-nova/alerts/nova.alerts.yaml
  • helm/bundles/cortex-nova/templates/knowledges_kvm.yaml
  • helm/bundles/cortex-nova/templates/pipelines_kvm.yaml
  • helm/bundles/cortex-nova/values.yaml
  • helm/bundles/cortex-pods/values.yaml
  • helm/dev/cortex-prometheus-operator/Chart.yaml
  • helm/library/cortex-postgres/Chart.yaml
  • helm/library/cortex/Chart.yaml
  • helm/library/cortex/files/crds/cortex.cloud_knowledges.yaml
  • helm/library/cortex/files/crds/cortex.cloud_reservations.yaml
  • helm/library/cortex/values.yaml
  • internal/knowledge/extractor/controller.go
  • internal/knowledge/extractor/plugins/compute/flavor_groups.go
  • internal/knowledge/extractor/plugins/compute/flavor_groups.sql
  • internal/knowledge/extractor/plugins/compute/flavor_groups_test.go
  • internal/knowledge/extractor/supported_extractors.go
  • internal/knowledge/kpis/plugins/compute/resource_capacity_kvm_test.go
  • internal/scheduling/lib/detector_monitor.go
  • internal/scheduling/lib/detector_pipeline.go
  • internal/scheduling/lib/detector_pipeline_test.go
  • internal/scheduling/nova/integration_test.go
  • internal/scheduling/nova/plugins/filters/filter_has_enough_capacity.go
  • internal/scheduling/nova/plugins/filters/filter_has_enough_capacity_test.go
  • internal/scheduling/nova/plugins/filters/filter_packed_virtqueue.go
  • internal/scheduling/nova/plugins/filters/filter_packed_virtqueue_test.go
  • internal/scheduling/nova/plugins/weighers/kvm_binpack.go
  • internal/scheduling/nova/plugins/weighers/kvm_binpack_test.go
  • internal/scheduling/nova/plugins/weighers/kvm_failover_evacuation_test.go
  • internal/scheduling/nova/plugins/weighers/kvm_prefer_smaller_hosts.go
  • internal/scheduling/nova/plugins/weighers/kvm_prefer_smaller_hosts_test.go
  • internal/scheduling/reservations/commitments/api.go
  • internal/scheduling/reservations/commitments/api_change_commitments.go
  • internal/scheduling/reservations/commitments/api_change_commitments_test.go
  • internal/scheduling/reservations/commitments/api_info.go
  • internal/scheduling/reservations/commitments/api_info_test.go
  • internal/scheduling/reservations/commitments/api_report_capacity.go
  • internal/scheduling/reservations/commitments/api_report_capacity_test.go
  • internal/scheduling/reservations/commitments/capacity.go
  • internal/scheduling/reservations/commitments/client.go
  • internal/scheduling/reservations/commitments/client_test.go
  • internal/scheduling/reservations/commitments/reservation_manager.go
  • internal/scheduling/reservations/commitments/reservation_manager_test.go
  • internal/scheduling/reservations/commitments/state.go
  • internal/scheduling/reservations/commitments/state_test.go
  • internal/scheduling/reservations/commitments/syncer.go
  • internal/scheduling/reservations/commitments/syncer_test.go
  • internal/scheduling/reservations/commitments/utils.go
  • internal/scheduling/reservations/commitments/utils_test.go
  • internal/scheduling/reservations/controller/client.go
  • internal/scheduling/reservations/controller/client_test.go
  • internal/scheduling/reservations/controller/controller.go
  • internal/scheduling/reservations/controller/controller_test.go
  • internal/scheduling/reservations/controller/monitor.go
  • internal/scheduling/reservations/controller/monitor_test.go
  • internal/scheduling/reservations/flavor_groups.go
  • postgres/Dockerfile
  • tools/plutono/provisioning/dashboards/cortex-status.json
  • tools/spawner/cli/cli.go
  • tools/spawner/main.go
  • tools/spawner/types/server_group.go
  • tools/visualizer/Dockerfile
  • tools/visualizer/app.yaml
  • tools/visualizer/nginx.conf
  • tools/visualizer/nova.html
  • tools/visualizer/role.yaml
  • tools/visualizer/shared.css
💤 Files with no reviewable changes (14)
  • internal/scheduling/reservations/controller/client_test.go
  • internal/scheduling/nova/plugins/filters/filter_packed_virtqueue_test.go
  • Tiltfile
  • helm/library/cortex/values.yaml
  • internal/scheduling/nova/plugins/filters/filter_packed_virtqueue.go
  • internal/scheduling/reservations/controller/client.go
  • internal/scheduling/reservations/commitments/client_test.go
  • tools/visualizer/role.yaml
  • tools/visualizer/nova.html
  • tools/visualizer/Dockerfile
  • tools/visualizer/nginx.conf
  • tools/visualizer/shared.css
  • helm/bundles/cortex-nova/templates/pipelines_kvm.yaml
  • tools/visualizer/app.yaml

- nova-pipeline-controllers
- nova-deschedulings-executor
- explanation-controller
- reservations-controller

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify whether novaExternalScheduler is intended to be in-pod localhost and whether overrides exist.

# 1) Locate all references to the endpoint/controller wiring.
rg -n -C3 'novaExternalScheduler|scheduler/nova/external|reservations-controller'

# 2) Check if any manifest for this component exposes/listens on 8080 in the same pod.
rg -n -C4 'containerPort:\s*8080|localhost:8080|name:\s*.*scheduler'

# 3) Check values/docs for mandatory override guidance.
rg -n -C3 'novaExternalScheduler|override|production|reservations-controller'

Repository: cobaltcore-dev/cortex

Length of output: 50378


🏁 Script executed:

# 1. Get the full cortex-nova values.yaml to understand the deployment structure
head -150 helm/bundles/cortex-nova/values.yaml | tail -50

# 2. Find the cortex-nova deployment/statefulset manifest to see what components are co-located
fd -e yaml cortex-nova | grep -i 'manifest\|template\|deploy' | head -10

Repository: cobaltcore-dev/cortex

Length of output: 1647


🏁 Script executed:

# 3. Check if there's any documentation about the reservations-controller and its endpoint
rg -l 'reservations.*controller|localhost.*8080' --type md | head -10

# 4. Look for how cortex-nova-scheduler service is defined
rg -n 'cortex-nova-scheduler' helm/bundles/cortex-nova/templates/ -A 5 -B 2

Repository: cobaltcore-dev/cortex

Length of output: 719


🏁 Script executed:

# 5. Check if the nova external scheduler API is conditionally enabled
rg -n 'NovaExternalScheduler\|nova.*external.*scheduler' cmd/ internal/ --type go | grep -i 'enable\|init\|conditional' | head -20

Repository: cobaltcore-dev/cortex

Length of output: 47


🏁 Script executed:

# Check the cortex library manager pod template to see what containers are defined
cat helm/library/cortex/templates/manager/manager.yaml | head -60

Repository: cobaltcore-dev/cortex

Length of output: 2246


🏁 Script executed:

# Look for the cortex-nova-scheduler deployment to understand what runs in it
fd -type f 'scheduler\|manager' helm/bundles/cortex-nova/templates/ | xargs grep -l 'kind:.*Deployment\|kind:.*StatefulSet'

# Check if there's a manager deployment in cortex-nova
rg -n 'kind.*Deployment|kind.*StatefulSet' helm/bundles/cortex-nova/templates/ -A 3

Repository: cobaltcore-dev/cortex

Length of output: 235


🏁 Script executed:

# Search for how cortex external API is initialized
rg -n 'external.*[Ss]cheduler.*[Aa]pi|Init.*external' internal/scheduling/nova/ --type go -B 2 -A 5 | head -40

Repository: cobaltcore-dev/cortex

Length of output: 1438


🏁 Script executed:

# Check cmd/main.go to see if both reservations-controller and nova scheduler API are initialized
rg -n 'reservations-controller\|NovaExternalScheduler\|Init.*http' cmd/main.go -B 2 -A 2 | head -60

Repository: cobaltcore-dev/cortex

Length of output: 47


🏁 Script executed:

# Check if nova scheduler API is always initialized or conditionally enabled
rg -n 'httpAPI.Init\|NewHTTPAPI\|novaExternalScheduler' cmd/main.go -B 5 -A 5

Repository: cobaltcore-dev/cortex

Length of output: 47


🏁 Script executed:

# Look at the full manager pod to see if there are init/sidecar containers
cat helm/library/cortex/templates/manager/manager.yaml | grep -A 200 'containers:' | head -100

Repository: cobaltcore-dev/cortex

Length of output: 4078


🏁 Script executed:

# Look for the main function and how components are initialized
cat cmd/main.go | head -100

Repository: cobaltcore-dev/cortex

Length of output: 3837


🏁 Script executed:

# Search more broadly for how the HTTP API server is started
rg -n 'http.Server\|ListenAndServe\|:8080' cmd/main.go -B 3 -A 3

Repository: cobaltcore-dev/cortex

Length of output: 47


🏁 Script executed:

# Find where novaExternalScheduler config is read and used
rg -n 'novaExternalScheduler' cmd/ internal/ -B 2 -A 5 | grep -v test

Repository: cobaltcore-dev/cortex

Length of output: 824


🏁 Script executed:

# Find all deployments/statefulsets in cortex-nova templates
find helm/bundles/cortex-nova/templates -name '*.yaml' -exec grep -l 'kind.*Deployment\|kind.*StatefulSet' {} \;

Repository: cobaltcore-dev/cortex

Length of output: 47


🏁 Script executed:

# Look for what the cortex-nova-scheduler service targets
cat helm/bundles/cortex-nova/templates/scheduler-service.yaml

Repository: cobaltcore-dev/cortex

Length of output: 493


🏁 Script executed:

# Check if there's a cortex-nova-scheduler deployment defined
rg -n 'cortex-nova-scheduler\|cortex-nova-scheduling' helm/bundles/cortex-nova/ -B 2 -A 2

Repository: cobaltcore-dev/cortex

Length of output: 47


🏁 Script executed:

# Look at the main.go more to see how reservations-controller and nova scheduler are wired
sed -n '450,500p' cmd/main.go

Repository: cobaltcore-dev/cortex

Length of output: 2234


🏁 Script executed:

# Find where the nova scheduler HTTP API is initialized in main.go
rg -n 'nova.*Init\|httpAPI\|ListenAndServe\|:8080' cmd/main.go -B 3 -A 3 | head -80

Repository: cobaltcore-dev/cortex

Length of output: 47


🏁 Script executed:

# Look for setup of nova controller and where the HTTP API is registered
sed -n '100,300p' cmd/main.go | grep -A 20 -B 5 nova

Repository: cobaltcore-dev/cortex

Length of output: 1190


🏁 Script executed:

# Check the entire main function for HTTP server setup
wc -l cmd/main.go

Repository: cobaltcore-dev/cortex

Length of output: 80


🏁 Script executed:

# Check the end of main.go for HTTP server startup
tail -100 cmd/main.go

Repository: cobaltcore-dev/cortex

Length of output: 3621


🏁 Script executed:

# Search for where the nova external scheduler HTTP API mux is set up
rg -n 'mux.*=\|http.NewServeMux\|httpAPI' cmd/main.go -B 2 -A 5

Repository: cobaltcore-dev/cortex

Length of output: 47


🏁 Script executed:

# Look for manager.Start or any HTTP server initialization
rg -n 'manager\.Start\|http\.Server\|ListenAndServe\|httpext' cmd/main.go -B 2 -A 5

Repository: cobaltcore-dev/cortex

Length of output: 47


🏁 Script executed:

# Search for where the mux variable is declared and populated
rg -n 'var mux|mux\s*:=|mux\.' cmd/main.go | head -30

Repository: cobaltcore-dev/cortex

Length of output: 95


🏁 Script executed:

# Look for where nova external scheduler is initialized
sed -n '200,400p' cmd/main.go | grep -B 10 -A 10 'nova\|mux'

Repository: cobaltcore-dev/cortex

Length of output: 5102


🏁 Script executed:

# Get lines around where mux is first defined
rg -n 'mux' cmd/main.go -B 3 -A 3 | head -50

Repository: cobaltcore-dev/cortex

Length of output: 1503


🏁 Script executed:

# Check if reservations-controller has any guards that ensure nova-pipeline-controllers is enabled
rg -n 'reservations-controller\|nova-pipeline-controllers' helm/bundles/cortex-nova/values.yaml -B 2 -A 2

Repository: cobaltcore-dev/cortex

Length of output: 47


🏁 Script executed:

# Look at the reservations controller code to see how it uses novaExternalScheduler endpoint
cat internal/scheduling/reservations/controller/controller.go | head -100

Repository: cobaltcore-dev/cortex

Length of output: 3689


🏁 Script executed:

# Verify that in values.yaml both controllers are enabled together
rg -n 'enabledControllers' helm/bundles/cortex-nova/values.yaml -A 10

Repository: cobaltcore-dev/cortex

Length of output: 785


🏁 Script executed:

# Check if there's any scenario where reservations-controller is enabled without nova-pipeline-controllers
rg -n 'reservations-controller' helm/ -B 5 -A 5 | grep -E 'enabledControllers|reservations-controller'

Repository: cobaltcore-dev/cortex

Length of output: 202


🏁 Script executed:

# Confirm: nova external scheduler endpoint registration is conditional on nova-pipeline-controllers
sed -n '300,320p' cmd/main.go

Repository: cobaltcore-dev/cortex

Length of output: 1044


🏁 Script executed:

# Verify: Is the nova external scheduler API registration truly guarded by nova-pipeline-controllers?
rg -B 5 'nova.NewAPI' cmd/main.go | head -20

Repository: cobaltcore-dev/cortex

Length of output: 403


🏁 Script executed:

# Check if there's any setup/validation that ensures both controllers are enabled together
rg -n 'reservations-controller' cmd/main.go -B 10 -A 10 | head -40

Repository: cobaltcore-dev/cortex

Length of output: 1155


🏁 Script executed:

# Look for any documentation or comments about the endpoint configuration
rg -n 'localhost\|novaExternalScheduler' helm/bundles/cortex-nova/ --type yaml -B 2 -A 2

Repository: cobaltcore-dev/cortex

Length of output: 47


Ensure reservations-controller is not independently enabled without nova-pipeline-controllers or document the dependency explicitly.

The reservations-controller (line 117) calls novaExternalScheduler: "http://localhost:8080/scheduler/nova/external" (line 122), which is only available when nova-pipeline-controllers is enabled. In the default bundle, both are enabled in the same pod, so this works. However, there is no validation or documentation preventing someone from enabling reservations-controller alone in a custom configuration, which would cause failures at runtime.

Either:

  • Add a validation check in main.go that requires nova-pipeline-controllers when reservations-controller is enabled, or
  • Document this implicit dependency in the values file and provide clear guidance on valid controller combinations
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@helm/bundles/cortex-nova/values.yaml` at line 117, The values.yaml currently
allows enabling reservations-controller independently even though it relies on
novaExternalScheduler provided by nova-pipeline-controllers; add a runtime
validation in main.go that checks the chart/values configuration and fails fast
if reservations-controller is enabled while nova-pipeline-controllers is
disabled (check the flags/struct fields that map to "reservations-controller"
and "nova-pipeline-controllers" and the novaExternalScheduler URL) OR,
alternatively, update values.yaml to document the dependency clearly by adding a
comment next to reservations-controller and novaExternalScheduler that
nova-pipeline-controllers must be enabled together and list valid controller
combinations; prefer adding the validation in main.go to prevent
misconfiguration.
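A fail-fast check of the kind suggested above could look like this minimal Go sketch; `validateControllers` and the plain string slice of controller names are illustrative stand-ins for however main.go actually represents its enabled controllers:

```go
package main

import (
	"fmt"
	"slices"
)

// validateControllers fails fast when reservations-controller is enabled
// without nova-pipeline-controllers, which serves the novaExternalScheduler
// endpoint it depends on. The controller names mirror the values.yaml keys;
// the function name and wiring are illustrative, not the actual main.go code.
func validateControllers(enabled []string) error {
	needsPipeline := slices.Contains(enabled, "reservations-controller")
	hasPipeline := slices.Contains(enabled, "nova-pipeline-controllers")
	if needsPipeline && !hasPipeline {
		return fmt.Errorf("reservations-controller requires nova-pipeline-controllers " +
			"(it calls the novaExternalScheduler endpoint)")
	}
	return nil
}

func main() {
	// Invalid combination: dependency missing.
	fmt.Println(validateControllers([]string{"reservations-controller"}) != nil) // true
	// Valid combination: both enabled in the same pod.
	fmt.Println(validateControllers([]string{"reservations-controller", "nova-pipeline-controllers"}) == nil) // true
}
```

Failing at startup turns a silent runtime connection error into an immediate, explainable configuration error.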

Comment on lines +15 to +16
WHERE LOWER(extra_specs) LIKE '%"capabilities:hypervisor_type":"qemu"%'
OR LOWER(extra_specs) LIKE '%"capabilities:hypervisor_type":"ch"%'

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Read-only check for schema/fixture evidence of extra_specs format variability.
# Expected: if values appear with spacing/format variants, current LIKE is fragile.
rg -n -C2 'openstack_flavors_v2|extra_specs|capabilities:hypervisor_type'

Repository: cobaltcore-dev/cortex

Length of output: 45691


🏁 Script executed:

# Check test assertions about which flavors should be selected
grep -A 20 "Cloud-Hypervisor flavor" internal/knowledge/extractor/plugins/compute/flavor_groups_test.go | head -30

# Verify if the query result filtering needs case-insensitive matching
grep -B5 -A10 "capabilities:hypervisor_type" internal/knowledge/extractor/plugins/compute/flavor_groups_test.go | grep -A10 "expectedFlavorCount\|len(.*Flavors)"

Repository: cobaltcore-dev/cortex

Length of output: 1168


🏁 Script executed:

# Check database/schema setup to verify jsonb support
fd -e "schema\|migration\|init.*sql" --type f | head -20

# Also check if there are existing jsonb operations in the codebase
rg -n "jsonb\|::\s*jsonb" --type sql

Repository: cobaltcore-dev/cortex

Length of output: 47


🏁 Script executed:

# Read the SQL file to see full context and database operations
cat -n internal/knowledge/extractor/plugins/compute/flavor_groups.sql

# Check if there are any database initialization or configuration files
fd -e "go" --type f | xargs grep -l "openstack_flavors" | head -5

Repository: cobaltcore-dev/cortex

Length of output: 810


🏁 Script executed:

# Check how the query results are used in the Go code
cat -n internal/knowledge/extractor/plugins/compute/flavor_groups.go | head -100

# Verify if there are any database compatibility notes
rg -n "postgres\|sqlite\|mysql" --type go internal/knowledge/extractor/plugins/compute/

Repository: cobaltcore-dev/cortex

Length of output: 3831


🏁 Script executed:

# Check if other SQL files in the same directory use jsonb operations
rg -n "jsonb" --type sql

# Check the database configuration to confirm it's PostgreSQL
rg -n "postgres\|driver.*postgres" --type go | head -10

# Verify what database package is being used
rg -n "sqlc\|database/sql\|sqlx\|pgx" internal/knowledge/extractor/plugins/compute/flavor_groups.go

Repository: cobaltcore-dev/cortex

Length of output: 708


Replace JSON string matching with proper jsonb extraction for robust hypervisor type filtering.

The current LIKE pattern matching on lowercased JSON is fragile: JSON formatting variations (whitespace, indentation) can cause the pattern to fail unexpectedly. Use PostgreSQL's jsonb operators for reliable extraction, consistent with patterns already used elsewhere in the codebase (see host_pinned_projects.sql).

Suggested fix
-WHERE LOWER(extra_specs) LIKE '%"capabilities:hypervisor_type":"qemu"%'
-   OR LOWER(extra_specs) LIKE '%"capabilities:hypervisor_type":"ch"%'
+WHERE LOWER(extra_specs::jsonb ->> 'capabilities:hypervisor_type') IN ('qemu', 'ch')
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/knowledge/extractor/plugins/compute/flavor_groups.sql` around lines
15 - 16, The WHERE clause is using fragile string LIKE on extra_specs JSON;
replace it with proper jsonb extraction e.g. parse extra_specs as jsonb and test
the hypervisor type via jsonb operators: use (extra_specs::jsonb ->
'capabilities' ->> 'hypervisor_type') IN ('qemu','ch') (or compare LOWER(...) if
values may vary in case) instead of the LIKE patterns so the query reliably
filters on hypervisor type.
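To illustrate why the LIKE pattern is fragile, here is a small Go sketch (`hypervisorTypeMatches` is a hypothetical helper, not project code): a single space after the JSON colon defeats the substring match, while structural extraction, analogous to the jsonb `->>` operator, still works:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// hypervisorTypeMatches parses extra_specs as JSON and compares the
// "capabilities:hypervisor_type" key, the way jsonb ->> extraction would.
func hypervisorTypeMatches(extraSpecs string, wanted ...string) bool {
	var specs map[string]string
	if err := json.Unmarshal([]byte(extraSpecs), &specs); err != nil {
		return false
	}
	v := strings.ToLower(specs["capabilities:hypervisor_type"])
	for _, w := range wanted {
		if v == w {
			return true
		}
	}
	return false
}

func main() {
	// Whitespace after the colon defeats the substring LIKE-style pattern...
	spaced := `{"capabilities:hypervisor_type": "qemu"}`
	fmt.Println(strings.Contains(strings.ToLower(spaced), `"capabilities:hypervisor_type":"qemu"`)) // false
	// ...but a structural comparison still matches.
	fmt.Println(hypervisorTypeMatches(spaced, "qemu", "ch")) // true
}
```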

Comment on lines 100 to 137
{
name: "zero weights should raise error",
opts: KVMBinpackStepOpts{
ResourceWeights: map[corev1.ResourceName]float64{
corev1.ResourceMemory: 0.0,
corev1.ResourceCPU: 0.0,
ResourceWeights: map[hv1.ResourceName]float64{
hv1.ResourceMemory: 0.0,
hv1.ResourceCPU: 0.0,
},
},
wantErr: true,
},
{
name: "valid opts with only memory weight",
opts: KVMBinpackStepOpts{
ResourceWeights: map[corev1.ResourceName]float64{
corev1.ResourceMemory: 2.0,
ResourceWeights: map[hv1.ResourceName]float64{
hv1.ResourceMemory: 2.0,
},
},
wantErr: false,
},
{
name: "valid opts with only cpu weight",
opts: KVMBinpackStepOpts{
ResourceWeights: map[corev1.ResourceName]float64{
corev1.ResourceCPU: 0.5,
ResourceWeights: map[hv1.ResourceName]float64{
hv1.ResourceCPU: 0.5,
},
},
wantErr: false,
},
{
name: "zero weights should raise error",
opts: KVMBinpackStepOpts{
ResourceWeights: map[corev1.ResourceName]float64{
corev1.ResourceMemory: 0.0,
corev1.ResourceCPU: 0.0,
ResourceWeights: map[hv1.ResourceName]float64{
hv1.ResourceMemory: 0.0,
hv1.ResourceCPU: 0.0,
},
},
wantErr: true,
},

⚠️ Potential issue | 🟡 Minor

Duplicate test case for zero weights.

Lines 100-109 and 128-137 both test "zero weights should raise error" with identical configurations. One instance should be removed.

Remove duplicate test case
 		{
 			name: "zero weights should raise error",
 			opts: KVMBinpackStepOpts{
 				ResourceWeights: map[hv1.ResourceName]float64{
 					hv1.ResourceMemory: 0.0,
 					hv1.ResourceCPU:    0.0,
 				},
 			},
 			wantErr: true,
 		},
-		{
-			name: "zero weights should raise error",
-			opts: KVMBinpackStepOpts{
-				ResourceWeights: map[hv1.ResourceName]float64{
-					hv1.ResourceMemory: 0.0,
-					hv1.ResourceCPU:    0.0,
-				},
-			},
-			wantErr: true,
-		},
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/nova/plugins/weighers/kvm_binpack_test.go` around lines
100 - 137, Remove the duplicated table test entry that repeats the "zero weights
should raise error" case; locate the duplicate by the test case name "zero
weights should raise error" in the KVMBinpackStepOpts table (the second
occurrence) and delete that entire test struct so the table contains only one
"zero weights should raise error" case and the other unique cases remain
unchanged.
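Beyond the one-off fix, a tiny guard like the following (`duplicateNames` is a hypothetical helper, not part of the reviewed code) can catch duplicated table-test names mechanically:

```go
package main

import "fmt"

// duplicateNames reports test-table names that occur more than once, a
// quick guard against the kind of copy-paste duplication flagged above.
func duplicateNames(names []string) []string {
	seen := map[string]int{}
	var dups []string
	for _, n := range names {
		seen[n]++
		if seen[n] == 2 { // report each duplicate name once
			dups = append(dups, n)
		}
	}
	return dups
}

func main() {
	names := []string{
		"zero weights should raise error",
		"valid opts with only memory weight",
		"valid opts with only cpu weight",
		"zero weights should raise error",
	}
	fmt.Println(duplicateNames(names)) // [zero weights should raise error]
}
```

A check like this could run at the top of the table-driven test itself, failing the test when two cases share a name.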

Comment on lines +98 to +99
func (api *HTTPAPI) processCommitmentChanges(w http.ResponseWriter, log logr.Logger, req liquid.CommitmentChangeRequest, resp *liquid.CommitmentChangeResponse) error {
ctx := context.Background()

⚠️ Potential issue | 🟠 Major

Use request context instead of context.Background().

Using context.Background() ignores the request's context, which means the operation won't be cancelled if the client disconnects. Consider using r.Context() passed through to this method.

Proposed fix to propagate request context
-func (api *HTTPAPI) processCommitmentChanges(w http.ResponseWriter, log logr.Logger, req liquid.CommitmentChangeRequest, resp *liquid.CommitmentChangeResponse) error {
-	ctx := context.Background()
+func (api *HTTPAPI) processCommitmentChanges(ctx context.Context, w http.ResponseWriter, log logr.Logger, req liquid.CommitmentChangeRequest, resp *liquid.CommitmentChangeResponse) error {
 	manager := NewReservationManager(api.client)

And update the call site in HandleChangeCommitments:

-	if err := api.processCommitmentChanges(w, log, req, &resp); err != nil {
+	if err := api.processCommitmentChanges(r.Context(), w, log, req, &resp); err != nil {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/commitments/api_change_commitments.go`
around lines 98 - 99, The function processCommitmentChanges currently uses
context.Background(); change its signature to accept a context.Context (e.g.,
processCommitmentChanges(ctx context.Context, w http.ResponseWriter, log
logr.Logger, req liquid.CommitmentChangeRequest, resp
*liquid.CommitmentChangeResponse) error) and replace context.Background() with
the passed ctx so cancellations propagate; then update the caller
HandleChangeCommitments to pass r.Context() into processCommitmentChanges when
invoking it. Ensure any downstream calls inside processCommitmentChanges that
create derived contexts use the passed ctx as parent.

Comment on lines +58 to +59
// Secret ref to the database credentials for querying VM state.
DatabaseSecretRef *corev1.SecretReference `json:"databaseSecretRef,omitempty"`

⚠️ Potential issue | 🟠 Major

Skip allocation verification when the DB integration is disabled.

DatabaseSecretRef is optional, but every Ready reservation with allocations still enters reconcileAllocations(). In the same file, Lines 428-435 only initialize r.DB when that secret exists, and Lines 441-443 turn a nil DB into an error, so pre-allocated reservations will fail and requeue forever on deployments without DB credentials.

Also applies to: 85-95, 425-443

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/controller/controller.go` around lines 58 -
59, The controller currently treats a nil r.DB as an error and calls
reconcileAllocations() for every Ready reservation even when no
DatabaseSecretRef is provided; change the reconcile path so that if
DatabaseSecretRef is nil (i.e., DB integration disabled) you skip allocation
verification and do not error on a nil r.DB. Concretely: in the reconcile flow
that initializes r.DB (referencing DatabaseSecretRef and the DB init block),
only attempt DB initialization and return an error if DatabaseSecretRef is
present and DB init fails; likewise, guard calls to reconcileAllocations() so
they run only when DatabaseSecretRef != nil (or r.DB != nil after init), leaving
pre-allocated reservations untouched when DB creds are absent.
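A minimal sketch of the suggested guard, with illustrative names in place of the real controller and Kubernetes types:

```go
package main

import "fmt"

// reconciler sketch: DB-backed allocation verification is optional and keyed
// off DatabaseSecretRef; with no secret ref the step is skipped instead of
// treating the nil DB as an error. Field and method names are illustrative.
type reconciler struct {
	databaseSecretRef *string // stand-in for *corev1.SecretReference
}

func (r *reconciler) reconcile() (string, error) {
	if r.databaseSecretRef == nil {
		// DB integration disabled: leave pre-allocated reservations untouched.
		return "skipped allocation verification (no DB configured)", nil
	}
	// ...initialize the DB client from the secret and verify allocations here...
	return "verified allocations", nil
}

func main() {
	noDB := &reconciler{}
	out, err := noDB.reconcile()
	fmt.Println(out, err == nil) // skipped allocation verification (no DB configured) true
}
```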

Comment on lines +357 to +360
// Skip if no allocations to verify
if len(res.Spec.CommittedResourceReservation.Allocations) == 0 {
log.Info("no allocations to verify", "reservation", res.Name)
return nil

⚠️ Potential issue | 🟠 Major

Clear stale status allocations when the spec allocation set becomes empty.

This early return skips the only status-patch path in reconcileAllocations(). If Spec.CommittedResourceReservation.Allocations transitions from non-empty to empty, the previous Status.CommittedResourceReservation.Allocations map is left behind indefinitely.

Also applies to: 401-420

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/controller/controller.go` around lines 357 -
360, The early return in reconcileAllocations() when
len(res.Spec.CommittedResourceReservation.Allocations) == 0 prevents the status
patch and leaves stale entries in
res.Status.CommittedResourceReservation.Allocations; modify
reconcileAllocations() to continue and explicitly clear or set
res.Status.CommittedResourceReservation.Allocations to an empty map/slice (or
nil) and invoke the existing status-patch path so the cleared status is
persisted when the Spec allocation set transitions from non-empty to empty;
ensure you reference and update the same
Status.CommittedResourceReservation.Allocations field and reuse the controller's
existing status patch/update logic rather than simply returning early.
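The convergence the comment asks for can be sketched as: the status set always mirrors the spec set, so an empty spec clears stale entries rather than being skipped (`syncAllocations` is a hypothetical stand-in for the status-patch path):

```go
package main

import "fmt"

// syncAllocations mirrors the suggested behavior: the status map converges
// to the spec set on every reconcile, including when spec becomes empty,
// so stale entries are dropped instead of left behind by an early return.
func syncAllocations(spec map[string]int) map[string]int {
	status := make(map[string]int, len(spec))
	for k, v := range spec {
		status[k] = v
	}
	return status // empty spec yields an empty status, clearing stale keys
}

func main() {
	fmt.Println(len(syncAllocations(map[string]int{"host-a": 2}))) // 1
	fmt.Println(len(syncAllocations(nil)))                         // 0
}
```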

Comment on lines +32 to +34
if err != nil {
return nil, fmt.Errorf("failed to get flavor groups knowledge: %w", err)
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Inspect all call sites to confirm they treat "not ready" as a non-fatal condition.
rg -n --type go 'GetAllFlavorGroups\s*\(' -C2
rg -n --type go 'Get\s*\(\s*ctx\s*context\.Context\s*\)\s*\(\*v1alpha1\.Knowledge,\s*error\)' -C4

Repository: cobaltcore-dev/cortex

Length of output: 4379


🏁 Script executed:

# Read the full flavor_groups.go file to understand implementation and license header
cat -n internal/scheduling/reservations/flavor_groups.go

Repository: cobaltcore-dev/cortex

Length of output: 2764


🏁 Script executed:

# Check error handling at specific call sites to understand expected behavior
sed -n '30,40p' internal/scheduling/reservations/commitments/capacity.go
sed -n '59,70p' internal/scheduling/reservations/commitments/api_info.go
sed -n '210,225p' internal/scheduling/reservations/controller/controller.go

Repository: cobaltcore-dev/cortex

Length of output: 1478


🏁 Script executed:

# Check if apierrors is already imported elsewhere in the codebase
rg -l "apierrors\|api/errors" --type go | head -20

# Check for any other direct callers of the Get method (not GetAllFlavorGroups)
rg -n "\.Get\(" internal/scheduling/reservations/flavor_groups.go -C2
rg -n "\\.Get\\(" --type go | grep -E "(FlavorGroupKnowledgeClient|knowledge\.Get)" | head -20

Repository: cobaltcore-dev/cortex

Length of output: 966


🏁 Script executed:

# Check the direct callers of Get() to understand their error handling expectations
sed -n '100,115p' internal/scheduling/reservations/commitments/api_info.go
sed -n '111,130p' internal/scheduling/reservations/commitments/api_change_commitments.go
sed -n '130,155p' internal/scheduling/reservations/commitments/syncer.go

# Also check if apierrors is already used in the codebase
rg "apierrors\|k8s.io/apimachinery/pkg/api/errors" --type go | head -10

Repository: cobaltcore-dev/cortex

Length of output: 2311


🏁 Script executed:

# Verify if apierrors is already commonly used in the k8s client import pattern
rg "k8s.io/apimachinery/pkg/api" --type go | head -20

Repository: cobaltcore-dev/cortex

Length of output: 2041


Treat NotFound as not-ready instead of a hard failure.

The Get() method's docstring states "Returns nil, nil if not ready yet," but the implementation returns an error when the Knowledge CRD is not found. This breaks the intended contract: syncer.go (lines 135-144) relies on the nil semantics, checking whether knowledgeCRD == nil, but currently receives an error instead. Moreover, during bootstrap, NotFound is a transient condition, not a hard failure.

The fix normalizes NotFound to nil, allowing GetAllFlavorGroups to return the consistent "not ready" error message (line 55).

Proposed fix
 import (
 	"context"
 	"errors"
 	"fmt"
 
 	"github.com/cobaltcore-dev/cortex/api/v1alpha1"
 	"github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute"
+	apierrors "k8s.io/apimachinery/pkg/api/errors"
 	"k8s.io/apimachinery/pkg/api/meta"
 	"k8s.io/apimachinery/pkg/types"
 	"sigs.k8s.io/controller-runtime/pkg/client"
 )
 
 	if err != nil {
+		if apierrors.IsNotFound(err) {
+			return nil, nil
+		}
 		return nil, fmt.Errorf("failed to get flavor groups knowledge: %w", err)
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/reservations/flavor_groups.go` around lines 32 - 34, The
Get() implementation in flavor_groups.go currently treats a Kubernetes NotFound
error as a hard failure; change it to normalize NotFound to the "not ready"
semantics by returning nil, nil when apierrors.IsNotFound(err) (or equivalent
k8s NotFound check) is true, while preserving the existing fmt.Errorf(...)
return for other errors; this ensures callers like GetAllFlavorGroups and
syncer.go can rely on knowledgeCRD == nil to indicate not-ready instead of
receiving an error.

Comment on lines +105 to +108
func (c *cli) ChooseServerGroup(sgs []types.ServerGroup) types.ServerGroup {
f := func(sg types.ServerGroup) string {
return fmt.Sprintf("%s (%s) id:%s", sg.Name, sg.Policy, sg.ID[:5])
}

⚠️ Potential issue | 🟡 Minor

Guard short IDs before slicing to avoid panic.

Line 107 uses sg.ID[:5]; malformed/empty IDs would panic. Add a safe truncation guard before formatting.

Suggested fix
 func (c *cli) ChooseServerGroup(sgs []types.ServerGroup) types.ServerGroup {
 	f := func(sg types.ServerGroup) string {
-		return fmt.Sprintf("%s (%s) id:%s", sg.Name, sg.Policy, sg.ID[:5])
+		idPreview := sg.ID
+		if len(idPreview) > 5 {
+			idPreview = idPreview[:5]
+		}
+		return fmt.Sprintf("%s (%s) id:%s", sg.Name, sg.Policy, idPreview)
 	}
 	return choose(c.defaults, "WS_SERVER_GROUP", "📂 Server Groups", sgs, f)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/spawner/cli/cli.go` around lines 105 - 108, The formatting callback f
inside ChooseServerGroup uses sg.ID[:5] which can panic on short or empty IDs;
modify f to safely truncate sg.ID by checking its length (e.g., compute n :=
min(len(sg.ID), 5) or use a shortID helper) and use sg.ID[:n] (or a conditional
fallback like sg.ID) in the fmt.Sprintf call so malformed IDs cannot cause a
slice out-of-range panic.
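A standalone sketch of the safe-truncation guard (`shortID` is an illustrative helper name; the suggestion above inlines the same check):

```go
package main

import "fmt"

// shortID returns at most n leading bytes of id, avoiding the
// out-of-range panic that an unconditional id[:n] raises on short
// or empty IDs.
func shortID(id string, n int) string {
	if len(id) > n {
		return id[:n]
	}
	return id
}

func main() {
	fmt.Println(shortID("0123456789abcdef", 5)) // 01234
	fmt.Println(shortID("ab", 5))               // ab
	fmt.Println(shortID("", 5))                 // (empty, no panic)
}
```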

Comment on lines +423 to +444
if len(getServerGroupsResponse.ServerGroups) > 0 {
fmt.Printf("❓ Delete existing server groups with name prefix %s [y/N, default: \033[1;34my\033[0m]: ", prefix)
reader = bufio.NewReader(os.Stdin)
input = must.Return(reader.ReadString('\n'))
input = strings.TrimSpace(input)
if input == "" {
input = "y"
}
if input == "y" {
var wg sync.WaitGroup
for _, sg := range getServerGroupsResponse.ServerGroups {
if strings.HasPrefix(sg.Name, prefix) {
wg.Go(func() {
fmt.Printf("🧨 Deleting server group %s\n", sg.Name)
_ = must.Return(projectCompute.Delete(ctx, projectCompute.Endpoint+"/os-server-groups/"+sg.ID, nil))
fmt.Printf("💥 Deleted server group %s\n", sg.Name)
})
}
}
wg.Wait()
fmt.Println("🧨 Deleted all existing server groups")
}

⚠️ Potential issue | 🟡 Minor

Prefix-delete prompt/result messaging is currently misleading.

At line 423 the prompt is shown whenever any server groups exist, but deletion only applies to groups matching the prefix (line 434). Line 443 then prints "Deleted all existing server groups" even if none matched the prefix. Pre-filter first, then prompt and log against that filtered set.

Suggested adjustment
- if len(getServerGroupsResponse.ServerGroups) > 0 {
+ var serverGroupsWithPrefix []types.ServerGroup
+ for _, sg := range getServerGroupsResponse.ServerGroups {
+     if strings.HasPrefix(sg.Name, prefix) {
+         serverGroupsWithPrefix = append(serverGroupsWithPrefix, sg)
+     }
+ }
+ if len(serverGroupsWithPrefix) > 0 {
      fmt.Printf("❓ Delete existing server groups with name prefix %s [y/N, default: \033[1;34my\033[0m]: ", prefix)
      ...
      if input == "y" {
          var wg sync.WaitGroup
-         for _, sg := range getServerGroupsResponse.ServerGroups {
-             if strings.HasPrefix(sg.Name, prefix) {
-                 wg.Go(func() {
-                     ...
-                 })
-             }
+         for _, sg := range serverGroupsWithPrefix {
+             wg.Go(func() {
+                 ...
+             })
          }
          wg.Wait()
-         fmt.Println("🧨 Deleted all existing server groups")
+         fmt.Println("🧨 Deleted all matching server groups")
      }
  }
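The pre-filter step from the adjustment above, sketched standalone with a minimal stand-in for types.ServerGroup:

```go
package main

import (
	"fmt"
	"strings"
)

// ServerGroup is a minimal stand-in for types.ServerGroup.
type ServerGroup struct {
	ID, Name string
}

// filterByPrefix pre-selects the server groups that would actually be
// deleted, so the prompt and the final log line describe the same set.
func filterByPrefix(groups []ServerGroup, prefix string) []ServerGroup {
	var out []ServerGroup
	for _, sg := range groups {
		if strings.HasPrefix(sg.Name, prefix) {
			out = append(out, sg)
		}
	}
	return out
}

func main() {
	groups := []ServerGroup{{"1", "spawner-a"}, {"2", "other-b"}, {"3", "spawner-c"}}
	matched := filterByPrefix(groups, "spawner-")
	fmt.Println(len(matched)) // 2
}
```

With the filtered slice computed up front, an empty result can skip the prompt entirely and the closing message can honestly say "matching" rather than "all existing".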
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if len(getServerGroupsResponse.ServerGroups) > 0 {
	fmt.Printf("❓ Delete existing server groups with name prefix %s [y/N, default: \033[1;34my\033[0m]: ", prefix)
	reader = bufio.NewReader(os.Stdin)
	input = must.Return(reader.ReadString('\n'))
	input = strings.TrimSpace(input)
	if input == "" {
		input = "y"
	}
	if input == "y" {
		var wg sync.WaitGroup
		for _, sg := range getServerGroupsResponse.ServerGroups {
			if strings.HasPrefix(sg.Name, prefix) {
				wg.Go(func() {
					fmt.Printf("🧨 Deleting server group %s\n", sg.Name)
					_ = must.Return(projectCompute.Delete(ctx, projectCompute.Endpoint+"/os-server-groups/"+sg.ID, nil))
					fmt.Printf("💥 Deleted server group %s\n", sg.Name)
				})
			}
		}
		wg.Wait()
		fmt.Println("🧨 Deleted all existing server groups")
	}
var serverGroupsWithPrefix []types.ServerGroup
for _, sg := range getServerGroupsResponse.ServerGroups {
	if strings.HasPrefix(sg.Name, prefix) {
		serverGroupsWithPrefix = append(serverGroupsWithPrefix, sg)
	}
}
if len(serverGroupsWithPrefix) > 0 {
	fmt.Printf("❓ Delete existing server groups with name prefix %s [y/N, default: \033[1;34my\033[0m]: ", prefix)
	reader = bufio.NewReader(os.Stdin)
	input = must.Return(reader.ReadString('\n'))
	input = strings.TrimSpace(input)
	if input == "" {
		input = "y"
	}
	if input == "y" {
		var wg sync.WaitGroup
		for _, sg := range serverGroupsWithPrefix {
			wg.Go(func() {
				fmt.Printf("🧨 Deleting server group %s\n", sg.Name)
				_ = must.Return(projectCompute.Delete(ctx, projectCompute.Endpoint+"/os-server-groups/"+sg.ID, nil))
				fmt.Printf("💥 Deleted server group %s\n", sg.Name)
			})
		}
		wg.Wait()
		fmt.Println("🧨 Deleted all matching server groups")
	}
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/spawner/main.go` around lines 423 - 444: pre-filter
getServerGroupsResponse.ServerGroups to only include entries where
strings.HasPrefix(sg.Name, prefix), then use that filtered slice for the prompt,
the deletion loop (spawning goroutines via wg.Go), and the final log message. If
the filtered slice is empty, skip prompting and deletion entirely, or print a
message stating that no server groups matched the prefix. Update uses of sg.Name,
sg.ID, wg, and projectCompute.Delete to operate on the filtered list so that the
prompt and the "Deleted ..." message accurately reflect what was actually deleted.
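The filter-first pattern this prompt describes can be sketched as a standalone program. The `serverGroup` type and `filterByPrefix` helper here are hypothetical stand-ins trimmed to the essentials, not the actual spawner code:

```go
package main

import (
	"fmt"
	"strings"
)

// serverGroup mirrors only the fields the example needs; the real
// spawner type carries more.
type serverGroup struct {
	ID   string
	Name string
}

// filterByPrefix returns only the groups whose name starts with prefix.
// Filtering before prompting keeps the prompt, the deletions, and the
// final log message all referring to the same set.
func filterByPrefix(groups []serverGroup, prefix string) []serverGroup {
	var out []serverGroup
	for _, sg := range groups {
		if strings.HasPrefix(sg.Name, prefix) {
			out = append(out, sg)
		}
	}
	return out
}

func main() {
	groups := []serverGroup{
		{ID: "1", Name: "cortex-a"},
		{ID: "2", Name: "other-b"},
	}
	matched := filterByPrefix(groups, "cortex-")
	if len(matched) == 0 {
		fmt.Println("no server groups matched the prefix")
		return
	}
	for _, sg := range matched {
		fmt.Printf("would delete server group %s (%s)\n", sg.Name, sg.ID)
	}
	fmt.Println("deleted all matching server groups")
}
```

Because the prompt and the summary line both operate on `matched`, they can never claim more deletions than actually happened.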

@umswmayj umswmayj changed the title Release: Bump Postgres Release: Bump bundels to 0.0.39, core to 0.0.26, postgres 0.5.13 Mar 17, 2026
@umswmayj umswmayj changed the title Release: Bump bundels to 0.0.39, core to 0.0.26, postgres 0.5.13 Release: Bump bundlss to 0.0.39, core to 0.0.26, postgres 0.5.13 Mar 17, 2026
@umswmayj umswmayj changed the title Release: Bump bundlss to 0.0.39, core to 0.0.26, postgres 0.5.13 Release: Bump bundles to 0.0.39, core to 0.0.26, postgres 0.5.13 Mar 17, 2026
github-actions bot and others added 8 commits March 17, 2026 12:10
This alert has been around for some time and has so far never reported
false positives or flapped, so we can consider it stable. Since Cortex is
moving onto the critical path for Nova KVM, it's crucial that
we escalate this alert. There's also an actionable playbook for this
alert already.
In
cobaltcore-dev/openstack-hypervisor-operator#257
we introduced a new `effectiveCapacity` field on the hypervisor CRD. In
cobaltcore-dev/kvm-node-agent#70 we populate
this field. Now we can upgrade Cortex so it performs scheduling on the
overcommitted capacity.
This change adds a controller to the nova scheduling controller manager,
which finds hypervisors with specific traits and assigns desired
overcommit ratios to them.
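The matching step this controller performs can be sketched as a pure function. The trait names, ratio values, and the `overcommitRule` type below are illustrative assumptions, not the controller's actual configuration:

```go
package main

import "fmt"

// overcommitRule maps a required hypervisor trait to the overcommit
// ratio desired for hypervisors carrying that trait. Illustrative only.
type overcommitRule struct {
	Trait    string
	CPURatio float64
}

// desiredRatio returns the CPU overcommit ratio for a hypervisor with
// the given traits, or 1.0 (no overcommit) when no rule matches.
func desiredRatio(traits []string, rules []overcommitRule) float64 {
	set := make(map[string]bool, len(traits))
	for _, t := range traits {
		set[t] = true
	}
	for _, r := range rules {
		if set[r.Trait] {
			return r.CPURatio
		}
	}
	return 1.0
}

func main() {
	rules := []overcommitRule{{Trait: "CUSTOM_HIGH_DENSITY", CPURatio: 4.0}}
	fmt.Println(desiredRatio([]string{"CUSTOM_HIGH_DENSITY"}, rules)) // 4
	fmt.Println(desiredRatio([]string{"OTHER_TRAIT"}, rules))        // 1
}
```

In the real controller this decision would sit inside a reconcile loop that reads traits from the hypervisor resource and writes the ratio back to its spec or status.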
## Changes
- Adjusted the KVM resource capacity KPI to use the new
`EffectiveCapacity` from the hypervisor CRD, see:
cobaltcore-dev/openstack-hypervisor-operator#257
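As background for the capacity arithmetic: an effective capacity is typically the raw physical capacity scaled by the overcommit ratio. A minimal sketch under that assumption (the function and field names are hypothetical, not from the CRD):

```go
package main

import "fmt"

// effectiveVCPUs scales physical CPU capacity by an overcommit ratio,
// which is how an effective-capacity figure is commonly derived for
// scheduling and capacity KPIs.
func effectiveVCPUs(physicalCPUs int, overcommitRatio float64) int {
	return int(float64(physicalCPUs) * overcommitRatio)
}

func main() {
	// 64 physical cores at a 4x CPU overcommit ratio.
	fmt.Println(effectiveVCPUs(64, 4.0)) // 256
}
```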
@github-actions
Contributor

Test Coverage Report

Test Coverage 📊: 68.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/monitor.go:21:							NewMonitor				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/monitor.go:39:							Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/monitor.go:45:							Collect					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_api.go:37:			NewCinderAPI				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_api.go:45:			Init					81.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_api.go:68:			GetAllStoragePools			73.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_sync.go:27:			Init					83.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_sync.go:40:			Sync					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_sync.go:51:			SyncAllStoragePools			53.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_types.go:46:			TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_types.go:49:			Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_types.go:52:			UnmarshalJSON				93.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_types.go:131:			MarshalJSON				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/controller.go:60:				Reconcile				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/controller.go:239:				SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_api.go:35:			NewIdentityAPI				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_api.go:39:			Init					80.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_api.go:59:			GetAllDomains				66.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_api.go:83:			GetAllProjects				72.2%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_sync.go:26:			Init					85.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_sync.go:41:			Sync					83.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_sync.go:54:			SyncDomains				53.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_sync.go:74:			SyncProjects				53.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_types.go:16:		TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_types.go:19:		Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_types.go:47:		TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_types.go:50:		Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_api.go:45:			NewLimesAPI				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_api.go:50:			Init					81.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_api.go:74:			GetAllCommitments			90.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_api.go:124:			getCommitments				86.4%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_sync.go:28:			Init					83.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_sync.go:41:			Sync					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_sync.go:52:			SyncCommitments				63.2%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_types.go:69:			TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_types.go:72:			Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_api.go:41:			NewManilaAPI				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_api.go:46:			Init					81.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_api.go:69:			GetAllStoragePools			75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_sync.go:28:			Init					83.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_sync.go:41:			Sync					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_sync.go:52:			SyncAllStoragePools			53.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_types.go:47:			UnmarshalJSON				87.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_types.go:137:			MarshalJSON				72.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_types.go:234:			TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_types.go:237:			Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:54:				NewNovaAPI				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:59:				Init					81.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:85:				GetAllServers				69.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:145:				GetDeletedServers			69.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:200:				GetAllHypervisors			69.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:254:				GetAllFlavors				68.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:291:				GetAllMigrations			69.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:344:				GetAllAggregates			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:29:				Init					90.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:53:				Sync					50.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:75:				SyncAllServers				57.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:98:				SyncDeletedServers			64.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:128:			SyncAllHypervisors			57.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:152:			SyncAllFlavors				57.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:174:			SyncAllMigrations			57.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:196:			SyncAllAggregates			57.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:36:			UnmarshalJSON				77.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:59:			MarshalJSON				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:79:			TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:82:			Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:119:			UnmarshalJSON				77.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:142:			MarshalJSON				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:162:			TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:165:			Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:198:			UnmarshalJSON				80.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:234:			MarshalJSON				85.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:266:			TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:269:			Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:289:			UnmarshalJSON				54.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:312:			MarshalJSON				55.6%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:333:			TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:336:			Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:360:			TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:363:			Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:384:			TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:387:			Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_api.go:48:		NewPlacementAPI				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_api.go:53:		Init					81.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_api.go:77:		GetAllResourceProviders			66.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_api.go:105:		GetAllTraits				90.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_api.go:155:		getTraits				90.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_api.go:179:		GetAllInventoryUsages			71.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_api.go:229:		getInventoryUsages			77.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_sync.go:28:		Init					62.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_sync.go:46:		Sync					71.4%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_sync.go:62:		SyncResourceProviders			53.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_sync.go:83:		SyncTraits				57.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_sync.go:112:		SyncInventoryUsages			57.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_types.go:17:		TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_types.go:20:		Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_types.go:31:		TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_types.go:34:		Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_types.go:74:		TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_types.go:77:		Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/supported_syncers.go:22:			getSupportedSyncer			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/controller.go:51:				Reconcile				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/controller.go:201:				SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/sync.go:32:					newTypedSyncer				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/sync.go:100:					fetch					79.2%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/sync.go:205:					getSyncWindowStart			81.2%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/sync.go:245:					sync					68.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/sync.go:295:					Sync					70.6%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/triggers.go:7:				TriggerMetricAliasSynced		0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/triggers.go:12:				TriggerMetricTypeSynced			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:42:					TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:43:					Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:44:					GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:45:					GetTimestamp				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:46:					GetValue				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:47:					With					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:90:					TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:91:					Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:92:					GetName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:93:					GetTimestamp				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:94:					GetValue				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:95:					With					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:145:				TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:146:				Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:147:				GetName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:148:				GetTimestamp				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:149:				GetValue				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:150:				With					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:170:				TableName				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:171:				Indexes					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:172:				GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:173:				GetTimestamp				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:174:				GetValue				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:175:				With					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:211:				TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:212:				Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:213:				GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:214:				GetTimestamp				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:215:				GetValue				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:216:				With					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:242:				TableName				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:243:				Indexes					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:244:				GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:245:				GetTimestamp				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:246:				GetValue				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:247:				With					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:282:				TableName				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:285:				Indexes					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:286:				GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:287:				GetTimestamp				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:288:				GetValue				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:289:				With					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/db.go:50:								FromSecretRef				6.2%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/db.go:131:								SelectTimed				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/db.go:140:								CreateTable				70.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/db.go:157:								AddTable				66.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/db.go:168:								TableExists				58.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/db.go:197:								ReplaceAll				62.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/db.go:224:								BulkInsert				86.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/monitor.go:21:								newMonitor				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/monitor.go:63:								Describe				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/monitor.go:73:								Collect					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/testing/containers/postgres.go:21:					GetPort					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/testing/containers/postgres.go:25:					Init					70.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/testing/containers/postgres.go:69:					Close					50.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/testing/env.go:24:							SetupDBEnv				59.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/controller.go:46:							Reconcile				54.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/controller.go:237:						SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/monitor.go:26:							NewMonitor				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/monitor.go:44:							Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/monitor.go:50:							Collect					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/monitor.go:69:							Init					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/monitor.go:78:							monitorFeatureExtractor			100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/monitor.go:97:							Extract					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/base.go:28:						Init					87.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/base.go:45:						ExtractSQL				83.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/base.go:58:						Extracted				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/flavor_groups.go:67:				Extract					77.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/host_az.go:31:					Extract					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/host_capabilities.go:35:				Extract					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/host_details.go:59:				Extract					87.2%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/host_pinned_projects.go:45:			Extract					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/host_utilization.go:47:				Extract					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/libvirt_domain_cpu_steal_pct.go:35:		Extract					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/vm_host_residency.go:53:				Extract					85.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/vm_life_span.go:52:				extractHistogramBuckets			89.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/vm_life_span.go:97:				Extract					88.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/vrops_hostsystem_contention_long_term.go:39:	Extract					82.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/vrops_hostsystem_contention_short_term.go:39:	Extract					82.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/vrops_hostsystem_resolver.go:33:			Extract					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/vrops_project_noisiness.go:33:			Extract					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/storage/storage_pool_cpu_usage.go:35:			Extract					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:42:							Reconcile				77.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:95:							findDependentKnowledge			96.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:142:							triggerKnowledgeReconciliation		100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:175:							enqueueKnowledgeReconciliation		81.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:201:							getResourceType				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:213:							mapDatasourceToKnowledge		100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:234:							mapKnowledgeToKnowledge			100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:255:							SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:54:							Reconcile				42.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:108:							InitAllKPIs				83.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:148:							getJointDB				27.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:186:							handleKPIChange				52.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:311:							handleDatasourceChange			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:336:							handleDatasourceCreated			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:346:							handleDatasourceUpdated			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:364:							handleDatasourceDeleted			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:376:							handleKnowledgeChange			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:401:							handleKnowledgeCreated			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:411:							handleKnowledgeUpdated			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:429:							handleKnowledgeDeleted			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:439:							SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/logger.go:21:								Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/logger.go:26:								Collect					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/logger.go:32:								Init					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/logger.go:37:								GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/base.go:24:							Init					80.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/flavor_running_vms.go:32:				GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/flavor_running_vms.go:36:				Init					75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/flavor_running_vms.go:54:				Describe				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/flavor_running_vms.go:58:				Collect					71.4%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_contention.go:28:					GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_contention.go:32:					Init					80.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_contention.go:49:					Describe				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_contention.go:54:					Collect					82.6%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_running_vms.go:42:				GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_running_vms.go:46:				Init					75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_running_vms.go:69:				Describe				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_running_vms.go:73:				Collect					66.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/project_noisiness.go:27:				GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/project_noisiness.go:31:				Init					75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/project_noisiness.go:43:				Describe				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/project_noisiness.go:47:				Collect					75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_kvm.go:24:				getBuildingBlock			75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_kvm.go:42:				GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_kvm.go:46:				Init					87.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_kvm.go:138:				Describe				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_kvm.go:146:				Collect					90.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_kvm.go:223:				exportCapacityMetricKVM			100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_vmware.go:29:			GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_vmware.go:33:			Init					80.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_vmware.go:75:			Describe				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_vmware.go:80:			Collect					67.6%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_vmware.go:153:			exportCapacityMetricVMware		92.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_commitments.go:30:					GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_commitments.go:34:					Init					85.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_commitments.go:81:					Describe				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_commitments.go:89:					convertLimesMemory			100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_commitments.go:106:					Collect					89.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_life_span.go:29:					GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_life_span.go:33:					Init					75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_life_span.go:46:					Describe				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_life_span.go:50:					Collect					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_life_span.go:57:					collectVMBuckets			71.4%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_migration_statistics.go:28:				GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_migration_statistics.go:32:				Init					75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_migration_statistics.go:45:				Describe				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_migration_statistics.go:49:				Collect					69.2%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/datasource_state.go:32:				GetName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/datasource_state.go:35:				Init					75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/datasource_state.go:49:				Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/datasource_state.go:52:				Collect					92.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/decision_state.go:32:				GetName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/decision_state.go:35:				Init					75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/decision_state.go:49:				Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/decision_state.go:52:				Collect					94.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/knowledge_state.go:32:				GetName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/knowledge_state.go:35:				Init					75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/knowledge_state.go:49:				Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/knowledge_state.go:52:				Collect					85.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/kpi_state.go:32:					GetName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/kpi_state.go:35:					Init					75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/kpi_state.go:49:					Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/kpi_state.go:52:					Collect					92.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/pipeline_state.go:32:				GetName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/pipeline_state.go:35:				Init					75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/pipeline_state.go:49:				Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/pipeline_state.go:52:				Collect					92.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/storage/storage_pool_cpu.go:28:				GetName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/storage/storage_pool_cpu.go:32:				Init					80.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/storage/storage_pool_cpu.go:49:				Describe				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/storage/storage_pool_cpu.go:54:				Collect					82.6%
github.com/cobaltcore-dev/cortex/internal/knowledge/math/histogram.go:7:							Histogram				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/decisions_cleanup.go:30:						DecisionsCleanup			76.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/e2e_checks.go:21:							RunChecks				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/e2e_checks.go:26:							checkCinderSchedulerReturnsValidHosts	0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/external_scheduler_api.go:42:					NewAPI					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/external_scheduler_api.go:50:					Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/external_scheduler_api.go:57:					canRunScheduler				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/external_scheduler_api.go:80:					inferPipelineName			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/external_scheduler_api.go:90:					CinderExternalScheduler			68.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/filter_weigher_pipeline_controller.go:49:				PipelineType				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/filter_weigher_pipeline_controller.go:54:				Reconcile				83.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/filter_weigher_pipeline_controller.go:74:				ProcessNewDecisionFromAPI		88.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/filter_weigher_pipeline_controller.go:113:				process					80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/filter_weigher_pipeline_controller.go:143:				InitPipeline				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/filter_weigher_pipeline_controller.go:156:				SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/pipeline_webhook.go:15:						NewPipelineWebhook			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/controller.go:47:						shouldReconcileDecision			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/controller.go:65:						Reconcile				81.8%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/controller.go:85:						reconcileHistory			80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/controller.go:147:						reconcileExplanation			64.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/controller.go:171:						StartupCallback				77.8%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/controller.go:194:						SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:28:						NewExplainer				75.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:41:						Explain					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:46:						getResourceType				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:62:						calculateScoreGap			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:80:						fetchDecisionChain			92.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:120:						buildHostSegments			95.8%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:171:						detectLoop				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:183:						findWinner				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:212:						calculateScoresFromSteps		93.8%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:257:						calculateScoresWithoutStep		83.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:271:						findCriticalSteps			78.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:315:						calculateStepImpacts			97.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:397:						buildContextData			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:419:						buildHistoryData			94.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:461:						buildWinnerData				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:496:						buildInputData				95.5%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:548:						buildCriticalStepsData			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:566:						buildDeletedHostsData			90.5%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:616:						buildChainData				86.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/explainer.go:655:						ExplainWithTemplates			87.5%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/templates.go:18:						NewTemplateManager			81.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/templates.go:59:						RenderExplanation			80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/explanation/templates.go:68:						formatTemplateDuration			90.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/activation.go:12:							NoEffect				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/activation.go:15:							Norm					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/activation.go:21:							Apply					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/api_monitor.go:22:							NewSchedulerMonitor			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/api_monitor.go:32:							Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/api_monitor.go:36:							Collect					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/api_monitor.go:50:							Callback				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/api_monitor.go:56:							Respond					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector.go:53:							Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector.go:64:							Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector.go:75:							CheckKnowledges				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:26:						NewDetectorPipelineMonitor		100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:46:						SubPipeline				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:52:						Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:58:						Collect					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:76:						monitorDetector				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:99:						Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:107:						Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:112:						Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_pipeline.go:33:						Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_pipeline.go:63:						Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_pipeline.go:98:						Combine					97.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_step_opts.go:15:						Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter.go:31:								Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_monitor.go:23:							monitorFilter				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_monitor.go:36:							Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_monitor.go:41:							Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_monitor.go:46:							Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_validation.go:22:						Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_validation.go:28:						Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_validation.go:33:						validateFilter				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_validation.go:38:						Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline.go:45:						InitNewFilterWeigherPipeline		86.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline.go:138:					runFilters				73.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline.go:166:					runWeighers				81.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline.go:206:					normalizeInputWeights			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline.go:215:					applyWeights				80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline.go:251:					sortHostsByWeights			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline.go:261:					Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_monitor.go:36:					NewPipelineMonitor			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_monitor.go:90:					SubPipeline				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_monitor.go:97:					observePipelineResult			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_monitor.go:118:				Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_monitor.go:130:				Collect					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step.go:48:					Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step.go:63:					Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step.go:75:					IncludeAllHostsFromRequest		100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step.go:85:					PrepareStats				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step_monitor.go:42:				monitorStep				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step_monitor.go:65:				RunWrapped				48.6%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step_monitor.go:215:				impact					94.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step_opts.go:15:				Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:36:						InitAllPipelines			93.8%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:61:						handlePipelineChange			77.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:174:						HandlePipelineCreated			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:188:						HandlePipelineUpdated			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:201:						HandlePipelineDeleted			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:213:						handleKnowledgeChange			71.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:246:						HandleKnowledgeCreated			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:259:						HandleKnowledgeUpdated			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:281:						HandleKnowledgeDeleted			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_webhook.go:38:						ValidateCreate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_webhook.go:47:						ValidateUpdate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_webhook.go:56:						ValidateDelete				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_webhook.go:65:						validatePipeline			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_webhook.go:149:						SetupWebhookWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/scaling.go:7:								clamp					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/scaling.go:22:								MinMaxScale				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher.go:35:								Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher.go:40:								Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher.go:45:								CheckKnowledges				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_monitor.go:23:							monitorWeigher				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_monitor.go:36:							Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_monitor.go:41:							Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_monitor.go:46:							Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_validation.go:22:						Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_validation.go:28:						Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_validation.go:33:						validateWeigher				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_validation.go:38:						Run					81.8%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/filter_weigher_pipeline_controller.go:53:				PipelineType				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/filter_weigher_pipeline_controller.go:57:				Reconcile				83.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/filter_weigher_pipeline_controller.go:77:				ProcessNewMachine			89.5%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/filter_weigher_pipeline_controller.go:134:			process					70.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/filter_weigher_pipeline_controller.go:185:			InitPipeline				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/filter_weigher_pipeline_controller.go:198:			handleMachine				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/filter_weigher_pipeline_controller.go:238:			SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/pipeline_webhook.go:15:						NewPipelineWebhook			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/plugins/filters/filter_noop.go:21:				Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/plugins/filters/filter_noop.go:25:				Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/plugins/filters/filter_noop.go:34:				Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/plugins/filters/filter_noop.go:44:				init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/decisions_cleanup.go:32:						DecisionsCleanup			77.6%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/e2e_checks.go:34:							RunChecks				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/e2e_checks.go:39:							checkManilaSchedulerReturnsValidHosts	0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/external_scheduler_api.go:42:					NewAPI					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/external_scheduler_api.go:50:					Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/external_scheduler_api.go:57:					canRunScheduler				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/external_scheduler_api.go:80:					inferPipelineName			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/external_scheduler_api.go:90:					ManilaExternalScheduler			68.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/filter_weigher_pipeline_controller.go:49:				PipelineType				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/filter_weigher_pipeline_controller.go:54:				Reconcile				83.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/filter_weigher_pipeline_controller.go:74:				ProcessNewDecisionFromAPI		88.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/filter_weigher_pipeline_controller.go:113:				process					80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/filter_weigher_pipeline_controller.go:143:				InitPipeline				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/filter_weigher_pipeline_controller.go:156:				SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/pipeline_webhook.go:15:						NewPipelineWebhook			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/plugins/weighers/netapp_cpu_usage_balancing.go:35:			Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/plugins/weighers/netapp_cpu_usage_balancing.go:53:			Init					60.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/plugins/weighers/netapp_cpu_usage_balancing.go:64:			Run					88.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/plugins/weighers/netapp_cpu_usage_balancing.go:110:			init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/candidate_gatherer.go:29:						MutateWithAllCandidates			94.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/decisions_cleanup.go:30:						DecisionsCleanup			79.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/deschedulings_cleanup.go:23:						Start					82.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/deschedulings_cleanup.go:62:						Reconcile				70.6%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/deschedulings_cleanup.go:94:						SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/deschedulings_executor.go:45:						Reconcile				68.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/deschedulings_executor.go:256:					SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/detector_cycle_breaker.go:17:						Filter					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/detector_pipeline_controller.go:43:					PipelineType				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/detector_pipeline_controller.go:48:					InitPipeline				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/detector_pipeline_controller.go:66:					CreateDeschedulingsPeriodically		0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/detector_pipeline_controller.go:127:					Reconcile				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/detector_pipeline_controller.go:132:					SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/e2e_checks.go:61:							getHypervisors				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/e2e_checks.go:104:							prepare					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/e2e_checks.go:257:							randomRequest				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/e2e_checks.go:330:							checkNovaSchedulerReturnsValidHosts	0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/e2e_checks.go:360:							RunChecks				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/external_scheduler_api.go:50:						NewAPI					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/external_scheduler_api.go:59:						Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/external_scheduler_api.go:66:						canRunScheduler				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/external_scheduler_api.go:89:						inferPipelineName			96.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/external_scheduler_api.go:149:					limitHostsToRequest			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/external_scheduler_api.go:171:					NovaExternalScheduler			68.8%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/filter_weigher_pipeline_controller.go:52:				PipelineType				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/filter_weigher_pipeline_controller.go:57:				Reconcile				91.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/filter_weigher_pipeline_controller.go:77:				ProcessNewDecisionFromAPI		88.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/filter_weigher_pipeline_controller.go:116:				process					80.6%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/filter_weigher_pipeline_controller.go:168:				InitPipeline				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/filter_weigher_pipeline_controller.go:181:				SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/hypervisor_overcommit_controller.go:48:				Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/hypervisor_overcommit_controller.go:82:				Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/hypervisor_overcommit_controller.go:111:				Reconcile				93.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/hypervisor_overcommit_controller.go:178:				handleRemoteHypervisor			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/hypervisor_overcommit_controller.go:207:				predicateRemoteHypervisor		100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/hypervisor_overcommit_controller.go:220:				SetupWithManager			33.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/nova_client.go:57:							NewNovaClient				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/nova_client.go:61:							Init					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/nova_client.go:99:							Get					75.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/nova_client.go:108:							LiveMigrate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/nova_client.go:119:							GetServerMigrations			74.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/pipeline_webhook.go:16:						NewPipelineWebhook			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/detectors/avoid_high_steal_pct.go:26:				Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/detectors/avoid_high_steal_pct.go:39:				Init					80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/detectors/avoid_high_steal_pct.go:49:				Run					86.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/detectors/avoid_high_steal_pct.go:85:				init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_allowed_projects.go:22:			Run					87.5%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_allowed_projects.go:54:			init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_capabilities.go:25:				hvToNovaCapabilities			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_capabilities.go:48:				Run					81.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_capabilities.go:119:				init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_correct_az.go:21:				Run					83.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_correct_az.go:65:				init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_exclude_hosts.go:28:				Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_exclude_hosts.go:30:				Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_exclude_hosts.go:43:				init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_external_customer.go:23:			Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_external_customer.go:36:			Run					94.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_external_customer.go:86:			init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_has_accelerators.go:21:			Run					91.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_has_accelerators.go:55:			init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_has_enough_capacity.go:24:			Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_has_enough_capacity.go:44:			Run					76.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_has_enough_capacity.go:304:			init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_has_requested_traits.go:24:			Run					95.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_has_requested_traits.go:89:			init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_host_instructions.go:21:			Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_host_instructions.go:44:			init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_instance_group_affinity.go:19:			Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_instance_group_affinity.go:54:			init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_instance_group_anti_affinity.go:22:		Run					88.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_instance_group_anti_affinity.go:99:		init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_live_migratable.go:22:				checkHasSufficientFeatures		100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_live_migratable.go:51:				Run					94.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_live_migratable.go:112:			init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_requested_destination.go:30:			Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_requested_destination.go:43:			Run					98.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_requested_destination.go:135:			init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_status_conditions.go:23:			Run					93.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_status_conditions.go:88:			init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/vm_detection.go:17:						GetResource				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/vm_detection.go:18:						GetReason				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/vm_detection.go:19:						GetHost					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/vm_detection.go:20:						WithReason				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_binpack.go:29:					Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_binpack.go:72:					Run					90.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_binpack.go:141:					calcVMResources				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_binpack.go:154:					init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_failover_evacuation.go:26:			Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_failover_evacuation.go:30:			GetFailoverHostWeight			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_failover_evacuation.go:37:			GetDefaultHostWeight			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_failover_evacuation.go:54:			Run					93.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_failover_evacuation.go:116:			init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_instance_group_soft_affinity.go:29:		Run					94.6%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_instance_group_soft_affinity.go:85:		init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_prefer_smaller_hosts.go:29:			Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_prefer_smaller_hosts.go:60:			Run					92.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_prefer_smaller_hosts.go:157:			init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_anti_affinity_noisy_projects.go:29:		Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_anti_affinity_noisy_projects.go:44:		Init					80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_anti_affinity_noisy_projects.go:55:		Run					81.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_anti_affinity_noisy_projects.go:93:		init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_long_term_contended_hosts.go:35:	Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_long_term_contended_hosts.go:53:	Init					80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_long_term_contended_hosts.go:64:	Run					88.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_long_term_contended_hosts.go:111:	init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_short_term_contended_hosts.go:35:	Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_short_term_contended_hosts.go:53:	Init					80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_short_term_contended_hosts.go:64:	Run					88.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_short_term_contended_hosts.go:111:	init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_binpack.go:32:				Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_binpack.go:75:				Init					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_binpack.go:88:				Run					80.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_binpack.go:165:				calcHostCapacity			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_binpack.go:177:				calcHostAllocation			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_binpack.go:187:				calcVMResources				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_binpack.go:200:				init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/filter_weigher_pipeline_controller.go:52:				PipelineType				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/filter_weigher_pipeline_controller.go:56:				Reconcile				83.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/filter_weigher_pipeline_controller.go:76:				ProcessNewPod				89.5%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/filter_weigher_pipeline_controller.go:133:				process					71.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/filter_weigher_pipeline_controller.go:196:				InitPipeline				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/filter_weigher_pipeline_controller.go:209:				handlePod				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/filter_weigher_pipeline_controller.go:249:				SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/helpers/resources.go:12:						GetPodResourceRequests			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/helpers/resources.go:31:						AddResourcesInto			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/helpers/resources.go:41:						MaxResourcesInto			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/pipeline_webhook.go:15:						NewPipelineWebhook			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_affinity.go:22:				Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_affinity.go:26:				Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_affinity.go:30:				Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_affinity.go:43:				matchesNodeAffinity			88.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_affinity.go:62:				matchesNodeSelectorTerm			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_affinity.go:71:				matchesNodeSelectorRequirement		90.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_affinity.go:124:				init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_available.go:21:				Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_available.go:25:				Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_available.go:29:				Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_available.go:42:				isNodeHealthy				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_available.go:70:				isNodeSchedulable			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_available.go:74:				init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_capacity.go:22:				Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_capacity.go:26:				Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_capacity.go:30:				Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_capacity.go:45:				hasCapacityForPod			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_capacity.go:60:				init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_noop.go:21:					Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_noop.go:25:					Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_noop.go:34:					Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_noop.go:44:					init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_taint.go:21:					Init					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_taint.go:25:					Validate				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_taint.go:29:					Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_taint.go:42:					canScheduleOnNode			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_taint.go:53:					hasToleration				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_taint.go:67:					init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/weighers/binpack.go:21:					Validate				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/weighers/binpack.go:34:					Run					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/weighers/binpack.go:48:					calculateBinpackScore			85.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/weighers/binpack.go:83:					init					50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api.go:21:					NewAPI					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api.go:27:					Init					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api_change_commitments.go:40:			HandleChangeCommitments			76.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api_change_commitments.go:98:			processCommitmentChanges		20.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api_change_commitments.go:264:			watchReservationsUntilReady		0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api_info.go:22:					HandleInfo				72.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api_info.go:58:					buildServiceInfo			22.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api_report_capacity.go:19:			HandleReportCapacity			78.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/capacity.go:24:					NewCapacityCalculator			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/capacity.go:29:					CalculateCapacity			91.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/capacity.go:60:					calculateAZCapacity			71.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/capacity.go:86:					getAvailabilityZones			55.6%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/client.go:46:					NewCommitmentsClient			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/client.go:50:					Init					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/client.go:109:					ListProjects				90.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/client.go:128:					ListCommitmentsByID			79.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/client.go:172:					listCommitments				90.5%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/messages.go:135:					UnmarshalJSON				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/messages.go:158:					MarshalJSON				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/reservation_manager.go:25:			NewReservationManager			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/reservation_manager.go:46:			ApplyCommitmentState			84.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/reservation_manager.go:201:			syncReservationMetadata			15.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/reservation_manager.go:242:			newReservation				94.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/state.go:23:					getFlavorGroupNameFromResource		100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/state.go:51:					FromCommitment				75.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/state.go:95:					FromChangeCommitmentTargetState		0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/state.go:148:					FromReservations			86.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/syncer.go:40:					NewSyncer				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/syncer.go:47:					Init					66.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/syncer.go:54:					getCommitmentStates			62.5%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/syncer.go:124:					SyncReservations			60.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/utils.go:13:					GetMaxSlotIndex				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/utils.go:30:					GetNextSlotIndex			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/utils.go:36:					extractCommitmentUUID			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/controller.go:76:					Reconcile				53.5%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/controller.go:347:					reconcileAllocations			17.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/controller.go:426:					Init					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/controller.go:440:					listServersByProjectID			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/controller.go:470:					SetupWithManager			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/monitor.go:32:					NewControllerMonitor			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/monitor.go:47:					Describe				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/monitor.go:53:					Collect					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/flavor_groups.go:25:						Get					85.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/flavor_groups.go:46:						GetAllFlavorGroups			85.7%
total:																(statements)				68.1%
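The listing above follows the format emitted by Go's built-in coverage tooling (`go test -coverprofile=cover.out ./... && go tool cover -func=cover.out`): one tab-separated line per function with its location and statement coverage, plus a trailing `total:` line. As a minimal sketch (the helper `parseCoverLine` is hypothetical, not part of this repository), such a line can be split back into its parts:

```go
package main

import (
	"fmt"
	"strings"
)

// parseCoverLine splits one line of `go tool cover -func` output
// ("file.go:NN:\tfuncName\tNN.N%") into location, function name, and percentage.
// Illustrative helper only; not part of the cortex codebase.
func parseCoverLine(line string) (location, fn, pct string) {
	fields := strings.Fields(line)
	// fields[0] ends with a trailing colon, e.g. "file.go:46:".
	return strings.TrimSuffix(fields[0], ":"), fields[1], fields[2]
}

func main() {
	line := "github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/flavor_groups.go:46:\tGetAllFlavorGroups\t85.7%"
	loc, fn, pct := parseCoverLine(line)
	fmt.Println(loc, fn, pct)
}
```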
