Flow cover cuts #2518

Open
Opt-Mucca wants to merge 79 commits into latest from flow-cover-cuts

Conversation

@Opt-Mucca
Collaborator

This PR adds flow cover cuts to HiGHS. All logic related to the cut can be followed from the HighsCutGeneration::tryGenerateFLowCoverCut function. Currently the cuts are only generated from rows of the LP, i.e., "aggregations" of size 1.
The performance results are extremely finicky but seem promising, especially for some network-based energy problems I've been testing on. This shouldn't be merged before additional computational experiments are done.
@galabovaa I changed the two tests because (1) the instance is now solved at the root node and therefore has the optimal solution but status interrupted, and (2) the primal heuristics now jump to the optimal solution more quickly, so I had to raise the objective limit to still catch the event.
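For readers unfamiliar with these cuts, here is a minimal Python sketch of the textbook single-node flow cover inequality (Padberg, Van Roy and Wolsey) that this family of cuts is based on. This is an illustrative toy, not the HiGHS implementation; the function name and the toy data are made-up assumptions.

```python
# Single-node flow set:  sum_{j in N} x_j <= b,  0 <= x_j <= u_j * y_j,  y_j in {0,1}.
# A flow cover is a subset C of N with sum_{j in C} u_j = b + lam for some lam > 0.
# The (simple) flow cover inequality is then
#   sum_{j in C} x_j + sum_{j in C} max(u_j - lam, 0) * (1 - y_j) <= b.

def flow_cover_violation(C, u, b, x, y):
    """Return lhs - b of the flow cover inequality for cover C at point (x, y).
    A positive value means the point is cut off. (Illustrative helper, not HiGHS API.)"""
    lam = sum(u[j] for j in C) - b
    assert lam > 0, "C must be a cover: sum of capacities must exceed b"
    lhs = sum(x[j] + max(u[j] - lam, 0.0) * (1.0 - y[j]) for j in C)
    return lhs - b

# Toy data: three arcs into one node with capacity b.
u = {0: 3.0, 1: 3.0, 2: 2.0}
b = 4.0
C = [0, 1]                          # cover: u_0 + u_1 = 6 = b + lam, so lam = 2

# A fractional LP point that satisfies the original constraints
# (x_j <= u_j * y_j and sum x_j <= b) but violates the flow cover cut:
x = {0: 3.0, 1: 1.0, 2: 0.0}
y = {0: 1.0, 1: 1.0 / 3.0, 2: 0.0}

viol = flow_cover_violation(C, u, b, x, y)   # 3 + 1 + (1 - 1/3) - 4 = 2/3 > 0
```

Here the LP point is feasible for the relaxation, yet the cut is violated by 2/3, which is exactly the kind of fractional point these separators target.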

@jajhall
Member

jajhall commented Feb 8, 2026

Is this PR still active @Opt-Mucca ?

@jajhall jajhall assigned Opt-Mucca and unassigned Opt-Mucca Feb 8, 2026
@Opt-Mucca
Collaborator Author

@jajhall It's still active, but I've been failing to make them anything but performance neutral, on and off, for months now. @fwesselm has been benchmarking them whenever I try something new, and I think the change is significant enough.
I'm happy to close the PR if you don't want the clutter. I'm hesitant to merge them with the parameter off by default because I don't want to introduce dead code. I also firmly believe this PR is a couple of lines away from improving general MILP performance by a couple of percent and being properly helpful for certain problem classes. I just can't seem to tune them correctly... (or there's some logic error and I'm weakening them by accident).

@jajhall
Member

jajhall commented Feb 8, 2026


I'm happy to keep it open, but it would be good to merge latest into it so that conflicts don't build up and have CI tests pass
