What if I set the constraint weight to zero in OptaPlanner? - optaplanner

If I set the weight of a constraint to zero, does that mean the constraint does not take effect? Like below:
@ConstraintWeight("Speaker conflict")
private HardMediumSoftScore speakerConflict = HardMediumSoftScore.ofHard(0);

Functionally: yes, the constraint has no score impact, so it is effectively ignored.
Implementation-wise:
With DRL score calculation in 7.20.0.Final, that constraint rule still eats CPU power, because Drools doesn't currently support disabling rules after the KieBase is built.
In the ConstraintStreams prototype (a long-term work in progress, not yet released), that constraint already takes no CPU power, because it isn't added to the KieBase.
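As a toy illustration of "no score impact" (plain Python, not OptaPlanner's actual scoring code): if a score is the weighted sum of constraint matches, a zero weight cancels that constraint's contribution entirely, no matter how often it matches.

```python
# Hypothetical sketch: a hard score as a weighted sum of constraint matches.
def hard_score(constraints):
    """constraints: list of (weight, match_count) pairs."""
    return sum(weight * matches for weight, matches in constraints)

# A "Speaker conflict" constraint weighted 0 never changes the hard score,
# whether it matches 5 times or not at all:
with_conflicts    = hard_score([(0, 5), (-1, 2)])
without_conflicts = hard_score([(0, 0), (-1, 2)])
assert with_conflicts == without_conflicts == -2
```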

Related

Rewards instead of penalty in optaplanner

So I have lectures and time periods, and some lectures need to be taught in a specific time period. How do I do that?
Does scoreHolder.addHardConstraintMatch(kcontext, 10); solve this as a hard constraint? Does the positive value of 10 enforce the constraint that courses are in a specific time period?
I'm aware of the Penalty pattern, but I don't want to make a lot of CoursePeriodPenalty objects. Ideally, I'd like to have only one CoursePeriodReward object to say that CS101 should be in time period 9:00-10:00.
Locking them as immovable planning entities won't work, as I suspect you still want OptaPlanner to decide the room for you - and currently OptaPlanner only supports a MovableSelectionFilter per entity, not per variable (vote for the open JIRA for that).
A positive hard constraint would definitely work. Your score will be harder for your users to interpret, though: for example, a solution with a hard score of 0 won't be feasible (either it didn't get those +10 hard points, or it lost 10 hard points somewhere else).
Or you could add a new negative hard constraint type that says: if != desiredTimeslot, then lose 10 points.
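The interpretability point can be shown with toy arithmetic (hypothetical numbers, not OptaPlanner's API): with a +10 reward, the best possible hard score is +10, so 0 no longer signals feasibility; with a -10 penalty, the usual rule "hard score 0 means feasible" still holds.

```python
# Hypothetical scoring of one course, reward style vs penalty style.
def reward_style(in_desired_slot, other_hard_violations):
    return (10 if in_desired_slot else 0) - 10 * other_hard_violations

def penalty_style(in_desired_slot, other_hard_violations):
    return (0 if in_desired_slot else -10) - 10 * other_hard_violations

# Two very different solutions share the same reward-style score of 0:
assert reward_style(False, 0) == reward_style(True, 1) == 0
# Penalty style keeps "0 hard points == feasible" unambiguous:
assert penalty_style(True, 0) == 0
assert penalty_style(False, 0) == -10
```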

Score calculation performance using shadow variables

From a score calculation perspective, is it correct to assume that shadow variables help arrive at the solution faster than not using them?
Making use of shadow variables allows a VariableListener to reset the values of dependent entities closer to the final/optimal value.
It depends. Both shadow variables and score calculation use deltas to calculate incrementally. Those deltas are the key to scaling out and getting a high score calculation speed per second (see the last INFO logging line and the benchmark report). Whatever you do, keep an eye on that value, at least for datasets of different sizes.
In theory, it shouldn't matter much if there is a shadow variable to simplify the score rules, or if there is no shadow variable and the score rules are more complex (and might use insertLogicals etc).
In practice, it often does matter: for vehicle routing, IIRC I've seen that the shadow variable arrivalTime noticeably improved performance and scalability.
My advice is to use a shadow variable when it makes sense to have it on the domain, for example arrivalTime. But use a simple calculated getter (without loops) when that suffices: for example departureTime (= arrivalTime + duration). And use the score rules for the rest.
In the end, it's a design choice: do the score rules need to figure out the departureTime or arrivalTime themselves or can we abstract away over that - by putting them in the model - and make the rules more natural to read?
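The departureTime example above can be sketched in plain Python (a hypothetical Visit class, not OptaPlanner domain code): arrivalTime is worth maintaining as a shadow variable, while departureTime is a cheap, loop-free derivation, so a calculated getter suffices.

```python
class Visit:
    def __init__(self, arrival_time, service_duration):
        # In a real model, arrival_time would be a shadow variable kept up to
        # date by a VariableListener as the route changes.
        self.arrival_time = arrival_time
        self.service_duration = service_duration

    @property
    def departure_time(self):
        # No loop, no extra state to maintain: derive it on demand.
        return self.arrival_time + self.service_duration

visit = Visit(arrival_time=540, service_duration=30)  # minutes since midnight
assert visit.departure_time == 570
```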

z3 minimization and timeout

I'm trying to use the z3 solver for a minimization problem. I wanted to set a timeout and return the best solution found so far. I use the Python API and the timeout option "smt.timeout" with
set_option("smt.timeout", 1000) # 1s timeout
This actually times out after about 1 second. However, a larger timeout does not produce a smaller objective. I ended up turning on verbosity with
set_option("verbose", 2)
And I think that z3 successively evaluates larger values of my objective, until the problem is satisfiable:
(opt.maxres [0:6117664])
(opt.maxres [175560:6117664])
(opt.maxres [236460:6117664])
(opt.maxres [297360:6117664])
...
(opt.maxres [940415:6117664])
(opt.maxres [945805:6117664])
...
I thus have two questions:
Can I, on the contrary, tell z3 to start from the upper bound and successively return models with smaller values of my objective function (just like, for instance, MiniZinc's indomain_max search annotation: http://www.minizinc.org/2.0/doc-lib/doc-annotations-search.html)?
It still looks like the solver returns a satisfiable instance of my problem. How is it found? If it's evaluating successively larger values of my objective, it should not have found a satisfiable instance yet when the timeout occurs...
edit: In the opt.maxres log, the upper bound never shrinks.
For the record, I found a more verbose description of the options in the source here opt_params.pyg
Edit: Sorry to bother, I've been diving into this again recently. Anyway, I think this might be useful to others. I've found that I actually have to call the Optimize.upper method in order to get the upper bound, and the model is still not the one that corresponds to this upper bound. I've been able to add it as a new constraint and call a solver (without optimization, just SAT), but that's probably not the best idea. Reading this, I feel like I should call Optimize.update_upper after the solver times out, but the Python interface has no such method (?). At least I can get the upper bound and the corresponding model now (at the cost of unnecessary computations, I guess).
Z3 finds solutions for the hard constraints and records the current values for the objectives and soft constraints. The last model that was found (the model with the best value so far for the objectives) is returned if you ask for a model. The maxres strategy mainly improves the lower bounds on the soft constraints (e.g., any solution must have cost at least xx) and, whenever possible, improves the upper bound (the optimal solution has cost at most yy). The lower bounds don't tell you much other than narrowing the range of possible optimal values. The upper bounds are available when you time out.
You could try one of the other strategies, such as the one called "wmax", which performs a branch-and-prune. Typically maxres does significantly better, but you may have a better experience (depending on the problem) with wmax for improving upper bounds.
I don't have a mode where you get a stream of models. It is in principle possible, but it would require some (non-trivial) reorganization. For Pareto fronts you make successive invocations to Optimize.check() to get the successive fronts.
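The bound bookkeeping described in this answer can be mimicked with a toy minimizer (plain Python, nothing to do with z3's internals): a rising lower bound records costs already ruled out, and the falling upper bound keeps its witness model, which is what gets returned after a timeout.

```python
# Toy minimization over a finite candidate set, hypothetical helper names.
def minimize(candidates, cost, feasible):
    lo, hi, best_model = 0, float("inf"), None
    for c in sorted(candidates, key=cost):
        if feasible(c):
            hi, best_model = cost(c), c   # improved upper bound + witness model
            break
        lo = cost(c) + 1                  # everything cheaper is ruled out
    return lo, hi, best_model

# If the search is cut off before the break, hi/best_model still hold the
# best solution so far - analogous to asking Optimize for a model on timeout.
lo, hi, model = minimize(range(10), cost=lambda x: x, feasible=lambda x: x >= 4)
assert (lo, hi, model) == (4, 4, 4)
```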

Hard and medium constraints and relationship between them in OptaPlanner

Is it possible that OptaPlanner will decide to satisfy a number of medium constraints instead of a hard constraint? For example, suppose the planner has two options: one violates one hard constraint but satisfies x medium constraints, and the other violates x medium constraints but satisfies the one hard constraint. Is there any possibility that the planner will choose the first option? Or can constraints with higher priority never be outweighed by lower-priority constraints under any circumstances?
No. If you want that behavior, use the same score level (hard, I presume) for both constraints and use score weights to determine when it's OK to violate 1 to satisfy x of the other.
Also see the docs on the difference between score levels and score weights.
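The difference can be shown with hypothetical numbers (not OptaPlanner's API): score levels compare lexicographically, hard before medium, so no amount of medium gain can repair an extra hard violation; weights on a single level trade off numerically.

```python
# Score levels as (hard, medium) tuples: tuple comparison is lexicographic.
levels_a = (-1, 0)    # 1 hard violation, all medium constraints satisfied
levels_b = (0, -50)   # feasible, but 50 medium violations

# With levels, b always wins, whatever the medium damage:
assert max(levels_a, levels_b) == levels_b

# With one level and weights (say 100 vs 1), the trade-off can flip:
weighted_a = -1 * 100 + 0 * 1     # violate the weight-100 constraint once
weighted_b = 0 * 100 - 150 * 1    # satisfy it, but lose 150 weight-1 points
assert weighted_a > weighted_b    # now option a is preferable
```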

How to use min-conflict algorithm with optaplanner

Is there a min-conflict algorithm in OptaPlanner? Or how would one implement it?
What about using it as part of the neighborhood selection, like:
Define a custom swap move factory that constructs the neighborhood as follows:
Get all violations per variable to optimize; this requires a call to scoreDirector.calculateScore, then parsing/processing the constraintMatches
Order the variables by lowest score or highest number of violations
Construct the neighborhood by swapping those variables first
If that's viable, is there a way to get the constraintMatches without re-calling calculateScore, in order to speed up the process?
This algorithm isn't supported out of the box by OptaPlanner yet. I'd call it Guided Local Search. But it's not that hard to add yourself. In fact, it's not a matter of changing the algorithm, but of changing the entity selectors.
Something like this should work:
<swapMoveSelector>
  <entitySelector>
    <cacheType>STEP</cacheType>
    <probabilityWeightFactoryClass>...MyProbabilityWeightFactory</probabilityWeightFactoryClass>
  </entitySelector>
</swapMoveSelector>
Read about the advanced swapMoveSelector configuration, entity selector, sorted selection and probability selection.
The callback class you implement for the probabilistic selection or sorted selection should prioritize entities that are part of a conflict.
I would definitely use sorted or probabilistic selection on the entity selector, not on the entire swapMoveSelector, because that would be overkill: CPU hungry and memory hungry.
I would prefer probabilistic selection over sorted selection. Even though sorted selection better reflects your pseudo code, I believe (but haven't proven) that probabilistic selection will do better, given the nature of metaheuristics. Try both, run some benchmarks with the Benchmarker and let us know what works best ;)
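To make the pseudo code above concrete, here is the classic min-conflicts heuristic on a toy 8-queens problem (one queen per column) in plain Python. This is not OptaPlanner code; in OptaPlanner you'd approximate the "prioritize conflicted variables" idea with the selector configuration above.

```python
import random

def conflicts(board, col, row):
    """Number of other queens attacking square (col, row)."""
    return sum(
        1
        for other_col, other_row in enumerate(board)
        if other_col != col
        and (other_row == row or abs(other_row - row) == abs(other_col - col))
    )

def min_conflicts(n=8, max_steps=10_000, seed=0):
    rng = random.Random(seed)
    board = [rng.randrange(n) for _ in range(n)]  # board[col] = row of queen
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(board, c, board[c]) > 0]
        if not conflicted:
            return board  # no violations left: solved
        col = rng.choice(conflicted)  # pick a variable that is in a conflict...
        # ...and move it to its least-conflicting value
        board[col] = min(range(n), key=lambda row: conflicts(board, col, row))
    return None  # plateaued; a restart (new seed) usually fixes this

# Retry a few seeds, since min-conflicts can stall on a plateau:
solution = next(s for s in (min_conflicts(seed=i) for i in range(5)) if s)
assert all(conflicts(solution, c, solution[c]) == 0 for c in range(8))
```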
Not sure about how to solve your overall problem but for your last point:
You can create a PhaseLifecycleListener and attach it via ((DefaultSolver) solver).addPhaseLifecycleListener
In stepStarted or stepEnded (depending on your need), you can then call
stepScope.getScoreDirector().getConstraintMatchTotals()
to get the constraint totals.
Hope this somewhat helps.

Resources