OptaPlanner: Termination strategy when not all constraints can be satisfied

I am using OptaPlanner to solve a problem similar to employee assignment. Not all of my constraints may be satisfiable; I just want the best possible solution.
What is a good termination strategy for this?
If I use unimprovedSecondsSpentLimit, it compromises the reproducibility of the solution.
I am thinking of using unimprovedSecondsSpentLimit, but I don't know what value to use for it. Also, I see in the documentation that this can also be used for a phase. What does that mean? I am not defining any phases myself.

You can configure a solver to consist of multiple layers; these are typically called phases. Each phase is a separate optimization step and you can configure it as you like (e.g. stop after 30 seconds, stop after 200 unimproved steps, etc.). Hence each phase can have its own termination criteria, in addition to the solver-level termination criteria.
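If you don't configure any phases yourself, OptaPlanner uses its defaults: a construction heuristic phase followed by a local search phase. As a rough sketch (assuming a recent OptaPlanner release; the exact config class names vary slightly between versions), solver-level and phase-level termination can be wired up programmatically like this, mirroring what the termination elements do in the solver config XML:

    import java.util.Arrays;
    import java.util.List;

    import org.optaplanner.core.config.constructionheuristic.ConstructionHeuristicPhaseConfig;
    import org.optaplanner.core.config.localsearch.LocalSearchPhaseConfig;
    import org.optaplanner.core.config.phase.PhaseConfig;
    import org.optaplanner.core.config.solver.SolverConfig;
    import org.optaplanner.core.config.solver.termination.TerminationConfig;

    public class TerminationSketch {

        public static SolverConfig buildSolverConfig() {
            SolverConfig solverConfig = new SolverConfig();
            // ... solution class, entity classes and score configuration omitted ...

            // Solver-level termination: stop once the best score has not improved
            // for 60 seconds (the value is made up; benchmark to pick a real one).
            TerminationConfig solverTermination = new TerminationConfig();
            solverTermination.setUnimprovedSecondsSpentLimit(60L);
            solverConfig.setTerminationConfig(solverTermination);

            // Phase-level termination: the construction heuristic finishes on its
            // own; the local search phase stops after 200 unimproved steps.
            ConstructionHeuristicPhaseConfig constructionHeuristic = new ConstructionHeuristicPhaseConfig();
            LocalSearchPhaseConfig localSearch = new LocalSearchPhaseConfig();
            TerminationConfig localSearchTermination = new TerminationConfig();
            localSearchTermination.setUnimprovedStepCountLimit(200);
            localSearch.setTerminationConfig(localSearchTermination);

            List<PhaseConfig> phases = Arrays.asList(constructionHeuristic, localSearch);
            solverConfig.setPhaseConfigList(phases);
            return solverConfig;
        }
    }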
Regarding finding a good termination strategy, you should use OptaPlanner's benchmarker module. Load multiple datasets and try different values for unimprovedSecondsSpentLimit, then see which configuration consistently returns the most desirable solutions. Hope this helps.
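For example, a benchmark run can be kicked off from code like this; benchmarkConfig.xml is a hypothetical classpath resource that would list your datasets and one solver benchmark per candidate unimprovedSecondsSpentLimit value:

    import org.optaplanner.benchmark.api.PlannerBenchmark;
    import org.optaplanner.benchmark.api.PlannerBenchmarkFactory;

    public class BenchmarkRunner {

        public static void main(String[] args) {
            // The XML config defines the datasets and one solverBenchmark
            // per termination value you want to compare.
            PlannerBenchmarkFactory benchmarkFactory =
                    PlannerBenchmarkFactory.createFromXmlResource("benchmarkConfig.xml");
            PlannerBenchmark benchmark = benchmarkFactory.buildPlannerBenchmark();
            benchmark.benchmark(); // writes an HTML report comparing the runs
        }
    }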

Related

How can one incorporate heuristic algorithms in MiniZinc?

Suppose I have an order batching problem (in a warehouse context) that I would like to solve with the aid of heuristics. In particular, I want to implement some well-known heuristics for warehouses with multiple cross aisles, such as the S-shape and Largest gap heuristics.
How can I implement them in MiniZinc? Is it possible to do so?
I looked up its documentation, but I could only find MiniSearch, which is a language for specifying meta-search in a MiniZinc model. (http://www.minizinc.org/minisearch/documentation.html)
Some insight into this will be deeply appreciated.
The answer to your question heavily depends on the nature of your heuristic. From the MiniZinc aspect I would identify three kinds of heuristics:
Solving heuristics: heuristic algorithms that solve a model instance, but might not give the optimal solution.
Search heuristics: heuristic algorithms that provide (good) indications of what is best to search next.
Partial heuristics: heuristics that can solve part of the model instance, but can't solve the full model instance.
There is no straightforward MiniZinc way of dealing with heuristics, and you might need some creativity to implement your heuristic in a usable way. Here are some pointers to possible solutions:
If you are dealing with a solving heuristic, you might not need to do any work; it already gives you a solution. However, if you want to verify the solution or ensure an optimal solution, then you can consider running the model with that solution or using it as a warm start (respectively). (You could even implement the heuristic as a FlatZinc solver if it's broad enough, but consider the time investment vs. its usability.)
In the other two cases the well-known solution is to pre-compute the heuristics and include them in the model data. In the case of a search heuristic, it might be possible to compute the order in which the variables should be searched; you can then use this order with the input_order search annotation. For a partial heuristic, it is possible to precompute the partial solution and include it directly in the model. This is often too constraining for the problem; instead, if you can compute multiple partial solutions, they can be included as a table constraint.
The previous solutions are only possible if the heuristic algorithm does not depend on the domains of the variables during the search. When it does, we generally talk about "meta-search". This is where implementations like MiniSearch come in. In MiniSearch you can, for example, reflect on the last solution or the last assignment and base new search behaviour on those values. This allows these more dynamic heuristics to be implemented.
Even MiniSearch does not generally run at every node, so in some situations you might not be able to use your heuristic in MiniZinc directly. An option in that case would be to add your heuristic to a FlatZinc solver and then call it using a designated annotation.

Is there a way to use Drools for Entity weighting in Optaplanner?

I'm using OptaPlanner for event planning (similar to the course scheduling example).
OptaPlanner requires a weight comparator/factory to weight courses; however, certain properties might be easier to express via Drools insertLogical expressions.
For instance: the course is harder to plan if there are lots of votes to visit it.
That is I have Votes as a fact.
Of course I can rearrange the votes and assign them to the Course entity; however, it seems awkward to have "extra properties computed elsewhere" on my entity, and it seems way easier to express certain computations via a rule + insertLogical.
Is this something that is just missing in OptaPlanner? Is it intentionally omitted?
This might be a good jira, to support DRL for entity difficulty comparison too.
However, it can't be part of the scoreDrl, as that would need to be a separate kie session. The difficulty comparison runs once at the beginning (and in the future we might support running it at every step), whereas the score calculation DRL runs at every move.
Personally, I think it might be overkill, as weightFactories are pretty versatile. Create a jira and try to illustrate the use case with the example as well as possible, to change our minds.
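For reference, the weightFactory / difficulty-comparator route looks roughly like the sketch below; Course and getVoteCount() are hypothetical stand-ins for the real entity:

    import java.util.Comparator;

    // Hypothetical entity stand-in; in a real model this is your @PlanningEntity class,
    // annotated with difficultyComparatorClass = CourseDifficultyComparator.class.
    class Course {
        private final int voteCount;
        Course(int voteCount) { this.voteCount = voteCount; }
        int getVoteCount() { return voteCount; }
    }

    public class CourseDifficultyComparator implements Comparator<Course> {

        @Override
        public int compare(Course a, Course b) {
            // Sort in ascending difficulty: more votes means harder to plan.
            // Construction heuristics such as First Fit Decreasing then place
            // the most difficult courses first.
            return Integer.compare(a.getVoteCount(), b.getVoteCount());
        }
    }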

Resource allocation with penalties in Choco

For my model I have about 120 people and 650 tasks. I now want to allocate those tasks with Choco 3.3.3. For that I have a 120x650 boolMatrix "assignment", where an entry is 1 if the task is assigned to the person and 0 otherwise. But now I have to optimize against different criteria, for example minimizing overtime, respecting the people's wishes, and so on. What is the best way to do that?
My intuition: I don't see a way to simply accumulate penalties, so my idea is a matrix with an array of "penalties" for every person, so that if person i has overtime, penalties[i][0] holds a penalty of 5, for example, and if they don't want to do the task, penalties[i][1] holds a penalty of 4. Then I have an IntVar score that is the sum of the penalties, and I optimize over score.
Is the penalty matrix the way to go?
And how can I initialize these Variables?
Is that optimizable (every feasible solution has a score) in a reasonable time with Choco?
In the nurse scheduling example this strategy was used:
solver.set(IntStrategyFactory.domOverWDeg(ArrayUtils.flatten(assignment), System.currentTimeMillis()));
What strategy should I use? Reading the Choco user guide didn't give me a good idea...
It seems from your questions that you have not yet tried to implement and test your model, so we cannot help much. Anyway:
Q1) I did not clearly understand your approach, but there may be many ways to go; it is by testing it that you will know whether it solves your problem or not. Maybe you could also use an integer variable per task, where x[i] = k means task i is done by resource k. You could also use a set variable to collect all the tasks of each resource.
Regarding penalties, you should formalize how they should be computed in a mathematical way (from the problem specifications) before wondering how to encode it with the library. More generally, you must make very clear and formal what you want to get before working on how to get it.
Q2) To create variables, you should use VariableFactory. Initial domains should contain all possible values.
Q3) It depends on the precise problem and on your model. Presumably, yes, you can get very good solutions in a very short time. If you want a mathematically optimal solution, with a proof that it is optimal, then this could take long.
Q4) It is not mandatory to specify a search strategy. Choosing the best one requires experience and benchmarking, so you should try some of them to figure out for yourself which one works best in your case. You can also add LNS (a kind of local search) to boost optimization...
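To make Q1, Q2 and Q4 a bit more concrete, here is a minimal sketch against the Choco 3.3 API (all sizes and penalty bounds are made up, and the constraints that would tie the penalty variables to the assignment matrix are omitted):

    import org.chocosolver.solver.ResolutionPolicy;
    import org.chocosolver.solver.Solver;
    import org.chocosolver.solver.constraints.IntConstraintFactory;
    import org.chocosolver.solver.search.strategy.IntStrategyFactory;
    import org.chocosolver.solver.variables.BoolVar;
    import org.chocosolver.solver.variables.IntVar;
    import org.chocosolver.solver.variables.VariableFactory;
    import org.chocosolver.util.tools.ArrayUtils;

    public class TaskAllocation {

        public static void main(String[] args) {
            int nPersons = 120, nTasks = 650;
            Solver solver = new Solver("task allocation");

            // assignment[p][t] == 1 iff task t is assigned to person p (Q2)
            BoolVar[][] assignment = VariableFactory.boolMatrix("assign", nPersons, nTasks, solver);

            // Every task is done by exactly one person.
            for (int t = 0; t < nTasks; t++) {
                BoolVar[] column = new BoolVar[nPersons];
                for (int p = 0; p < nPersons; p++) {
                    column[p] = assignment[p][t];
                }
                solver.post(IntConstraintFactory.sum(column, VariableFactory.fixed(1, solver)));
            }

            // One penalty variable per person (overtime, unwanted tasks, ...);
            // further constraints (not shown) would link them to the assignments (Q1).
            IntVar[] penalties = VariableFactory.boundedArray("penalty", nPersons, 0, 100, solver);

            // The objective is the total penalty.
            IntVar score = VariableFactory.bounded("score", 0, nPersons * 100, solver);
            solver.post(IntConstraintFactory.sum(penalties, score));

            // Q4: a search strategy over the decision variables, as in the question.
            solver.set(IntStrategyFactory.domOverWDeg(ArrayUtils.flatten(assignment), 0));

            // Minimize the total penalty.
            solver.findOptimalSolution(ResolutionPolicy.MINIMIZE, score);
        }
    }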
Hope this helps

Solving with multiple initial solutions in parallel

I just started using OptaPlanner. In local search metaheuristics, it is common to start from multiple initial solutions in the search space and try to improve them in parallel. That way we decrease the risk of getting stuck in a local optimum, and we choose the final solution with the best score.
Is there a similar feature in Optaplanner where I could say, for example, start solving using those 100 initial solutions?
Thanks,
Antoine
Not out of the box, but it's trivial to add (and I have done so in the past). Just start n threads, each using its own Solver. At the end, take the solution of the thread with the overall best score.
To have each Solver try something different, either use environmentMode PRODUCTION (which uses a random randomSeed), or configure alternative solver configurations (with different Tabu Search or Late Acceptance parameters, etc.).
It's not a good idea to take an n higher than your number of CPU cores (or even half of them with some technologies).
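A minimal sketch of that approach, assuming a recent OptaPlanner version where Solver.solve() returns the best solution, a hypothetical MySolution planning solution class with a getScore() accessor, and a solver config resource named mySolverConfig.xml:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    import org.optaplanner.core.api.solver.Solver;
    import org.optaplanner.core.api.solver.SolverFactory;

    public class ParallelSolving {

        public static MySolution solveInParallel(MySolution problem, int n) throws Exception {
            ExecutorService executor = Executors.newFixedThreadPool(n);
            List<Future<MySolution>> futures = new ArrayList<>();
            for (int i = 0; i < n; i++) {
                futures.add(executor.submit(() -> {
                    // Each thread builds and runs its own Solver; don't fix the
                    // random seed, so the solvers explore different trajectories.
                    SolverFactory<MySolution> factory =
                            SolverFactory.createFromXmlResource("mySolverConfig.xml");
                    Solver<MySolution> solver = factory.buildSolver();
                    // Safer still: give each solver its own copy of the problem.
                    return solver.solve(problem);
                }));
            }
            // Keep the solution of the thread with the overall best score.
            MySolution best = null;
            for (Future<MySolution> future : futures) {
                MySolution candidate = future.get();
                if (best == null || candidate.getScore().compareTo(best.getScore()) > 0) {
                    best = candidate;
                }
            }
            executor.shutdown();
            return best;
        }
    }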

MiniSat: how to find all the SAT solutions efficiently

I found a way to find all the solutions using the approach described in this link.
This works fine, but it is slow, as it recalculates the constraints from the start, i.e. it doesn't take advantage of the previous computations.
Now, I saw in this link that there is a more efficient way to find all the solutions using MiniSat as a library, but the method is not described there.
Can you point me to the right documentation for finding all the SAT solutions efficiently?
Thanks.
A more efficient method of finding all SAT solutions is described in the paper "All-SAT using Minimal Blocking Clauses" by Yu, Subramanyan, Tsiskaridze and Malik.
The basic strategy of iteratively finding solutions and adding blocking clauses is the same, but the blocking clauses are generated using a novel idea, which reduces their size. The blocking clauses produced are smaller than the usual naive partial assignments and therefore encompass more satisfying assignments per iteration, speeding the enumeration process.
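For reference, the naive baseline that the paper improves on is easy to write with an incremental SAT library. The sketch below uses Sat4j (a Java SAT solver, not MiniSat itself) and blocks each complete model; the paper's contribution is generating much smaller blocking clauses than these:

    import java.util.Arrays;

    import org.sat4j.core.VecInt;
    import org.sat4j.minisat.SolverFactory;
    import org.sat4j.specs.ContradictionException;
    import org.sat4j.specs.ISolver;

    public class AllSatNaive {

        public static void main(String[] args) throws Exception {
            ISolver solver = SolverFactory.newDefault();
            solver.newVar(3);
            // Toy formula: (x1 or x2) and (not x1 or x3)
            solver.addClause(new VecInt(new int[]{1, 2}));
            solver.addClause(new VecInt(new int[]{-1, 3}));

            // Enumerate models incrementally: after each model, add a blocking
            // clause forbidding exactly that assignment and solve again. The
            // solver keeps its learnt clauses between calls, which is what makes
            // this faster than rebuilding the problem from scratch every time.
            while (solver.isSatisfiable()) {
                int[] model = solver.model();
                System.out.println(Arrays.toString(model));
                int[] blocking = new int[model.length];
                for (int i = 0; i < model.length; i++) {
                    blocking[i] = -model[i]; // negate every literal of the model
                }
                try {
                    solver.addClause(new VecInt(blocking));
                } catch (ContradictionException e) {
                    break; // the new clause is trivially unsatisfiable: all models found
                }
            }
        }
    }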
As far as I know, there is no public implementation of the ideas contained in this paper that you can download and run.
