Repeated Planning : entity always getting assigned to overconstrained object - optaplanner

Trying to implement repeated planning by following the steps below:
Start the solver with the initial input data and let it run for some time.
Terminate the solver; before terminating, save the best solution via scoreDirectorFactory.buildScoreDirector().
Add new customers using solver.addProblemFactChange(AddCustomerProblemFactChange instance).
I followed section 16.5.1 of the OptaPlanner 7.10 guide for this approach.
We use overconstrained planning in our project.
Method to terminate (called via a REST call):

VehicleRoutingSolution sol = solver.getBestSolution();
scoreDirectorFactory = solver.getScoreDirectorFactory();
director = scoreDirectorFactory.buildScoreDirector();
director.setWorkingSolution(sol); // saved best solution set as the working solution

// customerList - list of customers that I want to add after the solver has started
for (Customer customer : customerList) {
    AddCustomerProblemFactChange add = new AddCustomerProblemFactChange(customer);
    solver.addProblemFactChange(add);
}
if (!solver.isSolving()) {
    VehicleRoutingSolution solution = solver.solve(director.getWorkingSolution());
}
public class AddCustomerProblemFactChange implements ProblemFactChange<VehicleRoutingSolution> {
    private Customer customer;

    public AddCustomerProblemFactChange(Customer customer) {
        this.customer = customer;
    }

    public void doChange(ScoreDirector<VehicleRoutingSolution> scoreDirector) {
        VehicleRoutingSolution solution = scoreDirector.getWorkingSolution();
        solution.setCustomerList(new ArrayList<>(solution.getCustomerList()));
        //solution.buildConnectedCustomer(this.distances, this.customer);
    }
}
13:31:01.738 [-8080-exec-7] INFO Solving restarted: time spent (3), best score (-1init/0hard/0medium/-39840000soft), environment mode (REPRODUCIBLE), random (JDK with seed 0).
13:31:01.749 [-8080-exec-7] DEBUG CH step (0), time spent (14), score (0hard/-70medium/-39840000soft), selected move count (11), picked move (TimeWindowedCustomer-123{null -> Vehicle-2}).
The problem with the above approach is that while adding new customers, the newly added entity always gets assigned to the overconstrained object (Vehicle-2). I am not able to understand how OptaPlanner picks the vehicle to assign to the customer in the construction heuristic phase.
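For intuition: OptaPlanner's default construction heuristics (First Fit / First Fit Decreasing) typically evaluate every value in the value range for the selected entity and keep the first move with the best score, so when several vehicles score identically, the earliest one in the (strength-sorted) value range wins every time. A toy plain-Java simulation of that tie-breaking (all names hypothetical, not OptaPlanner API):

```java
import java.util.List;

// Toy simulation of one first-fit construction heuristic step: every candidate
// value is scored and the FIRST value with the strictly best score wins, so on
// a tie the earliest value in the range is picked, over and over again.
public class FirstFitDemo {

    // Hypothetical score: 0 when the vehicle still has capacity, a big
    // penalty when it is full (stand-in for an overconstrained penalty).
    static int cost(int vehicleCapacityLeft) {
        return vehicleCapacityLeft > 0 ? 0 : 100;
    }

    static int pickVehicle(List<Integer> capacitiesLeft) {
        int bestIndex = -1;
        int bestCost = Integer.MAX_VALUE;
        for (int i = 0; i < capacitiesLeft.size(); i++) {
            int c = cost(capacitiesLeft.get(i));
            if (c < bestCost) { // strictly better only: ties keep the earlier vehicle
                bestCost = c;
                bestIndex = i;
            }
        }
        return bestIndex;
    }

    public static void main(String[] args) {
        // All vehicles score the same -> the first one in the value range wins.
        System.out.println(pickVehicle(List.of(1, 1, 1))); // 0
        // First two are full -> the first non-full vehicle wins.
        System.out.println(pickVehicle(List.of(0, 0, 1))); // 2
    }
}
```

If every real vehicle is penalized equally by the overconstrained rules while the dummy/overconstrained value is not, this tie-breaking would explain why each new customer lands on the same vehicle.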


OptaPlanner - The entity was never added to this ScoreDirector error

I am implementing an algorithm similar to the NurseRoster one in OptaPlanner. I need to implement a rule in Drools that checks that an Employee cannot work more days than the number of days in his contract. Since I couldn't figure out how to write this in Drools, I decided to write it as a method in a class and then use that method in Drools to check if the constraint has been broken. Since I needed a List of ShiftAssignments in the Employee class, I had to use an @InverseRelationShadowVariable that updates that list automatically when an Employee gets assigned to a Shift. Since my Employee now has to be a @PlanningEntity, the error "The entity was never added to this ScoreDirector" appeared. I believe the error is caused by my ShiftAssignment entity, which has a @ValueRangeProvider of employees that can work in that Shift. I think this is because ScoreDirector.beforeEntityAdded and ScoreDirector.afterEntityAdded were never called, hence the error. For some reason, when I removed that range provider from ShiftAssignment and put it on NurseRoster, which is the @PlanningSolution, it worked.
Here is the code:
@InverseRelationShadowVariable(sourceVariableName = "employee")
public List<ShiftAssignment> getEmployeeAssignedToShiftAssignments() {
    return employeeAssignedToShiftAssignments;
}

@PlanningVariable(valueRangeProviderRefs = {
        "employeeRange" }, strengthComparatorClass = EmployeeStrengthComparator.class, nullable = true)
public Employee getEmployee() {
    return employee;
}

// the value range for this planning entity (on ShiftAssignment, which caused the error)
@ValueRangeProvider(id = "employeeRange")
public List<Employee> getPossibleEmployees() {
    return getShift().getEmployeesThatCanWorkThisShift();
}

// the value range on NurseRoster, the @PlanningSolution (the variant that worked)
@ValueRangeProvider(id = "employeeRange")
public List<Employee> getEmployeeList() {
    return employeeList;
}
And this is the method I use to update that list of employees that can work a given shift:
public static void checkIfAnEmployeeCanBelongInGivenShiftAssignmentValueRange(NurseRoster nurseRoster) {
    List<Shift> shiftList = nurseRoster.getShiftList();
    List<Employee> employeeList = nurseRoster.getEmployeeList();
    for (Shift shift : shiftList) {
        List<Employee> employeesThatCanWorkThisShift = new ArrayList<>();
        String shiftDate = shift.getShiftDate().getDateString();
        ShiftTypeDefinition shiftTypeDefinitionForShift = shift.getShiftType().getShiftTypeDefinition();
        for (Employee employee : employeeList) {
            AgentDailySettings agentDailySetting = SearchThroughSolution.findAgentDailySetting(employee, shiftDate);
            List<ShiftTypeDefinition> shiftTypeDefinitions = agentDailySetting.getShiftTypeDefinitions();
            if (shiftTypeDefinitions.contains(shiftTypeDefinitionForShift)) {
                employeesThatCanWorkThisShift.add(employee);
            }
        }
        shift.setEmployeesThatCanWorkThisShift(employeesThatCanWorkThisShift);
    }
}
And the rule that I use:

rule "maxDaysInPeriod"
    when
        $shiftAssignment : ShiftAssignment(employee != null)
    then
        int differentDaysInPeriod = MethodsUsedInScoreCalculation.employeeMaxDaysPerPeriod($shiftAssignment.getEmployee());
        int maxDaysInPeriod = $shiftAssignment.getEmployee().getAgentPeriodSettings().getMaxDaysInPeriod();
        if (differentDaysInPeriod > maxDaysInPeriod) {
            scoreHolder.addHardConstraintMatch(kcontext, differentDaysInPeriod - maxDaysInPeriod);
        }
end

How can I fix this error?
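For completeness, the distinct-day counting that a helper like MethodsUsedInScoreCalculation presumably performs can be sketched in plain Java (names and inputs are hypothetical, not taken from the question's codebase):

```java
import java.util.HashSet;
import java.util.List;

// Sketch of the "max days per period" check: count the DISTINCT days an
// employee is assigned to and compare against the contract maximum; the
// overshoot becomes the hard-constraint penalty weight.
public class MaxDaysCheck {

    // Two shift assignments on the same calendar day count as one worked day.
    static int distinctDaysWorked(List<String> assignedShiftDates) {
        return new HashSet<>(assignedShiftDates).size();
    }

    // How many days over the contract limit the employee is (0 = no violation).
    static int overshoot(List<String> assignedShiftDates, int maxDaysInPeriod) {
        return Math.max(0, distinctDaysWorked(assignedShiftDates) - maxDaysInPeriod);
    }

    public static void main(String[] args) {
        List<String> dates = List.of("2024-01-01", "2024-01-01", "2024-01-02", "2024-01-03");
        System.out.println(distinctDaysWorked(dates)); // 3
        System.out.println(overshoot(dates, 2));       // 1 -> hard constraint broken by 1
    }
}
```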
This definitely has something to do with the solution cloning that happens when a "new best solution" is created.
I encountered the same error when I implemented custom solution cloning. In my project I have multiple planning entity classes and all of them have references to each other (either a single value or a List). So when solution cloning happens, the references need to be updated so they point to the cloned values. This is something that the default cloning process does without a problem, thus leaving the solution in a consistent state. It even updates the Lists of planning entity instances in the parent planning entities correctly (covered by the method "cloneCollectionsElementIfNeeded" from the class "FieldAccessingSolutionCloner" in the OptaPlanner core).
Just a demonstration of what I have when it comes to the planning entity classes:

public class ParentPlanningEntityClass {
    List<ChildPlanningEntityClass> childPlanningEntityClassList;
}

public class ChildPlanningEntityClass {
    ParentPlanningEntityClass parentPlanningEntityClass;
}
At first I did not update any of the references and got the error even for "ChildPlanningEntityClass". Then I wrote the code that updates the references. The planning entity instances coming from the class "ChildPlanningEntityClass" were fine at that point, because they were pointing to the cloned object. What I did wrong in the "ParentPlanningEntityClass" case was that I did not create the "childPlanningEntityClassList" list from scratch with "new ArrayList();"; instead I just updated the elements of the existing list (using the "set" method) to point at the cloned instances of the "ChildPlanningEntityClass" class. When I created a "new ArrayList();", filled its elements to point to the cloned objects and set it as the "childPlanningEntityClassList" list, everything was consistent (tested with FULL_ASSERT).
So, connecting this to your issue: maybe the list "employeeAssignedToShiftAssignments" is not created from scratch with "new ArrayList();" and elements instead just get added to or removed from the existing list. What could happen here (if the list is not created from scratch) is that both the working solution and the new best solution (the clone) point to the same list, and when the working solution continues to change this list, it corrupts the best solution.
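The aliasing pitfall described above can be reproduced with plain Java, without any OptaPlanner types: if the "clone" reuses the original List, later mutations of the working solution silently corrupt the best solution.

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java illustration of the cloning pitfall: a "planning clone" that
// reuses the original List is silently corrupted when the working solution
// keeps mutating that list after the clone was taken.
public class CloneAliasingDemo {

    static class Solution {
        List<String> assignments = new ArrayList<>();

        Solution sharedListClone() {   // WRONG: clone aliases the same list
            Solution clone = new Solution();
            clone.assignments = this.assignments;
            return clone;
        }

        Solution deepListClone() {     // RIGHT: list rebuilt from scratch
            Solution clone = new Solution();
            clone.assignments = new ArrayList<>(this.assignments);
            return clone;
        }
    }

    public static void main(String[] args) {
        Solution working = new Solution();
        working.assignments.add("shift-1 -> Anna");

        Solution badBest = working.sharedListClone();
        Solution goodBest = working.deepListClone();

        working.assignments.add("shift-2 -> Ben"); // solver keeps working...

        System.out.println(badBest.assignments.size());  // 2 -> "best solution" corrupted
        System.out.println(goodBest.assignments.size()); // 1 -> snapshot preserved
    }
}
```

This is exactly the kind of inconsistency FULL_ASSERT catches between the working solution and the best-solution snapshot.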

Using a #PlanningId notation on an Instant #PlanningVariable

I am currently working on setting up a Gantt-based planning problem, where a user can choose which tasks they want to plan, and OptaPlanner does it for them.
I use incremental Java score calculation, not the Drools engine.
My issue is that OptaPlanner won't take an Instant as a planning variable, as it isn't able to find a PlanningId for it.
I've been stuck on getting OptaPlanner to use multiple threads.
My current model seems to be flawed, or I am not understanding how to use OptaPlanner properly.
I tried masking the Instant behind another class, but it still did not help.
My model uses only one PlanningEntity, which is a task.
Here's a simplified version of my @PlanningEntity:

@PlanningEntity(difficultyComparatorClass = TaskDifficultyComparator.class)
public class Task extends AbstractTask {
    private Machine machine;
    private Instant start;
    private Integer id;

    @PlanningVariable(valueRangeProviderRefs = {"machineRange"}, nullable = true, strengthComparatorClass = MachineStrengthComparator.class)
    public Machine getMachine() {
        return machine;
    }

    @PlanningVariable(valueRangeProviderRefs = {"timeRange"}, nullable = true, strengthComparatorClass = StartStengthComparator.class)
    public Instant getStart() {
        return start;
    }
}
In my config, I have this added to the solver tag:
This gives me an exception:
Exception in thread "Thread-6" java.lang.IllegalStateException: The move thread with moveThreadIndex (0) has thrown an exception. Relayed here in the parent thread.
at org.optaplanner.core.impl.heuristic.thread.OrderByMoveIndexBlockingQueue.take(
at org.optaplanner.core.impl.localsearch.decider.MultiThreadedLocalSearchDecider.forageResult(
at org.optaplanner.core.impl.localsearch.decider.MultiThreadedLocalSearchDecider.decideNextStep(
at org.optaplanner.core.impl.localsearch.DefaultLocalSearchPhase.solve(
at org.optaplanner.core.impl.solver.AbstractSolver.runPhases(
Caused by: java.lang.IllegalArgumentException: The externalObject (2019-04-16T20:31:17.162Z) cannot be looked up.
Maybe give the class (class java.time.Instant) a PlanningId annotation or change the PlanningSolution annotation's LookUpStrategyType or don't rely on functionality that depends on ScoreDirector.lookUpWorkingObject().
at org.optaplanner.core.impl.domain.lookup.NoneLookUpStrategy.lookUpWorkingObject(
at org.optaplanner.core.impl.domain.lookup.LookUpManager.lookUpWorkingObject(
I expected OptaPlanner to use the tasks' ID, but it seems like it wants an id on each of the PlanningVariables. I am able to add an ID on the Machine, but not on the Instant.
A java.time.Instant is immutable, so any lookup can just return the same object instance. Just like Integer, Double, LocalDate, etc., there is no need for a @PlanningId to begin with. This exposes 3 issues in OptaPlanner:
1. OptaPlanner's built-in handling of immutable classes must also include Instant. I've fixed this issue in this PR for 7.20.
2. It should be possible to configure extra immutable classes.
3. It should be possible to configure @PlanningId externally on 3rd-party classes.
Please create a jira for 2. and 3. on project PLANNER.
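A quick stdlib illustration of why an immutable value like Instant needs no identity lookup: every "mutator" returns a new instance, so the original reference can never drift out of sync with the clone.

```java
import java.time.Instant;

// Why an immutable planning value needs no identity lookup: java.time.Instant
// can never change after creation, so returning the very same reference from a
// lookup is always safe.
public class InstantImmutabilityDemo {
    public static void main(String[] args) {
        Instant t = Instant.parse("2019-04-16T20:31:17.162Z");

        // "Mutators" return NEW instances; the original is untouched.
        Instant later = t.plusSeconds(60);
        System.out.println(t);               // 2019-04-16T20:31:17.162Z
        System.out.println(later.equals(t)); // false
        System.out.println(t.equals(Instant.parse("2019-04-16T20:31:17.162Z"))); // true
    }
}
```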

Optaplanner : View intermediate score

Is there a way keep track of the score from time to time while the solver is running?
I currently instantiate my solver as follows:
SolverFactory solverFactory = SolverFactory.createFromXmlResource("solver/SolverConfig.xml");
Solver solver = solverFactory.buildSolver();
solver.addEventListener(new SolverEventListener() {
    public void bestSolutionChanged(BestSolutionChangedEvent event) {
        logger.info("New best score : " + event.getNewBestScore().toShortString());
    }
});
This way I am able to see the logs every time the best score changes.
However, I would like to view the score after every 100 steps or after every 10 seconds. Is that possible?
If you turn on DEBUG (or TRACE) logging, you'll see it.
If you want to listen to it in java, that's not supported in the public API, but there's PhaseLifecycleListener in the internal implementation that has no backward compatibility guarantees...
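As an alternative that stays on the public API, you can poll the best score from a separate thread on a fixed schedule. A minimal stdlib sketch of that polling pattern (the Supplier here is a stand-in for something like solver.getBestSolution().getScore().toShortString(); the class name is hypothetical):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Stdlib sketch of time-based score polling: a scheduled task samples "the
// current best score" every N seconds while the solver runs on another thread.
public class ScorePoller {

    public static ScheduledExecutorService startPolling(
            Supplier<String> bestScoreSupplier, long periodSeconds) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("Current best score: " + bestScoreSupplier.get()),
                0, periodSeconds, TimeUnit.SECONDS);
        return scheduler; // call shutdown() when solving ends
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicReference<String> score = new AtomicReference<>("0hard/-100soft");
        ScheduledExecutorService scheduler = startPolling(score::get, 1);
        score.set("0hard/-42soft"); // the solver improves the score meanwhile
        Thread.sleep(1500);
        scheduler.shutdown();
    }
}
```

Polling every 10 seconds is straightforward this way; "every 100 steps" really does require the internal PhaseLifecycleListener mentioned above.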

Spring batch: alternative to JpaPagingItemReader which causes ORA-01555

I have a batch job with two steps:
Step 1: Go to an external database, call a stored proc, compose JPA entities and persist them into the internal database with the flag NOT_PROCESSED.
Step 2: Loop through the just-saved entities with flag NOT_PROCESSED, process them and write each updated entity back (this doesn't update the flag).
Once all of them are processed, the flag for all of them is updated to PROCESSED, i.e. update all or nothing.
Step 1 is ok and works pretty smoothly.
Step 2 is basically a JpaPagingItemReader with pageSize=4, a set of processors (mostly HTTP calls) and a JpaItemWriter with commit-interval=1. (I know it is recommended to have pageSize equal to the commit-interval; it's just what I have.) It is also a multithreaded step with 10 threads doing the job.
That said, on step 2 I have two kinds of queries:
Read: select * from ENTITY where processed = false order by id, nested into two queries for paging: select ... from (select ... where rownum < M) where rownum >= N
Write: update ENTITY set ... where id = ID
For some reason, when I have enough entities I get the infamous:
Ora-01555, snapshot too old: rollback segment with name "" too small
I don't know the exact reason for that error (the undo stats don't show anything bad, so hopefully the DBAs will find the culprit soon), but in the meantime I think what the read query does is terribly bad. Such paging queries are hard on a database anyway, and I guess that reading and simultaneously updating the very entries you read may cause that kind of error.
I would like to change the approach taken in step 2. Instead of reading in pages, I would like to read all the ids into memory only once (i.e. give me the ids of all entities I need to process) and then hand each thread an id from that list. The first processor in the chain will then load the entity by its id through JPA. That way I continue to update and write entities one by one, while reading the ids I need only once.
My problem is that I couldn't find an out-of-the-box solution for such a reader. Is there anything I can use for that?
Well, I implemented the solution myself; it is based on this and this. In fact I didn't use those directly, but my implementation is quite close.
Basically, this is how it looks (I don't have the code at hand, so this is from memory):
public class MyUnprocessedIdReader extends AbstractItemCountingItemStreamItemReader<Long> {
    private final Object lock = new Object();
    private boolean initialized = false;
    private final MyObjectsRepository repo;
    private List<Long> ids;
    private int current = -1;

    public MyUnprocessedIdReader(MyObjectsRepository repo) {
        this.repo = repo;
    }

    @Override
    public void doOpen() {
        synchronized (lock) {
            Assert.state(!initialized, "Cannot open an already opened ItemReader, call close first");
            this.initialized = true;
            this.ids = ImmutableList.copyOf(repo.findAllUnprocessed());
        }
    }

    @Override
    public Long doRead() {
        synchronized (lock) {
            if (ids == null || !initialized) {
                throw new IllegalStateException("Have you opened the reader?");
            }
            if (++current < ids.size()) { // advance first, then hand out the id
                return ids.get(current);
            } else {
                return null; // no more ids -> end of input
            }
        }
    }

    @Override
    public void doClose() {
        synchronized (lock) {
            this.initialized = false;
            this.current = -1;
            this.ids = null;
        }
    }
}
My repository uses JPA, so under the hood it runs something like entityManager.createQuery("select obj.id from Objects obj where obj.processed = false order by obj.id asc", Long.class).getResultList().
Also, I have added one more processor to the chain:

public class LoadProcessor implements ItemProcessor<Long, MyObject> {
    private final MyObjectsRepository repo;

    public LoadProcessor(MyObjectsRepository repo) {
        this.repo = repo;
    }

    @Override
    public MyObject process(Long id) {
        return repo.findById(id);
    }
}
Someone may say it is less scalable than using a cursor, and there is contention on the read method; however, it is a very simple solution which does its job well as long as the number of unprocessed ids is not too huge. Also, the processing threads spend a lot of time calling external REST services, so the contention on read won't ever be the bottleneck.
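For reference, the same "load all ids once, let each thread pull the next one" pattern can also be written lock-free. A plain-Java sketch without the Spring Batch types (class and method names hypothetical):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Lock-free variant of the reader above: the id list is snapshotted once,
// then each worker thread claims the next index atomically; null signals
// "no more work", just like doRead() returning null in Spring Batch.
public class IdHandout {
    private final List<Long> ids;
    private final AtomicInteger next = new AtomicInteger(0);

    public IdHandout(List<Long> unprocessedIds) {
        this.ids = List.copyOf(unprocessedIds); // immutable snapshot
    }

    /** Returns the next unclaimed id, or null when exhausted. Thread-safe. */
    public Long take() {
        int i = next.getAndIncrement();
        return i < ids.size() ? ids.get(i) : null;
    }

    public static void main(String[] args) {
        IdHandout handout = new IdHandout(List.of(11L, 22L, 33L));
        Long id;
        while ((id = handout.take()) != null) {
            System.out.println("processing id " + id);
        }
    }
}
```

AtomicInteger.getAndIncrement() removes the synchronized block entirely, which matters only if the read contention ever did become a bottleneck.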
P.s. later I will give an update on whether it solved the issue with ORA-01555 or not.

Score corruption when using computed values to calculate score

I have a use case where:
A job can be of one of many types, say A, B and C.
A tool can be configured to one of the types: A, B or C.
A job can be assigned to a tool. The end time of the job depends on the currently configured type of the tool. If the tool's currently configured type differs from the type of the job, then time needs to be added to change the tool configuration.
My @PlanningEntity is Allocation, with startTime and tool as @PlanningVariables. I tried to add the currentConfiguredToolType to the Allocation as a @CustomShadowVariable and update the toolType in the shadow listener's afterVariableChanged() method, so that I have the correct toolType for the next job assigned to the tool. However, it is giving me inconsistent results.
[EDIT]: I did some debugging to see if the toolType is set correctly. I found that the toolType is being set correctly in afterVariableChanged() method. However, when I looked at the next job assigned to the tool, I see that the toolType has not changed. Is it because of multiple threads executing this flow? One thread changing the toolType of the tool the first time and then a second thread simultaneously assigning the times the second time without taking into account the changes done by the first thread.
[EDIT]: I was using 6.3.0 Final earlier (till yesterday). I switched to 6.5.0 Final today. There too I am seeing similar results, where the toolType seems to be set properly in afterVariableChanged() method, but is not taken into account for the next allocation on that tool.
[EDIT]: Domain code looks something like below:

public class Allocation {
    private Job job;

    // planning variables
    private LocalDateTime startTime;
    private Tool tool;

    // shadow variable
    private ToolType toolType;

    private LocalDateTime endTime;

    @PlanningVariable(valueRangeProviderRefs = TOOL_RANGE)
    public Tool getTool() {
        return this.tool;
    }

    @PlanningVariable(valueRangeProviderRefs = START_TIME_RANGE)
    public LocalDateTime getStartTime() {
        return this.startTime;
    }

    @CustomShadowVariable(variableListenerClass = ToolTypeVariableListener.class,
            sources = {@CustomShadowVariable.Source(variableName = "tool")})
    public ToolType getCurrentToolType() {
        return this.toolType;
    }

    private void setToolType(ToolType type) {
        this.toolType = type;
    }

    private void setStartTime(LocalDateTime startTime) {
        this.startTime = startTime;
        this.endTime = getTimeTakenForJob() + getTypeChangeTime();
    }

    private LocalDateTime getTypeChangeTime() {
        // typeChangeTimeMap is available and is populated with data
        return typeChangeTimeMap.get(tool.getType());
    }
}
public class Tool {
    private ToolType toolType;

    // getter and setter for this:
    public void setToolType() { ... }
    public ToolType getToolType() { ... }
}
public class ToolTypeVariableListener implements VariableListener<Allocation> {
    public void afterVariableChanged(ScoreDirector scoreDirector, Allocation entity) {
        if (entity.getTool() != null && entity.getStartTime() != null) {
            scoreDirector.afterVariableChanged(entity, "currentToolType");
        }
    }
}
[EDIT]: When I did some debugging, it looks like the toolType set in the machine for one allocation is used in calculating the type change time for an allocation belonging to a different evaluation set. Not sure how to avoid this.
If this is indeed the case, what is a good way to model problems like this, where the state of an item affects the time taken? Or am I totally off? I guess I am totally lost here.
[EDIT]: This is not an issue with how OptaPlanner is invoked, but score corruption when the rule that penalizes based on endTime is added. More details in comments.
[EDIT]: I commented out the rules one by one and saw that the score corruption occurs only when the score depends on the computed values endTime and toolTypeChange. It is fine when the score depends on startTime, which is a planning variable alone. However, that does not give me the best results; it gives me a solution with a negative hard score, which means it violated the rule of not assigning the same tool to different jobs at the same time.
Can computed values not be used for score calculations?
Any help or pointer is greatly appreciated.
The ToolTypeVariableListener seems to lack calls to the before/after methods (scoreDirector.beforeVariableChanged(...) and scoreDirector.afterVariableChanged(...) surrounding the actual shadow variable change), which can cause score corruption. Turn on FULL_ASSERT to verify.
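The corruption mechanism is easy to reproduce outside OptaPlanner: cache a computed value, change one of its inputs without re-syncing the cache, and an incremental calculation diverges from a from-scratch recalculation, which is exactly the mismatch FULL_ASSERT reports. A plain-Java sketch (all names hypothetical):

```java
// Plain-Java illustration of why a stale computed ("shadow") value corrupts a
// score: the cached end time must be recomputed every time its inputs change,
// or an incremental score diverges from a from-scratch recalculation.
public class StaleShadowDemo {

    static class Allocation {
        int startTime;
        int duration;
        int cachedEndTime; // "shadow" value, must be kept in sync

        void setStartTime(int startTime, boolean resyncShadow) {
            this.startTime = startTime;
            if (resyncShadow) {
                cachedEndTime = startTime + duration; // listener fired correctly
            }
            // else: the variable changed but the shadow was left stale
        }
    }

    // Incremental score reads the cache; the assertion score recomputes from scratch.
    static int scoreFromCache(Allocation a)   { return -a.cachedEndTime; }
    static int scoreFromScratch(Allocation a) { return -(a.startTime + a.duration); }

    public static void main(String[] args) {
        Allocation a = new Allocation();
        a.duration = 5;

        a.setStartTime(10, true);
        System.out.println(scoreFromCache(a) == scoreFromScratch(a)); // true

        a.setStartTime(20, false); // shadow left stale
        System.out.println(scoreFromCache(a) == scoreFromScratch(a)); // false -> "corruption"
    }
}
```

So yes, computed values can be used for score calculation, but only if every change to their inputs goes through the proper before/after notifications so the shadow value is re-synced.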