Using a @PlanningId annotation on an Instant @PlanningVariable - optaplanner

I am currently working on setting up a Gantt-based planning problem, where a user can choose which tasks they want to plan, and OptaPlanner would schedule them.
I use incremental Java score calculation, not the Drools engine.
My issue is that OptaPlanner won't take an Instant as a planning variable, as it isn't able to find a PlanningId for it.
I've been stuck on getting OptaPlanner to use multiple threads.
My current model seems to be flawed, or I am not understanding how to use OptaPlanner properly.
I tried masking the Instant behind another class, but it still did not help.
My model uses only one PlanningEntity, which is a task.
Here's a simplified version of my @PlanningEntity:
@PlanningEntity(difficultyComparatorClass = TaskDifficultyComparator.class)
public class Task extends AbstractTask {

    private Machine machine;
    private Instant start;
    private Integer id;

    @PlanningVariable(valueRangeProviderRefs = {"machineRange"}, nullable = true,
            strengthComparatorClass = MachineStrengthComparator.class)
    public Machine getMachine() {
        return machine;
    }

    @PlanningVariable(valueRangeProviderRefs = {"timeRange"}, nullable = true,
            strengthComparatorClass = StartStrengthComparator.class)
    public Instant getStart() {
        return start;
    }
}
In my config, I have this added to the solver tag:
This gives me an exception:
Exception in thread "Thread-6" java.lang.IllegalStateException: The move thread with moveThreadIndex (0) has thrown an exception. Relayed here in the parent thread.
at org.optaplanner.core.impl.heuristic.thread.OrderByMoveIndexBlockingQueue.take(
at org.optaplanner.core.impl.localsearch.decider.MultiThreadedLocalSearchDecider.forageResult(
at org.optaplanner.core.impl.localsearch.decider.MultiThreadedLocalSearchDecider.decideNextStep(
at org.optaplanner.core.impl.localsearch.DefaultLocalSearchPhase.solve(
at org.optaplanner.core.impl.solver.AbstractSolver.runPhases(
Caused by: java.lang.IllegalArgumentException: The externalObject (2019-04-16T20:31:17.162Z) cannot be looked up.
Maybe give the class (class java.time.Instant) a PlanningId annotation or change the PlanningSolution annotation's LookUpStrategyType or don't rely on functionality that depends on ScoreDirector.lookUpWorkingObject().
at org.optaplanner.core.impl.domain.lookup.NoneLookUpStrategy.lookUpWorkingObject(
at org.optaplanner.core.impl.domain.lookup.LookUpManager.lookUpWorkingObject(
I expected OptaPlanner to use the tasks' ID, but it seems it wants an ID on the class of each of the PlanningVariables. I am able to add an ID on the Machine, but not on the Instant.

A java.time.Instant is immutable, so any lookup can just return the same object instance. Just like Integer, Double, LocalDate, etc., there is no need for a @PlanningId to begin with. This exposes 3 issues in OptaPlanner:
1. OptaPlanner's built-in set of immutable classes must also include Instant. I've fixed this issue in this PR for 7.20.
2. It should be possible to configure extra immutable classes.
3. It should be possible to configure @PlanningId externally on 3rd party classes.
Please create a jira for 2. and 3. on project PLANNER.
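As a plain-Java illustration of the point about immutability (this is ordinary java.time code, not OptaPlanner API, and the lookup helper is hypothetical): equal Instants are fully interchangeable, so a lookup for such a class can simply hand back the external object itself.

```java
import java.time.Instant;

// Illustration only: why an immutable value type needs no @PlanningId.
public class InstantImmutabilityDemo {

    // A lookup for an immutable class can trivially return its argument,
    // because no working copy can ever diverge from it. (Hypothetical
    // helper, not OptaPlanner's ScoreDirector.lookUpWorkingObject.)
    static Instant lookUpWorkingObject(Instant externalObject) {
        return externalObject;
    }

    public static void main(String[] args) {
        Instant a = Instant.parse("2019-04-16T20:31:17.162Z");
        Instant b = Instant.parse("2019-04-16T20:31:17.162Z");
        // Independently parsed Instants are equal by value...
        System.out.println(a.equals(b)); // true
        // ...and arithmetic returns a new object instead of mutating.
        System.out.println(a.equals(a.plusSeconds(60))); // false
        // Identity lookup is therefore trivially correct.
        System.out.println(lookUpWorkingObject(a) == a); // true
    }
}
```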


How to model immutable entities with static factory method

Hello there, I have a question concerning the right way of modelling immutable entities:
Consider this entity (edited as of the suggestion by Jens Schauder):
@RequiredArgsConstructor(staticName = "of", access = AccessLevel.PACKAGE)
public final class Student {

    @Id @Wither
    private final long studentId;

    @Size(min = 4, max = 20)
    private final String userId;

    private final int matriculationNumber;
    private final String eMail;
}
So this entity should be immutable and offer a static of() creation method. However, the @RequiredArgsConstructor builds a private constructor, although per its definition it should create a package-visible one for all final/non-null fields. Since every field here is final, it is effectively an @AllArgsConstructor, so to speak.
The Spring Data JDBC documentation, in the section about "Object creation internals", states 4 aspects for improved handling, among them that "the constructor to be used by Spring Data must not be private", which in my opinion are fulfilled here.
So my question:
Is this entity modelled correctly, both with regard to immutability and for optimal mapping by the Spring Data JDBC internals?
There seems to be a bug in the Lombok plugin for IntelliJ that prevents access = AccessLevel.PACKAGE from doing the right thing. See here:
Although the issue is already closed, a new version of the plugin is not available yet...
This depends on your definition of "optimum mapping".
It should work, so this is already something.
But the optimization described in the docs cannot be applied, because your constructor is private.
Therefore you lose the roughly 10% performance boost of that optimization, which probably means the mapping is not "optimal".
But the 10% boost is about the object instantiation.
It is not about the roundtrip to the database which involves:
extraction of data from your entities
construction (or lookup) of SQL to use
sending both to the database
performing the query in the database
returning the result
This makes it very likely that the gain from that optimization is well below 10% and in most cases nothing to worry about.
Of course, you will never really know until you run your own benchmarks with real data.
For this, you would need to create an all args constructor which has at least package scope.
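For concreteness, here is a hand-written sketch of roughly what the Lombok annotations above expand to, with a package-visible all-args constructor. This is an illustration under assumptions, not Lombok's exact output: validation and persistence annotations are omitted, and only the id field gets a wither and a getter.

```java
// Hand-written equivalent of the Lombok-annotated Student: a package-visible
// all-args constructor, a static of(...) factory, and a "wither" that copies
// the instance with a new id instead of mutating it.
public final class Student {

    private final long studentId;
    private final String userId;
    private final int matriculationNumber;
    private final String eMail;

    // Package scope, so Spring Data could use it without it being public.
    Student(long studentId, String userId, int matriculationNumber, String eMail) {
        this.studentId = studentId;
        this.userId = userId;
        this.matriculationNumber = matriculationNumber;
        this.eMail = eMail;
    }

    public static Student of(long studentId, String userId,
                             int matriculationNumber, String eMail) {
        return new Student(studentId, userId, matriculationNumber, eMail);
    }

    // Immutability: "setting" the generated id yields a new object.
    public Student withStudentId(long studentId) {
        return new Student(studentId, userId, matriculationNumber, eMail);
    }

    public long getStudentId() {
        return studentId;
    }
}
```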

Versioning of rows in an Oracle database in an application on Spring + JPA?

There is an application on Spring + JPA + Oracle.
It needs to store the history of rows and their versions.
1) Via REST I need to get the last n rows with their version number.
2) There are no delete or update operations - only write\readFresh\getAllFromHistory.
Which way to implement this is the best?
It really depends, but the thing I'd suggest is to utilize one or more of the features/annotations such as:
@Version for the auto-incrementing feature/property, to know how many times a version has changed,
@CreationTimestamp for a property signifying the date an object/entry was made,
@UpdateTimestamp for a property signifying the date when an entry was last updated.
A good way to implement versioning for all your entities is via utilization of @MappedSuperclass in an AbstractJpa class, i.e.:
@MappedSuperclass
public abstract class AbstractJpa {

    // some code

    private Long version;

    @Column(name = "version_number")
    public Long getVersion() {
        return version;
    }

    public void setVersion(Long version) {
        this.version = version;
    }
}
Then you inherit/extend it in your other entities, i.e.
@Entity
@Table(name = "my_entity")
public class MyEntity extends AbstractJpa {

    // some code, properties, getters/setters ...
}
If you intend to save all previous entries, then it's logical to conclude that you're generally going to have some sort of "soft delete" implementation as well. I.e. you intend to have some is_deleted boolean value in your database, and your @MappedSuperclass may also have such a column/property defined. At the same time, you should utilize an insert for each intended "update", i.e. you're going to insert new rows (persist(myObject)) each time you issue an update, instead of performing a merge(myObject).
It all depends on your specific scenario and use case, but these would be some general things to look for on the internet, I hope that helps.
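To make the append-only idea concrete, here is a minimal in-memory sketch in plain Java (no JPA; the class and method names mirror the question's write/readFresh/getAllFromHistory operations, everything else is assumption): every "update" inserts a new row with an incremented version, readFresh returns the latest one, and getAllFromHistory returns everything.

```java
import java.util.*;

// In-memory sketch of the append-only versioning pattern: rows are only
// ever inserted, never updated or deleted.
public class HistoryStore {

    public record Row(long entityId, long version, String payload) {}

    private final Map<Long, List<Row>> rows = new HashMap<>();

    // Each write appends a new row with the next version number.
    public void write(long entityId, String payload) {
        List<Row> history = rows.computeIfAbsent(entityId, k -> new ArrayList<>());
        long nextVersion = history.size() + 1;
        history.add(new Row(entityId, nextVersion, payload));
    }

    // The freshest state is simply the row with the highest version.
    public Row readFresh(long entityId) {
        List<Row> history = rows.getOrDefault(entityId, List.of());
        return history.isEmpty() ? null : history.get(history.size() - 1);
    }

    public List<Row> getAllFromHistory(long entityId) {
        return List.copyOf(rows.getOrDefault(entityId, List.of()));
    }
}
```

In a real database the version column would be maintained by @Version or a sequence, and readFresh becomes a query for the max version per entity id.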

Score corruption when using computed values to calculate score

I have a use case where:
A job can be of many types, say A, B and C.
A tool can be configured to be a type: A, B and C
A job can be assigned to a tool. The end time of the job depends on the current configured type of the tool. If the tool's current configured type is different from the type of the job, then time needs to be added to change the current tool configuration.
My @PlanningEntity is Allocation, with startTime and tool as @PlanningVariables. I tried to add the currentConfiguredToolType in the Allocation as a @CustomShadowVariable and update the toolType in the shadow listener's afterVariableChanged() method, so that I have the correct toolType for the next job assigned to the tool. However, it is giving me inconsistent results.
[EDIT]: I did some debugging to see if the toolType is set correctly. I found that the toolType is being set correctly in afterVariableChanged() method. However, when I looked at the next job assigned to the tool, I see that the toolType has not changed. Is it because of multiple threads executing this flow? One thread changing the toolType of the tool the first time and then a second thread simultaneously assigning the times the second time without taking into account the changes done by the first thread.
[EDIT]: I was using 6.3.0 Final earlier (till yesterday). I switched to 6.5.0 Final today. There too I am seeing similar results, where the toolType seems to be set properly in afterVariableChanged() method, but is not taken into account for the next allocation on that tool.
[EDIT]: Domain code looks something like below:
public class Allocation {

    private Job job;

    // planning variables
    private LocalDateTime startTime;
    private Tool tool;

    // shadow variable
    private ToolType toolType;

    private LocalDateTime endTime;

    @PlanningVariable(valueRangeProviderRefs = TOOL_RANGE)
    public Tool getTool() {
        return this.tool;
    }

    @PlanningVariable(valueRangeProviderRefs = START_TIME_RANGE)
    public LocalDateTime getStartTime() {
        return this.startTime;
    }

    @CustomShadowVariable(variableListenerClass = ToolTypeVariableListener.class,
            sources = {@CustomShadowVariable.Source(variableName = "tool")})
    public ToolType getCurrentToolType() {
        return this.toolType;
    }

    private void setToolType(ToolType type) {
        this.toolType = type;
    }

    private void setStartTime(LocalDateTime startTime) {
        this.startTime = startTime;
        this.endTime = getTimeTakenForJob() + getTypeChangeTime();
    }

    private LocalDateTime getTypeChangeTime() {
        // typeChangeTimeMap is available and is populated with data
        return typeChangeTimeMap.get(tool.getType());
    }
}
public class Tool {

    private ToolType toolType;

    // getter and setter for this:
    public void setToolType(ToolType toolType) { ... }
    public ToolType getToolType() { ... }
}
public class ToolTypeVariableListener implements VariableListener<Allocation> {

    public void afterVariableChanged(ScoreDirector scoreDirector, Allocation entity) {
        if (entity.getTool() != null && entity.getStartTime() != null) {
            scoreDirector.afterVariableChanged(entity, "currentToolType");
        }
    }
}
[EDIT]: When I did some debugging, it looks like the toolType set on the machine for one allocation is used in calculating the type change time for an allocation belonging to a different evaluation set. Not sure how to avoid this.
If this is indeed the case, what is a good way to model problems like this, where the state of an item affects the time taken? Or am I totally off? I guess I am totally lost here.
[EDIT]: This is not an issue with how OptaPlanner is invoked, but score corruption when the rule to penalize based on endTime is added. More details in comments.
[EDIT]: I commented out the rules specified in rules one-by-one and saw that the score corruption occurs only when the score computed depends on the computed values: endTime and toolTypeChange. It is fine when the score depends on the startTime, which is a planningVariable alone. However, that does not give me the best results. It gives me a solution which has a negative hard score, which means it violated the rule of not assigning the same tool during the same time to different jobs.
Can computed values not be used for score calculations?
Any help or pointer is greatly appreciated.
The ToolTypeVariableListener seems to lack calls to the before/after methods, which can cause score corruption. Turn on FULL_ASSERT to verify.
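To make the critique concrete, here is a minimal runnable sketch of the bracketing contract a variable listener must follow, using stubbed stand-in types (this is not the real OptaPlanner API): every write to a shadow variable is wrapped in a before call and an after call, so the score director can process the delta.

```java
import java.util.ArrayList;
import java.util.List;

// Stubbed illustration of the before/after bracketing contract for
// shadow variable updates. Types and names here are stand-ins.
public class ListenerContractDemo {

    interface ScoreDirector {
        void beforeVariableChanged(Object entity, String variableName);
        void afterVariableChanged(Object entity, String variableName);
    }

    static class Allocation { String toolType; }

    static void updateShadow(ScoreDirector scoreDirector, Allocation entity, String newType) {
        scoreDirector.beforeVariableChanged(entity, "currentToolType"); // notify BEFORE the write
        entity.toolType = newType;                                      // the actual shadow update
        scoreDirector.afterVariableChanged(entity, "currentToolType");  // notify AFTER the write
    }

    public static void main(String[] args) {
        List<String> events = new ArrayList<>();
        ScoreDirector recorder = new ScoreDirector() {
            public void beforeVariableChanged(Object e, String v) { events.add("before:" + v); }
            public void afterVariableChanged(Object e, String v) { events.add("after:" + v); }
        };
        Allocation a = new Allocation();
        updateShadow(recorder, a, "TYPE_B");
        System.out.println(events);     // [before:currentToolType, after:currentToolType]
        System.out.println(a.toolType); // TYPE_B
    }
}
```

Writing only the after call (or neither), as in the listener above, leaves the score director with a stale view of the shadow variable, which is exactly what FULL_ASSERT reports as score corruption.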

Allowing overlap with OptaPlanner and TSPTW

I currently have OptaPlanner solving a TSPTW problem. For the following it will help to think of the destinations as tasks.
Each task currently has a chained planning variable called previousTask. The tasks can be categorized as Type A or Type B. What I now want to do is allow Type B tasks to optionally overlap Type A tasks, letting OptaPlanner decide whether overlap is the right choice.
For example, given the tasks A1, A2, B1, OptaPlanner may decide that A1 -> B1 -> A2 is best or that A1 -> (A2 with B1 overlapping) is best.
The way I thought I could achieve this is:
Give each Type A task a second (non-chained) planning variable called overlappingTask.
Split the current "tasks" ValueRangeProvider into two ValueRangeProviders, "typeATasks" and "typeBTasks".
Annotate previousTask's ValueRangeProviders to be both typeATasks and typeBTasks.
Annotate overlappingTask's ValueRangeProviders to only be typeBTasks.
The problem I am solving will always have at least one Type A task but may not have any Type B tasks. This caused a problem with my proposed solution because the "typeBTasks" ValueRangeProvider is sometimes empty, which throws an IllegalStateException for the previousTask planning variable.
Is there a better way to approach this problem? Is there a way to get around the empty ValueRangeProvider issue? The empty complaint against previousTask seems odd given that the combination of the ValueRangeProviders isn't empty. It seems like it would be better for OptaPlanner to check whether the combination is empty, rather than each input separately.
Here are some code snippets to clarify the current design:
public class Solution {

    @ValueRangeProvider(id = "typeATasks")
    public List<TypeA> getTypeATasks() {...}

    @ValueRangeProvider(id = "typeBTasks")
    public List<TypeB> getTypeBTasks() {...}
}

public class Task {

    @PlanningVariable(valueRangeProviderRefs = { "typeATasks", "typeBTasks" },
            graphType = PlanningVariableGraphType.CHAINED)
    public Task getPreviousTask() {...}
}

public class TaskB extends Task {}

public class TaskA extends Task {

    @PlanningVariable(valueRangeProviderRefs = { "typeBTasks" }, nullable = true)
    public TaskB getOverlappingTask() {...}
}
Not that your model is bad; let's call yours proposal C). See my comment above: it's a bug that OptaPlanner 6.4.0.Beta2 fails fast on that model.
But I was thinking of a model like this, proposal A):
@PlanningEntity class TaskAssignment {

    TaskDef taskDef;
    @PlanningVariable TaskAssignment previousTaskAssignment;
    @PlanningVariable Boolean overlapPreviousIfPossible;

    boolean isOverlappingPrevious() {
        return taskDef.isTypeB() && overlapPreviousIfPossible;
    }
}
In this case, the value range provider would just return all TaskAssignments.
But I am also thinking of another model, let's call that proposal B), like in the examination example: a planning entity AbstractTaskAssignment, extended by TypeATaskAssignment and TypeBTaskAssignment. That is a better domain model (no overlap variables for type A assignments), but a far more painful config (especially moves are harder).
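Proposal A)'s overlap rule can be sketched as runnable code with stubbed types (the names come from the snippet above; the enum and constructor are my assumptions). One detail worth making explicit: overlapPreviousIfPossible is a Boolean planning variable and may be uninitialized, so the check needs a null guard.

```java
// Stubbed sketch of proposal A): a task only overlaps its predecessor
// if it is type B AND the solver chose to overlap it.
public class OverlapDemo {

    enum TaskType { TYPE_A, TYPE_B }

    static class TaskAssignment {
        final TaskType type;                 // from the immutable task definition
        Boolean overlapPreviousIfPossible;   // the extra planning variable

        TaskAssignment(TaskType type, Boolean overlapPreviousIfPossible) {
            this.type = type;
            this.overlapPreviousIfPossible = overlapPreviousIfPossible;
        }

        boolean isOverlappingPrevious() {
            // Boolean.TRUE.equals(...) guards against an uninitialized
            // (null) planning variable, unlike a bare && on a Boolean.
            return type == TaskType.TYPE_B
                    && Boolean.TRUE.equals(overlapPreviousIfPossible);
        }
    }

    public static void main(String[] args) {
        System.out.println(new TaskAssignment(TaskType.TYPE_B, true).isOverlappingPrevious());  // true
        System.out.println(new TaskAssignment(TaskType.TYPE_A, true).isOverlappingPrevious());  // false
        System.out.println(new TaskAssignment(TaskType.TYPE_B, null).isOverlappingPrevious());  // false
    }
}
```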

Drools planner: changing SimpleScore to HardAndSoftScore

I am toying with Drools Planner, as eventually I need to shape a rostering solution. I started from the Drools Planner user guide and succeeded in running the examples in Eclipse.
Trying to understand the differences between the Simple and the HardAndSoft score types, I am trying to modify the NQueens example, changing the score from Simple to HardAndSoft.
I did as follows:
In the nqueensSolverConfig.xml I set <scoreDefinitionType>HARD_AND_SOFT</scoreDefinitionType>.
In the NQueens class I set:
public class NQueens extends AbstractPersistable implements Solution<HardAndSoftScore> {...}
I changed the SimpleScore related property and methods into corresponding HardAndSoftScore members:
private HardAndSoftScore hsScore;

public HardAndSoftScore getScore() {
    return hsScore;
}

public void setScore(HardAndSoftScore score) {
    this.hsScore = score;
}
But when I run the solution I receive the following message:
"The scoreString (0) doesn't follow the 999hard/999soft pattern."
What is wrong?
You'll still have this in your solver config:
Any score written in the solver config must be in the format of the score definition, so something like this:
Note: you're probably better off looking at one of the "real" examples, such as course scheduling or nurse rostering, instead of N Queens.
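As a rough illustration of the scoreString shape the error message complains about (the regex below is my approximation, not Drools Planner's actual parser): a hard part and a soft part, each a signed integer, joined by a slash.

```java
import java.util.regex.Pattern;

// Approximation of the "999hard/999soft" score string shape from the
// error message; a bare "0" does not match it.
public class ScoreStringDemo {

    static final Pattern HARD_AND_SOFT = Pattern.compile("-?\\d+hard/-?\\d+soft");

    public static void main(String[] args) {
        System.out.println(HARD_AND_SOFT.matcher("0hard/0soft").matches());    // true
        System.out.println(HARD_AND_SOFT.matcher("-1hard/-20soft").matches()); // true
        System.out.println(HARD_AND_SOFT.matcher("0").matches());              // false: the reported error
    }
}
```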