OptaPlanner: View intermediate score

Is there a way to keep track of the score from time to time while the solver is running?
I currently instantiate my solver as follows:
SolverFactory solverFactory = SolverFactory.createFromXmlResource("solver/SolverConfig.xml");
Solver solver = solverFactory.buildSolver();
solver.addEventListener(new SolverEventListener() {
    @Override
    public void bestSolutionChanged(BestSolutionChangedEvent event) {
        logger.info("New best score : " + event.getNewBestScore().toShortString());
    }
});
solver.solve(planningSolution);
This way I am able to see the logs every time the best score changes.
However, I would like to view the score after every 100 steps or after every 10 seconds. Is that possible?

If you turn on DEBUG (or TRACE) logging, you'll see it.
If you want to listen to it in Java, that's not supported in the public API, but there's PhaseLifecycleListener in the internal implementation, which has no backward-compatibility guarantees...
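For completeness, a rough sketch of that internal-API route. Everything below is internal implementation (DefaultSolver, PhaseLifecycleListenerAdapter and AbstractStepScope from the org.optaplanner.core.impl packages), so class and method names may change between versions; treat it as an assumption, not supported usage:

// Internal API, no backward-compatibility guarantees: casts the Solver built above
// to the internal DefaultSolver and registers a step-level listener.
DefaultSolver defaultSolver = (DefaultSolver) solver;
defaultSolver.addPhaseLifecycleListener(new PhaseLifecycleListenerAdapter() {
    @Override
    public void stepEnded(AbstractStepScope stepScope) {
        // log the working score every 100 steps
        if (stepScope.getStepIndex() % 100 == 0) {
            logger.info("Step " + stepScope.getStepIndex()
                    + " score: " + stepScope.getScore());
        }
    }
});

For the "every 10 seconds" case, the DEBUG/TRACE logging mentioned above already prints a timestamp per step, which is usually enough.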

Related

Job-Scheduling in Play! Framework

I am currently developing a Play-powered (v1.2.4) application where users can perform certain tasks and gain rewards for doing so. These tasks require some energy, which is refilled over time. The basic setting is as follows:
public class User extends Model {
    public Long energy;
    public Long maxenergy;
    public Long cooldown = Long.valueOf(300);
}

public class Task extends Controller {
    public static void perform(Long id) {
        User user = User.findById(id);
        // do some complex task here...
        user.energy--;
        user.save();
        Task.list();
    }
}
Now I want to refill the energy of the user after the cooldown (5 min). Assuming the user has 10/10 energy points and I want to refill a point 5 minutes after it has been used, I could easily use a job for this:
public class EnergyHealer extends Job {
    public Long id;

    public EnergyHealer(Long id) {
        this.id = id;
    }

    public void doJob() throws Exception {
        User user = User.findById(id);
        user.energy++;
        if (user.energy > user.maxenergy) {
            user.energy = user.maxenergy;
        }
        user.save();
    }
}
... and call it in my controller right after the task has been completed:
new EnergyHealer(user.id).in(user.cooldown);
My problem here is that jobs are scheduled concurrently: if the user performs a task 2 seconds after a previous task, the first energy point is refilled after 5 minutes, while the subsequent point is refilled only 2 seconds later.
So, I need the jobs to be serialized, e.g., assuming I have 8 of 10 energy points, it should take exactly 10 minutes until all energy points are refilled.
On a related note: users have different levels and gain experience for completing tasks. Once a certain threshold is reached, their level is increased and all energy points are refilled, no matter how many of them have been used in the previous level, hence some jobs may become obsolete by the time they are executed.
Considering some thousand users, jobs may not be the perfect choice at all, so if someone has an idea on how to achieve the described scenario, I'm glad for any help!
I think you just have your job scheduling wrong. Rather than kicking off a job every time the user performs an action, have a facade or similar that only kicks off a job if one does not already exist for that user (see the sketch below):
1. If a job does not already exist for the user, create it; otherwise do nothing.
2. The job adds 1 energy point.
3. Check if energy is full; if it is, end.
4. If not, pause for 5 minutes.
5. Go back to step 2.
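A rough sketch of that chain as a self-rescheduling Play 1.x job. This only illustrates the algorithm above and is not tested code: the energyJobPending flag on User is hypothetical bookkeeping for "a job already exists for this user" and could equally be a static map or a database column.

public class EnergyRefillJob extends Job {

    private final Long userId;

    public EnergyRefillJob(Long userId) {
        this.userId = userId;
    }

    @Override
    public void doJob() throws Exception {
        User user = User.findById(userId);
        if (user == null) {
            return;
        }
        user.energy++;
        if (user.energy >= user.maxenergy) {
            // full: clamp, clear the bookkeeping flag and stop the chain
            user.energy = user.maxenergy;
            user.energyJobPending = false; // hypothetical flag on User
            user.save();
            return;
        }
        user.save();
        // not full yet: re-schedule ourselves, so refills happen strictly one per cooldown
        new EnergyRefillJob(userId).in(user.cooldown.intValue());
    }
}

The controller then only starts the chain if none is pending:

if (!user.energyJobPending) { // hypothetical flag, see above
    user.energyJobPending = true;
    user.save();
    new EnergyRefillJob(user.id).in(user.cooldown.intValue());
}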

Repeated planning: entity always getting assigned to overconstrained object

Approach
I am trying to implement repeated planning by following the steps below:
1. Start the solver with the initial input data and let it run for some time.
2. Terminate the solver; before terminating, save the best solution into a score director built with scoreDirectorFactory.buildScoreDirector().
3. Add new customers using solver.addProblemFactChange(...) with an AddCustomerProblemFactChange instance.
I followed the OptaPlanner 7.10 guide, section 16.5.1, for this approach.
We use overconstrained planning in our project.
Code
Method to terminate (called via a REST call):
VehicleRoutingSolution sol = solver.getBestSolution();
scoreDirectorFactory = solver.getScoreDirectorFactory();
director = scoreDirectorFactory.buildScoreDirector();
director.setWorkingSolution(sol);
solver.terminateEarly();

customerList is the list of customers I want to add after the solver has started:

for (Customer customer : customerList) {
    AddCustomerProblemFactChange add = new AddCustomerProblemFactChange(customer);
    solver.addProblemFactChange(add);
}
if (!solver.isSolving()) {
    VehicleRoutingSolution solution = solver.solve(director.getWorkingSolution());
}
AddCustomerProblemFactChange
public class AddCustomerProblemFactChange implements ProblemFactChange<VehicleRoutingSolution> {

    private Customer customer;

    public AddCustomerProblemFactChange(Customer customer) {
        this.customer = customer;
    }

    @Override
    public void doChange(ScoreDirector<VehicleRoutingSolution> scoreDirector) {
        VehicleRoutingSolution solution = scoreDirector.getWorkingSolution();
        solution.setCustomerList(new ArrayList<>(solution.getCustomerList()));
        //solution.buildConnectedCustomer(this.distances, this.customer);
        scoreDirector.beforeEntityAdded(customer);
        solution.getCustomerList().add(customer);
        scoreDirector.afterEntityAdded(customer);
        scoreDirector.triggerVariableListeners();
    }
}
Logs
13:31:01.738 [-8080-exec-7] INFO Solving restarted: time spent (3), best score (-1init/0hard/0medium/-39840000soft), environment mode (REPRODUCIBLE), random (JDK with seed 0).
13:31:01.749 [-8080-exec-7] DEBUG CH step (0), time spent (14), score (0hard/-70medium/-39840000soft), selected move count (11), picked move (TimeWindowedCustomer-123{null -> Vehicle-2}).
Question
The problem with the above approach is that when adding new customers, the newly added entity is always assigned to the overconstrained object (Vehicle-2). I am not able to understand how OptaPlanner picks the vehicle to assign to the customer in the construction heuristic phase.

OptaPlanner result is always empty

I started studying OptaPlanner yesterday.
First, I analyzed the TSP problem provided in the examples, created an empty Java project from scratch and rewrote all the code.
The concepts and documentation of OptaPlanner are (for the moment!) clear to me, but the locationList in TspSolution is always empty after solving.
I have read the docs and code ten times, and it's driving me crazy!
Do you have any ideas? Does the insertion into the list have to be done manually in the easy / incremental score calculator?
Thank you very much, any help is welcome!
You will find an example of the logs below.
START SOLVING !
INFO Solving started: time spent (17), best score (-4init/0), environment mode (REPRODUCIBLE), random (JDK with seed 0).
INFO Construction Heuristic phase (0) ended: time spent (46), best score (-6100), score calculation speed (227/sec), step total (4).
INFO Local Search phase (1) ended: time spent (5000), best score (-6100), score calculation speed (303348/sec), step total (1502180).
INFO Solving ended: time spent (5000), best score (-6100), score calculation speed (300437/sec), phase total (2), environment mode (REPRODUCIBLE).
SOLVED !!!!
TspSolution{locationList=[], domicile=DOMICILE, visitList=[VISIT_01, VISIT_02, VISIT_03, VISIT_04], score=-6100}
Main class :
public static void main(String[] args) {
    System.out.println("START SOLVING !");
    // Build the Solver
    // SolverFactory<TspSolution> solverFactory = SolverFactory.createFromXmlResource("test.xml", TSPTest.class.getClassLoader());
    SolverFactory<TspSolution> solverFactory = SolverFactory.createEmpty();
    SolverConfig solverConfig = solverFactory.getSolverConfig();

    ScanAnnotatedClassesConfig scanAnnotatedClassesConfig = new ScanAnnotatedClassesConfig();
    scanAnnotatedClassesConfig.setPackageIncludeList(Arrays.asList("com.coursierprive.streams.job.utils.tsp"));
    solverConfig.setScanAnnotatedClassesConfig(scanAnnotatedClassesConfig);

    ScoreDirectorFactoryConfig scoreDirectorFactoryConfig = new ScoreDirectorFactoryConfig();
    // scoreDirectorFactoryConfig.setEasyScoreCalculatorClass(CloudBalancingEasyScoreCalculator.class);
    scoreDirectorFactoryConfig.setIncrementalScoreCalculatorClass(TspIncrementalScoreCalculator.class);
    scoreDirectorFactoryConfig.setInitializingScoreTrend("ONLY_DOWN");
    solverConfig.setScoreDirectorFactoryConfig(scoreDirectorFactoryConfig);

    TerminationConfig terminationConfig = new TerminationConfig();
    terminationConfig.setSecondsSpentLimit(5L);
    solverConfig.setTerminationConfig(terminationConfig);

    Solver<TspSolution> solver = solverFactory.buildSolver();

    TspSolution initialSolution = new TspSolution();
    initialSolution.setDomicile(new Domicile(new Location("DOMICILE", 42.0, 2.0)));
    initialSolution.getVisitList().add(new Visit("V1", new Location("VISIT_01", 42.0, 3.0)));
    initialSolution.getVisitList().add(new Visit("V2", new Location("VISIT_02", 43.0, 3.0)));
    initialSolution.getVisitList().add(new Visit("V3", new Location("VISIT_03", 44.0, 3.0)));
    initialSolution.getVisitList().add(new Visit("V4", new Location("VISIT_04", 45.0, 3.0)));

    // Solve the problem
    TspSolution optimizedSolution = solver.solve(initialSolution);
    System.out.println("SOLVED !!!!");
    System.out.println(optimizedSolution);
}
In the examples, as implemented, adding a location to a Visit or Domicile instance doesn't automatically add it to the solution's location list.
The reason a solution has a location list in the first place is to pass those instances as problem facts into Drools (if you're using that kind of score calculation). But the DRL score rules probably don't match Location anyway, so TspSolution may not even need a locationList.
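If you do want the locationList populated (for example to feed problem facts into Drools), a minimal sketch, assuming your TspSolution exposes a setLocationList() setter and that Domicile and Visit expose getLocation() as in the OptaPlanner TSP example, is to fill it yourself before calling solve():

// Sketch: collect every Location referenced by the solution into its
// problem-fact list before solving. Assumes the getters/setter below exist
// on your rewritten TSP classes.
List<Location> locationList = new ArrayList<>();
locationList.add(initialSolution.getDomicile().getLocation());
for (Visit visit : initialSolution.getVisitList()) {
    locationList.add(visit.getLocation());
}
initialSolution.setLocationList(locationList);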

Score corruption when using computed values to calculate score

I have a use case where:
A job can be one of several types, say A, B and C.
A tool can be configured to one of the types A, B or C.
A job can be assigned to a tool. The end time of the job depends on the tool's currently configured type. If the tool's current configured type is different from the type of the job, then time needs to be added to change the tool configuration.
My @PlanningEntity is Allocation, with startTime and tool as @PlanningVariable. I tried to add the currentConfiguredToolType in the Allocation as a @CustomShadowVariable and update the toolType in the shadow listener's afterVariableChanged() method, so that I have the correct toolType for the next job assigned to the tool. However, it is giving me inconsistent results.
[EDIT]: I did some debugging to see if the toolType is set correctly. I found that the toolType is being set correctly in the afterVariableChanged() method. However, when I looked at the next job assigned to the tool, I saw that the toolType had not changed. Is it because of multiple threads executing this flow? One thread changing the toolType of the tool the first time, and then a second thread simultaneously assigning the times the second time without taking into account the changes done by the first thread?
[EDIT]: I was using 6.3.0 Final earlier (until yesterday). I switched to 6.5.0 Final today. There too I am seeing similar results, where the toolType seems to be set properly in the afterVariableChanged() method but is not taken into account for the next allocation on that tool.
[EDIT]: Domain code looks something like below:
@PlanningEntity
public class Allocation {

    private Job job;

    // planning variables
    private LocalDateTime startTime;
    private Tool tool;

    // shadow variable
    private ToolType toolType;

    private LocalDateTime endTime;

    @PlanningVariable(valueRangeProviderRefs = TOOL_RANGE)
    public Tool getTool() {
        return this.tool;
    }

    @PlanningVariable(valueRangeProviderRefs = START_TIME_RANGE)
    public LocalDateTime getStartTime() {
        return this.startTime;
    }

    @CustomShadowVariable(variableListenerClass = ToolTypeVariableListener.class,
            sources = {@CustomShadowVariable.Source(variableName = "tool")})
    public ToolType getCurrentToolType() {
        return this.toolType;
    }

    private void setToolType(ToolType type) {
        this.toolType = type;
        this.tool.setToolType(type);
    }

    private void setStartTime(LocalDateTime startTime) {
        this.startTime = startTime;
        this.endTime = getTimeTakenForJob() + getTypeChangeTime();
        ...
    }

    private LocalDateTime getTypeChangeTime() {
        // typeChangeTimeMap is available and is populated with data
        return typeChangeTimeMap.get(tool.getType());
    }
}
public class Tool {
    ...
    private ToolType toolType;

    // getter and setter for this:
    public void setToolType(ToolType toolType) { ... }
    public ToolType getToolType() { ... }
}
public class ToolTypeVariableListener implements VariableListener<Allocation> {
    ...
    @Override
    public void afterVariableChanged(ScoreDirector scoreDirector, Allocation entity) {
        scoreDirector.afterVariableChanged(entity, "currentToolType");
        if (entity.getTool() != null && entity.getStartTime() != null) {
            entity.setCurrentToolType(entity.getJob().getType());
        }
        scoreDirector.afterVariableChanged(entity, "currentToolType");
    }
}
[EDIT]: When I did some debugging, it looks like the toolType set on the machine for one allocation is used when calculating the type change time for an allocation belonging to a different evaluation set. Not sure how to avoid this.
If this is indeed the case, what is a good way to model problems like this, where the state of an item affects the time taken? Or am I totally off? I guess I am totally lost here.
[EDIT]: This is not an issue with how OptaPlanner is invoked, but score corruption when the rule that penalizes based on endTime is added. More details in the comments.
[EDIT]: I commented out the score rules one by one and saw that the score corruption occurs only when the score depends on the computed values endTime and toolTypeChange. It is fine when the score depends on startTime, which is a planning variable alone. However, that does not give me the best results: it gives me a solution with a negative hard score, which means it violated the rule of not assigning the same tool to different jobs during the same time.
Can computed values not be used for score calculations?
Any help or pointer is greatly appreciated.
best,
Alice
The ToolTypeVariableListener seems to lack proper before/after calls around the change (it calls afterVariableChanged() twice and never beforeVariableChanged()), which can cause score corruption. Turn on FULL_ASSERT to verify.
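For reference, a sketch of how the notification calls are usually paired inside such a listener (the surrounding class is kept as in the question; only the before/after wrapping around the actual change differs):

@Override
public void afterVariableChanged(ScoreDirector scoreDirector, Allocation entity) {
    if (entity.getTool() != null && entity.getStartTime() != null) {
        // tell the score director the shadow variable is about to change...
        scoreDirector.beforeVariableChanged(entity, "currentToolType");
        entity.setCurrentToolType(entity.getJob().getType());
        // ...and that it has changed, so incremental score calculation stays in sync
        scoreDirector.afterVariableChanged(entity, "currentToolType");
    }
}

FULL_ASSERT can be enabled via <environmentMode>FULL_ASSERT</environmentMode> in the solver configuration; it is slow, but it pinpoints score corruption like this.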

DataNucleus Memory/Cache Handling for large update/insert

We are running an application in a Spring context, using DataNucleus as our ORM and MySQL as our database.
Our application has a daily job that imports a data feed into our database. The feed translates into around 1 million rows of inserts/updates. The import performance starts out very good, but it degrades over time (as the number of executed queries increases) and at some point the application freezes or stops responding. We have to wait for the whole job to complete before the application responds again.
This behavior looks very much like a memory leak to us, and we have been looking hard at our code to catch any potential problem, but the problem didn't go away. One interesting thing we found from the heap dump is that org.datanucleus.ExecutionContextThreadedImpl (or its HashSet/HashMap) holds 90% of our memory (5 GB) during the import (I have attached screenshots of the dump below). My research on the internet says this reference is the Level 1 cache (not sure if I'm correct). My question is: during a large import, how can I limit/control the size of the Level 1 cache? Maybe ask DataNucleus not to cache during my import?
If that's not the L1 cache, what's the possible cause of my memory issue?
Our code uses a transaction for every insert to prevent locking large chunks of data in the database. It calls the flush method every 2000 inserts.
As a temporary fix, we moved our import process to run overnight when no one is using our app. Obviously, this cannot go on forever. Please could someone at least point us in the right direction so that we can do more research and hopefully find a fix.
It would also be good if someone has knowledge of decoding the heap dump.
Your help would be very much appreciated by all of us here. Many thanks!
https://s3-ap-southeast-1.amazonaws.com/public-external/datanucleus_heap_dump.png
https://s3-ap-southeast-1.amazonaws.com/public-external/datanucleus_dump2.png
Code below. The caller of this method does not have a transaction. This method processes one import object per call, and we need to process around 100K of these objects daily.
@Override
@PreAuthorize("hasUserRole('ROLE_ADMIN')")
@Transactional(propagation = Propagation.REQUIRED)
public void processImport(ImportInvestorAccountUpdate account, String advisorCompanyKey) {
    ImportInvestorAccountDescriptor invAccDesc = account.getInvestorAccount();
    InvestorAccount invAcc = getInvestorAccountByImportDescriptor(invAccDesc, advisorCompanyKey);
    try {
        ParseReportingData parseReportingData = ctx.getBean(ParseReportingData.class);
        String baseCCY = invAcc.getBaseCurrency();
        Date valueDate = account.getValueDate();
        ArrayList<InvestorAccountInformationILAS> infoList =
                parseReportingData.getInvestorAccountInformationILAS(null, invAcc, valueDate, baseCCY);
        InvestorAccountInformationILAS info = infoList.get(0);
        PositionSnapshot snapshot = new PositionSnapshot();
        ArrayList<Position> posList = new ArrayList<Position>();
        Double totalValueInBase = 0.0;
        double totalQty = 0.0;
        for (ImportPosition importPos : account.getPositions()) {
            Asset asset = getAssetByImportDescriptor(importPos.getTicker());
            PositionInsurance pos = new PositionInsurance();
            pos.setAsset(asset);
            pos.setQuantity(importPos.getUnits());
            pos.setQuantityType(Position.QUANTITY_TYPE_UNITS);
            posList.add(pos);
        }
        snapshot.setPositions(posList);
        info.setHoldings(snapshot);
        log.info("persisting a new investorAccountInformation(source:"
                + invAcc.getReportSource() + ") on " + valueDate
                + " of InvestorAccount(key:" + invAcc.getKey() + ")");
        persistenceService.updateManagementEntity(invAcc);
    } catch (Exception e) {
        throw new DataImportException(invAcc == null ? null : invAcc.getKey(), advisorCompanyKey,
                e.getMessage());
    }
}
Do you use the same PersistenceManager (pm) for the entire job?
If so, you may want to try closing it and creating a new one once in a while.
If not, this could be the L2 cache. What setting do you have for datanucleus.cache.level2.type? I think it's a weak map by default. You may want to try none for testing.
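A rough sketch of both suggestions at the plain JDO level (the property name datanucleus.cache.level2.type is real; the batch size, persistence-unit name and loop are illustrative, and in a Spring setup the PersistenceManager would normally come from the injected PersistenceManagerFactory rather than JDOHelper):

// Turn off the L2 cache for the import (could also go in datanucleus.properties / persistence config).
Map<String, Object> overrides = new HashMap<String, Object>();
overrides.put("datanucleus.cache.level2.type", "none");
PersistenceManagerFactory pmf =
        JDOHelper.getPersistenceManagerFactory(overrides, "myPersistenceUnit"); // unit name is illustrative

PersistenceManager pm = pmf.getPersistenceManager();
int count = 0;
for (ImportInvestorAccountUpdate account : accounts) { // accounts = the daily feed
    // ... map and persist one account, as processImport() does ...
    if (++count % 2000 == 0) {
        pm.flush();
        pm.evictAll();                 // drop the L1 cache entries held by this pm
        // or recycle the PersistenceManager entirely:
        pm.close();
        pm = pmf.getPersistenceManager();
    }
}
pm.close();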
