OptaPlanner - soft scoring rule not working as expected

I built an application that implements a function similar to task assignment. I thought it worked well until I recently noticed that the solutions are not optimal. In detail, there is a score table for each possible machine-task pair, and usually the number of machines is much smaller than the number of tasks. I used hard/medium/soft rules, where the soft rule incrementally adds up the score of each assignment from the score table.
However, when I reviewed the results after a 1-2 hour run, I found that among the unassigned tasks there are many better choices (they would achieve a higher soft score if assigned) than the current assignments. The benchmark reports indicate that the total soft score reached a plateau within an hour and then stayed stuck at that level.
I checked the logic of the rules - if the soft rule were working correctly, the solver should eventually find an allocation that achieves the highest overall soft score while still satisfying the other hard/medium rules, shouldn't it?
I've been trying various things such as tuning algorithm parameters, scaling the score table, etc., but none of them delivers the optimal solution.

One possibility is that you're facing a score trap (see the docs). In that case, make your constraint score more fine-grained to deal with it.
If that's not the case and you're stuck in a local optimum, then I wouldn't play too much with the algorithm parameters - they will probably fix it, but you'll be overfitting on that dataset.
Instead, figure out the smallest possible move that gets you out of that local optimum and a step closer to the global optimum, and add that kind of move as a custom move. For example, if a normal swap move can't help, but you see a way of getting there with a 3-swap move, then implement that move.
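For the fine-grained part, a soft constraint that rewards each assignment with its exact value from the score table (instead of a coarse, bucketed bonus) is what avoids the typical score trap. Below is a minimal Constraint Streams sketch, assuming an OptaPlanner 8.x API; TaskAssignment and ScoreTable are hypothetical stand-ins for the asker's own classes, not taken from the question.

import org.optaplanner.core.api.score.buildin.hardmediumsoft.HardMediumSoftScore;
import org.optaplanner.core.api.score.stream.Constraint;
import org.optaplanner.core.api.score.stream.ConstraintFactory;
import org.optaplanner.core.api.score.stream.ConstraintProvider;

public class TaskAssignmentConstraintProvider implements ConstraintProvider {

    @Override
    public Constraint[] defineConstraints(ConstraintFactory factory) {
        return new Constraint[] {
                assignmentQuality(factory)
        };
    }

    // Reward each assignment by its exact value from the score table rather than
    // an all-or-nothing bonus, so one large reward can never mask several small
    // losses elsewhere (the "score trap").
    private Constraint assignmentQuality(ConstraintFactory factory) {
        return factory.forEach(TaskAssignment.class)
                .filter(assignment -> assignment.getMachine() != null)
                .reward("Assignment quality", HardMediumSoftScore.ONE_SOFT,
                        assignment -> ScoreTable.lookup(assignment.getMachine(), assignment.getTask()));
    }
}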

Related

AI: Determining what tests to run to get the most useful data

This is for http://cssfingerprint.com
I have a system (see about page on site for details) where:
I need to output a ranked list, with confidences, of categories that match a particular feature vector
the binary feature vectors are a list of site IDs & whether this session detected a hit
feature vectors are, for a given categorization, somewhat noisy (sites will decay out of history, and people will visit sites they don't normally visit)
categories are a large, non-closed set (user IDs)
my total feature space is approximately 50 million items (URLs)
for any given test, I can only query approx. 0.2% of that space
I can only make the decision of what to query, based on results so far, ~10-30 times, and must do so in <~100ms (though I can take much longer to do post-processing, relevant aggregation, etc)
getting the AI's probability ranking of categories based on results so far is mildly expensive; ideally the decision will depend mostly on a few cheap SQL queries
I have training data that can say authoritatively that any two feature vectors are the same category but not that they are different (people sometimes forget their codes and use new ones, thereby making a new user id)
I need an algorithm to determine what features (sites) are most likely to have a high ROI to query (i.e. to better discriminate between plausible-so-far categories [users], and to increase certainty that it's any given one).
This needs to balance exploitation (testing based on prior test data) against exploration (testing things that haven't been tested enough to find out how they perform).
There's another question that deals with a priori ranking; this one is specifically about a posteriori ranking based on results gathered so far.
Right now, I have little enough data that I can just always test everything that anyone else has ever gotten a hit for, but eventually that won't be the case, at which point this problem will need to be solved.
I imagine that this is a fairly standard problem in AI - having a cheap heuristic for what expensive queries to make - but it wasn't covered in my AI class, so I don't actually know whether there's a standard answer. So, relevant reading that's not too math-heavy would be helpful, as well as suggestions for particular algorithms.
What's a good way to approach this problem?
If you know nothing about the features you have not sampled, then you have little to go on when deciding whether to explore or exploit your data. If you can express your ROI as a single number after every query, then there is an optimal way of making this choice: keep track of upper confidence bounds. See the paper Finite-time Analysis of the Multiarmed Bandit Problem.
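For illustration, the UCB1 policy from that paper is only a few lines: play each arm once, then always query the arm with the highest mean reward plus an exploration bonus. A minimal sketch where an "arm" would be a candidate site (or bucket of sites) and the reward is whatever ROI number you assign after the query; all names here are illustrative, not from the question.

public class Ucb1 {

    private final double[] totalReward;
    private final int[] pulls;
    private int totalPulls;

    public Ucb1(int numArms) {
        totalReward = new double[numArms];
        pulls = new int[numArms];
    }

    // Pick the arm with the highest upper confidence bound.
    public int selectArm() {
        for (int i = 0; i < pulls.length; i++) {
            if (pulls[i] == 0) {
                return i; // play every arm once before trusting the bound
            }
        }
        int best = 0;
        double bestUcb = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < pulls.length; i++) {
            double mean = totalReward[i] / pulls[i];
            double exploration = Math.sqrt(2.0 * Math.log(totalPulls) / pulls[i]);
            if (mean + exploration > bestUcb) {
                bestUcb = mean + exploration;
                best = i;
            }
        }
        return best;
    }

    // Record the observed reward (ROI) for the arm that was queried.
    public void update(int arm, double reward) {
        pulls[arm]++;
        totalPulls++;
        totalReward[arm] += reward;
    }
}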

Simple tool for shipping date estimates without uncertainty [closed]

What I'm looking for is very simple: I want a tool that computes the calculated shipping date (as opposed to one estimated with confidence intervals) given a list of tasks, each with a total estimate and current progress, without introducing further uncertainty, since I want to handle that externally.
I want it to take workday durations, user-entered holidays, and so on into account.
I know FogBugz's Evidence-Based Scheduling does something very close to that, but I would like it without the statistical aspect and the associated confidence intervals. I'm aware this is a drastic simplification and that statistical estimation is the essence of EBS, but I'm not looking for a subjective discussion here; I just want to be able to access this simple piece of information (the supposedly exact shipping date) at any given time during the project without having to calculate it myself.
So I'm looking for one of three things: 1) a way to customize FogBugz (6.0) to show me the information I want besides the confidence intervals, 2) a way to customize FogBugz to set estimate uncertainty to 0, or 3) another (free) tool that does exactly what I want.
EDIT: By "supposedly exact" or "calculated", I don't mean with respect to what is actually going to happen; that would indeed be trying to predict the future. I mean with respect to the information that was input, together with its obvious uncertainty. In that case, I guess the estimates for individual tasks should be seen more as spending limits or upper bounds. The information I would like to compute is really very simple: if everything goes exactly as specified, where does that take us? Then, with information about how the estimates were made, such as the ability of each individual developer to make good estimates, I can derive the confidence interval. EBS does this automatically and, undoubtedly, very well, which is why I use it. What I would like to obtain is one more little piece of information, i.e. the same starting point EBS uses, and to play with my own assumptions about how the statistical estimation should be made.
FogBugz will show you the sum of estimates at the bottom of the LIST page, labelled "Total estimated time remaining". This is the raw sum of estimates, without any EBS calculations.
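Given that raw sum, the "if everything goes exactly as specified" date the question asks for is just a walk forward over workdays. A minimal sketch, assuming a fixed number of productive hours per workday and a user-supplied holiday set; both are assumptions for illustration, not FogBugz behaviour.

import java.time.DayOfWeek;
import java.time.LocalDate;
import java.util.Set;

public class NaiveShipDate {

    // Walk forward one calendar day at a time, burning down the remaining estimate
    // only on workdays that are not holidays.
    public static LocalDate shipDate(double remainingHours, double hoursPerDay,
                                     LocalDate start, Set<LocalDate> holidays) {
        LocalDate day = start;
        while (remainingHours > 0) {
            day = day.plusDays(1);
            boolean weekend = day.getDayOfWeek() == DayOfWeek.SATURDAY
                    || day.getDayOfWeek() == DayOfWeek.SUNDAY;
            if (!weekend && !holidays.contains(day)) {
                remainingHours -= hoursPerDay;
            }
        }
        return day;
    }

    public static void main(String[] args) {
        // 120 hours of remaining estimates at 6 productive hours per day, no holidays.
        System.out.println(shipDate(120, 6, LocalDate.of(2010, 3, 1), Set.of()));
    }
}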
You can't predict the future. So any calculated shipping date can only be a guess. That guess depends on the confidence intervals around each individual number that went into it. This is a matter of definition -- even though you may not like it.
You may want to have a "100% confident" date, but such a thing (by definition) cannot exist. You cannot have an uncertainty of zero unless you want a date infinitely far in the future. It's the nature of statistics: the distribution is actually infinite, but data is considerably more likely to cluster around the mean.
The only thing you can do is pick a really big confidence interval (99.7%). You are free to ignore the supporting statistical facts about the confidence interval and pretend it has zero uncertainty. For all practical purposes 0.3% uncertainty is small enough that you're not going to be unhappy with that date.
However, all statistically-based predictions of the future must have uncertainty. It's a law.

Optimising table assignment to guests for an event based on a criteria

66 guests at an event, 8 tables. Each table has a "theme". We want to optimize various criteria: e.g., an even number of men and women at each table, people getting to discuss the topic they selected, etc.
I formulated this as a gradient-free optimisation problem: I wrote a function that calculates the goodness of an arrangement (i.e., a cost for the men/women imbalance, a cost for non-preferred themes, etc.) and I am basically randomly perturbing the arrangement by swapping guests between tables and keeping the best-so-far arrangement. This seems to work, but it cannot guarantee optimality.
I am wondering if there is a more principled way to go about this. There (intuitively) seems to be no useful gradient in the operation of "swapping" people between tables, so random search is the best I came up with. However, brute-forcing by evaluating all possibilities seems difficult: with 66 people there are factorial(66) possible orders, which is a ridiculously large number (about 10^92 according to Python). Since swapping two people at the same table changes nothing, there are actually fewer distinct arrangements, which I think can be calculated by dividing out the repeats, e.g. fact(66)/(fact(number of people at table 1) * fact(number of people at table 2) * ...), which in my problem still comes out to about 10^53 possible arrangements - way too many to consider.
But is there something better that I can do than random search? I thought about evolutionary search but I don't know if it would provide any advantages.
Currently I am swapping a random number of people on each evaluation and keeping the result only if it gives a better value. The number of people to swap is drawn from an exponential distribution, to make it more probable to swap 1 person than, say, 6 - small steps on average, while keeping the possibility of "jumping" a bit further in the search.
I don't know how to prove it but I have a feeling this is an NP-hard problem; if that's the case, how could it be reformulated for a standard solver?
Update: I have been comparing random search with a random "greedy search" and a "simulated annealing"-inspired approach, where I keep worsening swaps with a probability based on the measured improvement factor, and that probability anneals over time (sketched in code below). So far the greedy search surprisingly outperforms the probabilistic approach by a clear margin. Adding the annealing schedule seems to help.
What I am confused by is exactly how to think about the "space" of the domain. I realize that it is a discrete space, and that distances are best described in terms of Levenshtein edit distance, but I can't see how I could "map" it to some gradient-friendly continuous space. Possibly if I drop the exact number of people per table and make it continuous, but strongly penalize deviations so it leans towards the number I want at each table - this would make the association matrix more "flexible" and possibly map better to a gradient space? Not sure. A seating assignment could then be a probability spread over more than one table.
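For what it's worth, here is a minimal sketch of the annealing-style acceptance rule mentioned in the update above: improving swaps are always kept, worsening swaps are kept with probability exp(-delta / T), and T cools geometrically. The seating encoding (guest index to table number) and all names are illustrative, not taken from the post.

import java.util.Random;
import java.util.function.ToDoubleFunction;

public class AnnealingSearch {

    // seating[guest] = table number; cost returns the "badness" of an arrangement.
    public static int[] anneal(int[] seating, ToDoubleFunction<int[]> cost,
                               double startTemp, double coolingRate, int iterations, Random rng) {
        int[] current = seating.clone();
        double currentCost = cost.applyAsDouble(current);
        int[] best = current.clone();
        double bestCost = currentCost;
        double temperature = startTemp;

        for (int i = 0; i < iterations; i++) {
            int[] candidate = current.clone();
            // Basic move: swap the table assignments of two random guests.
            int a = rng.nextInt(candidate.length);
            int b = rng.nextInt(candidate.length);
            int tmp = candidate[a]; candidate[a] = candidate[b]; candidate[b] = tmp;

            double candidateCost = cost.applyAsDouble(candidate);
            double delta = candidateCost - currentCost;
            // Always accept improvements; accept worsening moves with probability exp(-delta / T).
            if (delta < 0 || rng.nextDouble() < Math.exp(-delta / temperature)) {
                current = candidate;
                currentCost = candidateCost;
                if (currentCost < bestCost) {
                    best = current.clone();
                    bestCost = currentCost;
                }
            }
            temperature *= coolingRate; // geometric cooling schedule
        }
        return best;
    }
}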

Number of simulations per node in Monte Carlo tree search

The MCTS algorithm described on Wikipedia performs exactly one playout (simulation) per node selection. I am now experimenting with this algorithm in a simple connect-k game. I wonder: in practice, do we perform more playouts to reduce the variance?
I tried the original algorithm with exactly one random (unbiased) playout. The result is bad compared to my heuristic search with alpha-beta pruning; it converges very slowly. When I perform 500 playouts instead, the noise is much lower. However, each node's simulation is then too slow for the algorithm to explore other parts of the tree in the given time, so it sometimes misses the most critical move.
I then added the AMAF heuristic (in particular the RAVE variant) to the basic MCTS. I don't notice much difference with 500 playouts, perhaps because the variance is already low. I haven't analyzed the result with 1 playout yet.
Could anyone give me any insights?
Typically, you'd do exactly one play-out per selection step. However, subsequent selection steps can go through the same node multiple times.
Consider, for example, a case where there are only two moves available in the root node. If you then run, say, 10,000 complete iterations of MCTS (where one iteration = Selection + Expansion + Play-out + Backpropagation), each of the two nodes below the root will get selected roughly 5,000 times (or maybe one gets selected 9,000 times and the other 1,000 times if the first is clearly a better option than the second, but still, both get selected more than once).
Does this match what you are currently doing in your implementation? If not, try providing some of the code you currently have so that we can see where it goes wrong. But if this is how you implemented it (which is how it should be), then there should be no problem with doing only one play-out per selection step.
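For reference, the loop described above looks roughly like this. It is a skeleton only, with the game-specific parts left abstract; the type and method names are placeholders, not taken from the question.

public abstract class MctsSkeleton<N, M> {

    protected abstract N select(N root);                          // descend via UCT to a node with an untried move
    protected abstract N expand(N leaf);                          // add one child for an untried move
    protected abstract double playout(N node);                    // ONE random play-out from that child
    protected abstract void backpropagate(N node, double result); // update visit counts/values up to the root
    protected abstract M bestMove(N root);                        // e.g. the move of the most-visited root child

    public M search(N root, int iterations) {
        for (int i = 0; i < iterations; i++) {
            N leaf = select(root);
            N child = expand(leaf);
            double result = playout(child);  // exactly one play-out per iteration
            backpropagate(child, result);    // the same nodes are revisited across iterations
        }
        return bestMove(root);
    }
}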

OptaPlanner - take the fastest path

How can we get OptaPlanner to select the fastest route? See the highlighted point in the image below: it is taking the long route.
Note: vehicles do not need to come back to the depot. I think I cannot use CVRPTW, as arrivalAfterDueTimeAtDepot is a built-in hard constraint (and besides, I do not have any time constraints).
How can we write a constraint to select the lower-capacity vehicle?
For example, a customer needs only 3 items and we have two vehicles with capacities of 4 and 9. It seems OptaPlanner selects the first vehicle in input order by default.
I presume it's taking the blue vehicle for the center of Bengaluru because the green one is already at full capacity.
Check what the score is (calculated through Solver.getScoreDirectorFactory()) if you manually put that location in the green trip and swap the vehicles of the green and blue trips. If it's worse (or breaks a hard constraint), then it's normal that OptaPlanner selects the other solution; in that case, either your score function has a bug (or you realize you don't want that solution after all). But if it does indeed have a better score, OptaPlanner's <localSearch> (such as Late Acceptance) should find it (especially when scaling out, because ironically local optima are a bigger problem when scaling down). You can try adding <subchainSwapMoveSelector> etc. to escape local optima faster.
If you want to guide the search more (which is often not a good idea), you can define a planning value strength comparator to sort small vehicles before big vehicles and use the Construction Heuristic WEAKEST_FIT(_DECREASING).
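A minimal sketch of the manual score check suggested above, assuming an OptaPlanner version in which Solver.getScoreDirectorFactory() is still available (newer releases deprecate it in favour of ScoreManager); VehicleRoutingSolution stands in for your own @PlanningSolution class.

import org.optaplanner.core.api.solver.Solver;
import org.optaplanner.core.impl.score.director.ScoreDirector;
import org.optaplanner.core.impl.score.director.ScoreDirectorFactory;

public class ManualScoreCheck {

    // Score a hand-edited solution (e.g. the customer moved into the green trip,
    // vehicles swapped) with the same constraints the solver uses.
    public static void printScore(Solver<VehicleRoutingSolution> solver,
                                  VehicleRoutingSolution manuallyEditedSolution) {
        ScoreDirectorFactory<VehicleRoutingSolution> factory = solver.getScoreDirectorFactory();
        ScoreDirector<VehicleRoutingSolution> scoreDirector = factory.buildScoreDirector();
        scoreDirector.setWorkingSolution(manuallyEditedSolution);
        System.out.println("Score of the hand-made assignment: " + scoreDirector.calculateScore());
    }
}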
