OptaPlanner - VRP number of vehicles optimization

How do I optimize the number of vehicles used and choose the best-fitting vehicles for a CVRP with no time window constraints?
For example:
I am running a 10990 kg load with 15 vehicles (5 vehicles each of capacity 3000 kg, 750 kg and 7500 kg). I have disabled the rule for distanceFromLastCustomerToDepot.
When I run it with the OptaPlanner examples as is, it chooses 3 vehicles of 7500 kg each.
Since the load is 10990 kg, I expect it to fit in 2 vehicles of 7500 kg, or maybe in 3 vehicles of 7500 + 3000 + 750?
How do I optimize this along with the distance traveled?

Add a hard constraint or a heavily weighted soft constraint that penalizes the number of vehicles used; see the sketch below.
That being said, there is research suggesting that in some cases, even with that constraint, local search can have trouble cutting down the number of vehicles, especially that last vehicle. Custom, coarse-grained moves should overcome that decently. But in practice, for convenience, people often just run a second solver that enforces one vehicle less by simply starting with fewer vehicles.
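For illustration, here is a minimal sketch of such a constraint with OptaPlanner's Constraint Streams API. It assumes the domain classes of the OptaPlanner VRP example (a Customer planning entity with a getVehicle() getter); the weight is a placeholder you would tune against the distance soft score:

import org.optaplanner.core.api.score.buildin.hardsoftlong.HardSoftLongScore;
import org.optaplanner.core.api.score.stream.Constraint;
import org.optaplanner.core.api.score.stream.ConstraintFactory;

// Penalize each vehicle that serves at least one customer, so the solver
// prefers solutions that leave vehicles empty. The weight must dominate the
// distance penalty, or the solver will trade an empty vehicle for shorter routes.
Constraint minimizeVehiclesUsed(ConstraintFactory constraintFactory) {
    return constraintFactory.forEach(Customer.class)
            .groupBy(Customer::getVehicle) // one tuple per vehicle in use
            .penalize(HardSoftLongScore.ofSoft(1_000_000L))
            .asConstraint("Minimize vehicles used");
}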

Related

Unlimited vehicles in VRP

How can I allow OptaPlanner to use an unlimited or dynamic number of vehicles in the VRP problem?
The number of vehicles is minimized during score calculation, as each vehicle has a base cost. The solver should initialize as many vehicles as it finds convenient.
@PlanningEntityCollectionProperty
@ValueRangeProvider(id = "vehicleRange")
public List<Vehicle> getVehicleList() {
    return vehicleList;
}
Currently I just initialize the vehicle list with a predefined number of vehicles, such as 100 000, but I am not sure about the performance implications of that, as the search space is much bigger than necessary.
Out of the box, this is the only way. You figure out the minimum maximum number of vehicles for a dataset, that is, the smallest vehicle count guaranteed to suffice, and use that to size the vehicle list. For one, the minimum maximum number of vehicles is never bigger than the number of visits. But usually you can prove it to be far less than that.
That being said, the OptaPlanner architecture does support Moves that create or delete Vehicles, at least in theory. No out-of-the-box moves do that, so you'd need to build custom moves for it - and it will get complex fast. One day we intend to support generic create/delete moves out of the box.
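For example, a sketch of sizing the list to that bound instead of a blind 100 000 (the Vehicle constructor and field names follow the VRP example and are assumptions):

// One vehicle per visit is always enough, so the list never needs to be
// longer than the number of customers. A tighter bound, e.g. ceil(total
// demand / smallest capacity) plus some slack, shrinks the search space further.
int maxVehicleCount = customerList.size();
List<Vehicle> vehicleList = new ArrayList<>(maxVehicleCount);
for (int i = 0; i < maxVehicleCount; i++) {
    vehicleList.add(new Vehicle(i, capacity, depot));
}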

Getting the optimal number of employees for a month (rostering)

Is it possible to get the optimal number of employees in a month for a given number of shifts?
I'll explain myself a little further, taking nurse rostering as an example.
Imagine that we don't know the number of nurses to plan in a given month with a fixed number of shifts. Also, imagine that each new nurse you insert into the planning decreases your score, and each nurse has a limited number of normal hours and a limited number of extra hours. Extra hours decrease the score more than normal ones.
So the problem consists of getting the optimal number of nurses needed and their planning. I've come up with two possible solutions:
Fix the number of nurses clearly above the number needed and treat the problem as an overconstrained one, so there will be some nurses not assigned to any shifts.
Launch multiple instances of the same problem in parallel, with an incremental number of nurses for each instance. This solution has the problem that you have to estimate an approximate range of nurses, below and above the number needed, beforehand.
Both solutions are a little inefficient; is there a better approach to tackle this problem?
I call option 2 doing simulations. Typically in simulations, they don't just play with the number of employees, but also with the constraint weights (@ConstraintWeight) etc. It's useful for strategic "what if" decisions (What if we ... hire more people? ... focus more on service quality? ... focus more on financial gain? ...).
If you really just need to minimize the number of employees, and you can clearly weight that against all the other hard and soft constraints (probably as a weight in between both, similar to overconstrained planning), then option 1 is good enough - and less CPU costly. A sketch of such a constraint follows.
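This is a sketch of option 1 in Constraint Streams, assuming a Shift planning entity with a getEmployee() getter; the weight is illustrative and would sit between the hard constraints and the other soft constraints:

// Penalize each employee who is assigned at least one shift, so plans that
// leave some employees completely unassigned score better.
Constraint minimizeEmployeesUsed(ConstraintFactory constraintFactory) {
    return constraintFactory.forEach(Shift.class)
            .groupBy(Shift::getEmployee) // one tuple per employee in use
            .penalize(HardSoftScore.ofSoft(100))
            .asConstraint("Minimize employees used");
}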

Hard score calculation in vehicle routing

Currently using OptaPlanner for calculating score in a CVRP problem.
if (demand > capacity) {
    hardScore -= (demand - capacity);
}
If there is a heterogeneous fleet, how can I calculate the hard score?
I want OptaPlanner to use a vehicle with a smaller capacity when the total demand is less than the capacity of the vehicle it assigned.
Don't mix 2 constraints. These are 2 different constraints:
Each vehicle must have enough capacity (usually a hard constraint) - already implemented in the OptaPlanner example.
Prefer using smaller vehicles over bigger ones (usually a soft constraint). Normally there's a price per km per vehicle type, so this factors in the distance driven too in the soft score penalty.
Just implement the second constraint, starting from the OptaPlanner VRP example.
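As a hedged sketch of that second constraint in Constraint Streams (names follow the VRP example; using capacity as the penalty weight is just one way to encode "smaller is cheaper"):

// Penalize each used vehicle proportionally to its capacity, so the solver
// reaches for the big vehicles only when the demand actually requires them.
Constraint preferSmallerVehicles(ConstraintFactory constraintFactory) {
    return constraintFactory.forEach(Customer.class)
            .groupBy(Customer::getVehicle)
            .penalizeLong(HardSoftLongScore.ONE_SOFT, Vehicle::getCapacity)
            .asConstraint("Prefer smaller vehicles");
}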

Optaplanner VRP, support multiple fuel consumption values based on vehicle type?

I have a VRP in which I would like to include fuel consumption as a soft constraint, with different values per vehicle type, so that the engine selects the vehicles with the least fuel consumption.
I thought about adding a multiplier to the vehicle type so that it is multiplied with the distance as a soft constraint. Is that possible, and would it affect the result negatively?
Thanks,
Yes, that's possible.
Your distances can be in km. Then your score rule just multiplies each distance (= km) driven by a vehicle by that vehicle's vehicle.getCostPerKm().
You can even keep track of the driving time in seconds for each distance and build one big weighted function:
addSoft(..., - ($distanceInKm * $vehicle.getCostPerKm() + $distanceInSeconds * $vehicle.getDriverWagePerSecond()));
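Spelled out as the right-hand side of a Drools score rule (a sketch: the $ variables are assumed to be bound on the rule's left-hand side, and the getters are assumptions on your Vehicle class):

// Combine distance cost and driver wage into one weighted soft penalty.
long cost = $distanceInKm * $vehicle.getCostPerKm()
        + $distanceInSeconds * $vehicle.getDriverWagePerSecond();
scoreHolder.addSoftConstraintMatch(kcontext, -cost);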

Is there a predefined name for the following solution search/optimization algorithm?

Consider a problem whose solution maximizes an objective function.
Problem: from 500 elements, 15 need to be selected (a candidate solution). The value of the objective function depends on the pairwise relationships between the elements in a candidate solution, and some more.
The steps for solving such a problem are described here:
1. Generate a set of candidate solutions (a population) in a guided random manner // not purely random: a direction is given to generate the population
2. Evaluate the objective function for the current population
3. If the current_best_solution exceeds the global_best_solution, replace the global_best with the current_best
4. Repeat steps 1, 2, 3 N times (N arbitrary)
where the population size and N are both small (approx. 50)
After N iterations it returns the candidate solution stored in global_best_solution.
Is this the description of a well-known algorithm?
If it is, what is the name of that algorithm, and if not, under which category do these types of algorithms fit?
What you have sounds like you are just fishing. Note that you might as well get rid of steps 3 and 4, since running the loop 100 times would be the same as doing it once with an initial population 100 times as large.
If you think of the objective function as a random variable which is a function of random decision variables, then what you are doing would e.g. give you something in the 99.9th percentile with very high probability -- but there is no limit to how far the optimum might be from the 99.9th percentile.
To illustrate the difficulty, consider the following sort of Travelling Salesman Problem. Imagine two clusters of points A and B, each of which has 100 points. Within the clusters, each point is arbitrarily close to every other point (e.g. 0.0000001). But between the clusters the distance is, say, 1,000,000. The optimal tour would clearly have length 2,000,000 (+ a negligible amount). A random tour is just a random permutation of those 200 decision points. Getting an optimal or near-optimal tour would be akin to shuffling a deck of 200 cards, with 100 red and 100 black, and having all of the red cards in the deck in a block (counting blocks that "wrap around") -- vanishingly unlikely. There are 200 block positions counting wrap-around, so the probability is 200 * 100! * 100! / 200! ≈ 2.2 x 10^-57. Even if you generate quadrillions of tours, it is overwhelmingly likely that each of those tours would be off by millions.
This is a min problem, but it is also easy to come up with max problems where it is vanishingly unlikely that you will get a near-optimal solution by purely random settings of the decision variables. This is an extreme example, but it is enough to show that purely random fishing for a solution isn't very reliable. It would make more sense to use evolutionary algorithms or other heuristics such as simulated annealing or tabu search.
Why do you work with a population if the members of that population do not interact?
What you have there is random search.
If you add mutation, it looks like an Evolution Strategy: https://en.wikipedia.org/wiki/Evolution_strategy
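For concreteness, the loop described in the question boils down to this kind of random search (a sketch; the candidate generator and the objective function are placeholders for the question's guided generation and pairwise scoring):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class RandomSearchSketch {

    static final Random RANDOM = new Random();

    public static void main(String[] args) {
        List<Integer> globalBest = null;
        double globalBestScore = Double.NEGATIVE_INFINITY;
        for (int n = 0; n < 50; n++) {              // step 4: repeat N times
            for (int i = 0; i < 50; i++) {          // step 1: population of ~50
                List<Integer> candidate = generateCandidate();
                double score = evaluate(candidate); // step 2: evaluate
                if (score > globalBestScore) {      // step 3: keep the global best
                    globalBestScore = score;
                    globalBest = candidate;
                }
            }
        }
        System.out.println(globalBestScore + " <- " + globalBest);
    }

    // Placeholder for the question's guided generation: a uniform 15-of-500 sample.
    static List<Integer> generateCandidate() {
        List<Integer> elements = new ArrayList<>();
        for (int i = 0; i < 500; i++) {
            elements.add(i);
        }
        Collections.shuffle(elements, RANDOM);
        return new ArrayList<>(elements.subList(0, 15));
    }

    // Stand-in for the real pairwise objective function.
    static double evaluate(List<Integer> candidate) {
        return candidate.stream().mapToInt(Integer::intValue).sum();
    }
}

Because no information flows from one sampled candidate to the next, nothing here is evolutionary; it is exactly the "fishing" the first answer describes.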
