Is there a function for aborting routing calculation in OptaPlanner?

I want to have a function like: if the calculation takes too long, we abort the routing calculation and submit the best solution found at that point in time. Is there such a function in OptaPlanner?

For example, in a GUI application you would start solving on a background (worker) thread. In this scenario you can stop the solver asynchronously by calling solver.terminateEarly() from another thread, typically the UI thread, when a stop button is clicked.
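A minimal sketch of that setup (Solver, SolverFactory, solve() and terminateEarly() are real OptaPlanner APIs; MySolution, solverConfig.xml, problem and submitSolution are placeholders):

    import org.optaplanner.core.api.solver.Solver;
    import org.optaplanner.core.api.solver.SolverFactory;

    // Build the solver (MySolution and solverConfig.xml are placeholders).
    SolverFactory<MySolution> factory = SolverFactory.createFromXmlResource("solverConfig.xml");
    Solver<MySolution> solver = factory.buildSolver();

    // Solve on a worker thread; solve() blocks until termination.
    Thread worker = new Thread(() -> {
        MySolution best = solver.solve(problem);
        submitSolution(best); // hypothetical callback for the result
    });
    worker.start();

    // Later, e.g. from the UI thread's stop-button handler:
    solver.terminateEarly(); // solve() then returns the best solution found so far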
If this is not what you're looking for, read on.
Provided that by calculation you actually mean the time spent solving, you have several options for stopping the solver. Besides the asynchronous termination described in the first paragraph, you can use synchronous termination:
Use time spent termination if you know beforehand how much time you want to dedicate to solving.
Use unimproved time spent termination if you want to stop solving if the solution doesn't improve for a specified amount of time.
Use best score termination if you want to stop solving after a certain score has been reached.
Synchronous termination is defined before the solver starts, either in the XML solver configuration or via the SolverConfig API. See the OptaPlanner documentation for other termination conditions.
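For illustration, a sketch using the programmatic SolverConfig API, assuming a recent OptaPlanner version (the classes and with* methods are real; the limits are example values matching the three options above):

    import org.optaplanner.core.config.solver.SolverConfig;
    import org.optaplanner.core.config.solver.termination.TerminationConfig;

    SolverConfig solverConfig = new SolverConfig()
            // ... solution class, entity classes, score calculation, etc. ...
            .withTerminationConfig(new TerminationConfig()
                    .withSecondsSpentLimit(30L)             // time spent termination
                    .withUnimprovedSecondsSpentLimit(10L)   // unimproved time spent termination
                    .withBestScoreLimit("0hard/-100soft")); // best score termination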
Lastly, in case you're talking about score calculation and it takes too long to calculate the score for a single move (solution change), then you're most certainly doing something wrong. For OptaPlanner to be able to search the solution space effectively, the score calculation must be fast (at least 1000 calculations per second).
For example, in a vehicle routing problem, driving times or road distances must be known before you start solving. You shouldn't slow down score calculation with heavy computations that can be done beforehand.
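As an illustration (all names here are hypothetical), precompute a distance matrix once so the score calculation only does an array lookup:

    // Precompute once, before solving (computeDrivingTime is a placeholder
    // for whatever map service or formula you use).
    long[][] distanceMatrix = new long[locations.size()][locations.size()];
    for (int i = 0; i < locations.size(); i++) {
        for (int j = 0; j < locations.size(); j++) {
            distanceMatrix[i][j] = computeDrivingTime(locations.get(i), locations.get(j));
        }
    }

    // In the score calculator, a cheap O(1) lookup instead of a heavy computation:
    long drivingTime = distanceMatrix[from.getIndex()][to.getIndex()];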

Related

OptaPlanner: optimal number of steps for termination

I am setting my solver's local search phase termination strategy based on the total number of steps. I see that when I set the number of steps to 80,000, it terminates within 30 minutes. So I set it to around 200,000 and expected it to terminate within 2-3 hours.
However, even after a day it hasn't terminated. I then ran it with DEBUG logs and found that after around 90,000 steps the time taken per step starts increasing, and around the 100,000 mark it just does not take any new steps.
What could be causing this? If it has exhausted all the steps, does it not terminate automatically?
In Late Acceptance (the default algorithm) and Simulated Annealing, the number of steps per minute varies greatly depending on how long the solver has been running. In the beginning they step fast, because they are far more likely to accept a move (which triggers going to the next step); over time they step more and more slowly, because they are far less likely to see a move they like as the solution becomes near optimal.
Tabu Search is pretty consistent in the number of steps per minute.
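Because the step rate drifts like this, a step-count limit is an unreliable proxy for wall-clock time. A hedged sketch of one workaround, assuming a recent OptaPlanner version (stepCountLimit and secondsSpentLimit are real TerminationConfig properties; the values are examples): keep the step limit on the local search phase, but add a solver-level time limit as a safety net.

    import org.optaplanner.core.config.localsearch.LocalSearchPhaseConfig;
    import org.optaplanner.core.config.solver.termination.TerminationConfig;

    // Phase-level termination: stop local search after 200,000 steps...
    LocalSearchPhaseConfig localSearch = new LocalSearchPhaseConfig();
    localSearch.setTerminationConfig(new TerminationConfig().withStepCountLimit(200_000));

    // ...but cap the whole run at 3 hours, whatever the step count does.
    TerminationConfig solverTermination = new TerminationConfig().withSecondsSpentLimit(3L * 60 * 60);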

OptaPlanner termination strategy

In the OptaPlanner configuration, there is a provision to specify the termination timeout.
Is there a better way to handle the termination timeout strategy? For example, my problem size is small and I have set the termination timeout to 10 seconds.
But I can see from the logs that the best score is obtained well within 2-3 seconds. Is there any means to exit once the best score is reached?
Or should the program always run until the timeout is reached and then output the best score?
Take a look at the Termination chapter in the OptaPlanner documentation.
What you are referring to is called BestScoreTermination, but it might not be what you actually want -- do note that OptaPlanner has no way of knowing whether a given score is "the optimal score"... unless you configure Exhaustive Search (which doesn't scale well).
Therefore, if you misjudge your problem and set the BestScoreTermination to something "better" than the optimal value, OptaPlanner will run until it has tried all combinations (which might take effectively forever on big problems). If you're looking for a compromise, take a look at termination composition, sketched below.
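A hedged sketch of composed termination (real TerminationConfig properties; the score and limits are example values). Multiple properties on one TerminationConfig are OR-composed by default, so the solver stops as soon as any of them fires:

    import org.optaplanner.core.config.solver.termination.TerminationConfig;

    TerminationConfig termination = new TerminationConfig()
            .withBestScoreLimit("0hard/0soft")       // stop early if this score is reached...
            .withUnimprovedSecondsSpentLimit(5L)     // ...or nothing improves for 5 seconds...
            .withSecondsSpentLimit(10L);             // ...or 10 seconds have passed in total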

DSP32C doesn't have a timer, but I need a timer implementation

I have this old DSP32C with me and want to implement timer-based control. But when I started reading the datasheet, I found out that there is no timer register in the DSP32C.
Is there any possible way I could implement a function similar to a timer, say 'after 3 seconds, do this...'?
Since you cannot solve this "internally", you'll need to find a suitable Real-Time Clock (RTC) component that can interface with the DSP32C. Which one and how to connect it is probably a question for the Electrical Engineering Stack Exchange site.
Depending on what you want to do, a busy loop might also be a solution: you simply loop a certain number of times. If you don't have precise timing information and cannot calculate the number of loop iterations, you'll have to experiment with a stopwatch and tweak your loop(s) until they take long enough.
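Purely to illustrate the calibration idea (in Java for consistency with the other examples on this page; on the DSP32C you'd write the equivalent in C or assembly, and the constant below is made up):

    // ITERATIONS_PER_SECOND is found empirically for the target hardware,
    // e.g. with a physical stopwatch as described above (made-up value here).
    static final long ITERATIONS_PER_SECOND = 50_000_000L;

    static void busyDelaySeconds(double seconds) {
        long n = (long) (seconds * ITERATIONS_PER_SECOND);
        long sink = 0;
        for (long i = 0; i < n; i++) {
            sink += i;                        // busy work so the loop isn't optimized away
        }
        if (sink == -1) System.out.print(""); // keep 'sink' observable
    }

    // "after 3 seconds, do this":
    // busyDelaySeconds(3.0);
    // doThis(); // hypothetical timed action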

Can anyone explain the PathFinder algorithm used in FPGA routing?

How does the PathFinder algorithm work in FPGA routing? I have an oral exam on this topic next week, so can anyone explain the two iteration phases clearly, maybe with an example?
Here's what I understood after reading multiple research papers.
The algorithm runs in iterations.
First iteration:
- route every connection with minimum delay, even if there is congestion
Subsequent iterations (repeated as long as congestion exists):
- rip up and re-route every net in the circuit
- the cost of using a congested routing resource is increased from iteration to iteration
- at the end of an iteration we have a complete routing (but maybe with congestion); determine the delays and slacks of all connections
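To make the negotiated-congestion idea concrete, here is a self-contained toy sketch; the graph, cost model and constants are all invented for illustration, and a real FPGA router works on the routing-resource graph and also folds timing criticality into the cost:

    import java.util.*;

    public class PathFinderSketch {
        // Toy routing graph: 0 -> {2,4}, 1 -> {2}, 2 -> {3,5}, 4 -> {3}.
        // Net A (0 -> 3) can go via node 2 or node 4; net B (1 -> 5) must use node 2.
        static int[][] adj = { {2, 4}, {2}, {3, 5}, {}, {3}, {} };
        static double[] base = { 1, 1, 1, 1, 2, 1 };  // node 4 is the longer detour
        static int[] capacity = { 1, 1, 1, 1, 1, 1 }; // each node carries one net legally
        static double[] history = new double[6];      // grows while a node stays congested
        static int[] occupancy = new int[6];

        // PathFinder-style node cost: base * (1 + history) * (1 + present overuse).
        static double nodeCost(int v) {
            int over = Math.max(0, occupancy[v] + 1 - capacity[v]);
            return base[v] * (1 + history[v]) * (1 + over);
        }

        // Dijkstra where a node's cost includes its current congestion penalties.
        static List<Integer> shortestPath(int src, int dst) {
            double[] dist = new double[adj.length];
            int[] prev = new int[adj.length];
            Arrays.fill(dist, Double.POSITIVE_INFINITY);
            Arrays.fill(prev, -1);
            dist[src] = 0;
            PriorityQueue<double[]> pq = new PriorityQueue<>(Comparator.comparingDouble(e -> e[0]));
            pq.add(new double[] { 0, src });
            while (!pq.isEmpty()) {
                double[] e = pq.poll();
                int u = (int) e[1];
                if (e[0] > dist[u]) continue; // stale queue entry
                for (int v : adj[u]) {
                    double d = dist[u] + nodeCost(v);
                    if (d < dist[v]) { dist[v] = d; prev[v] = u; pq.add(new double[] { d, v }); }
                }
            }
            List<Integer> path = new ArrayList<>();
            for (int v = dst; v != -1; v = prev[v]) path.add(0, v);
            return path;
        }

        public static void main(String[] args) {
            int[][] nets = { { 0, 3 }, { 1, 5 } };
            List<List<Integer>> routes = new ArrayList<>();
            routes.add(null);
            routes.add(null);
            boolean congested = true;
            for (int iter = 0; congested && iter < 20; iter++) {
                for (int i = 0; i < nets.length; i++) {    // rip up and re-route every net
                    if (routes.get(i) != null) routes.get(i).forEach(v -> occupancy[v]--);
                    List<Integer> path = shortestPath(nets[i][0], nets[i][1]);
                    path.forEach(v -> occupancy[v]++);
                    routes.set(i, path);
                }
                congested = false;
                for (int v = 0; v < capacity.length; v++) {
                    if (occupancy[v] > capacity[v]) {      // still over capacity:
                        history[v] += 1.0;                 // make it pricier next iteration
                        congested = true;
                    }
                }
                System.out.println("iteration " + iter + ": " + routes);
            }
        }
    }

Run it and you can watch the negotiation: in iteration 0 both nets fight over node 2, its history cost rises, and in iteration 1 net A switches to the longer detour through node 4, resolving the congestion.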

How to generate requests at a “requests/sec” target rate?

Say I have a target of x requests/sec that I want to generate continuously. My goal is to start these requests at roughly the same interval, rather than just generating x requests, waiting until 1 second has elapsed, and repeating the whole thing over and over again. I'm not making any assumptions about these requests; some might take much longer than others, which is why my scheduler thread will not perform the requests (or wait for them to finish) but hand them over to a sufficiently sized thread pool.
Now if x is in the range of hundreds or less, I might get by with .NET's Timers or Thread.Sleep, checking the actually elapsed time using Stopwatch.
But if I want to go into the thousands or tens of thousands, I could try using a high-resolution timer to maintain my roughly-the-same-interval approach. But this would (in most programming environments on a general-purpose OS) imply some amount of hand-coding with spin waiting and so forth, and I'm not sure it's worthwhile to take this route.
Extending the initial approach, I could instead use a Timer to sleep and do y requests on each Timer event, monitor the actual requests per second achieved, and fine-tune y at runtime. The effect is somewhere in between "fire all x requests and wait until 1 second has elapsed since the start", which I'm trying to avoid, and "wait more or less exactly 1/x seconds before starting the next request".
The latter seems like a good compromise, but is there anything that's easier while still spreading the requests somewhat evenly over time? This must have been implemented hundreds of times by different people, but I can't find good references on the issue.
So what's the easiest way to implement this?
One way to do it:
First, find (good luck on Windows) or implement a usleep or nanosleep function. As a first step, this could be (on .NET) a simple Thread.SpinWait() / Stopwatch.Elapsed > x combo. If you want to get fancier, do Thread.Sleep() if the time span is large enough and only do the fine-tuning using Thread.SpinWait().
That done, just take the inverse of the rate and you have the time interval you need to sleep between events. Your basic loop, which runs on one dedicated thread, then goes:
Fire event
Sleep(sleepTime)
Then every, say, 250 ms (or more often for faster rates), check the actually achieved rate and adjust the sleepTime interval, perhaps with some smoothing to dampen wild temporary swings, like this:
newSleepTime = max(1, sleepTime * actualRate / targetRate)
sleepTime = 0.3 * sleepTime + 0.7 * newSleepTime
This adjusts to what is actually going on in your program and on your system, making up for the time spent invoking the event callback and for whatever the callback does on that same thread. Without this feedback, you will probably not be able to achieve high accuracy.
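A self-contained sketch of that loop, written in Java for consistency with the other examples on this page (the original answer is about .NET; all names and constants here are illustrative):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.locks.LockSupport;

    public class RateGenerator {
        public static void main(String[] args) {
            final double targetRate = 1000.0;            // events per second
            long sleepNanos = (long) (1e9 / targetRate); // initial guess: 1/rate
            ExecutorService pool = Executors.newFixedThreadPool(8);

            long windowStart = System.nanoTime();
            long fired = 0, firedAtWindowStart = 0;
            while (true) {
                pool.submit(() -> { /* perform one request here */ });
                fired++;
                LockSupport.parkNanos(sleepNanos);        // coarse sleep; spin-wait for finer control

                long now = System.nanoTime();
                if (now - windowStart >= 250_000_000L) {  // re-tune every ~250 ms
                    double actualRate = (fired - firedAtWindowStart) * 1e9 / (now - windowStart);
                    double newSleepTime = Math.max(1, sleepNanos * actualRate / targetRate);
                    sleepNanos = (long) (0.3 * sleepNanos + 0.7 * newSleepTime);
                    windowStart = now;
                    firedAtWindowStart = fired;
                }
            }
        }
    }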
Needless to say, if your rate is so high that you cannot use Sleep but always have to spin, one core will be spinning continuously. The good news: we get ever more cores on our machines, so one core matters less and less :) More seriously though, as you mentioned in the comment, if your program does actual work, your event generator will have less time (and need) to waste cycles.
Check out https://github.com/EugenDueck/EventCannon for a proof-of-concept implementation in .NET. It's implemented roughly as described above and done as a library, so you can embed it in your program if you use .NET.
