OptaPlanner: Gaps in Chained Through Time Pattern

I've just started learning OptaPlanner, so please pardon any technically inaccurate descriptions below.
Basically, I have to assign several tasks to a set of machines. Tasks have precedence restrictions, such that some tasks cannot start before another task ends. In addition, each task can only run on certain machines. The goal is to minimize the makespan of all these tasks.
I modeled this problem with the Chained Through Time pattern, in which each machine is an anchor. The problem is that tasks on a machine might not execute back to back, because of the precedence restrictions. For example, suppose Task B can only start after Task A completes, while Tasks A and B run on machines I and II respectively. While Task A executes on machine I, if no other task can run on machine II, then machine II must stay idle until Task A completes, at which point Task B can start on it. The length of this gap is not known in advance, since it depends on the duration of Task A.
According to the OptaPlanner documentation, it seems an additional planning variable for gaps should be introduced for this kind of problem, but I'm having difficulty modeling that gap variable. In general, how do I integrate a gap variable into a model that uses the Chained Through Time pattern? A detailed explanation or even a simple example would be highly appreciated.
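To make the gap concrete, here is a plain-Java sketch (hypothetical names, not OptaPlanner code) of the start-time rule I think the model needs: a task starts at the later of "my machine is free" and "my predecessor has finished", so the idle gap on a machine falls out of a max rather than being its own planning variable.

```java
// Hypothetical plain-Java sketch (not OptaPlanner API): a task starts at the
// later of "my machine is free" and "my predecessor has finished", so the
// idle gap on a machine is implied rather than being a separate variable.
class Task {
    final int duration;
    Task predecessor;        // precedence: must end before this task starts
    Task previousOnMachine;  // previous task in this machine's chain

    int startTime;

    Task(int duration) { this.duration = duration; }

    int endTime() { return startTime + duration; }

    // What a shadow-variable-style listener would recompute on every chain change.
    void updateStartTime() {
        int machineFree = previousOnMachine == null ? 0 : previousOnMachine.endTime();
        int predecessorEnd = predecessor == null ? 0 : predecessor.endTime();
        startTime = Math.max(machineFree, predecessorEnd);
    }
}

public class GapDemo {
    public static void main(String[] args) {
        Task a = new Task(5); // first task on machine I
        a.updateStartTime();  // starts at 0, ends at 5
        Task b = new Task(3); // first task on machine II, but depends on a
        b.predecessor = a;
        b.updateStartTime();
        System.out.println(b.startTime); // 5: machine II idles from 0 to 5
    }
}
```

This is the behavior I'd like the chained model to express; whether it needs an explicit gap variable is exactly my question.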
Moreover, I'm not sure whether the Chained Through Time pattern is suitable for modeling this kind of task assignment problem at all, or whether I've used an entirely inappropriate method. Could someone please shed some light on this? Thanks in advance.

I'm using the Chained Through Time pattern to solve the same problem as yours. To handle the precedence restrictions, you can write Drools rules.


How do we find out why priority inheritance happened in VxWorks?

We have one task whose state is READY+I. Can we find out which task it is waiting for to release its semaphores? This is pre-6.0 VxWorks.
If you can get a backtrace from the task, you should see it blocked on some kind of system entity, e.g., a semaphore. You can look at the arg list printed in the backtrace, and then use semShow from the C shell to get information about that semaphore. Other system synchronization entities offer similar *Show routines.
Presuming that the entity supports the concept of an "owner", semShow should display the TID of the owner.
Under the older, Tornado-based systems, the WindView tool will allow you to see the relationship between tasks over time. WindView can show all your task state transitions, interrupts, semaphore operations, etc.
For newer, Workbench-based systems, the same tool is now called System Viewer.
WindView/System Viewer is the deluxe way to investigate any problem you are having with task states and how they got that way.
If I understand your question, you have a task that is inheriting the priority of some other task and you are having trouble identifying this other task. I don't recall if the i WindSh command prints the inherited priority but if it does that might give you a clue about which of the pended tasks you should look at. Once you've narrowed it down to a couple tasks you should be able to use the tw command to print information on what object a task is pended upon.
On a side note, why are you concerned about priority inheritance? After all priority inheritance isn't a problem, rather it is the solution to priority inversion.
If your task is READY+I, I don't think it is waiting for semaphores anymore. It is waiting for the CPU. You must have a higher-priority task running that is preventing your READY+I task from running.

Can I implement a cooperative multi-tasking system in VxWorks?

A legacy embedded system is implemented using a cooperative multi-tasking scheduler.
The system essentially works along the following lines:
Task A does work
When Task A is done, it yields the processor.
Task B gets the processor and does work.
Task B yields
Task n yields
Task A gets scheduled and does work
One big Circular Queue: A -> B -> C -> ... -> n -> A
We are porting the system to a new platform and want to minimize system redesign.
Is there a way to implement that type of cooperative multi-tasking in VxWorks?
While VxWorks is a priority based OS, it is possible to implement this type of cooperative multi-tasking.
Simply put all the tasks at the same priority.
In your code, where you do your yield, simply insert a 'taskDelay(0);'
Note that you have to make sure the kernel time slicing is disabled (kernelTimeSlice(0)).
All tasks at the same priority are in a Queue. When a task yields, it gets put at the end of the queue. This would implement the type of algorithm described.
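To illustrate the queue behavior described above (in plain Java rather than VxWorks code, with hypothetical names): all same-priority tasks form one FIFO ready queue, the head of the queue gets the CPU, and yielding moves the running task to the back, which produces the round-robin order A, B, C, A, B, C, ...

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class ReadyQueueDemo {
    // Model of one same-priority FIFO ready queue with time slicing off:
    // the head runs until it yields, and taskDelay(0) puts it at the back.
    static List<String> runOrder(List<String> tasks, int steps) {
        Deque<String> readyQueue = new ArrayDeque<>(tasks);
        List<String> order = new ArrayList<>();
        for (int i = 0; i < steps; i++) {
            String running = readyQueue.pollFirst(); // head of queue gets the CPU
            order.add(running);
            readyQueue.addLast(running);             // yield: go to the back
        }
        return order;
    }

    public static void main(String[] args) {
        System.out.println(runOrder(List.of("A", "B", "C"), 6));
        // [A, B, C, A, B, C]
    }
}
```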
I once worked on a relatively large embedded product which did this. Time slicing was disabled and threads would explicitly taskDelay when they wanted to allow another thread to run.
I have to conclude: disabling VxWorks time slicing leads to madness. Avoid it if it is within your power to do so.
Because tasks were entirely non-preemptive (and interrupt handlers were only allowed to enqueue a message for a regular task to consume), the system had dispensed with any sort of locking for any of its data structures. Tasks were expected to only release the scheduler to another task if all data structures were consistent.
Over time the original programmers moved on and were replaced by fresh developers to maintain and extend the product. As it grew more features the system as a whole became less responsive. When faced with a task which took too long the new developers would take the straightforward solution: insert taskDelay in the middle. Sometimes this was fine, and sometimes it wasn't...
Disabling task slicing effectively makes every task in your system into a dependency on every other task. If you have more than three tasks, or you even think you might eventually have more than three tasks, you really need to construct the system to allow for it.
This isn't specific to VxWorks, but the system you have described is a variant of Round Robin Scheduling (I'm assuming you are using priority queues, otherwise it is just Round Robin Scheduling).
The wiki article provides a bit of background and then you could go from there.
Good Luck
What you describe is essentially:
void scheduler()
{
    while (1) {
        long st = microseconds();
        run_all_tasks();  /* each task runs and returns (yields) in turn */
        /* sleep out the remainder of the cycle period, if any */
        sleep_us(CYCLE_PERIOD_US - (microseconds() - st));
    }
}
However, if you don't already have a scheduler, now is a good time to implement one. In the simplest case, each entry point can either derive from a Task class or implement a Task interface (depending on the language).
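For example, a minimal sketch (plain Java, hypothetical names) of such a Task interface driven by a simple circular scheduler, where each task does one unit of work and "yields" by returning:

```java
import java.util.List;

public class CoopScheduler {
    // Each task does one unit of work per call, then "yields" by returning.
    interface Task {
        void runOnce();
    }

    private final List<Task> tasks;

    CoopScheduler(List<Task> tasks) { this.tasks = tasks; }

    // One full cycle of the circular queue: A -> B -> ... -> n.
    void runCycle() {
        for (Task t : tasks) {
            t.runOnce();
        }
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        List<Task> tasks = List.of(
                () -> log.append("A"),
                () -> log.append("B"),
                () -> log.append("C"));
        CoopScheduler s = new CoopScheduler(tasks);
        s.runCycle();
        s.runCycle();
        System.out.println(log); // ABCABC
    }
}
```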
You can make all the tasks the same priority and use taskDelay(0), or you can use taskLock() and taskUnlock() inside your low-priority tasks where you need non-preemptive behavior.

Optaplanner: Reproducible solution

I am trying to solve a problem similar to employee rostering. The problem I am facing is that every time I run the solver, it generates a different assignment. This makes it harder to debug why one particular assignment was picked over another. Why is this the case?
P.S. My assignment has many hard constraints, and not all of them can be satisfied (in most cases I still see a negative hard score). So my termination strategy is based on unimprovedSecondsSpentLimit. Could this be the reason?
Yes, it's likely the termination. OptaPlanner's default environmentMode guarantees the exact same solution at the exact same step (*). But CPU cycles differ a lot from run to run, so that means you get more or less steps per run. Use DEBUG logging to see that.
Use stepCountLimit or unimprovedStepCountLimit termination.
(*) Unless specified otherwise in the docs. Simulated Annealing for example will be different even in the exact same step if used with time bound terminations.
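For reference, a minimal solver config sketch along those lines (element names from OptaPlanner's XML solver config; the limit value is just an example): terminating on step counts instead of wall-clock time makes runs repeatable regardless of CPU speed.

```xml
<solver>
  <!-- REPRODUCIBLE is the default environmentMode; shown here for emphasis -->
  <environmentMode>REPRODUCIBLE</environmentMode>
  <termination>
    <!-- step-based limits do not depend on available CPU cycles,
         unlike time-based limits such as unimprovedSecondsSpentLimit -->
    <unimprovedStepCountLimit>100</unimprovedStepCountLimit>
  </termination>
</solver>
```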

VRP with team building

I am using optaplanner with drools score calculation and chained variables.
An additional requirement for my optimization problem is, that some tasks may need more than one worker (e.g., someone has to hold the ladder).
Since this case is not covered by the simple VRP example from the docs, I have to come up with my own implementation (this is where things start to get out of hand :)).
What follows is a description of my idea and a picture of it. My question is whether this kind of chain freezing is possible with OptaPlanner (including with multithreaded solving).
If yes, where can I find resources for this?
If not, what are other possibilities to cover team building processes?
Two separate workers complete their first tasks. After that, a task requires a team of two. Both workers form a team (a team object inherits from the worker and is also an anchor). As soon as they form the team, they get assigned blocking tasks (on their initial chains).
Blocking tasks have the same start and duration as the team task. When the team task is completed, the initial chains unfreeze, and the workers continue working on their own again.
!! Not shown in the picture: A team chain needs to be frozen after the team breaks up.
I hope that explains what I am trying to do.
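To state the blocking-task idea precisely, here is a tiny plain-Java sketch (hypothetical names, not OptaPlanner code) of the invariant I want those blocking tasks to satisfy: each one mirrors the team task's start and duration exactly, so the worker's own chain resumes at the right time when the team breaks up.

```java
public class BlockingTaskDemo {
    static final class Interval {
        final int start;
        final int duration;

        Interval(int start, int duration) {
            this.start = start;
            this.duration = duration;
        }

        int end() { return start + duration; }
    }

    // The blocking task placed on a worker's own chain must mirror the
    // team task exactly; the chain after it then resumes at teamTask.end().
    static Interval blockingFor(Interval teamTask) {
        return new Interval(teamTask.start, teamTask.duration);
    }

    public static void main(String[] args) {
        Interval teamTask = new Interval(10, 4);
        Interval blocking = blockingFor(teamTask);
        System.out.println(blocking.end()); // 14: worker resumes its own chain at 14
    }
}
```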

OptaPlanner immediately produces better solution after terminating and restarting the solver

I created a solution based on the task assigning example of OptaPlanner and observe one specific behavior in both the original example and my own solution:
Solving the 100tasks-5employees problem hardly produces new better scores after half a minute or so, but terminating the solver and restarting it immediately brings up better solutions.
Why does this happen? In my understanding, the repeated construction heuristic does not change any planning entity, as all of them are already initialized. Then local search starts again. Why does it immediately find new better solutions, while just continuing the first execution without interruption does not, or does so much more slowly?
By terminating and restarting the solver, you're effectively causing Late Acceptance to do a reheating. OptaPlanner will do automatic reheating once this jira is prioritized and implemented.
This occurs on a minority of the use cases. But if it occurs on a use case, it tends to occur on all datasets.
In some cases I've worked around it by configuring multiple <localSearch> phases with <unimprovedSecondsSpentLimit> terminations, but I don't like that. Fixing that jira is the only real solution.
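That workaround looks roughly like this (a config sketch; the time limits are arbitrary examples). Each extra <localSearch> phase starts Late Acceptance fresh, which acts as a manual reheat, much like terminating and restarting the solver by hand:

```xml
<solver>
  <constructionHeuristic/>
  <!-- each phase ends when it stops improving; the next phase "reheats" -->
  <localSearch>
    <termination>
      <unimprovedSecondsSpentLimit>30</unimprovedSecondsSpentLimit>
    </termination>
  </localSearch>
  <localSearch>
    <termination>
      <unimprovedSecondsSpentLimit>30</unimprovedSecondsSpentLimit>
    </termination>
  </localSearch>
</solver>
```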