Twilio TaskRouter: how to implement a “do not contact” list of WorkerSids in Workflow Configuration? - node.js

This question is similar to this one I previously asked, in that I want the task to perform a Target Worker Expression check on a list of WorkerSids that I've added as one of the task's attributes. But I think this problem is different enough to warrant its own question.
My goal is to associate a "do not contact" list of WorkerSids with a Task; these are workers who should not be assigned the task (maybe the customer previously had a bad interaction with them).
I have the following workflow configuration:
{
  "task_routing": {
    "filters": [
      {
        "filter_friendly_name": "don't call self",
        "expression": "1==1",
        "targets": [
          {
            "queue": queueSid,
            "expression": "(task.caller!=worker.contact_uri) and (worker.sid not in task.do_not_contact)",
            "skip_if": "workers.available == 0"
          },
          {
            "queue": automaticQueueSid
          }
        ]
      }
    ],
    "default_filter": {
      "queue": queueSid
    }
  }
}
When I create a task and check the Twilio Console, I can see that the task has the following attributes:
{
  "from_country": "US",
  "do_not_contact": ["WORKER_SID1_HERE", "WORKER_SID_2_HERE"],
  ... bunch of other attributes ...
}
So I know that the task has successfully been assigned the array of WorkerSids as one of its attributes.
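For context, here is roughly how such a task can be created with the Twilio Node helper library. This is a minimal sketch; the client setup, the SIDs, and the exact attribute values are placeholder assumptions, not taken from the original code:

// Minimal sketch; accountSid, authToken, workspaceSid and workflowSid are placeholders
const twilio = require('twilio');
const client = twilio(accountSid, authToken);

client.taskrouter.v1.workspaces(workspaceSid)
  .tasks.create({
    workflowSid: workflowSid,
    attributes: JSON.stringify({
      do_not_contact: ['WORKER_SID1_HERE', 'WORKER_SID_2_HERE']
      // ... bunch of other attributes ...
    })
  })
  .then(task => console.log(task.sid));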
There is only one worker who is Idle and whose attributes match the queueSid TaskQueue. That worker's SID is WORKER_SID1_HERE, so the only available worker is ineligible to receive the task reservation. What should happen, then, is that the first target's expression worker.sid not in task.do_not_contact evaluates to false for that worker, and the task falls through to the automaticQueueSid TaskQueue.
Instead, the task remains unassigned in queueSid. The following sequence of TaskRouter events is logged:
task-queue.entered
Task TASK_SID entered TaskQueue QUEUESID_QUEUENAME
task.created
Task TASK_SID created
workflow.target-matched
Task TASK_SID matched a workflow target
workflow.entered
Task TASK_SID entered Workflow WORKFLOW_NAME
What do I need to change to get the desired workflow behavior?

Changing the skip_if to
"skip_if": "1==1"
solved the problem.
Per Twilio developer support, worker.sid not in task.do_not_contact returns true for workers who are unavailable but also not in do_not_contact, so the target expression still matches a set of workers. The "skip_if": "workers.available==0" then returns false, because technically there is one "available" worker: the one who is ineligible due to the do_not_contact list.
What's needed is for the skip_if to always return true, so that when the first target processes the task without assigning it, the skip_if passes it on to the next target, as discussed in the TaskRouter Workflow documentation:
"TaskRouter will only skip a routing step in a Workflow if:
No Reservations are immediately created when a Task enters the routing step
The Skip Timeout expression evaluates to true"
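For reference, the full working workflow configuration is the original one with only the first target's skip_if changed:

{
  "task_routing": {
    "filters": [
      {
        "filter_friendly_name": "don't call self",
        "expression": "1==1",
        "targets": [
          {
            "queue": queueSid,
            "expression": "(task.caller!=worker.contact_uri) and (worker.sid not in task.do_not_contact)",
            "skip_if": "1==1"
          },
          {
            "queue": automaticQueueSid
          }
        ]
      }
    ],
    "default_filter": {
      "queue": queueSid
    }
  }
}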

Related

Relation between command handlers, aggregates, the repository and the event store in CQRS

I'd like to understand some details of the relations between command handlers, aggregates, the repository and the event store in CQRS-based systems.
What I've understood so far:
Command handlers receive commands from the bus. They are responsible for loading the appropriate aggregate from the repository and calling the domain logic on the aggregate. Once finished, they remove the command from the bus.
An aggregate provides behavior and an internal state. State is never public. The only way to change state is by using the behavior. The methods that model this behavior create events from the command's properties and apply these events to the aggregate, which in turn calls event handlers that set the internal state accordingly.
The repository simply allows loading aggregates by a given ID and adding new aggregates. Basically, the repository connects the domain to the event store.
The event store, last but not least, is responsible for storing events to a database (or whatever storage is used) and reloading these events as a so-called event stream.
So far, so good.
Now there are some issues that I did not yet get:
If a command handler is to call behavior on an already existing aggregate, everything is quite easy. The command handler gets a reference to the repository, calls its loadById method, and the aggregate is returned. But what does the command handler do when there is no aggregate yet, but one should be created? From my understanding, the aggregate should later on be rebuilt using the events. This means that creation of the aggregate is done in reply to a fooCreated event. But to be able to store any event (including the fooCreated one), I need an aggregate. So this looks to me like a chicken-and-egg problem: I cannot create the aggregate without the event, but the only component that should create events is the aggregate. So basically it comes down to: how do I create new aggregates, and who does what?
When an aggregate triggers an event, an internal event handler responds to it (typically by being called via an apply method) and changes the aggregate's state. How is this event handed over to the repository? Who originates the "please send the new events to the repository / event store" action? The aggregate itself? The repository, by watching the aggregate? Someone else who is subscribed to the internal events?
Last but not least, I have a problem understanding the concept of an event stream correctly: in my imagination, it's simply an ordered list of events. What's of importance is that it's "ordered". Is this right?
The following is based on my own experience and my experiments with various frameworks like Lokad.CQRS, NCQRS, etc. I'm sure there are multiple ways to handle this. I'll post what makes most sense to me.
1. Aggregate Creation:
Every time a command handler needs an aggregate, it uses a repository. The repository retrieves the respective list of events from the event store and calls an overloaded constructor, injecting the events:
var stream = eventStore.LoadStream(id)
var user = new User(stream.Events)
If the aggregate didn't exist before, the stream will be empty and the newly created object will be in its initial state. You might want to make sure that in this state only a few commands are allowed to bring the aggregate to life, e.g. User.Create().
2. Storage of new Events
Command handling happens inside a Unit of Work. During command execution every resulting event will be added to a list inside the aggregate (User.Changes). Once execution is finished, the changes will be appended to the event store. In the example below this happens in the following line:
store.AppendToStream(cmd.UserId, stream.Version, user.Changes)
3. Order of Events
Just imagine what would happen if two subsequent CustomerMoved events were replayed in the wrong order.
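To make the point concrete, here is a tiny TypeScript sketch (the event shape and addresses are invented for illustration):

// Two subsequent CustomerMoved events for the same customer
const events = [
  { type: 'CustomerMoved', newAddress: 'Oak Street 1' },  // first move
  { type: 'CustomerMoved', newAddress: 'Pine Road 99' },  // second move
];

// Replaying in order leaves the customer at 'Pine Road 99'; replaying in
// reverse order would leave them at 'Oak Street 1', silently corrupting state.
let address = 'unknown';
for (const e of events) {
  address = e.newAddress; // the apply step for CustomerMoved
}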
An Example
I'll try to illustrate this with a piece of pseudo-code (I deliberately left repository concerns inside the command handler to show what happens behind the scenes):
Application Service:
UserCommandHandler
  Handle(CreateUser cmd)
    stream = store.LoadStream(cmd.UserId)
    user = new User(stream.Events)
    user.Create(cmd.UserName, ...)
    store.AppendToStream(cmd.UserId, stream.Version, user.Changes)
  Handle(BlockUser cmd)
    stream = store.LoadStream(cmd.UserId)
    user = new User(stream.Events)
    user.Block(cmd.Reason)
    store.AppendToStream(cmd.UserId, stream.Version, user.Changes)
Aggregate:
User
  created = false
  blocked = false
  Changes = new List<Event>

  ctor(eventStream)
    foreach (event in eventStream)
      this.Apply(event)        // rehydration only mutates state; it must not record changes

  Create(userName, ...)
    if (this.created) throw "User already exists"
    this.Record(new UserCreated(...))

  Block(reason)
    if (!this.created) throw "No such user"
    if (this.blocked) throw "User is already blocked"
    this.Record(new UserBlocked(...))

  Record(event)                // only new events end up in Changes
    this.Apply(event)
    this.Changes.Add(event)

  Apply(userCreatedEvent)
    this.created = true

  Apply(userBlockedEvent)
    this.blocked = true
Update:
As a side note: Yves' answer reminded me of an interesting article by Udi Dahan from a couple of years ago:
Don’t Create Aggregate Roots
A small variation on Dennis' excellent answer:
When dealing with "creational" use cases (i.e. that should spin off new aggregates), try to find another aggregate or factory you can move that responsibility to. This does not conflict with having a ctor that takes events to hydrate (or any other mechanism to rehydrate for that matter). Sometimes the factory is just a static method (good for "context"/"intent" capturing), sometimes it's an instance method of another aggregate (good place for "data" inheritance), sometimes it's an explicit factory object (good place for "complex" creation logic).
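As a rough sketch of the static-factory variant, in TypeScript (all names here are illustrative assumptions, not taken from the answer):

interface DomainEvent { type: string; [key: string]: unknown }

class User {
  private changes: DomainEvent[] = [];
  private created = false;

  // The static factory captures the "register" intent instead of exposing a bare constructor
  static register(id: string, userName: string): User {
    const user = new User();
    user.record({ type: 'UserRegistered', id, userName });
    return user;
  }

  private record(event: DomainEvent): void {
    this.changes.push(event); // collected for the event store
    this.apply(event);        // state transition
  }

  private apply(event: DomainEvent): void {
    if (event.type === 'UserRegistered') this.created = true;
  }
}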
I like to provide an explicit GetChanges() method on my aggregate that returns the internal list as an array. If my aggregate is to stay in memory beyond one execution, I also add an AcceptChanges() method to indicate the internal list should be cleared (typically called after things were flushed to the event store). You can use either a pull (GetChanges/Changes) or push (think .net event or IObservable) based model here. Much depends on the transactional semantics, tech, needs, etc ...
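A minimal pull-model sketch of that contract (TypeScript again; the method names follow the description above):

interface DomainEvent { type: string }

class AggregateRoot {
  private changes: DomainEvent[] = [];

  protected record(event: DomainEvent): void {
    this.changes.push(event);
  }

  // Pull model: the repository asks the aggregate for its uncommitted events
  getChanges(): DomainEvent[] {
    return this.changes.slice(); // hand out a copy; the internal list stays private
  }

  // Called once the events have been flushed to the event store
  acceptChanges(): void {
    this.changes = [];
  }
}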
Your event stream is like a linked list: each revision (event/changeset) points to the previous one (a.k.a. the parent). Your event stream is a sequence of events/changes that happened to a specific aggregate. The order is only guaranteed within the aggregate boundary.
I almost agree with yves-reynhout and dennis-traub, but I want to show you how I do this. I want to strip my aggregates of the responsibility to apply the events on themselves or to rehydrate themselves; otherwise there is a lot of code duplication: every aggregate constructor will look the same:
UserAggregate:
  ctor(eventStream)
    foreach (event in eventStream)
      this.Apply(event)
OrderAggregate:
  ctor(eventStream)
    foreach (event in eventStream)
      this.Apply(event)
ProfileAggregate:
  ctor(eventStream)
    foreach (event in eventStream)
      this.Apply(event)
Those responsibilities could be left to the command dispatcher. The command is handled directly by the aggregate.
Command dispatcher class
  dispatchCommand(command) method:
    newEvents = ConcurrentProofFunctionCaller.executeFunctionUntilSucceeds(tryToDispatchCommand, command)
    EventDispatcher.dispatchEvents(newEvents)
  tryToDispatchCommand(command) method:
    aggregateClass = CommandSubscriber.getAggregateClassForCommand(command)
    aggregate = AggregateRepository.loadAggregate(aggregateClass, command.getAggregateId())
    newEvents = CommandApplier.applyCommandOnAggregate(aggregate, command)
    AggregateRepository.saveAggregate(command.getAggregateId(), aggregate, newEvents)

ConcurrentProofFunctionCaller class
  executeFunctionUntilSucceeds(pureFunction, args) method:
    do this n times
      try
        result = pureFunction(args)
        return result
      catch (ConcurrentWriteException)
        continue
    throw TooManyRetries

AggregateRepository class
  loadAggregate(aggregateClass, aggregateId) method:
    aggregate = new aggregateClass
    priorEvents = EventStore.loadEvents(aggregateId)
    this.applyEventsOnAggregate(aggregate, priorEvents)
  saveAggregate(aggregateId, aggregate, newEvents) method:
    this.applyEventsOnAggregate(aggregate, newEvents)
    EventStore.saveEventsForAggregate(aggregateId, newEvents, priorEvents.version)

SomeAggregate class
  handleCommand1(command1) method:
    return new SomeEvent or throw someException BUT don't change state!
  applySomeEvent(SomeEvent) method:
    changeStateSomehow() and don't throw any exception and don't return anything!
Keep in mind that this is pseudo-code projected from a PHP application; the real code should have things injected and other responsibilities refactored out into other classes. The idea is to keep aggregates as clean as possible and avoid code duplication.
Some important aspects about aggregates:
command handlers should not change state; they yield events or throw exceptions
event appliers should not throw any exception and should not return anything; they only change internal state
An open-source PHP implementation of this can be found here.

Is there a way to tell which Tasks are currently running in Task Parallel Library?

I can't see a way to see which tasks are running. There is the Task.Current property, but what if there are multiple tasks running? Is there a way to get this kind of information?
Alternately, is there a built in way to get notified when a task starts or completes?
Hey Mike, there is no public way of accessing the list of pending tasks in TPL. The mechanism that makes it available to the debugger relies on the fact that all threads will be frozen at enumeration time; therefore it can't be used at runtime.
Yes, there's a built-in way to get notified when a task completes. Check out the Task.ContinueWith APIs. Basically this API creates a new task that will be fired up when the target task completes.
I'm assuming you want to do some quick accounting / progress reporting based on this. If that's the case, I'd recommend that you call task.ContinueWith() with the TaskContinuationOptions.ExecuteSynchronously flag. When you specify that flag, the continuation action is run right there on the same thread as soon as the target task finishes (if you don't specify it, the continuation task is queued up like any other regular task).
Hope this helps.
Huseyin
You can also get the currently running task (or a Task's parent) with reflection:
public static class Extensions
{
    // Reads Task's private m_parent field; this is an implementation
    // detail of TPL and may change between framework versions.
    public static Task Parent(this Task t)
    {
        FieldInfo info = typeof(Task).GetField("m_parent",
            BindingFlags.NonPublic | BindingFlags.Instance);
        return info != null ? (Task)info.GetValue(t) : null;
    }

    // Gets the currently running task by starting a no-op child task
    // attached to it and asking that child for its parent.
    public static Task Self
    {
        get
        {
            return Task.Factory.StartNew(
                () => { },
                CancellationToken.None,
                TaskCreationOptions.AttachedToParent,
                TaskScheduler.Default).Parent();
        }
    }
}
You can create a TaskScheduler class deriving from the provided one. Within that class you have full control and can add logging on either side of the execution. See for example: http://msdn.microsoft.com/en-us/library/ee789351.aspx
You'll also need to use a TaskFactory with an instance of your class as the scheduler.

How can I test a scheduled flow?

I'm using the mock network for testing a scheduled flow, but I can't track the result because it isn't returned as a future, as it would be when using node.services.startFlow(...).
I've already tried the approach stated in the heartbeat example:
val recordedTxs = node.database.transaction {
    val (recordedTxs, futureTxs) = node.services.validatedTransactions.track()
    futureTxs.notUsed()
    recordedTxs
}
I've listed the contents of recordedTxs, and the transaction recorded by the scheduled flow doesn't appear. I've also subscribed to futureTxs, but there are no updates on the observable.
Are there other ways?
Thanks
Another aspect of testing scheduled flows is to control the flow's timing using a parameter. The Corda node's scheduler will kick off a flow that is (by the contract scheduling logic) expected to be done now or in the past. This gives you two ways to check flow completion:
Set the flow to be scheduled immediately, then check the node's database for newly consumed states with the attributes you'd expect to see.
Initiate a flow and move the platform clock (in the test setup) to a time when the scheduled flow would have completed, then check the states.
Samples:
// Set up the network as:
val net = MockNetwork(threadPerNode = true)
// Logic to set up nodes
...
net.startNodes()
// Additional set up
...
val scheduledFlow = SimpleScheduledFlow(parameterForImmediateScheduling)
testNode.services.startFlow(scheduledFlow)
net.waitQuiescent()
node.database.transaction {
    // Check validated transactions ...
    // Checks on the states newly produced by the flow ...
}
Alternatively,
val scheduledFlow = SimpleScheduledFlow(parameterForLaterScheduling)
testNode.services.startFlow(scheduledFlow)
(node.internals.platformClock as TestClock).setTo(valueDate.atTime(LocalTime.MIDNIGHT).plusSeconds(1).toInstant(ZoneOffset.UTC))
net.waitQuiescent()
node.database.transaction {
    // Check validated transactions ...
    // Checks on the states newly produced by the flow ...
}
Additional Reference:
https://github.com/corda/corda/blob/release-V2.0/node/src/test/kotlin/net/corda/node/services/events/ScheduledFlowTests.kt
You can test scheduled states as follows:
Invoke a flow that creates a scheduled activity
Sleep for long enough for the scheduled activity to have occurred
Check that the scheduled activity has occurred
For an example, see the flow test in the Heartbeat sample (https://github.com/joeldudleyr3/heartbeat):
@Test
fun `heartbeat occurs every second`() {
    val flow = StartHeartbeatFlow()
    a.services.startFlow(flow).resultFuture
    val enoughTimeForFiveScheduledTxs: Long = 5500
    Thread.sleep(enoughTimeForFiveScheduledTxs)
    val recordedTxs = a.database.transaction {
        val (recordedTxs, futureTxs) = a.services.validatedTransactions.track()
        futureTxs.notUsed()
        recordedTxs
    }
    val originalTxPlusFiveScheduledTxs = 6
    assertEquals(originalTxPlusFiveScheduledTxs, recordedTxs.size)
}
In this test, we proceed as follows:
We run StartHeartbeatFlow. This flow creates a scheduled state that one second later will create a transaction creating another scheduled state, and so on. This means that running StartHeartbeatFlow causes a new transaction to occur every second until the node is shut down
We wait 5.5 seconds. That's long enough for another five transactions to occur
We check that there are now six transactions in the vault
When using this approach, you must ensure you initialise the MockNetwork with threadPerNode = true, otherwise sleeping on the thread will block all network activity.

Angular Service and Web Workers

I have an Angular 1 app in which I am trying to increase the performance of a particular service that makes a lot of calculations. (The service probably is not optimized, but that's beside the point for now; the goal right now is to run it in another thread to improve animation performance.)
The App
The app runs calculations on your GPA, Terms, Courses, Assignments, etc. The service name is calc. Inside calc there are user, term, course and assign namespaces. Each namespace is an object of the following form:
{
  // Times for the calculations (for development only)
  times: {
    // An array of calculation times for logging and average calculation
    array: [],
    // Print out the min, max, average and total calculation times
    report: function(){...}
  },
  // Hashes the object (with service.hash()) and checks to see if we have cached
  // calculations for the item; if not, calls runAllCalculations()
  refresh: function(item){...},
  // Runs calculations, saves them in the cache (service.calculations array)
  // and returns the calculation object
  runAllCalculations: function(item){...}
}
Here is a screenshot from the very nice structure tab of IntelliJ to help visualization
What Needs To Be Done?
Detect Web Worker Compatibility (MDN)
Build the service depending on Web Worker compatibility
a. Structure it the exact same as it is now
b. Replace with a Web Worker "proxy" (Correct terminology?) service
The Problem
The problem is how to create the Web Worker "Proxy" so that the rest of the code sees the same service behavior.
Requirements/Wants
A few things that I would like:
Most importantly, as stated above, keep the service behavior unchanged
To keep one code base for the service and keep it DRY, not having to modify two spots. I have looked at WebWorkify for this, but I am unsure how best to implement it.
Use Promises while waiting for the worker to finish
Use Angular and possibly other services inside the worker (if it's possible); again, WebWorkify seems to address this
The Question
...I guess there hasn't really been a question thus far, it's just been an explanation of the problem...So without further ado...
What is the best way to use an Angular service factory to detect Web Worker compatibility, conditionally implement the service as a Web Worker, while keeping the same service behavior, keeping DRY code and maintaining support for non Web Worker compatible browsers?
Other Notes
I have also looked at VKThread, which may be able to help with my situation, but I am unsure how best to implement it.
Some more resources:
How to use a Web Worker in AngularJS?
http://www.html5rocks.com/en/tutorials/workers/basics/
https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers#Worker_feature_detection
In general, a good way to write manageable code that works in a worker, and especially code that can also run in the same window (e.g. when workers are not supported), is to make the code event-driven and then use a simple proxy to drive the events through the communication channel, in this case the worker.
I first created an abstract "class" that doesn't actually define a way of sending events to the other side:
function EventProxy() {
    // Object that will receive events that come from the other side
    this.eventSink = null;
    // This is just a trick I learned to simulate real OOP for methods that
    // are used as callbacks.
    // It also gives you a reference for removing the callback.
    this.eventFromObject = this.eventFromObject.bind(this);
}
// The object gets this as the callback for all events.
// Typically, you will extract event parameters from the "arguments" variable.
// (Note: these must be regular functions, not arrow functions, so that
// "this" and "arguments" behave correctly.)
EventProxy.prototype.eventFromObject = function(name) {
    // This is not implemented. We should have a WorkerProxy inherited class.
    throw new Error("This is an abstract method. Object dispatched an event " +
        "but this class doesn't do anything with events.");
};
EventProxy.prototype.setObject = function(object) {
    // If an object is already set, remove the event listener from the old object
    if (this.eventSink != null)
        // do it depending on your framework
        ... something ...
    this.eventSink = object;
    // Listen on all events. Obviously, your event framework must support this.
    object.addListener("*", this.eventFromObject);
};
// Child classes will call this when they receive
// events from the other side (e.g. the worker)
EventProxy.prototype.eventReceived = function(name, args) {
    // Put the event name as the first parameter
    args.unshift(name);
    // Run the event on the object
    this.eventSink.dispatchEvent.apply(this.eventSink, args);
};
Then you implement this for worker for example:
function WorkerProxy(worker) {
    // Call the superconstructor
    EventProxy.call(this);
    // The worker
    this.worker = worker;
    worker.addEventListener("message", this.eventFromWorker = this.eventFromWorker.bind(this));
}
WorkerProxy.prototype = Object.create(EventProxy.prototype);
// The object gets this as the callback for all events.
// Typically, you will extract event parameters from the "arguments" variable.
WorkerProxy.prototype.eventFromObject = function(name) {
    // Include the event args but skip the first one, the name
    var args = [];
    args.push.apply(args, arguments);
    args.splice(0, 1);
    // Send the event to the script in the worker.
    // You could use an additional parameter to use different proxies for different objects.
    this.worker.postMessage({type: "proxyEvent", name: name, arguments: args});
};
WorkerProxy.prototype.eventFromWorker = function(event) {
    if (event.data.type == "proxyEvent") {
        // Use the superclass method to handle the event
        this.eventReceived(event.data.name, event.data.arguments);
    }
};
The usage then would be that you have some service and some interface, and in the page code you do:
// Or another proxy type, e.g. socket.IO, same window, shared worker...
var proxy = new WorkerProxy(new Worker("runServiceInWorker.js"));
// e.g. the user clicks something to start the calculation
var interface = new ProgramInterface();
// join them
proxy.setObject(interface);
And in runServiceInWorker.js you do almost the same:
importScripts("myservice.js", "eventproxy.js");
// Here we're of course really lucky that the web worker API is symmetric
var proxy = new WorkerProxy(self);
// 1. make a service
// 2. assign to proxy
proxy.setObject(new MyService());
// 3. profit ...
In my experience, I eventually sometimes had to detect which side I was on, but that was with web sockets, which are not symmetric (there's one server and many clients). You could run into similar problems with a shared worker.
You mentioned Promises. I think the approach with promises would be similar, though maybe more complicated, as you would need to store the callbacks somewhere and index them by the ID of the request. But it's surely doable, and if you're invoking worker functions from different sources, maybe better.
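For what it's worth, a minimal sketch of that promise-based variant (TypeScript; the request/reply message shape and the worker file name are assumptions, not part of the proxy above):

// Map pending requests by ID so each worker reply resolves the right promise
let nextId = 0;
const pending = new Map<number, (result: unknown) => void>();

const worker = new Worker('runServiceInWorker.js');
worker.onmessage = (e: MessageEvent) => {
  const { id, result } = e.data;   // assumed reply shape: { id, result }
  pending.get(id)?.(result);       // resolve the matching promise
  pending.delete(id);
};

// Each call gets a unique ID so replies can be matched to their promises
function callWorker(method: string, args: unknown[]): Promise<unknown> {
  return new Promise(resolve => {
    const id = nextId++;
    pending.set(id, resolve);
    worker.postMessage({ id, method, args }); // assumed request shape
  });
}

// Usage, e.g.: callWorker('runAllCalculations', [item]).then(showResult);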
I am the author of the vkThread plugin, which was mentioned in the question. And yes, I developed an Angular version of the vkThread plugin, which allows you to execute a function in a separate thread.
The function can be defined directly in the main thread or called from an external javascript file.
Functions can be:
Regular functions
Object's methods
Functions with dependencies
Functions with context
Anonymous functions
Basic usage:
/* function to execute in a thread */
function foo(n, m) {
    return n + m;
}

/* to execute this function in a thread: */

/* create an object, which you pass to vkThread as an argument */
var param = {
    fn: foo,      // <-- function to execute
    args: [1, 2]  // <-- arguments for this function
};

/* run thread */
vkThread.exec(param).then(
    function (data) {
        console.log(data); // <-- thread returns 3
    }
);
Examples and API doc: http://www.eslinstructor.net/ng-vkthread/demo/
Hope this helps,
--Vadim

scalatra < squeryl < select ALL | Always

I want to read elements from the database and return them as JSON objects.
Scalatra is set up to return JSON.
The database schema is created.
Players are added.
The following code seems to be the main problem:
get("/") {
inTransaction {
List(from(MassTournamentSchema.players)(s => select(s)))
}
}
I get the following error:
"No session is bound to current thread, a session must be created via Session.create and bound to the thread via 'work' or 'bindToCurrentThread' Usually this error occurs when a statement is executed outside of a transaction/inTrasaction block "
I want to do it right so simply adding something like "Session.create" may not really be the right way.
Can anyone help a scalatra-noob? :-)
I think that your comment is on the right track. The inTransaction block will bind a JDBC connection to a thread-local variable and start a transaction on it. If the select doesn't occur on the same thread, you'll see an error like the one you received. There are two things I would suggest you try:
Start your transaction later:
List(inTransaction {
  from(MassTournamentSchema.players)(s => select(s))
})
I'm not familiar with Scalatra's List, but it's possible that it's accepting a by-name parameter and executing it later on a different thread.
Force an eager evaluation of the query:
inTransaction {
  List(from(MassTournamentSchema.players)(s => select(s)).toList)
}
Here the .toList call will turn the Query object Squeryl returns into a Scala List immediately and guard against any lazy evaluation errors caused by later iteration.
