Dynamic rerouting and circular dependency with Mostjs - xstream-js

It is evident that xstream, with the two methods addListener and removeListener, is able to reroute streams (change their sources and sinks) dynamically. I see no equivalent in most.js. Does most.js only let you lay out the routing of the streams once? If so, is this static nature what allows most.js to optimize for such superior performance?
Also, xstream provides an imitate method that supports circular dependencies. Is there any way to achieve circular dependencies with most.js?

There are many functions in most.js that act as both a Source and a Sink. For example, map(), which transforms all the events in a stream, acts as a Sink when consuming events and as a Source when producing new event values after applying a function to them. observe() is an example of a particular kind of Sink that consumes events and passes them to a function you provide.
Most.js Streams are not active until you consume them by using one of the "terminal" combinators: observe, drain, or reduce. When you call one of those, the Stream sends a signal up the Source-Sink chain to the producer Source at the very beginning of the chain, which then begins producing events.
Events are then propagated synchronously from the Source through the Source-Sink chain by a simple method call.
Thus, you could provide your own "listener" function to a map which would transform the events.
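To make the Source-Sink chain concrete, here is a minimal sketch of the idea in plain JavaScript. This is not most.js's actual internal API; fromArray, map, and observe are simplified stand-ins for illustration.

```javascript
// A source that synchronously pushes values into a sink when run.
const fromArray = (values) => ({
  run(sink) {
    for (const v of values) sink.event(v);
    sink.end();
  }
});

// map acts as a Sink (receiving events) and a Source (re-emitting them).
const map = (f, source) => ({
  run(sink) {
    source.run({
      event(v) { sink.event(f(v)); }, // consume, transform, re-emit
      end() { sink.end(); }
    });
  }
});

// A terminal "observe": running it sends the start signal up the chain.
const observe = (f, source) =>
  source.run({ event: f, end() {} });

const results = [];
observe((x) => results.push(x), map((x) => x * 2, fromArray([1, 2, 3])));
console.log(results); // [ 2, 4, 6 ]
```

Note that nothing happens until observe() is called; building the map() chain alone produces no events, which matches the "not active until consumed" behaviour described above.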
There are many factors contributing to most.js's performance. The simple call-stack event propagation architecture, plus hoisting try/catch out of combinator implementations, were two of the earliest and biggest performance improvements.
Most.js performs several other optimizations automatically, based on algebraic equivalences. A relatively well-known example is combining multiple map operations, e.g. map(g, map(f, stream)), into a single map by doing function composition on f and g. It also combines multiple filter operations, multiple merge operations, multiple take and skip operations, among others. These optimizations reduce the number of method calls needed to propagate an event from producer to consumer.
See this interview with Brian Cavalier for more details.
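The map-fusion equivalence can be illustrated with plain JavaScript arrays as a toy stand-in for streams (the compose helper is hypothetical): applying f and then g per element gives the same result as applying their composition once, with half the calls per event.

```javascript
// Toy illustration of map fusion: two maps vs. one composed map.
const compose = (g, f) => (x) => g(f(x));

const f = (x) => x + 1;
const g = (x) => x * 10;

// Unfused: two function calls per element.
const unfused = [1, 2, 3].map(f).map(g);

// Fused: one call per element, same result.
const fused = [1, 2, 3].map(compose(g, f));

console.log(unfused, fused); // [ 20, 30, 40 ] [ 20, 30, 40 ]
```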
Most.js itself doesn’t handle circular dependencies, but it is totally possible using most-proxy. Motorcycle does this to create its cycle in its run package.

Have you seen this issue regarding xstream's imitate in most.js? https://github.com/cujojs/most/issues/308
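The core trick behind imitate/most-proxy can be sketched in a few lines of plain JavaScript (this is not the real most-proxy API; makeProxy and makeSubject are illustrative names): create a placeholder stream first, subscribe consumers to it, and attach the real source afterwards, which is what lets a cycle be closed.

```javascript
// A proxy stream: consumers can listen before the real source exists.
const makeProxy = () => {
  const listeners = [];
  return {
    addListener(f) { listeners.push(f); },
    // attach() connects the proxy to its real source after the fact.
    attach(source) { source.addListener((v) => listeners.forEach((f) => f(v))); }
  };
};

// A plain subject that pushes values to its listeners.
const makeSubject = () => {
  const listeners = [];
  return {
    addListener(f) { listeners.push(f); },
    emit(v) { listeners.forEach((f) => f(v)); }
  };
};

const proxy = makeProxy();
const source = makeSubject();

const seen = [];
proxy.addListener((v) => seen.push(v)); // consumer wired before attachment
proxy.attach(source);                   // close the loop later
source.emit(42);
console.log(seen); // [ 42 ]
```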


Are there patterns for writing a library that hides its actor system implementation?

All of the actor system implementations I have seen (primarily using Akka) presume a web app, meaning an HTTP interface that can quite naturally be served by an asynchronous actor system.
But what if I'm writing a desktop app, or a library to be used as a component of a platform-independent app?
I want client subroutines to be able to call val childObj = parentObject.createChild( initParam ) without having to know about my allowed message types, or the actor system in general; e.g., not parentObject ! CreateChild( initParam ) followed by handling a response received in another message.
I know I could hide the asynchronous responses behind Futures, but are there other known patterns for a synchronous system handing off computation to a hidden actor system?
(I realize that this will result in a blocking call into the library.)
Desktop app
A lot of things that apply to libraries also apply here, so see the section below. If nothing else, you can wrap the part of your code that uses Akka as a separate library. One caveat is that, if you're using Swing, you will probably want to use SwingUtilities.invokeLater to get back on the Event Dispatch Thread before interacting with the GUI. (Also, don't block that thread. You probably want to use futures to avoid this, so consider designing your library to return futures.)
Your example seems to assume a thin wrapper around your actors, or, at the very least, a bottom-up design in which your interface is driven by your implementation details. Instead, design the library in a more top-down manner: first figure out the library's interface, then (possibly) use Akka as an implementation detail. (This is a good idea for library design in general.) If you've already written something using Akka, don't worry; just design the interface separately from the implementation and stitch the two together. If you do this, you don't need a specific pattern, as the normal patterns of interface design apply regardless of the fact that you are using Akka.
As an example, consider a compiler. The compile method signature might be simple:
def compile(sources: List[File]): List[File] // Returns a list of binaries
No mention of actors here. But internally the implementation might do:
compileActor ? Compile(sources)
...and block on the result. The main compiler actor might depend on other actors, but there's no reason to expose those through the public API.
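The overall shape of such a blocking facade can be sketched with plain java.util.concurrent standing in for an actor ask. This is not Akka itself; the Compiler class and the ".bin" transformation are purely illustrative, but the pattern is the same: a synchronous public method that hands work to hidden asynchronous machinery and blocks on the result.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;

class Compiler {
    // Hidden asynchronous machinery (an actor system would sit here).
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    // Public, synchronous API: no executors or actors visible to callers.
    public List<String> compile(List<String> sources) throws Exception {
        Future<List<String>> result = workers.submit(() ->
            sources.stream()
                   .map(s -> s + ".bin")   // stand-in for real compilation
                   .collect(Collectors.toList()));
        return result.get();  // block until the hidden machinery finishes
    }

    public void shutdown() { workers.shutdown(); }

    public static void main(String[] args) throws Exception {
        Compiler c = new Compiler();
        System.out.println(c.compile(List.of("A.scala", "B.scala")));
        c.shutdown();
    }
}
```

With Akka, result.get() would become blocking on the Future returned by the ask (?) operator; the point is that only the facade knows that.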

Subscribing twice on Spring Data MongoDB save() results in double insert

We encountered the following behaviour, which we do understand; however, we'd like to find out whether it is expected and whether it might be worth documenting as some kind of pitfall.
We're experimenting with Spring Boot 2/Spring WebFlux and set up a small application that basically has something like this (all shortened):
public Mono<Todo> addTodos( @RequestBody Person person ) {
    return personService.addPerson( person );
}
The service first looked like this, as we want to publish the event of a person addition also to a message queue:
public class PersonService {
    public Mono<Person> addPerson( Person person ) {
        Mono<Person> addedPerson = personRepository.save( person );
        addedPerson.subscribe( p -> rabbitTemplate.convertAndSend( "persons", p ) );
        return addedPerson;
    }
}
So, doing it like that is obviously wrong. The .subscribe() triggers the flow, and we assume that the reactive REST controller does the same in the background before serializing the data for the response, resulting in a second, parallel flow. In the end we ended up with two duplicate entries in the persons collection in the database.
After this lengthy introduction finally the question: is this expected behaviour that multiple subscribers trigger multiple inserts (basically, if you subscribe n times you get n inserts)?
If yes, this might be a pitfall worth emphasizing for beginners, especially if our understanding is correct that the reactive REST controllers perform a .subscribe() under the hood.
You came to the right conclusion yourself: this is the expected behavior.
The reactive programming model differs from an imperative programming model in various areas.
Imperative programming combines transformation, mapping, and execution in one place: you express these with conditional and loop flows and with method invocations that may return values and pass them on to other API calls.
Reactive programming decouples the declaration of what is happening from how it's going to be executed. Execution using reactive infrastructure is divided into two parts: Reactive sequence composition and the actual execution. In your code, you only compose reactive sequences. Execution happens outside of your code.
When you compose a Publisher, then the resulting Publisher contains a declaration of things that will happen if executed. A Publisher does not imply whether it's going to be executed in the first place nor how many subscribers will subscribe eventually.
Taking the example from above, Mono<Person> personRepository.save(…) returns a publisher that:
Maps data from Person to Document
Saves the Document to MongoDB and
Emits the saved Person once a response from MongoDB comes back
It's a recipe for saving data using a specific repository method. Creating the publisher does not execute the publisher and the publisher is not opinionated on the number of executions. Multiple calls to .subscribe() execute the publisher multiple times.
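These cold-publisher semantics can be imitated in a few lines of plain Java. This is not Reactor's API; ColdMono and the in-memory database list are made-up stand-ins that only illustrate "the recipe runs once per subscribe".

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Supplier;

class ColdPublisherDemo {
    // A "publisher" is just a deferred computation plus a way to subscribe.
    record ColdMono<T>(Supplier<T> recipe) {
        void subscribe(Consumer<T> onNext) {
            onNext.accept(recipe.get());  // executes the recipe *now*
        }
    }

    static final List<String> database = new ArrayList<>();

    static ColdMono<String> save(String person) {
        // Building the publisher does NOT insert anything yet.
        return new ColdMono<>(() -> { database.add(person); return person; });
    }

    public static void main(String[] args) {
        ColdMono<String> saved = save("alice");
        saved.subscribe(p -> {});   // first execution: one insert
        saved.subscribe(p -> {});   // second execution: duplicate insert
        System.out.println(database.size()); // prints 2
    }
}
```

Replace the list with MongoDB and ColdMono with Mono and you have exactly the duplicate-insert behaviour from the question.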
I'd argue .subscribe() is not a pitfall. A reactive programming model approach takes the execution out of your way. If you call .subscribe() or .block(), then you should have a very good reason for doing so. Every time you see .subscribe() or .block() in your code you should pay extra attention whether that's the right thing to do. Your execution environment is in charge of subscribing to Publishers.
A few observations:
RabbitTemplate is a blocking API. You should not mix reactive and blocking APIs. If you have no other option, then offload blocking calls to a worker. Use either publishOn(…) along a Scheduler before the actual operator containing the blocking work or use ExecutorService/CompletableFuture together with flatMap(…).
Use flatMap(…) operators for reactive flow composition of Mono/Flux. The flatMap(…) operator starts non-blocking subprocesses that complete eventually and continue the flow.
Use doOnXXX(…) operators (doOnNext(…), doOnSuccess(…), …) for callbacks when the publisher emits particular signals. These hooks allow convenient, non-blocking interception of elements as they are consumed.
See also: Project Reactor: Which operator do I need?

Correct usage of _writev in node.js

What is the correct usage of _writev() in node.js?
The documentation says:
If a stream implementation is capable of processing multiple chunks of data at once, the writable._writev() method should be implemented.
It also says:
The primary intent of writable.cork() is to avoid a situation where writing many small chunks of data to a stream do not cause a backup in the internal buffer that would have an adverse impact on performance. In such situations, implementations that implement the writable._writev() method can perform buffered writes in a more optimized manner.
From a stream implementation perspective this is okay. But from a writable stream consumer's perspective, the only way that _write or _writev gets invoked is through writable.write(), possibly in combination with writable.cork() and writable.uncork().
I would like to see a small example which would depict the practical use case of implementing _writev()
A writev method can be added to the instance, in addition to write, and if the stream's internal buffer contains several chunks, that method will be picked instead of write. For example, Elasticsearch allows you to bulk-insert records; so if you are creating a Writable stream to wrap Elasticsearch, it makes sense to have a writev method doing a single bulk insert rather than several individual ones, since that is far more efficient. The same holds true for MongoDB, for example, and so on.
This post (not mine) shows an Elasticsearch implementation: https://medium.com/@mark.birbeck/using-writev-to-create-a-fast-writable-stream-for-elasticsearch-ac69bd010802
_writev() will be invoked when uncork() is called on a corked stream with multiple buffered chunks. There is a simple example in the Node documentation:
stream.cork();
stream.write('some ');
stream.write('data ');
process.nextTick(() => stream.uncork());
See the Node.js stream documentation for more.

Is there a feature like NodeJS's EventEmitter in Rust?

I'm looking for a central 'object' to which multiple tasks can 'subscribe' for async update messages.
As far as I understand, EventEmitter is just a generic interface for event-listener support; objects which "implement" this interface provide several kinds of events on which the client code can register listener callbacks. These callbacks will then be called, at the object's discretion, when the corresponding event is emitted. As JS is a dynamically typed language, such an interface arises very naturally and can be implemented by a lot of things.
First of all, in neither NodeJS nor Rust can you "subscribe" tasks/threads: you put a listener callback on some object, and then this callback will be invoked from some thread, possibly even the current one; in general, the thread which subscribes to an object and the thread in which the callback runs are different. In NodeJS there is a global event loop which calls into functions and external event listeners, which in turn can invoke other event listeners, so you don't really know which thread will execute the listener. Not that you should care: the event loop abstraction hides explicit threading from you.
Rust, however, is a proper multithreaded language. It does not run over a global event loop (though via libgreen it is possible, for now, to run a Rust program in an event loop similar to the one in Node; it is used for task management and I/O handling, but it will be separated from libstd in the near future). The default Rust runtime, libnative, exposes facilities for creating native, preemptively-scheduled threads and native I/O. This means that it does matter which thread eventually executes the callback, and you should keep in mind that all callbacks will be executed in the thread which owns the object, unless that object creates separate threads specifically for event handling.
Another kind of problem with listeners is that Rust is a statically typed language, and writing a generic event-listener interface is somewhat more difficult in statically typed languages than in dynamically typed ones, because you need a sufficiently polymorphic interface. You would also want to take advantage of the strong type system and make your interface as type-safe as possible. This is not a trivial task. Sure, it is possible to use Box<Any> everywhere, but such an API wouldn't be very pleasant to work with.
So, at the moment there is no general-purpose event listener interface. There is no event bus library either. However, you can always write something yourself. If it is not very generic, it shouldn't be very difficult to write it.

Creating Dependencies Within An NSOperation

I have a fairly involved download process I want to perform in a background thread. There are some natural dependencies between steps in this process. For example, I need to complete the downloads of both Table A and Table B before setting the relationships between them (I'm using Core Data).
I thought first of putting each dependent step in its own NSOperation, then creating a dependency between the two operations (i.e. download the two tables in one operation, then set the relationship between them in the next, dependent operation). However, each NSOperation requires its own NSManagedObjectContext, so this is no good. I don't want to save the background context until both tables have been downloaded and their relationships set.
I've therefore concluded this should all occur inside one NSOperation, and that I should use notifications or some other mechanism to call the dependent method when all the conditions for running it have been met.
I'm an iOS beginner, however, so before I venture down this path, I wouldn't mind advice on whether I've reached the right conclusion.
Given your validation requirements, I think it will be easiest inside of one operation, although this could turn into a bit of a hairball as far as code structure goes.
You'll essentially want to make two wire fetches to get the entire dataset you require, then combine the data and parse it at one time into Core Data.
If you're going to use the asynchronous APIs, this essentially means structuring a class that waits for both operations to complete and then launches another NSOperation or block which does the parsing and relationship construction.
Imagine this order of events:
User performs some action (button tap, etc.)
Selector for that action fires two network requests
When both requests have finished (they both notify a common delegate) launch the parse operation
Might look something like this in code:
- (IBAction)someAction:(id)sender {
    // fire both network requests
    request1.delegate = aDelegate;
    request2.delegate = aDelegate;
}

// later, inside the implementation of aDelegate
- (void)requestDidComplete... {
    if (request1Finished && request2Finished) {
        NSOperation *parse = /* init with fetched data */;
        // launch on queue etc.
    }
}
There are two major pitfalls that this solution is prone to:
It keeps the entire data set around in memory until both requests are finished
You will have to constantly switch on the specific request that's calling your delegate (for error handling, success, etc.)
Basically, you're implementing operation dependencies on your own, although there might not be a good way around that because of the structure of NSURLConnection.