In yesterday's post I looked into how to make a fast way to defer things into the next event loop, by looking at how the major promise libraries do it. At the end of it we figured out that when (and Q, I think) works by only pushing things into the next event loop if it isn't already in the process of draining its queue. In other words: if I make a promise it gets run in the next loop, but if that promise makes a promise, the new one is executed in the same loop. Since there is only one queue they are still executed in the correct order, but the whole block of functions runs synchronously from the perspective of an outside function.
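The scheduling described above can be sketched roughly like this. This is a minimal sketch, not when's actual source; the names `enqueue` and `drainQueue` are mine, and the `setTimeout` is a stand-in for whatever real deferral primitive (setImmediate, postMessage, MessageChannel) the library uses:

```javascript
// Tasks go onto a single queue. We only schedule a new "next loop" turn
// when the queue is empty and we are not currently draining; any task
// enqueued *while* draining runs synchronously in the same turn.
const queue = [];
let draining = false;

function drainQueue() {
  draining = true;
  while (queue.length) {
    // A task may enqueue further tasks; the while loop picks them up
    // in the same turn, preserving order.
    queue.shift()();
  }
  draining = false;
}

function enqueue(task) {
  queue.push(task);
  // Only defer if we are not already draining; otherwise drainQueue's
  // loop will run the new task synchronously.
  if (!draining && queue.length === 1) {
    setTimeout(drainQueue, 0); // stand-in for the real deferral primitive
  }
}
```

This is why a promise made inside another promise's callback resolves in the same event-loop turn: the inner `enqueue` happens while `draining` is true, so no new turn is scheduled.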
This method gives those libraries an advantage in certain types of operations that aren't necessarily realistic, like this setImmediate benchmark, where the scheduler based on when's beats everything by doing it totally synchronously except for the very first call.
This is not an abstract problem for me, as one of the things I'm trying to use promises for is a library that deals with web workers. If my promises don't legitimately defer, then messages from workers are going to get needlessly delayed. So I decided to do some more realistic perfs.
The first one I did has the same setup as the previous ones: we have my library lie, which is like when but uses window.postMessage instead of MessageChannel; then we have when, rsvp, q, and catiline, which uses a modified version of lie that creates a new task queue every event loop so that promises never resolve in the same loop they are made in. You can see when wins by a landslide, followed by lie, with catiline in the rear.
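The window.postMessage trick that lie uses looks roughly like this. This is a sketch of the general technique, not lie's actual source; the token name is my own, and I've added a setTimeout fallback so the sketch also runs outside a browser:

```javascript
// Deferral via window.postMessage: post a message to our own window and run
// the queued task when the message event fires in the next event-loop turn.
const tasks = [];
let defer;

if (typeof window !== "undefined" && window.postMessage && window.addEventListener) {
  // Unique token so we ignore messages that aren't ours.
  const token = "defer-" + Math.random();
  window.addEventListener("message", (e) => {
    if (e.source === window && e.data === token) {
      const task = tasks.shift();
      if (task) task();
    }
  }, false);
  defer = (task) => {
    tasks.push(task);
    window.postMessage(token, "*");
  };
} else {
  // No window (e.g. Node): fall back to setTimeout so the sketch still runs.
  defer = (task) => setTimeout(task, 0);
}
```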
So what happens if the libraries have to wait for a message event before they can resolve, forcing them to be async? The results are vastly different, with catiline, lie, and rsvp way out in front. These results don't actually make sense, though: lie should be closer to when, not to catiline. Its only similarity to catiline is also its only difference from when, which is that it (like catiline) uses postMessage instead of MessageChannel (like when).
So what happens when we force them to wait for an event via a MessageChannel instead of a window message? This happens, and it now seems that the type of event really, really matters; further proof comes from this test, which uses a version of lie that uses MessageChannel.
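For comparison, a MessageChannel-based defer in the style attributed to when above looks roughly like this (a sketch, not when's actual source; it assumes MessageChannel is available, which it is in browsers and in recent Node):

```javascript
// Deferral via MessageChannel: each call posts a wake-up message to port2,
// and the port1 handler runs one queued task per message received.
const channel = new MessageChannel();
const channelTasks = [];

channel.port1.onmessage = () => {
  const task = channelTasks.shift();
  if (task) task();
};

function deferViaChannel(task) {
  channelTasks.push(task);
  channel.port2.postMessage(0); // payload is ignored; it just wakes port1
}
```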
You will notice that RSVP is equally fast both times, and that Q is equally slow. RSVP is fast because it uses MutationObserver, something that I'd never heard of and which is about to go into immediate use … immediately. Q, on the other hand, has a similar setup to when but a significantly more complex version of when's drainQueue, called flush, which I'm guessing is preventing the code from being optimized. This kind of trade-off of speed for error handling is probably not something I'd make; if I were worried about my node server crashing, I'd use the cluster module.
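The MutationObserver trick works roughly like this (assumed details, not RSVP's actual source): you observe a throwaway text node, and toggling its data schedules the observer callback to run before any timer or message event. Node has no MutationObserver, so this sketch falls back to queueMicrotask, which is the closest analogue there:

```javascript
// Deferral via MutationObserver: mutating an observed text node fires the
// observer callback asynchronously, but sooner than timers or message events.
const moTasks = [];
function flushMoTasks() {
  while (moTasks.length) moTasks.shift()();
}

let scheduleFlush;
if (typeof MutationObserver !== "undefined" && typeof document !== "undefined") {
  const node = document.createTextNode("");
  new MutationObserver(flushMoTasks).observe(node, { characterData: true });
  let toggle = 0;
  // Flipping the node's data between "0" and "1" triggers the observer.
  scheduleFlush = () => { node.data = (toggle = 1 - toggle); };
} else {
  // No DOM (e.g. Node): queueMicrotask gives similar "run before timers" timing.
  scheduleFlush = () => queueMicrotask(flushMoTasks);
}

function deferViaObserver(task) {
  // Only schedule a flush for the first task; the flush drains the whole queue.
  if (moTasks.push(task) === 1) scheduleFlush();
}
```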
If you're curious about what lie looks like using mutation observers, here we have it with postMessage events, here with MessageChannel events, and here is a big mash-up of both types, which also shows that postMessage events happen quicker in general.
Now back to the original point of all this, which was to look for the best promise timer to use with my web worker library. One nice benefit I have with this library is a well-defined set of browsers to support: they either have to have web workers, or, for the fallback, they need to support cross-document messaging. This means that postMessage will cover all our bases, so let's repeat our tests to check how the speeds are with worker events. Ew, not good in Chrome; worker events definitely seem to be similar to MessageChannel-type events. Though it should be noted that in IE9, catiline and lie are both the fastest; that's because they fall back to postMessage while everything else falls back to setTimeout.
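The fallback chain implied by all of this could be feature-detected along these lines. The ordering (best first) is my own assumption based on the results above, and the real libraries differ in details:

```javascript
// Pick the best available deferral primitive by feature detection.
// Returns the name of the chosen primitive for illustration.
function pickDefer() {
  if (typeof MutationObserver !== "undefined") return "MutationObserver";
  if (typeof window !== "undefined" && typeof window.postMessage === "function") return "postMessage";
  if (typeof MessageChannel !== "undefined") return "MessageChannel";
  return "setTimeout"; // last resort, as in the IE9 case above
}
```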
- Always double-check that your perfs are measuring what you think they are measuring.
- Chrome almost seems to have multiple event loops (or else there is a confounding variable I'm missing); when I get around to running this on Firefox, we'll see how much of this actually holds up.
- MutationObserver looks awesome.
- Thanks to Brian Cavalier, John Hann, Domenic Denicola, John-David Dalton, and Cesidio DiBenedetto who were all helpful on twitter.
Feel free to discuss on reddit