Web Workers are slower and that’s OK.
Workers in JavaScript are for running things in parallel. Everyone knows parallelism is how you make stuff Fast these days, so surely I can write a version of map that distributes the workload over 4 workers and it will be 4 times as fast as the regular map, right?
Sorry to burst your bubble, but no, you probably can’t. While your 4-worker version is going to be faster than one with 1 or 2 workers, it’s going to be a lot slower than the native map method (which, in fairness, is going to be much slower than a for loop, but that’s a whole different thing).
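The naive idea looks something like this. A minimal sketch, assuming a browser with Worker support; `square.js` is a hypothetical worker script that maps over whatever chunk it receives and posts the result back, and `chunk`/`parallelMap` are names I’m making up here:

```javascript
// Split an array into roughly equal chunks, one per worker.
function chunk(arr, parts) {
  const size = Math.ceil(arr.length / parts);
  const out = [];
  for (let i = 0; i < arr.length; i += size) {
    out.push(arr.slice(i, i + size));
  }
  return out;
}

// Naive parallel map: farm each chunk out to its own worker.
// Browser-only: "square.js" is a hypothetical worker script that
// applies the function to its chunk and posts the result back.
function parallelMap(arr, workerCount) {
  const chunks = chunk(arr, workerCount);
  return Promise.all(chunks.map((part) => new Promise((resolve) => {
    const ww = new Worker('square.js');
    ww.onmessage = (e) => { ww.terminate(); resolve(e.data); };
    ww.postMessage(part);
  }))).then((results) => results.flat());
}
```

Looks plausible enough, which is exactly the trap.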
Now why is this, and why shouldn’t you care?
The first reason is worker creation. Unlike in Erlang, creating a worker involves spinning up a new VM, so it can take a bit of time; see this perf where the only difference is worker creation. On top of that, you’re going to want to make sure to close your workers too, as leaving too many open can cause issues where you least expect them. Don’t take my word for it: make a version of the perf which doesn’t call ww.close(); and watch Chrome just go white. It doesn’t even crash.
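If you do need workers repeatedly, the usual move is to create them once and reuse them, and then tear them all down explicitly. A minimal sketch; `WorkerPool` and the injected `makeWorker` factory are my own names, used so the bookkeeping is easy to exercise outside a browser (in a browser you would pass `() => new Worker('task.js')`):

```javascript
// A tiny worker pool: spin workers up once, reuse them, and make sure
// every one of them gets terminated when you are done.
class WorkerPool {
  constructor(size, makeWorker) {
    this.workers = Array.from({ length: size }, makeWorker);
  }
  get size() { return this.workers.length; }
  // terminate() is the from-the-outside counterpart of the in-worker
  // close(); forgetting this step is what leaves stray VMs around.
  closeAll() {
    this.workers.forEach((w) => w.terminate());
    this.workers = [];
  }
}
```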
‘Aha,’ an intrepid reader could conceivably say, ‘your very perf shows the way to deal with this: all you need to do is create the workers ahead of time.’ And yes, you do avoid that issue if you are able to predict in advance what you will need, and you get the creation part out of the way of everything else, since it is some 50 times slower than a normal creation. For many things, though, you might as well just do the calculations then and there.
Sorry, though, it’s still slower: http://jsperf.com/parall-threads/8, and that actually gives an unfair advantage to the worker; http://jsperf.com/parall-threads/9 is the real comparison. Yes, that’s right: it’s over 100,000 times slower to run the function in a web worker, and 10,000,000 times slower if you count the worker creation and destruction. So there is something else, and that is… drum roll…
Message passing
Any data that is sent to the worker is copied and serialized (side note: the serialization method is not JSON.stringify(), as is erroneously implied in some places, but structured cloning; the main differences are that circular references are acceptable and that it can accept all data types except errors and functions), and doing this is non-trivial time-wise. Now, this can be minimized somewhat by dividing up the data beforehand, but again: are you going to gain anything over just doing the work then and there?
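You can see the difference from JSON.stringify() without a worker at all. This is a sketch assuming an environment that exposes the global structuredClone() (modern browsers and Node), which runs the same algorithm postMessage uses:

```javascript
// Circular references survive structured cloning,
// which would make JSON.stringify throw:
const obj = { name: 'loop' };
obj.self = obj;
const copy = structuredClone(obj);
console.log(copy.self === copy); // true: the cycle is preserved
console.log(copy !== obj);       // true: it really is a full copy

// ...but functions are not cloneable and throw a DataCloneError:
let failed = false;
try {
  structuredClone({ fn: () => {} });
} catch (e) {
  failed = true;
}
console.log(failed); // true
```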
Array Buffers
Our imaginary foil might then exclaim ‘Array Buffers!’ If your browser supports array buffers, you can transfer your data instead of copying it (unless you use IE10, in which case your browser will explode). Yes, this is also true, though converting your data from whatever it is into an array buffer is not fast. But are you already doing a lot of calculations that take and return array buffers (not a typed array, mind you, an array buffer)? I thought not.
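With postMessage the transfer list is the second argument, `worker.postMessage(buf, [buf])`. A sketch of the same move-not-copy behaviour, again assuming a global structuredClone(), whose transfer option goes through the same machinery:

```javascript
// Transferring an ArrayBuffer moves ownership instead of copying the bytes.
const buf = new ArrayBuffer(1024 * 1024);
console.log(buf.byteLength);   // 1048576

const moved = structuredClone(buf, { transfer: [buf] });
console.log(moved.byteLength); // 1048576: the data went with it
console.log(buf.byteLength);   // 0: the original is detached now
```

After the transfer the sender cannot touch the buffer any more, which is exactly why it’s cheap.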
To wrap up: you can use a web worker to parallelize a task and make it faster, if the task is already using array buffers, the data is already divided up, and you had time beforehand to spin up the workers.
Some of you might be confused now, because you know I’m obsessed with workers, and you’re wondering what this is all about.
Workers are about prioritizing.
The worker is not really about parallelism; that is more of a side benefit. It’s about concurrency, and getting things off the most valuable thread you have, the UI thread. A web worker isn’t about making something take 2 seconds instead of 4 seconds; it’s about doing that thing with the DOM freezing for 0 seconds.
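In code, the win looks like this. A browser-only sketch; `crunch.js` is a hypothetical worker script doing the 4-second computation, and `runOffMainThread` is a name I’m inventing:

```javascript
// The point isn't that the worker finishes sooner; it's that the
// UI thread never blocks while the work runs.
function runOffMainThread(input, onDone) {
  const ww = new Worker('crunch.js');
  ww.onmessage = (e) => { ww.terminate(); onDone(e.data); };
  ww.postMessage(input);
  // Control returns here immediately: clicks, scrolling, and
  // animation all keep working while the worker grinds away.
}
```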