Some features that every JavaScript developer should know in 2025 (waspdev.com)
bhaney 5 days ago [-]
I wish authors would actually test their performance claims before publishing them.

I quickly benchmarked the two code snippets:

    arr.slice(10, 20).filter(el => el < 10).map(el => el + 5)
and

    arr.values().drop(10).take(10).filter(el => el < 10).map(el => el + 5).toArray()
but scaled up to much larger arrays. On V8, both allocated almost exactly the same amount of memory, but the latter snippet took 3-4x as long to run.
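
A minimal sketch of such a comparison, assuming an engine that ships the iterator helpers (the sizes and slice ranges here are illustrative, not necessarily what was actually measured):

    // Hypothetical reproduction -- sizes and ranges are illustrative only.
    const N = 10_000_000;
    const arr = Array.from({length: N}, (_, i) => i % 20);

    function viaArray(a) {
      return a.slice(10, N - 10).filter(el => el < 10).map(el => el + 5);
    }

    function viaIterator(a) {
      return a.values().drop(10).take(N - 20).filter(el => el < 10).map(el => el + 5).toArray();
    }

    for (const fn of [viaArray, viaIterator]) {
      const start = performance.now();
      fn(arr);
      console.log(fn.name, (performance.now() - start).toFixed(1), 'ms');
    }
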
chrismorgan 5 days ago [-]
Over time, these performance characteristics are very likely to change in Iterator’s favour. (To what extent, I will not speculate.)

JavaScript engines have put a lot of effort into optimising Array, so that these sorts of patterns can run significantly faster than they have any right to.

Iterators are comparatively new, and haven’t had so much effort put into them.

senfiaj 5 days ago [-]
I'm the author of this blog. As for speed, you are probably right; I was mainly talking about wasting memory for temporary arrays, not about speed, and it's unlikely that iterators are faster. But I'm curious, how large were the arrays you tested with? For example, would there be a memory difference for 10M-element arrays?
bhaney 5 days ago [-]
> I was mainly talking about wasting memory for temporary arrays

Right, but the runtime is perfectly capable of optimizing those temporary arrays out, which it appears to do.

> I'm curious, how large were the arrays you tested with? For example, would there be a memory difference for 10M-element arrays

10M-element arrays are exactly what I tested with.

senfiaj 5 days ago [-]
My speculation is also that with iterators the size of the resulting array might be somewhat less predictable, because you might not know when the iterator finishes, for example when chaining .filter().map(). So there is no way to know precisely how much memory should be preallocated.
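
A trivial sketch of what I mean (the data here is made up): how many elements survive the filter depends on the values, so the final size is only known once the whole chain has run.

    const data = Array.from({length: 1000}, () => Math.random() * 10);
    // Roughly 80% of the values will pass the predicate, but the exact count
    // is only known after the filter has actually run over the data.
    const result = data.filter(el => el < 8).map(el => el + 5);
    console.log(result.length); // varies from run to run
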
senfiaj 5 days ago [-]
Interesting. I did some testing: I just opened the task manager and ran this JS code in the browser without opening dev tools, in order to see how the browser behaves when I don't prevent any optimizations.

Then I commented out withArrayTransform, uncommented withIteratorTransform, and ran it again in a fresh tab to prevent the browser from reusing the old process.

    const arr = Array.from({length: 100000000}, (_, i) => i % 10);

    function withArrayTransform(arr) {
      return arr.slice(10, arr.length - 10).filter(el => el < 8).map(el => el + 5).map(el => el * 2).map(el => el - 7);
    }

    function withIteratorTransform(arr) {
      return arr.values().drop(10).take(arr.length - 20).filter(el => el < 8).map(el => el + 5).map(el => el * 2).map(el => el - 7).toArray();
    }

    console.log(withArrayTransform(arr)); // console.log(withIteratorTransform(arr))

The peak memory usage with withArrayTransform was about ~1.6GB. The peak memory usage with withIteratorTransform was about ~0.8GB. Results sometimes vary, and honestly it feels complicated, but the iterator version is consistently more memory efficient. As for speed, the iterator version was about ~1.5 times slower.

So probably the GC quickly cleaned up some temporary arrays when it saw excessive memory usage while withArrayTransform(arr) was running.

But imagine you use flatMap, which unrolls the returned iterable and can create a temporary array even bigger than both the original and the final one. So using iterables still has the advantage of protecting against excessive memory usage that could crash the browser tab or the whole Node server. I think it's still a nice thing to have.
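
A minimal sketch of that flatMap point (sizes are arbitrary, and it assumes an engine that ships Iterator.prototype.flatMap):

    const ids = Array.from({length: 1000}, (_, i) => i);

    // Array.prototype.flatMap materializes the fully unrolled intermediate
    // array (1000 * 100 elements) before the filter even starts.
    const viaArrays = ids
      .flatMap(id => Array.from({length: 100}, (_, j) => id * 100 + j))
      .filter(n => n % 7 === 0);

    // Iterator.prototype.flatMap streams the inner items one at a time, so
    // the fully unrolled copy never exists in memory all at once.
    const viaIterators = ids.values()
      .flatMap(id => Array.from({length: 100}, (_, j) => id * 100 + j))
      .filter(n => n % 7 === 0)
      .toArray();

    console.log(viaArrays.length, viaIterators.length); // same result, different peak memory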

gnabgib 5 days ago [-]
Iterator Helpers:

  arr.slice(10, 20).filter(el => el < 10).map(el => el + 5)
> This is really inefficient because for each transformation a new array should be allocated.

.slice[0] does not allocate, nor does .filter[1], only map does.. so one allocation.

  arr.values().drop(10).take(10).filter(el => el < 10).map(el => el + 5).toArray()
Allocates once for .values[2], and again for .toArray[3].. there's decreased efficiency here.

Swapping variables:

Only do this if you don't care about performance (the advice is written like using the array swap hack is categorically better).

[0]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

[2]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

[3]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

golergka 5 days ago [-]
The vast majority of JS/TS we write doesn’t live on the critical path and should be optimised for readability over performance.

However, when it does, that should be explicitly recognised, appropriate boundaries put in place, and that code written in a completely different style altogether. There are A LOT of idiomatic and widespread JS/TS patterns that should not be used in such code.

Before TS, I used to write games in C# (Unity), and it was the same there too. I think this distinction between two kinds of code holds in a lot of environments.

gnabgib 5 days ago [-]
Neither of these cases is more readable. If you want a readable swap, can you get better than:

  const [newA,newB]=swap(b,a);
It does look a lot like LINQ, and has the same hidden-complexity problem as LINQ. While C# went to extreme lengths to optimise the performance penalties down to almost negligible, JS isn't there yet (and may never be, since it would require multiple engines to catch up). An individual function in a chain of drop/jump/take/filter/change(map)/cast steps cannot look ahead to future needs, nor re-order for better efficiency... a (good) software engineer can. In the same way, there are good ways to write SQL (benefit from indexes and reduce the set quickly) and bad ways (cause full table scans).
neonsunset 5 days ago [-]
To be fair, Unity is somewhat special because it requires a lot more massaging to get acceptable performance on the happy path than standard C# code. In the latter you mainly care about large allocations if the application is latency-tolerant (i.e. as long as you don't touch the LOH it's fine), or about avoiding allocations in general if it's latency-sensitive. There isn’t that much difference from regular sane C# code.
leosanchez 5 days ago [-]
filter allocates an array with new elements, doesn't it? The C# variant, on the other hand, doesn't.
chrismorgan 5 days ago [-]
Sorry, but your comment is completely, completely wrong.

> .slice[0] does not allocate, nor does .filter[1], only map does.. so one allocation.

This is simply not true. I presume you’re misunderstanding “shallow copy” in the MDN docs; it’s pretty poor wording, in my opinion: it means shallow copies of the items, it’s not about the array; it’s not like a live collection, they all do create a new Array.

Array.prototype.slice is specified in a way that must allocate for the array, but since it’s a contiguous slice it could also be implemented as copy-on-write, only allocating a whole new chunk of memory to back the array when either array is modified and so they diverge. I don’t know for certain what engines do, but my weak guess is that browser engines will all behave that way. I’m fairly sure that for strings they all work that way. (Aside: such an optimisation would also require that the slice() caller be an Array. Most of the array methods are deliberately designed to work on array-like objects: not just Arrays, but anything with a length property and numbered properties. One fun place to see that is how NodeList.prototype.forEach === Array.prototype.forEach.)
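
For example, a quick illustration of that array-like point:

    // Array methods only need a length property and numbered properties,
    // not an actual Array instance.
    const arrayLike = {length: 3, 0: 'a', 1: 'b', 2: 'c'};
    console.log(Array.prototype.slice.call(arrayLike, 1));                  // ['b', 'c']
    console.log(Array.prototype.map.call(arrayLike, s => s.toUpperCase())); // ['A', 'B', 'C']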

But Array.prototype.filter must allocate (in the general case, at least), because it’s taking bits and pieces of the original array. So the array itself must allocate.

Array.prototype.map similarly must allocate (in the general case), because it’s creating new values.

Then, when we’re talking about allocation-counting, you have to bear in mind that, when the size is not known, you may make multiple allocations, growing the collection as you go.

Rust’s equivalent of Array, Vec, starts with a small allocation, which depends on the specific type being stored but we’ll simplify and call it 8, and then when you try to add beyond that capacity, reallocates, doubling the capacity. (This is the current growth strategy, but it’s not part of any contract, and can change.)

A naive implementation of JavaScript backed by such a growth strategy would make one exact-sized allocation for slice(), approximately log₂ N allocations for filter() where N is the number of retained elements, and one exact-sized allocation for map().
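
As a back-of-the-envelope sketch of that (the initial capacity of 8 and the doubling are the same simplifications as above, not something any engine guarantees):

    // Count how many allocations a doubling growth strategy makes to hold n
    // elements when the final size isn't known in advance.
    function allocationsFor(n) {
      let capacity = 8;
      let allocations = 1; // the initial allocation
      while (capacity < n) {
        capacity *= 2;
        allocations++;
      }
      return allocations;
    }
    console.log(allocationsFor(10_000_000)); // 22, i.e. on the order of log2(n)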

> > arr.values().drop(10).take(10).filter(el => el < 10).map(el => el + 5).toArray()

> Allocates once for .values[2], and again for .toArray[3].. there's decreased efficiency here.

It’s generally difficult talking about allocations in a GC language in details like this, but in the way you tend to talk about allocations in such systems, .values() can be assumed not to allocate. Especially once you get to what optimisers are likely to do. Or, at the very least, drop(), take(), filter() and map() all allocate just as much, as they also create iterator objects.

—⁂—

> > Swapping variables

> Only do this if you don't care about performance (the advice is written like using the array swap hack is categorically better).

My own hypothesis: any serious JS engine is going to recognise the [a, b] = [b, a] idiom and optimise it to be at least as good as the temporary variable hack. If you’re going to call the array swap a hack, I can call temporary variables a hack—it’s much more of a hack, far messier, especially semantically. The temporary variable thing will mildly resist optimisation for a couple of reasons, whereas [a, b] = [b, a] is neatly self-contained, doesn’t leak anything onto the stack, and can thus be optimised much more elegantly.

Now then the question is whether it is optimised so. And that’s the problem with categoric statements in a language like JavaScript: if you make arguments about fine performance things, they’re prone to change, because JavaScript performance is a teetering stack of flaming plates liable to come crashing down if you poke it in the wrong direction, which changes from moment to moment as the pile sways.

In practice, trivial not-very-careful benchmarking suggests that in Firefox array swap is probably a little slower, but in Chromium they’re equivalent (… both quite a bit slower than in Firefox).
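
For anyone who wants to repeat that, a not-very-careful micro-benchmark could look something like this (iteration count arbitrary; a clever engine may well optimise parts of it away):

    let a = 1, b = 2;

    let start = performance.now();
    for (let i = 0; i < 100_000_000; i++) {
      [a, b] = [b, a]; // destructuring swap
    }
    console.log('destructuring:', (performance.now() - start).toFixed(1), 'ms');

    start = performance.now();
    for (let i = 0; i < 100_000_000; i++) {
      const tmp = a; a = b; b = tmp; // temporary-variable swap
    }
    console.log('temporary variable:', (performance.now() - start).toFixed(1), 'ms');

    console.log(a, b); // keep the values observable so the loops aren't trivially dead code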

gnabgib 5 days ago [-]
You're right.. all of these functions require more memory. Allocate is wrong... let's use shallow vs deep (significantly different expense). All pointers use a bit more memory than the original, shallow copies use more, but only the size of a primitive (best case) or pointer to an object (worst case.. often the same size), deep can use much, much more.

> arr.slice(10, 20).filter(el => el < 10).map(el => el + 5)

  slice  = shallow
  filter = shallow
  map    = deep
(2s+d)

As you point out, many engines optimise shallow with copy on write (zero cost).. so just 1 allocation at map.

> arr.values().drop(10).take(10).filter(el => el < 10).map(el => el + 5).toArray()

  values = deep
  drop   = shallow
  take   = shallow
  filter = shallow
  map    = deep
  toArray= deep
(3s+3d) .. significantly worse performance

Note the shallow copy:

  const arr = [{a: {b: "c"}}, {d: "e"}];
  const [s] = arr.slice(0, 1);
  s.a.b = "f";
  console.log(arr); // [{a: {b: "f"}}, {d: "e"}]
chrismorgan 5 days ago [-]
You’re counting completely the wrong thing. Shallow versus deep is about the items inside, but we care about the costs of creating the collection itself. As far as structured clones are concerned, none of the operations we’re talking about are deep. At best, it’s just the wrong word to use. (Example: if you were going to call it anything, you’d call .map(x => x) shallow.)

Array:

• Array.prototype.slice may be expensive (it creates a collection, but it may be able to be done in such a way that you can’t tell).

• Array.prototype.filter is expensive (it creates a collection).

• Array.prototype.map is expensive (it creates a collection).

So you have two or three expensive operations, going through as much as the entire list (depends on how much you trim out with slice and filter) two or three times, creating an intermediate list at each step.

Iterator:

• Array.prototype.values is cheap, creating a lightweight iterator object.

• Iterator.prototype.drop is cheap, creating a lightweight iterator helper object.

• Iterator.prototype.take is cheap, creating a lightweight iterator helper object.

• Iterator.prototype.filter is cheap, creating a lightweight iterator helper object.

• Iterator.prototype.map is cheap, creating a lightweight iterator helper object.

• Iterator.prototype.toArray is the thing that actually drives everything. Now you drive the iterator chain through, going through the list only once, applying each filter or transformation as you go, and only doing one expensive allocation of a new array.

In the end, in terms of time complexity, both are O(n), but the array version has a much higher coefficient on that n. For small inputs, the array version may be faster. For large inputs, iterators will be faster.
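
A tiny sketch of the laziness that makes the single pass possible (the logging is only there to show the interleaving):

    const source = [1, 12, 3, 14];

    const result = source.values()
      .filter(el => { console.log('filter', el); return el < 10; })
      .map(el => { console.log('map', el); return el + 5; })
      .toArray();

    // Logs interleave per element: filter 1, map 1, filter 12, filter 3,
    // map 3, filter 14 -- each item flows through the whole chain before the
    // next one is pulled, and only toArray() allocates a result array.
    console.log(result); // [6, 8]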

senfiaj 5 days ago [-]
> My own hypothesis: any serious JS engine is going to recognise the [a, b] = [b, a]

This. As the author of the blog, I actually ran a benchmark where the loop body was only doing a swap, and I remember the penalty was around ~2%. But yeah, if it's a critical path and you care about every millisecond, then sure, you should optimize for speed, not for code ergonomics.
