Does this for loop iterate multiple times?
I have been discussing some code with colleagues:

```javascript
for (const a of arr) {
  if (a.thing)
    continue;
  // do a thing
}
```

A suggestion was to filter this and use a `forEach`:

```javascript
arr.filter(a => !a.thing)
   .forEach(a => { /* do a thing */ });
```

There was a discussion about iterating more than necessary. I've looked this up, and I can't find anything. I also tried to figure out how to view the optimized output, but I don't know how to do that either.

I would expect that the `filter` and `forEach` turn into code that is very much like the `for of` with the `continue`, but I don't know how to be sure.

How can I find out? The only thing I've tried so far is Google.
Tags: javascript, performance

asked Dec 31 '18 at 19:38 by loctrice, edited Jan 2 at 16:43
The second snippet definitely loops twice, while the first one would loop only once. – Taplar, Dec 31 '18 at 19:45

btw, the first approach is faster, even if you take a single `forEach`. – Nina Scholz, Dec 31 '18 at 19:46
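For illustration, the single-pass variant the comment alludes to could look like this (a sketch with hypothetical data; the filter condition is inlined into one `forEach`, so the array is traversed only once):

```javascript
// Hypothetical data: two items should be processed, one skipped.
const arr = [{ thing: true }, { thing: false }, { thing: false }];
let processed = 0;

// One pass: the check that filter() would do is inlined into the callback.
arr.forEach(a => {
  if (!a.thing) {
    processed += 1; // do a thing
  }
});

console.log(processed); // 2
```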
Good question! Just as a side note, I would use `for of` instead of `for in` to iterate over an array. – kev, Dec 31 '18 at 19:46

By the way, these two snippets are not functionally identical. `filter` predicates are written in the positive sense; in other words, `filter` will return entries which match, rather than discarding matches. So you should remove the `!`. – Brandon, Dec 31 '18 at 21:28

@Brandon Actually, the `!` is required in this case, since the action is supposed to happen if `a.thing` is falsy (the code hits `continue` when `a.thing` is truthy in the first example). The code was correct before. – Brian McCutchon, Dec 31 '18 at 23:25
3 Answers
Your first example (the `for...of` loop) is O(n): it will execute n times (n being the size of the array).

Your second example (the `filter` + `forEach`) is O(n+m): it will execute n times in the `filter` (n being the size of the array), and then m times in the `forEach` (m being the size of the resulting array after the filter takes place).

As such, the first example is faster. However, in this type of example, without an exceedingly large sample set, the difference is probably measured in microseconds or nanoseconds.
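To make the n and m above concrete, here is a quick sketch (hypothetical data) that counts callback invocations: `filter` visits all n elements, while `forEach` only visits the m survivors.

```javascript
// Hypothetical data: 4 elements, 2 of which pass the filter.
const arr = [{ thing: true }, { thing: false }, { thing: false }, { thing: true }];
let filterCalls = 0;
let forEachCalls = 0;

arr.filter(a => { filterCalls += 1; return !a.thing; })
   .forEach(() => { forEachCalls += 1; /* do a thing */ });

console.log(filterCalls);  // 4, i.e. n: filter visits every element
console.log(forEachCalls); // 2, i.e. m: forEach visits only the survivors
```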
With regards to compilation optimization, that is essentially all memory-access optimization. The major interpreters and engines all analyze how code accesses functions, variables, and properties: how often, and what the shape of the access graph looks like. With all of that information, they optimize their hidden structures to be more efficient for access. Essentially no optimization such as loop replacement or process analysis is done on the code, as it is for the most part optimized while it is running (if a specific part of the code does start taking an excessively long time, it may have its code optimized).
When first executing the JavaScript code, V8 leverages full-codegen which directly translates the parsed JavaScript into machine code without any transformation. This allows it to start executing machine code very fast. Note that V8 does not use intermediate bytecode representation this way removing the need for an interpreter.
When your code has run for some time, the profiler thread has gathered enough data to tell which method should be optimized.
Next, Crankshaft optimizations begin in another thread. It translates the JavaScript abstract syntax tree to a high-level static single-assignment (SSA) representation called Hydrogen and tries to optimize that Hydrogen graph. Most optimizations are done at this level.
– https://blog.sessionstack.com/how-javascript-works-inside-the-v8-engine-5-tips-on-how-to-write-optimized-code-ac089e62b12e
*While `continue` may cause the execution to go to the next iteration, it still counts as an iteration of the loop.

answered Dec 31 '18 at 20:00 by Travis J, edited Dec 31 '18 at 20:15
Is there no `lazy` filtering stream construct in JS? – Alexander, Dec 31 '18 at 21:59

@Alexander No, at least nothing built-in, and the engines don't do stream fusion optimization (yet). Though there are libraries which provide lazy collections, such as Lazy.js. – interphx, Dec 31 '18 at 23:21
Your asymptotic analysis is incorrect. Filtering cannot increase the size of the array, hence m <= n and O(n+m) = O(n + (x)n) = O((1+x)n) = O(n) where 0 <= x <= 1. – ApproachingDarknessFish, Jan 1 at 4:38
@ApproachingDarknessFish - No. You can reduce it to O(n), but that does not mean that O(n+m) is incorrect. That you have opted to rewrite the time complexity into a simpler form just obscures the difference I was highlighting. Do remember, time complexity analyzes instruction execution, and while it is possible to completely reduce it to its simplest form, that is not always applicable during analysis. – Travis J, Jan 2 at 19:34
The right answer is "it really doesn't matter". A previously posted answer states that the second approach is O(n+m), but I beg to differ. The same exact "m" operations will also run in the first approach. Even if you consider the second batch of operations as "m" (which doesn't really make much sense; we're talking about the same n elements given as input, and that's not how complexity analysis works), in the worst case m == n and the complexity will be O(2n), which is just O(n) in the end anyway.
To directly answer your question, yes, the second approach will iterate over the collection twice while the first one will do it only once. But that probably won't make any difference to you. In cases like these, you probably want to improve readability over efficiency. How many items does your collection have? 10? 100? It's better to write code that will be easier to maintain over time than to strive for maximum efficiency all the time - because most of the time it just doesn't make any difference.
Moreover, iterating more than once doesn't mean your code runs slower. It's all about what's inside each loop. For instance:
for (const item of arr)
// do A
// do B
Is virtually the same as:
for (const item of arr)
// do A
for (const item of arr)
// do B
The for loop itself doesn't add any significant overhead to the CPU. Although you would probably want to write a single loop anyway, if your code readability is improved when you do two loops, go ahead and do it.
Efficiency is about picking the right algorithm
If you really need to be efficient, you don't want to iterate through the whole collection, not even once. You want some smarter way to do it: either divide and conquer (O(log n)) or use hash maps (O(1)). A hash map a day keeps the inefficiency away :-)
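As a sketch of the hash-map point (with hypothetical items keyed by an `id` field): build a `Map` once in O(n), and every later lookup is O(1) instead of an O(n) scan.

```javascript
// Hypothetical items keyed by an `id` field.
const items = [
  { id: "a", thing: false },
  { id: "b", thing: true },
  { id: "c", thing: false },
];

// Build the index once: O(n).
const byId = new Map(items.map(item => [item.id, item]));

// Every lookup afterwards is O(1), no scanning required.
const found = byId.get("b");
console.log(found.thing); // true
```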
Do things only once
Now, back to your example, if I find myself iterating over and over and doing the same operation every time, I'd just run the filtering operation only once, at the beginning:
```javascript
// during initialization
const things = [];
const notThings = [];
for (const item of arr) {
  item.thing ? things.push(item) : notThings.push(item);
}

// now every time you need to iterate through the items...
for (const a of notThings) {  // replaced arr with notThings
  // if (a.thing)   // <- no need to check this anymore
  //   continue;
  // do a thing
}
```
And then you can freely iterate over `notThings`, knowing that unwanted items were already filtered out. Makes sense?
Criticism of "`for of` is faster than calling methods"

Some people like to state that `for of` will always be faster than calling `forEach()`. We just cannot say that. There are lots of JavaScript interpreters out there, and for each one there are different versions, each with its particular ways of optimizing things. To prove my point, I was able to make `filter()` + `forEach()` run faster than `for of` in Node.js v10 on macOS Mojave:
```javascript
const COLLECTION_SIZE = 10000;
const RUNS = 10000;

const collection = Array.from(Array(COLLECTION_SIZE), (e, i) => i);

function forOf() {
  for (const item of collection) {
    if (item % 2 === 0) {
      continue;
    }
    // do something
  }
}

function filterForEach() {
  collection
    .filter(item => item % 2 === 0)
    .forEach(item => { /* do something */ });
}

const fns = [forOf, filterForEach];

function timed(fn) {
  if (!fn.times) fn.times = [];
  const i = fn.times.length;
  fn.times[i] = process.hrtime.bigint();
  fn();
  fn.times[i] = process.hrtime.bigint() - fn.times[i];
}

for (let r = 0; r < RUNS; r++) {
  for (const fn of fns) {
    timed(fn);
  }
}

for (const fn of fns) {
  const times = fn.times;
  times.sort((a, b) => Number(a - b)); // convert the BigInt difference for the comparator
  const median = times[Math.floor(times.length / 2)];
  console.info(`${fn.name}: ${median}`);
}
```
Times (in nanoseconds):

```
forOf: 81704
filterForEach: 32709
```
`for of` was consistently slower in all tests I ran, always around 50% slower. That's the main point of this answer: don't rely on each interpreter's implementation details, because they can (and will) change over time. Unless you're developing for embedded or high-efficiency/low-latency systems, where you need to be as close to the hardware as possible, get to know your algorithm complexities first.
It's not multiple times for the same operation. Your example is just manually implementing a filter method. I don't see how that is different than just calling filter. – loctrice, Dec 31 '18 at 21:52

You could just go ahead and call filter() (and I would probably do that myself too). My point is that if you're doing the check over and over for the same collection, you should probably filter it out from the start. – Lucio Paiva, Dec 31 '18 at 21:54

If we follow your assumption, then I would agree with you. – loctrice, Dec 31 '18 at 21:55

On the other hand, if your code does that O(n) only once during the whole program execution, then go ahead and do what you're doing already - that's perfectly fine and it doesn't matter if you use filter()+forEach() or `for of`. – Lucio Paiva, Dec 31 '18 at 21:56

Not sure if I made myself clear enough, so I edited my answer and added the part "Moreover, iterating more than once...". Please check it out and see if it makes sense to you. – Lucio Paiva, Dec 31 '18 at 22:13
An easy way to see how many times each part of that statement is called would be to add log statements like so and run it in the Chrome console:

```javascript
var arr = [1, 2, 3, 4];

arr.filter(a => { console.log("hit1"); return a % 2 != 0; })
   .forEach(a => console.log("hit2"));
```
"Hit1" should print to the console 4 times regardless in this case. If it were to iterate too many times, we'd see "hit2" output 4 times, but after running this code it only outputs twice. So your assumption is partially correct: the second time it iterates, it doesn't iterate over the whole set. It does iterate over the whole set once in the `.filter`, and then iterates again over the part of the set that matched the condition in the `.forEach`.

Another good place to look is the MDN developer docs, especially the "Polyfill" section, which outlines the exact equivalent algorithm. There you can see that `.filter()` returns the variable `res`, which is what `.forEach` would be performed upon.

So while it overall iterates over the set twice, in the `.forEach` section it only iterates over the part of the set that matches the `.filter` condition.
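To illustrate, here is a simplified sketch of what `Array.prototype.filter` does internally, modeled loosely on the MDN polyfill (the helper name `myFilter` is hypothetical): it builds a new array `res` in one full pass, and a chained `forEach` then runs over `res` only.

```javascript
// Simplified model of filter: one full pass, collecting matches into res.
function myFilter(arr, predicate) {
  const res = [];
  for (let i = 0; i < arr.length; i++) {
    if (predicate(arr[i])) res.push(arr[i]);
  }
  return res; // a chained forEach iterates over this smaller array
}

const odds = myFilter([1, 2, 3, 4], a => a % 2 != 0);
console.log(odds); // [ 1, 3 ]
```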
of course it iterates twice, if the first filtering returns some elements. but it's slower than the `for` loop. the only advantage of the second is to use callbacks with functional patterns. – Nina Scholz, Dec 31 '18 at 19:56

Right, definitely slower. Just wanted to illustrate to them how they can break down the problem in the future if they want to analyze something like this real quick. – nkorai, Dec 31 '18 at 19:59

I get your point, but I wasn't so much asking how the language itself worked. I think adding the console.log (and making it an actual function body) would make it harder to optimize, and create the same amount of operations even after optimization. – loctrice, Dec 31 '18 at 20:35
3 Answers
3
active
oldest
votes
3 Answers
3
active
oldest
votes
active
oldest
votes
active
oldest
votes
Your first example (the for in loop) is O(n), which will execute n times (n being the size of the array).
Your second example (the filter forEach) is O(n+m), which will execute n times in the filter (n being the size of the array), and then m times (m being the size of the resulting array after the filter takes place).
As such, the first example is faster. However, in this type of example without an exceedingly large sample set the difference is probably measured in microseconds or nanoseconds.
With regards to compilation optimization, that is essentially all memory access optimization. The major interpreters and engines will all analyze issues in code relating to function, variable, and property access such as how often and what the shape of the access graph looks like; and then, with all of that information, optimize their hidden structure to be more efficient for access. Essentially no optimization so far as loop replacement or process analysis is done on the code as it for the most part is optimized while it is running (if a specific part of code does start taking an excessively long time, it may have its code optimized).
When first executing the JavaScript code, V8 leverages full-codegen which directly translates the parsed JavaScript into machine code without any transformation. This allows it to start executing machine code very fast. Note that V8 does not use intermediate bytecode representation this way removing the need for an interpreter.
When your code has run for some time, the profiler thread has gathered enough data to tell which method should be optimized.
Next, Crankshaft optimizations begin in another thread. It translates the JavaScript abstract syntax tree to a high-level static single-assignment (SSA) representation called Hydrogen and tries to optimize that Hydrogen graph. Most optimizations are done at this level.
-https://blog.sessionstack.com/how-javascript-works-inside-the-v8-engine-5-tips-on-how-to-write-optimized-code-ac089e62b12e
*While continue
may cause the execution to go to the next iteration, it still counts as an iteration of the loop.
1
Is there nolazy
filtering stream construct in JS?
– Alexander
Dec 31 '18 at 21:59
@Alexander No, at least nothing built-in, and the engines don't do stream fusion optimization (yet). Though there are libraries which provide lazy collections, such as Lazy.js.
– interphx
Dec 31 '18 at 23:21
4
Your asymptotic analysis is incorrect. Filtering cannot increase the size of the array, hence m <= n and O(n+m) = O(n + (x)n) = O((1+x)n) = O(n) where 0 >= x >= 1.
– ApproachingDarknessFish
Jan 1 at 4:38
@ApproachingDarknessFish - No. You can reduce it to O(n), but that does not mean that O(n+m) is incorrect. That you have opted to rewrite the time complexity into a simpler form just obscures the difference I was highlighting. Do remember, time complexity analyzes instruction execution, and while it is possible to completely reduce it to its simplest form, that is not always applicable during analysis.
– Travis J
Jan 2 at 19:34
add a comment |
Your first example (the for in loop) is O(n), which will execute n times (n being the size of the array).
Your second example (the filter forEach) is O(n+m), which will execute n times in the filter (n being the size of the array), and then m times (m being the size of the resulting array after the filter takes place).
As such, the first example is faster. However, in this type of example without an exceedingly large sample set the difference is probably measured in microseconds or nanoseconds.
With regards to compilation optimization, that is essentially all memory access optimization. The major interpreters and engines will all analyze issues in code relating to function, variable, and property access such as how often and what the shape of the access graph looks like; and then, with all of that information, optimize their hidden structure to be more efficient for access. Essentially no optimization so far as loop replacement or process analysis is done on the code as it for the most part is optimized while it is running (if a specific part of code does start taking an excessively long time, it may have its code optimized).
When first executing the JavaScript code, V8 leverages full-codegen which directly translates the parsed JavaScript into machine code without any transformation. This allows it to start executing machine code very fast. Note that V8 does not use intermediate bytecode representation this way removing the need for an interpreter.
When your code has run for some time, the profiler thread has gathered enough data to tell which method should be optimized.
Next, Crankshaft optimizations begin in another thread. It translates the JavaScript abstract syntax tree to a high-level static single-assignment (SSA) representation called Hydrogen and tries to optimize that Hydrogen graph. Most optimizations are done at this level.
-https://blog.sessionstack.com/how-javascript-works-inside-the-v8-engine-5-tips-on-how-to-write-optimized-code-ac089e62b12e
*While continue
may cause the execution to go to the next iteration, it still counts as an iteration of the loop.
1
Is there nolazy
filtering stream construct in JS?
– Alexander
Dec 31 '18 at 21:59
@Alexander No, at least nothing built-in, and the engines don't do stream fusion optimization (yet). Though there are libraries which provide lazy collections, such as Lazy.js.
– interphx
Dec 31 '18 at 23:21
4
Your asymptotic analysis is incorrect. Filtering cannot increase the size of the array, hence m <= n and O(n+m) = O(n + (x)n) = O((1+x)n) = O(n) where 0 >= x >= 1.
– ApproachingDarknessFish
Jan 1 at 4:38
@ApproachingDarknessFish - No. You can reduce it to O(n), but that does not mean that O(n+m) is incorrect. That you have opted to rewrite the time complexity into a simpler form just obscures the difference I was highlighting. Do remember, time complexity analyzes instruction execution, and while it is possible to completely reduce it to its simplest form, that is not always applicable during analysis.
– Travis J
Jan 2 at 19:34
add a comment |
Your first example (the for in loop) is O(n), which will execute n times (n being the size of the array).
Your second example (the filter forEach) is O(n+m), which will execute n times in the filter (n being the size of the array), and then m times (m being the size of the resulting array after the filter takes place).
As such, the first example is faster. However, in this type of example without an exceedingly large sample set the difference is probably measured in microseconds or nanoseconds.
With regards to compilation optimization, that is essentially all memory access optimization. The major interpreters and engines will all analyze issues in code relating to function, variable, and property access such as how often and what the shape of the access graph looks like; and then, with all of that information, optimize their hidden structure to be more efficient for access. Essentially no optimization so far as loop replacement or process analysis is done on the code as it for the most part is optimized while it is running (if a specific part of code does start taking an excessively long time, it may have its code optimized).
When first executing the JavaScript code, V8 leverages full-codegen which directly translates the parsed JavaScript into machine code without any transformation. This allows it to start executing machine code very fast. Note that V8 does not use intermediate bytecode representation this way removing the need for an interpreter.
When your code has run for some time, the profiler thread has gathered enough data to tell which method should be optimized.
Next, Crankshaft optimizations begin in another thread. It translates the JavaScript abstract syntax tree to a high-level static single-assignment (SSA) representation called Hydrogen and tries to optimize that Hydrogen graph. Most optimizations are done at this level.
-https://blog.sessionstack.com/how-javascript-works-inside-the-v8-engine-5-tips-on-how-to-write-optimized-code-ac089e62b12e
*While continue
may cause the execution to go to the next iteration, it still counts as an iteration of the loop.
Your first example (the for in loop) is O(n), which will execute n times (n being the size of the array).
Your second example (the filter forEach) is O(n+m), which will execute n times in the filter (n being the size of the array), and then m times (m being the size of the resulting array after the filter takes place).
As such, the first example is faster. However, in this type of example without an exceedingly large sample set the difference is probably measured in microseconds or nanoseconds.
With regards to compilation optimization, that is essentially all memory access optimization. The major interpreters and engines will all analyze issues in code relating to function, variable, and property access such as how often and what the shape of the access graph looks like; and then, with all of that information, optimize their hidden structure to be more efficient for access. Essentially no optimization so far as loop replacement or process analysis is done on the code as it for the most part is optimized while it is running (if a specific part of code does start taking an excessively long time, it may have its code optimized).
When first executing the JavaScript code, V8 leverages full-codegen which directly translates the parsed JavaScript into machine code without any transformation. This allows it to start executing machine code very fast. Note that V8 does not use intermediate bytecode representation this way removing the need for an interpreter.
When your code has run for some time, the profiler thread has gathered enough data to tell which method should be optimized.
Next, Crankshaft optimizations begin in another thread. It translates the JavaScript abstract syntax tree to a high-level static single-assignment (SSA) representation called Hydrogen and tries to optimize that Hydrogen graph. Most optimizations are done at this level.
-https://blog.sessionstack.com/how-javascript-works-inside-the-v8-engine-5-tips-on-how-to-write-optimized-code-ac089e62b12e
*While continue
may cause the execution to go to the next iteration, it still counts as an iteration of the loop.
edited Dec 31 '18 at 20:15
answered Dec 31 '18 at 20:00
Travis JTravis J
63.9k29148223
63.9k29148223
1
Is there nolazy
filtering stream construct in JS?
– Alexander
Dec 31 '18 at 21:59
@Alexander No, at least nothing built-in, and the engines don't do stream fusion optimization (yet). Though there are libraries which provide lazy collections, such as Lazy.js.
– interphx
Dec 31 '18 at 23:21
4
Your asymptotic analysis is incorrect. Filtering cannot increase the size of the array, hence m <= n and O(n+m) = O(n + (x)n) = O((1+x)n) = O(n) where 0 >= x >= 1.
– ApproachingDarknessFish
Jan 1 at 4:38
@ApproachingDarknessFish - No. You can reduce it to O(n), but that does not mean that O(n+m) is incorrect. That you have opted to rewrite the time complexity into a simpler form just obscures the difference I was highlighting. Do remember, time complexity analyzes instruction execution, and while it is possible to completely reduce it to its simplest form, that is not always applicable during analysis.
– Travis J
Jan 2 at 19:34
add a comment |
1
Is there nolazy
filtering stream construct in JS?
– Alexander
Dec 31 '18 at 21:59
@Alexander No, at least nothing built-in, and the engines don't do stream fusion optimization (yet). Though there are libraries which provide lazy collections, such as Lazy.js.
– interphx
Dec 31 '18 at 23:21
The right answer is "it really doesn't matter". A previously posted answer states that the second approach is O(n+m), but I beg to differ. The same exact "m" operations will also run in the first approach. Even if you consider the second batch of operations as "m" (which doesn't really make much sense - we're talking about the same n elements given as input - that's not how complexity analysis works), in the worst case m == n and the complexity will be O(2n), which is just O(n) in the end anyway.
To directly answer your question, yes, the second approach will iterate over the collection twice while the first one will do it only once. But that probably won't make any difference to you. In cases like these, you probably want to improve readability over efficiency. How many items does your collection have? 10? 100? It's better to write code that will be easier to maintain over time than to strive for maximum efficiency all the time - because most of the time it just doesn't make any difference.
Moreover, iterating more than once doesn't mean your code runs slower. It's all about what's inside each loop. For instance:
for (const item of arr) {
  // do A
  // do B
}
Is virtually the same as:
for (const item of arr) {
  // do A
}
for (const item of arr) {
  // do B
}
The for loop itself doesn't add any significant overhead to the CPU. Although you would probably want to write a single loop anyway, if your code readability is improved when you do two loops, go ahead and do it.
Efficiency is about picking the right algorithm
If you really need to be efficient, you don't want to iterate through the whole collection, not even once. You want some smarter way to do it: either divide and conquer (O(log n)) or use hash maps (O(1)). A hash map a day keeps the inefficiency away :-)
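As a hypothetical illustration of the hash-map point (the users data and the slowFind/fastFind names are made up for this sketch), replacing a repeated linear search with a Map turns each lookup from O(n) into O(1):

```javascript
// Hypothetical data for illustration only
const users = [
  { id: 1, name: "Ada" },
  { id: 2, name: "Linus" },
  { id: 3, name: "Grace" }
];

// O(n) per lookup: scans the whole array every time
const slowFind = id => users.find(u => u.id === id);

// Build the index once (O(n)); every lookup after that is O(1)
const byId = new Map(users.map(u => [u.id, u]));
const fastFind = id => byId.get(id);

console.log(fastFind(3).name); // Grace
```

For a handful of lookups the array scan is fine; the Map pays off when the same collection is queried many times.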
Do things only once
Now, back to your example, if I find myself iterating over and over and doing the same operation every time, I'd just run the filtering operation only once, at the beginning:
// during initialization
const things = [];
const notThings = [];
for (const item of arr) {
  item.thing ? things.push(item) : notThings.push(item);
}
// now every time you need to iterate through the items...
for (const a of notThings) // replaced arr with notThings
// if (a.thing) // <- no need to check this anymore
// continue;
// do a thing
And then you can freely iterate over notThings, knowing that unwanted items were already filtered out. Makes sense?
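The same single-pass split can also be written with reduce, if the functional style reads better to you (the thing property matches the question's example; the data here is made up):

```javascript
const arr = [{ thing: true }, { thing: false }, { thing: true }];

// One pass, two buckets -- same idea as the loop above, in functional form
const { things, notThings } = arr.reduce(
  (acc, item) => {
    (item.thing ? acc.things : acc.notThings).push(item);
    return acc;
  },
  { things: [], notThings: [] }
);

console.log(things.length, notThings.length); // 2 1
```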
Criticism of "for of is faster than calling methods"
Some people like to state that for of will always be faster than calling forEach(). We just cannot say that. There are lots of JavaScript interpreters out there, and for each one there are different versions, each with its particular ways of optimizing things. To prove my point, I was able to make filter() + forEach() run faster than for of in Node.js v10 on macOS Mojave:
const COLLECTION_SIZE = 10000;
const RUNS = 10000;
const collection = Array.from(Array(COLLECTION_SIZE), (e, i) => i);

function forOf() {
  for (const item of collection) {
    if (item % 2 === 0)
      continue;
    // do something
  }
}

function filterForEach() {
  collection
    .filter(item => item % 2 === 0)
    .forEach(item => { /* do something */ });
}
const fns = [forOf, filterForEach];

function timed(fn) {
  if (!fn.times) fn.times = [];
  const i = fn.times.length;
  fn.times[i] = process.hrtime.bigint();
  fn();
  fn.times[i] = process.hrtime.bigint() - fn.times[i];
}

for (let r = 0; r < RUNS; r++)
  for (const fn of fns)
    timed(fn);

for (const fn of fns) {
  const times = fn.times;
  times.sort((a, b) => (a < b ? -1 : a > b ? 1 : 0)); // BigInt-safe comparator
  const median = times[Math.floor(times.length / 2)];
  console.info(`${fn.name}: ${median}`); // fn.name, not fn.constructor.name
}
Times (in nanoseconds):
forOf: 81704
filterForEach: 32709
for of was consistently slower in all tests I ran, always around 50% slower. That's the main point of this answer: don't rely on any interpreter's implementation details, because they can (and will) change over time. Unless you're developing for embedded or high-efficiency/low-latency systems -- where you need to be as close to the hardware as possible -- get to know your algorithm complexities first.
It's not multiple times for the same operation. Your example is just manually implementing a filter method. I don't see how that is different than just calling filter.
– loctrice
Dec 31 '18 at 21:52
You could just go ahead and call filter() (and I would probably do that myself too). My point is that if you're doing the check over and over for the same collection, you should probably filter it out from the start.
– Lucio Paiva
Dec 31 '18 at 21:54
1
If we follow your assumption, then I would agree with you.
– loctrice
Dec 31 '18 at 21:55
On the other hand, if your code does that O(n) only once during the whole program execution, then go ahead and do what you're doing already - that's perfectly fine and it doesn't matter if you use filter()+forEach() or for of.
– Lucio Paiva
Dec 31 '18 at 21:56
Not sure if I made myself clear enough, so I edited my answer and added the part "Moreover, iterating more than once...". Please check it out and see if it makes sense to you.
– Lucio Paiva
Dec 31 '18 at 22:13
|
show 2 more comments
edited Jan 1 at 13:50
answered Dec 31 '18 at 21:21
Lucio Paiva
6,36523653
An easy way to see how many times each part of that statement is called would be to add log statements like so and run it in the Chrome console:
var arr = [1, 2, 3, 4];
arr.filter(a => { console.log("hit1"); return a % 2 != 0; })
   .forEach(a => console.log("hit2"));
"Hit1" should print to the console 4 times regardless in this case. If it were to iterate too many times, we'd see "hit2" output 4 times, but after running this code it only outputs twice. So your assumption is partially correct: the second time it iterates, it doesn't iterate over the whole set. It does iterate over the whole set once in the .filter, and then iterates again over the part of the set that matches the condition in the .forEach.
Another good place to look is in the MDN developer docs here, especially in the "Polyfill" section, which outlines the exact equivalent algorithm: you can see that .filter() returns the variable res, which is what .forEach would be performed upon.
So while it overall iterates over the set twice, in the .forEach section it only iterates over the part of the set that matches the .filter condition.
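The same check can be made without reading console noise by counting callback invocations instead; with a 4-element array where two items pass the filter:

```javascript
let filterCalls = 0;
let forEachCalls = 0;

[1, 2, 3, 4]
  .filter(a => { filterCalls++; return a % 2 !== 0; })
  .forEach(() => { forEachCalls++; });

console.log(filterCalls, forEachCalls); // 4 2
```

filter's callback runs once per element of the original array, while forEach's callback runs only once per surviving element, which is exactly the "whole set once, then the matching part" behavior described above.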
1
of course it iterates twice, if the first filtering returns some elements. but it's slower than the for loop. the only advantage of the second is to use callbacks with functional patterns.
– Nina Scholz
Dec 31 '18 at 19:56
1
Right, definitely slower. Just wanted to illustrate to them how they can break down the problem in the future if they want to analyze something like this real quick.
– nkorai
Dec 31 '18 at 19:59
1
I get your point, but I wasn't so much asking how the language itself worked. I think adding the console.log (and making it an actual function body) would make it harder to optimize, and create the same amount of operation even after optimization.
– loctrice
Dec 31 '18 at 20:35
answered Dec 31 '18 at 19:48
nkorai
115110
2
@Brandon Actually, the ! is required in this case, since the action is supposed to happen if a.thing is falsy (the code hits continue when a.thing is truthy in the first example). The code was correct before.
– Brian McCutchon
Dec 31 '18 at 23:25