HN Gopher Feed (2017-08-21) - page 1 of 10 ___________________________________________________________________
How JavaScript works: inside the V8 engine
300 points by zlatkov
https://blog.sessionstack.com/how-javascript-works-inside-the-v8-engine-5-tips-on-how-to-write-optimized-code-ac089e62b12e
___________________________________________________________________
fdw - 5 hours ago
If you're into V8 internals, I'd recommend watching these talks by
Franziska Hinkelmann, a V8 engineer at Google:
https://www.youtube.com/watch?v=1kAkGWJZ6Zo ,
https://www.youtube.com/watch?v=B9igDWV5ZUg and
https://www.youtube.com/watch?v=p-iiEDtpy6I&t=606s . She's also
recently started blogging at https://medium.com/@fhinkel
[deleted]
yanowitz - 4 hours ago
Interesting article -- I'd love to see one just on GC. I just
downloaded the latest node.js sources and v8 still has a call to
CollectAllAvailableGarbage in a loop of 2-7 passes. It does this if
a much cheaper mark-and-sweep fails. Under production loads, that
would occasionally happen. This led to pause-the-world GC of
3600+ms with v8, which was terrible for our p99 latency. The fix
still feels weird -- we just commented out the fallback strategy
and saw much tighter response time variance with no increase in
memory footprint (RSS). I never submitted a patch, though, because
although it was successful for our workload, I wasn't sure it was
generally appropriate (even exposed as a runtime flag), and I left
the job before I could do a better job of running it all down.
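For what it's worth, newer Node versions can surface these pauses
from JS via perf_hooks GC entries (the availability and the 100ms
threshold here are assumptions, not part of the original post); a
minimal sketch:

    // gc-watch.js -- log unusually long GC pauses
    const { PerformanceObserver } = require('perf_hooks');

    const obs = new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        // entry.duration is the pause length in milliseconds
        if (entry.duration > 100) {
          console.warn(`long GC pause: ${entry.duration.toFixed(1)}ms`);
        }
      }
    });
    obs.observe({ entryTypes: ['gc'] });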
tracker1 - 3 hours ago
I've had a similar but different issue on a server that ran a
lot of one-off node scripts for things ranging from ETL to queue
processing. I found that I wanted to force GC after every item, or
every N items, because memory could bloat out a lot before GC
would happen and then pause for several seconds... or potentially
starve out peer processes. Fortunately, that was already in the
box, though behind a runtime flag.
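The flag in question is presumably --expose-gc, which makes a
manual collection hook available as global.gc(); a rough sketch of
the every-N-items pattern (the file name and batch size are made
up):

    // run with: node --expose-gc etl-job.js
    const BATCH = 1000; // hypothetical batch size

    let processed = 0;
    function handleItem(item) {
      // ... process one ETL/queue item ...
      if (++processed % BATCH === 0 && typeof global.gc === 'function') {
        global.gc(); // force a full collection every N items
      }
    }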
cm2187 - 6 hours ago
I was wondering, given that 90% of the javascript in browsers must
be standard libraries (jquery, bootstrap & co) wouldn't it make
sense for google to hash the source code for every published
version of these libraries, compile those statically using full
optimisation and ship the binaries as part of their updates to the
browser, so that you only have to compile the idiosyncratic part of
the code?
fenomas - 5 hours ago
What are you suggesting "compile those statically using full
optimization" would mean in this context? The point of the
article is that v8's optimizations work by watching the code
execute. It can inline functions because it saw where they got
called from; it can turn property accesses into fixed-offset
lookups because it saw what objects the accesses happened on, and
so on. (Unless you're talking about asm.js-style optimizations,
but then the answer is presumably to use asm.js...)
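To make that concrete, here's a toy sketch of shape-based
feedback (the hidden-class behavior described is the usual V8
story, not verified against any particular version):

    function getX(p) { return p.x; }

    // Both calls see objects with the same hidden class, so the
    // engine can specialize p.x into a fixed-offset load:
    getX({ x: 1, y: 2 });
    getX({ x: 3, y: 4 });

    // Different property order means a different hidden class;
    // the access site goes polymorphic and needs slower dispatch:
    getX({ y: 5, x: 6 });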
cm2187 - 4 hours ago
I don't understand dynamic language compilation well enough to
answer that, but in principle, having 1 billion devices
consuming CPU cycles to JIT-recompile the same javascript paths
over and over seems horribly wasteful.
chrisseaton - 3 hours ago
> recompile the same javascript paths over and over
The point of the person you are replying to is that they aren't
the same paths when you take into account the other code and data
in the rest of the application, which affects how the code even
in the shared libraries is compiled.
cm2187 - 3 hours ago
They are mostly the same path: if I select something based on an
ID to add a function as an event handler, it is the same path,
just with a different argument. I don't know, I am outside my
area of competence here. But in a world where technological
advances in computing have less to do with increasing computing
capacity than with reducing power consumption, whether on mobile,
laptops or servers, a jquery library that needs recompiling on
every execution feels like the non-recyclable plastic bag of
computing.
chrisseaton - 3 hours ago
> like if I select something based on an ID to add a function as
an event handler
Nah, it's more complicated than that in reality (I work on
dynamic languages and have a PhD in them). You just want to
select based on an ID, but in order to make that operation fast,
JavaScript would normally pass around information about what you
want to select, what possible values it could have, whether it's
allocated on the heap or you somehow stored it somewhere else,
and a whole host of other things. You could say 'well, don't do
those optimisations then', but then you just have slow
unoptimised code, and the point was to make this fast.
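A rough sketch of why the 'same' library path isn't really the
same (the names and handlers here are illustrative only):

    function onClick(selector, handler) {
      document.querySelector(selector).addEventListener('click', handler);
    }

    // App A: the handler closes over and bumps a counter
    let count = 0;
    onClick('#save', () => { count += 1; });

    // App B: the handler allocates and serializes an object
    onClick('#send', () => {
      fetch('/api', { method: 'POST', body: JSON.stringify({ n: 1 }) });
    });

    // The optimised code for onClick depends on which handler
    // gets inlined and what types flow through it, so two
    // applications compile the same source differently.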
dfox - 4 hours ago
Most of what jQuery does could be moved into standard browser
APIs and then presumably implemented in native code. I feel that
implementing it that way would certainly be significantly faster,
because any performance loss from the impossibility of
trace-directed optimizations is outweighed by a significant
reduction in the number of crossings between native and JS code,
which have completely different memory models. The fact that the
VM/native transition is expensive is exactly the reason why
things like WebAssembly cannot directly manipulate the DOM. But
this could not be implemented by writing a C++ class that gets
exported to JS code as $ when something that looks like jQuery is
loaded. In JS you can load jQuery and then monkey-patch it, which
will break when you use this kind of "transparent" optimization.
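For example, patching like this is perfectly legal JS
(jQuery.fn.text is real; the wrapper is illustrative), so the
browser can't silently substitute a native $:

    // Load jQuery normally, then monkey-patch one of its methods:
    const originalText = jQuery.fn.text;
    jQuery.fn.text = function (...args) {
      console.log('text() called'); // user-added instrumentation
      return originalText.apply(this, args);
    };

A hash-matched native implementation would have to detect and
honor every such patch to stay observably identical.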
erikpukinskis - 5 hours ago
Adding an entire new layer of complexity seems like a bad
tradeoff. You'd break the debugger. You'd break the security
model.
colordrops - 5 hours ago
I'm thinking it's because they already have mounds of tech debt,
and working through that would better address long-term issues if
they had the time for it. Take a look at the chromium bug
tracker. It's overwhelming.
sand500 - 6 hours ago
Maybe ship popular libraries as web assembly?
mhh__ - 6 hours ago
Could be done, although it would have to be done carefully.
sjrd - 6 hours ago
Right, because wasm is there to solve all our performance
problems, isn't it? And how exactly is your jquery.wasm going to
access the DOM? Or expose methods to your application, for that
matter? wasm is not a drop-in replacement for JS.
adamnemecek - 4 hours ago
Eventually the DOM might be replaced with some WebGL
renderer.
markdog12 - 5 hours ago
There are plans for WebAssembly modules to directly access
the DOM: https://github.com/WebAssembly/gc/blob/master/proposals/gc/O...
Spivak - 6 hours ago
No, but if we're being honest, it's exactly what everyone
wants it to be. A bytecode for the web.
Fifer82 - 6 hours ago
I really don't know much about wasm. I just wondered, though,
from a bird's eye view: does using wasm effectively give you
control of everything within the "frame" of your browser? Can I
get rid of the DOM (boilerplate aside), get rid of CSS? For
example, HTMLCanvasElement is a flat surface; I can do what I
want within it as a scene graph renderer. Could I do that with
wasm? Could I invent my own 2D entities (x, y, width, height)?
My own 2D constraints that resize like Apple devices do, etc.?
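A toy sketch of that idea with today's canvas API (the entity
shape is entirely made up); wasm would compute the same data and
hand it to the canvas through JS glue:

    const ctx = document.querySelector('canvas').getContext('2d');

    // Hypothetical 2D entities: position, size, and a fill color
    const entities = [
      { x: 10, y: 10, width: 80, height: 40, color: 'teal' },
      { x: 120, y: 30, width: 50, height: 50, color: 'coral' },
    ];

    function render() {
      ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
      for (const e of entities) {
        ctx.fillStyle = e.color;
        ctx.fillRect(e.x, e.y, e.width, e.height);
      }
      requestAnimationFrame(render);
    }
    render();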
skybrian - 5 hours ago
It's still in the same sandbox as JavaScript. You don't get any
new APIs. There's only going to be a speedup on CPU-intensive
things.
chrisseaton - 6 hours ago
You wouldn't be able to optimise the standard libraries very well
without any information about the program calling them and the
data they are using, though. I guess it would turn out to be a
lot slower in practice.
jFlatz - 6 hours ago
I feel like this is one of those things that I have been
thinking to myself for a little while now: "Why hasn't someone
already solved this problem?" In my mind there is no reason for
my browser to download 26 copies of the same jQuery file.
ChrisSD - 5 hours ago
If [websites] use a common CDN then this can be avoided.
[edited, with thanks to delinka]
delinka - 5 hours ago
If the sites you're visiting use a common CDN then this can
be avoided.
tracker1 - 3 hours ago
This was going to be my comment as well... I tend to prefer
cdnjs for as much as possible, since it seems to be the most
diverse CDN in terms of supported libraries. Though I've been
trending away from loading most libraries separately and toward
bundling... I may need to think about breaking things out again.
For the most part, however, most of my bundle sizes are pretty
small. React in dev, Preact for prod is the biggest mix...
checking bundle size before/after adding a module (some will add
a huge dependency tree). There's no substitute for being
judicious... and imho, even if you have to look it up frequently,
using the DOM is usually better than loading in jQuery... even if
you center around Array.from(document.querySelectorAll('...')) as
your main usage.
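e.g. the jQuery-free version of a typical class toggle (the
selector and class name are placeholders):

    // jQuery: $('.item').addClass('active')
    Array.from(document.querySelectorAll('.item'))
      .forEach((el) => el.classList.add('active'));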
cpeterso - 5 hours ago
Here is a proposal to use the hashes from