I prefer passing the context explicitly instead of using implicit techniques like this one. I have previously worked with codebases that used a similar method (based on Node domains), and I identified the following weaknesses of such approaches:

1) code becomes more dependent on a specific runtime (on NodeJS in this case)
2) it's hard to reason about the inputs and outputs of a particular module or function
3) dependencies may use the context in an unpredictable way
4) worse performance (sometimes)

I see how namespaces (as well as proper encapsulation/abstraction from the runtime) address some of these weaknesses, but I'd like to discuss how successful this approach is for larger projects and teams. Any insights into this?
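
For illustration, a minimal sketch of what I mean by passing the context explicitly (all names here are made up, it's not code from the article):

```js
// A sketch of explicit context passing: every function takes `ctx` as its first
// argument, so inputs and outputs are visible in the signatures.
const crypto = require('crypto');

function log(ctx, message) {
  console.log(`[trace ${ctx.traceId}] ${message}`);
}

async function loadUser(ctx, id) {
  log(ctx, `loading user ${id}`);
  return { id, name: 'Alice' }; // stand-in for a real DB call
}

async function handleRequest(userId) {
  const ctx = { traceId: crypto.randomBytes(8).toString('hex') };
  const user = await loadUser(ctx, userId);
  log(ctx, `loaded ${user.name}`);
}

handleRequest(42);
```
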
4) worse performance (sometimes)

It depends. The benchmarks I saw showed around a 10-15% degradation, which is not too much to pay for having trace IDs.


1) code becomes more dependent on a specific runtime (on NodeJS in this case)

Do we really have any other solid alternative to run JS server-side?


2) and 3)

Maybe. It's hard to argue, as it's very subjective. Generally I agree: I wouldn't use CLS for anything complex, but storing and using trace IDs and things like that is exactly what I'd be using CLS for.
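
For example, a minimal sketch of the trace-ID case with a cls-hooked-style namespace (assuming the cls-hooked package; the names are illustrative):

```js
// A sketch of the same trace-ID idea with an implicit, cls-hooked-style namespace.
const cls = require('cls-hooked'); // assumes the cls-hooked package is installed
const crypto = require('crypto');

const ns = cls.createNamespace('request');

// Deep inside the call stack, no ctx argument is needed:
function logWithTrace(message) {
  console.log(`[trace ${ns.get('traceId')}] ${message}`);
}

function handleRequest(userId) {
  ns.run(() => {
    ns.set('traceId', crypto.randomBytes(8).toString('hex'));
    // the context survives the async hop
    setTimeout(() => logWithTrace(`processed user ${userId}`), 10);
  });
}

handleRequest(1);
handleRequest(2); // each request sees its own traceId
```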

IMHO async context tracking is the future, and we’re all waiting impatiently for the Node core engineers (or the community) to implement it in a native and more performant way (the async_hooks module is almost pure JavaScript, the C++ support for it is pretty shallow; I recommend reading the library source code in node*/lib/*/async_*.js).
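
For reference, a minimal sketch of the bare async_hooks API in question: the init/destroy callbacks fire for every async resource Node creates (note that logging inside a hook has to be synchronous):

```js
// The bare async_hooks API: init/destroy fire for every async resource.
const async_hooks = require('async_hooks');
const fs = require('fs');

const hook = async_hooks.createHook({
  init(asyncId, type, triggerAsyncId) {
    // console.log is itself async, so use a synchronous write inside hooks
    fs.writeSync(1, `init    ${type} id=${asyncId} trigger=${triggerAsyncId}\n`);
  },
  destroy(asyncId) {
    fs.writeSync(1, `destroy id=${asyncId}\n`);
  },
});

hook.enable();
setTimeout(() => fs.writeSync(1, 'timer fired\n'), 10);
```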

Keep a few things in mind in the meantime:
1. The implementation of async_hooks is not stable in at least Node 8 (the process sometimes crashes when an unhandled exception is thrown). I don’t know whether it has been fixed in later versions yet.
2. If we write a JS program that does nothing but schedule a series of process.nextTick() calls, we’ll notice that the performance degradation is closer to 10x than to 10% (on my MacBook, Node can do ~1.2M async context switches per second without async_hooks enabled and only 150K switches with async_hooks turned on, even if the init/destroy callbacks are empty). Try such a benchmark; see the sketch after this list.
3. Notice that for Promises, the async_hooks destroy callback is NOT called immediately after the promise is resolved, but only at the garbage-collection stage, i.e. it is delayed. So contextual objects in the namespace tend to outlive the request before being collected (it’s not a memory leak, though, just lots of objects hanging around, pending destruction). This has been discussed in issues on the Node GitHub, but nothing has been fixed or improved yet (since it’s hard to do because of the way context ID processing is organized in the C++ code).
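
A rough sketch of such a benchmark (the exact numbers will of course vary by machine and Node version; the HOOKS variable and iteration count are just illustrative):

```js
// Times a long chain of process.nextTick() calls; run it twice, with and
// without HOOKS=1, and compare the reported rates.
const async_hooks = require('async_hooks');

if (process.env.HOOKS === '1') {
  // empty callbacks, so we measure only the hook machinery itself
  async_hooks.createHook({ init() {}, destroy() {} }).enable();
}

const ITERATIONS = 1e6;
let remaining = ITERATIONS;
const start = Date.now();

function tick() {
  if (--remaining === 0) {
    const seconds = (Date.now() - start) / 1000;
    console.log(`${Math.round(ITERATIONS / seconds)} nextTick calls per second`);
    return;
  }
  process.nextTick(tick);
}

tick();
```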

I also recommend looking at a very tiny, minimal implementation of the same idea on top of bare async_hooks: google for “npm contexty”. It is a good first step towards a deep understanding of how async_hooks works under the hood.
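
This is not contexty's actual API, just a sketch of the core idea such libraries are built on: each new async resource inherits the context of whatever triggered it.

```js
// Each new async resource inherits the context of whatever triggered it.
const async_hooks = require('async_hooks');

const contexts = new Map(); // asyncId -> context object

async_hooks.createHook({
  init(asyncId, type, triggerAsyncId) {
    if (contexts.has(triggerAsyncId)) {
      contexts.set(asyncId, contexts.get(triggerAsyncId));
    }
  },
  destroy(asyncId) {
    contexts.delete(asyncId);
  },
}).enable();

function runWithContext(ctx, fn) {
  contexts.set(async_hooks.executionAsyncId(), ctx);
  fn();
}

function currentContext() {
  return contexts.get(async_hooks.executionAsyncId());
}

// The traceId set before the async hop is still visible after it:
runWithContext({ traceId: 'abc123' }, () => {
  setTimeout(() => console.log(currentContext().traceId), 10); // -> abc123
});
```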

It is based on async_hooks and still slows everything down like crazy (because it basically runs a piece of JS code on each promise resolution). In v13 it’s faster than ever, but there is still a 10-30% tax (depending on the business logic).
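
A small sketch illustrating why the tax scales with promise churn: once any hook is enabled, its callbacks (promiseResolve here) fire for every promise the program creates:

```js
// Counts how often the promiseResolve hook fires for a loop of awaits.
const async_hooks = require('async_hooks');

let resolved = 0;
async_hooks.createHook({
  promiseResolve() { resolved++; },
}).enable();

async function work() {
  for (let i = 0; i < 1000; i++) {
    await Promise.resolve(i); // each iteration creates and resolves promises
  }
}

work().then(() => console.log(`promiseResolve fired ${resolved} times`));
```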
