
Node.js vs workers — A comparison of two JavaScript runtimes [eng]

Talk presentation

Workers is the open source kernel of the Cloudflare Workers platform, and despite being built around V8, and running JavaScript and WebAssembly, it is quite different from Node.js. This talk will explore the differences and similarities and hopefully give you a bit more insight into how both operate.

James M Snell
Cloudflare
  • James is a long-time contributor to open source and open standards on the web
  • He's been a core contributor to Node.js and a member of the Node.js Technical Steering Committee since 2015
  • Works on the Cloudflare Workers runtime

Talk transcription

Thank you, Anna, for the introduction. I am James, and I am pleased to be here. As mentioned earlier, I have been actively involved with various runtimes, particularly Node, since around 2015, where I have served as one of the core contributors. I am also a member of the Technical Steering Committee alongside several other individuals. My responsibilities cover various areas of the Node ecosystem, including but not limited to URL, AbortController, Web Crypto, and web streams. While I initially introduced these components, their refinement and enhancement over the years have been a collaborative effort involving numerous contributors. I often joke that my role is to introduce the bugs, which others then help to resolve.

Furthermore, I have contributed to initiatives such as the HTTP/2 and HTTP/3 implementations in Node. In addition to my involvement with Node, I also work on the workers runtime, specifically Cloudflare Workers, where I hold the position of principal engineer. In this capacity, I contribute to both the production environment and the open-source core, known as workerd, which serves as the kernel responsible for executing JavaScript and WebAssembly within the runtime. I am also one of the co-founders of WinterCG (the Web-interoperable Runtimes Community Group), which facilitates collaboration among runtimes such as Node, Deno, Bun, and others on common standard API implementations.

The primary objective of this talk is to offer insight into the internal mechanisms of Node by drawing comparisons with workers. Although both runtimes are built on V8 and execute JavaScript, they exhibit notable differences. First, the similarities: both Node and workers use V8 for JavaScript and WebAssembly execution and employ a blend of C++ and JavaScript in their implementations. They also offer implementations of Web Platform APIs, including the WinterCG Common Minimum API, aimed at facilitating common tasks across different environments.

However, beyond these commonalities, there are substantial differences between Node and workers, starting with the process model. When a Node process starts, several things happen: a main thread is established along with a pool of I/O and garbage-collection threads. These auxiliary threads do not execute JavaScript code; instead, they await instructions for tasks such as file operations and network requests, or garbage-collection triggers from V8.

In contrast, JavaScript execution in Node primarily occurs within the main thread, which hosts an event loop bound to it. Additionally, Node introduces the concept of an environment, which represents the internal state of the thread and encapsulates Node's internal functionalities. This overview underscores the fundamental differences in process models between Node and workers, laying the groundwork for further exploration into their respective internal workings.

In the context of Node's internal workings, there is a pivotal structure known as the environment or realm. Within this realm, numerous strings and objects are created and stored in memory for performance optimization purposes. Crucially, this environment encompasses the V8 isolate and the V8 context. In Node, each main thread is associated with a single V8 isolate throughout its entire lifecycle. The V8 isolate serves as the component responsible for executing JavaScript code, essentially functioning as the virtual machine within V8. Additionally, the V8 context maintains the global state and facilitates operations such as memory allocations for JavaScript.

It's noteworthy that the V8 isolate and the main context persist for the duration of the thread's existence. All JavaScript code executed within Node, including the standard library and user-defined modules, operates within this single V8 isolate and context. This distinction is particularly significant when comparing Node with workers. In Node, the main thread comprises one V8 isolate and context, while worker threads are separate instances with their own isolates, realms, and sets of JavaScript code. Multiple worker threads can coexist within a Node process, each functioning as an independent Node environment.

Furthermore, the event loop in Node serves a crucial role in monitoring I/O operations. When an I/O operation completes, the event loop triggers the corresponding callback to execute user code. This mechanism ensures efficient handling of asynchronous tasks within the Node environment. The lifecycle of the event loop in Node begins with the execution of entry point JavaScript code. If this code initiates I/O operations, the event loop continues running until these operations are completed. However, if no I/O operations are scheduled, the Node process may exit promptly after executing the entry point code.
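As a minimal sketch (not code from the talk, and the file names are illustrative), the difference looks like this: a script with nothing scheduled exits as soon as the entry point finishes, while a pending timer keeps the event loop, and therefore the process, alive:

```js
// exits-immediately.js: nothing is scheduled on the event loop,
// so the process exits as soon as the entry-point code finishes.
console.log('done');

// stays-alive.js: the pending timer is registered with the event loop,
// so the process keeps running until the callback has fired.
setTimeout(() => console.log('one second later'), 1000);
```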

Ultimately, understanding the intricacies of Node's event loop and its relationship with the V8 isolate and context is essential for comprehending Node's performance characteristics. This foundational understanding forms the basis for optimizing Node applications and addressing performance-related challenges effectively. In Node, the event loop serves as a critical component for managing asynchronous tasks efficiently. Conceptually, it functions as a perpetual for loop, continuously iterating and processing various tasks. Throughout its execution, the event loop encounters points where it invokes callbacks, often leading to the execution of C++ functions.

When the event loop invokes a callback, it calls into a C++ function and blocks until that function returns. That C++ function, in turn, hands control to JavaScript, allowing the V8 isolate to execute JavaScript code. This JavaScript execution can encompass a variety of actions, such as scheduling additional I/O operations, logging to the console, or resolving promises. When control returns from JavaScript to C++, Node handles microtasks, including tasks scheduled via process.nextTick() and resolved promises, draining those queues entirely before allowing the event loop to resume its operation.
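A small sketch of that ordering (not from the talk): the process.nextTick() queue and the promise microtask queue are both drained before control returns to the event loop.

```js
setTimeout(() => console.log('4: timer callback, run by the event loop'), 0);

Promise.resolve().then(() => console.log('3: promise microtask'));

process.nextTick(() => console.log('2: process.nextTick'));

console.log('1: synchronous entry-point code');

// Prints 1, 2, 3, 4. The nextTick and microtask queues drain completely
// after the synchronous code finishes, before the event loop gets a
// chance to run the timer callback.
```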

Considering the event loop's behavior, it's crucial to understand that while JavaScript is running, the event loop remains blocked, preventing Node from handling other tasks such as accepting new requests, completing I/O operations, or executing additional cryptographic operations. Therefore, optimizing application performance in Node involves minimizing event loop delays by ensuring that tasks execute within short, manageable intervals. This emphasis on event loop efficiency is particularly relevant in Node applications acting as web servers. In such scenarios, the initial entry point JavaScript code typically initializes and starts the server, thereby scheduling I/O tasks on the event loop to await incoming socket connections. This illustrates the pivotal role of event loop management in maintaining the responsiveness and performance of Node-based web servers.

In the Node environment, handling incoming socket connections involves triggering a callback function within the event loop. This callback function, written in JavaScript, is responsible for parsing the HTTP request, validating headers, and determining the appropriate handling for the request. However, the execution of these tasks within the event loop introduces overhead and temporarily halts the Node process from accepting additional requests until the current processing is complete.

To enhance Node's performance, it's imperative to minimize the time spent on tasks such as header parsing and request routing. Frameworks like Express or Fastify achieve faster processing by optimizing these critical components, thereby reducing the event loop delay. The event loop in Node functions as a multiplexer, allocating time for processing various tasks, including handling multiple requests simultaneously. This time-sharing mechanism is crucial for achieving optimal performance but requires careful management to prevent bottlenecks.

The throughput of Node applications, measured in requests per second, relies heavily on the efficiency of individual callbacks. If callbacks take too long to execute, it impedes the event loop's ability to move on to the next task, potentially leading to performance degradation and errors. In one instance, excessive blocking of the event loop due to a prolonged processing task resulted in timeout errors, even though backend servers had already responded with data. This highlights the importance of keeping callbacks small and fast to prevent event loop blocking and ensure timely task execution.
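As a contrived sketch of that failure mode (not the actual incident), a handler that does long synchronous work stalls everything else sharing the same event loop:

```js
const http = require('node:http');

http.createServer((req, res) => {
  // Long synchronous, CPU-bound work: while this loop runs, the event
  // loop cannot accept new connections, fire timers, or complete I/O,
  // so other in-flight requests can hit their timeouts.
  let total = 0;
  for (let i = 0; i < 5_000_000_000; i++) total += i;
  res.end(`done: ${total}`);
}).listen(3000);
```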

Additionally, in the Node environment, all code is considered trusted, regardless of whether it runs within the main thread or in worker threads. Worker threads are permitted to share memory and exchange messages without any trust boundaries. Moreover, there are no trust boundaries between different Node projects or between the Node process and the operating system, allowing access to the file system based on the user account privileges. In summary, optimizing Node performance involves minimizing event loop delays, ensuring efficient callback execution, and understanding the trust model and communication mechanisms within the Node environment. These considerations are crucial for achieving high throughput and responsiveness in Node applications.
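For example, Node's worker_threads module lets a worker thread mutate memory that the main thread also sees; the file names below are illustrative:

```js
// main.js
const { Worker } = require('node:worker_threads');

const shared = new SharedArrayBuffer(4);
const view = new Int32Array(shared);

const worker = new Worker('./child.js', { workerData: shared });
worker.on('message', () => {
  // The worker wrote directly into memory owned by this thread;
  // there is no trust boundary between them.
  console.log('value written by worker:', view[0]); // 42
});

// child.js
const { workerData, parentPort } = require('node:worker_threads');
new Int32Array(workerData)[0] = 42; // same memory as the main thread
parentPort.postMessage('done');
```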

In the Node environment, all HTTP request dispatching occurs within the main thread, where JavaScript code is responsible for parsing and processing incoming requests. This includes tasks such as parsing HTTP headers and determining how to handle each request. Optimizing the performance of these tasks is crucial, as delays can hinder the responsiveness of the Node application. Node's versatility extends beyond HTTP request handling, allowing it to fulfill various roles beyond serving as an HTTP server. However, this flexibility also makes Node more susceptible to event loop delays, particularly when executing computationally intensive tasks.

In contrast, workers, such as those in Cloudflare's environment, are designed with a narrower focus, primarily catering to HTTP request handling. A worker is essentially a JavaScript application deployed to a global network of servers, enabling requests to be handled across multiple locations worldwide. Unlike Node, which can handle a wide range of tasks, workers are primarily oriented towards processing HTTP requests and performing related tasks, such as scheduled tasks or logging. A worker application consists of modules and bindings. Modules can include JavaScript code, WebAssembly applications, or static data, while bindings provide access to specific capabilities or features of the runtime environment, such as fetching data from private networks. The main module of a worker typically exports entry point handlers, such as a fetch handler, which handles incoming requests and generates responses.

Unlike Node, where execution begins with the provided file (e.g., node foo.js), a worker's setup code runs upon creation but is restricted from performing I/O operations. Instead, the runtime environment manages the invocation of entry point handlers based on incoming requests, ensuring that I/O operations are only performed within the context of request handling. In terms of process model, when a worker process starts, it begins with a single thread responsible for bootstrapping and initializing the environment. This single-threaded nature simplifies the execution model of workers compared to the multi-threaded approach in Node. Overall, while both Node and workers serve as platforms for executing JavaScript code, they differ in their design and focus, with Node offering greater flexibility but also requiring careful management of event loop delays, while workers are more specialized for handling HTTP requests efficiently.
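A minimal sketch of that structure follows (nothing here is taken from the talk, and the route table is just an illustration): setup code at module scope may build data structures but cannot perform I/O, while the exported fetch handler is invoked by the runtime for each request, with bindings passed in via env.

```js
// Module-scope setup code runs when the worker is created. It may
// prepare data structures, but performing I/O here (for example,
// calling fetch()) is not permitted.
const routes = new Map([['/hello', () => new Response('hello world')]]);

export default {
  // Entry point handler: the runtime calls this for each incoming request.
  // `env` carries the worker's bindings; `ctx` carries request-scoped helpers.
  async fetch(request, env, ctx) {
    const { pathname } = new URL(request.url);
    const handler = routes.get(pathname);
    return handler ? handler() : new Response('not found', { status: 404 });
  },
};
```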

In contrast to Node, where a single main thread with one event loop handles all requests, the worker process operates differently. Upon starting, the worker process initializes a main thread, which begins listening for incoming requests after completing its bootstrap process. When a request is received and processing begins, another thread is spawned to await the next incoming request. This ensures that there is always an available thread to handle incoming requests, even while one request is being processed. This multi-threaded approach differs from Node's single-threaded event loop model, where the event loop cannot handle additional tasks while processing a request.

Within a worker's connection thread, an event loop manages I/O operations such as reading stream data and parsing HTTP headers. However, before executing any JavaScript code, the thread performs HTTP parsing and routing to determine which worker should handle the request. This pre-processing of requests distinguishes workers from Node, where such tasks are typically handled within JavaScript code upon receiving the request.

Once the appropriate worker is identified, its fetch handler function is invoked to handle the request. This function executes for a single request, after which the connection is closed, and the thread resumes waiting for the next incoming request. Importantly, in a worker process, the isolate (the execution environment for JavaScript code) is attached to the worker rather than the thread. This allows multiple isolates to be temporarily locked to, and unlocked from, a single thread as it handles multiple requests. In a worker process, multiple threads can be spawned to handle concurrent requests, potentially thousands of threads within a single process. This contrasts with Node's single-threaded event loop, where additional concurrency is typically achieved through the use of worker threads managed by the developer.

Understanding these differences between worker processes and Node's event loop model provides insight into how requests are processed internally and how concurrency is managed in each environment. While Node's event loop executes code sequentially, potentially leading to blocking behavior during request processing, workers' multi-threaded approach ensures that incoming requests can be handled concurrently without blocking the processing of subsequent requests.

So, to summarize, let's discuss the key differences between workers and Node. With Node, the V8 isolate is bound to one thread for its entire lifetime. Conversely, with workers, the V8 isolate is bound to the worker and runs on whichever thread is currently handling a request for that worker, albeit only one at a time. In Node, the thread will continue to run as long as there is I/O scheduled, such as a pending timer or a server waiting for requests; the Node process will persist as long as such tasks are ongoing. With workers, on the other hand, the process runs indefinitely, awaiting requests until manually stopped, regardless of what the event loop is doing.

In Node, tasks like receiving an HTTP request, parsing it, and determining how to route it are executed in JavaScript, albeit using some C++ code for operations like HTTP parsing. Because these tasks run within JavaScript, often through callbacks that interact with C++, it is essential to avoid blocking the event loop for extended periods. In contrast, with workers, receiving and processing the request happens before any JavaScript executes, allowing the event loop to proceed without interruption if necessary. Asynchronous operations are built on a promise model at the C++ level, which improves efficiency, and each request can potentially be handled by a different thread. In Node, the event loop and promise microtasks are distinct mechanisms: callbacks are associated with the event loop, while promises operate independently, getting resolved and queued as microtasks as necessary.

With workers, the event loop revolves entirely around promises, resolving tasks and handling microtasks accordingly. Unlike Node, there is no process.nextTick() or similar queue; instead, the event loop drives the microtask queue in JavaScript. Another notable difference is that in Node, the thread will not exit as long as there are pending I/O tasks in the event loop. With workers, by contrast, once a request completes, all pending I/O tasks related to that request are canceled.

In Node, any scheduled I/O operation will cause the process to wait until it completes. In workers, I/O operations are canceled unless the runtime is explicitly instructed to wait for them, which can be confusing for developers unfamiliar with this behavior. Moreover, in Node, all code in the process is inherently trusted. In contrast, workers treat every worker as a trust boundary, implementing strict sandboxing to prevent sharing of state or memory between workers. So while Node operates on a single-process, single-tenant model, workers prioritize isolation and security, treating each worker as an independent entity.

Node operates as a single application for the entire process. With workers, by contrast, a single process is always multi-tenant: each application, and every worker within it, is a trust boundary, enabling thousands of these workers to execute concurrently. This distinction is crucial to understanding the differing perspectives on what constitutes an application in Node versus workers.

Let's examine an example. This serves as the entry point for a Node server. We import and create the server, configure it to handle requests, and instruct it to listen for incoming connections. When a connection is received, the server executes the specified code. Additionally, a timeout is set to print "hello" after a second, followed by sending "hello world" as the response. In Node, the timeout will always fire, even after the response has been sent, because the process keeps running: Node continues listening for connections unless explicitly stopped.
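The slide itself is not reproduced here, but the example is roughly the following sketch, reconstructed from the description:

```js
const http = require('node:http');

const server = http.createServer((req, res) => {
  // Print "hello" a second later. In Node this timer always fires,
  // because the process keeps running after the response is sent.
  setTimeout(() => console.log('hello'), 1000);
  res.end('hello world');
});

// Schedules I/O on the event loop and waits for connections indefinitely.
server.listen(3000);
```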

All these operations, including server creation, configuration, header parsing, and request handling, are executed in JavaScript within Node. Consequently, it's imperative for the request handler function to return promptly to facilitate the processing of subsequent requests, as the event loop remains engaged while JavaScript is running. In contrast, the worker's approach is simpler. The focus is solely on the JavaScript code's execution. For instance, we export a fetch handler, set a timer, and specify the response, without needing to configure or instruct the server to listen explicitly. The runtime handles these tasks autonomously. Each request may be handled by a different thread, enabling concurrent processing without blocking incoming requests.
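Reconstructed the same way, the worker version is just the exported handler; there is no server setup and no listen call:

```js
export default {
  async fetch(request) {
    // Without waitUntil(), this timer is tied to the request and is
    // canceled once the response completes (see below).
    setTimeout(() => console.log('hello'), 1000);
    return new Response('hello world');
  },
};
```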

Importantly, in workers, timeouts are canceled upon response completion unless the runtime is instructed otherwise. All IO operations within the worker are associated with the request, and upon its completion, any pending IO tasks are canceled to proceed to the next task. In summary, while the two runtimes share similarities, understanding these fundamental differences in task processing is essential for optimizing performance and compatibility across platforms.
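In the Workers runtime, the way to instruct it otherwise is ctx.waitUntil(); here is a sketch of keeping request-scoped work alive past the response:

```js
export default {
  async fetch(request, env, ctx) {
    const work = new Promise((resolve) =>
      setTimeout(() => {
        console.log('hello'); // would otherwise be canceled with the request
        resolve();
      }, 1000)
    );

    // Tell the runtime to keep the request context alive until `work`
    // settles, rather than canceling the pending timer when the
    // response completes.
    ctx.waitUntil(work);

    return new Response('hello world');
  },
};
```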
