Leveraging the Event Loop for Blazing-Fast Applications! [eng]
Talk presentation
Can the Microtask Queue help you improve your performance by 100x? It turns out it can, but how? JavaScript is single-threaded, yet it provides a really powerful Event Loop to allow non-blocking operations, so let's try to tame this beast together and get the most out of it! As I like to say: The Event Loop is the only infinite loop you'll love.
Talk transcription
Hi Cristina, thank you for introducing me. So hi everyone and welcome to my talk, as we said, Leveraging the Event Loop for Blazing-Fast Applications. So I'm Michael Di Prisco, you can find me online as cadienvan. I'm from Italy, and I'm currently a senior backend developer at Jointly. This is the only good photo I will ever use in my lifetime, so thanks to my wedding photographer for it. I'm also a LogRocket Content Advisory Board member, and I like to call myself a waster of NPM storage because I'm an open source contributor, mainly in the JavaScript field (NPM), and a serial pull requester. But I'm not the topic of this talk, so I suggest we go ahead and start our journey, as we have a lot to cover.
So let's start with a really bad joke, but someone had to make it at a JavaScript conference. So, I promise you will learn something, and I'm sacrificing my credibility for just this joke. So at least we will try to learn something today. Let's start with our table of contents. What are we going to talk about today? First things first, what is the event loop? We will briefly talk about the event loop in JavaScript. Our second topic will be the Microtask queue; specifically, we will go on a deep dive into the Microtask queue. And the third thing will be a live demo, where we will apply what we learned before and put it inside a simple application.
So I have neither the expertise nor the time to extensively talk about the event loop and to properly connect all the dots about how it works. So please excuse me for taking shortcuts and abstracting many concepts. Today we will concentrate on how the event loop works at a high level, and mainly on the usage of the Microtask queue to provide performance improvements in our web applications. If you really want to know more about the event loop and how it works in different runtimes (more about runtimes in a couple of minutes), please search for the awesome presentation "In The Loop" by Jake Archibald, or the incredible "Create Your Own JavaScript Runtime" by Erick Wendel.
So let's start with a couple of premises, one step back before starting our journey. The first question we have to answer is: is JavaScript single-threaded? Yes and no, but yes. There has been a lot of debate over the years about JavaScript being single- or multi-threaded and how this language works. Yet the answer still is yes, JavaScript is single-threaded. Web workers and service workers in the browsers, or child processes and forks in Node and Deno, are APIs provided by those runtimes. But the ECMAScript specification, let's say the list of rules governing the language, doesn't allow for multi-threading, and yet it allows you to execute non-blocking operations even in scenarios where multi-threading would seem to be the only answer. So let's go with our second question. As we are going to talk about a specific runtime, let's go through what a runtime essentially is.
So, being a non-compiled language, JavaScript runs in some sort of container, a program that reads and runs your code. The container must do two things mainly. The first is parse, convert, and execute your code. And the second is provide some objects, prototypes, functions, and APIs for your code to interact with. So the first part, the part involved in parsing, converting, and executing your code, is called the engine. The latter, the one providing you the APIs and some additional functionalities, is the runtime.
So let's take the most famous duo of runtimes in the JavaScript world: browser implementations and Node. Let's talk about the Chrome browser mainly, so we can simplify a lot of things. So both the Chrome browser and the Node runtime share the same engine, V8, yet their runtimes differ a lot. Node.js provides you structures such as streams, buffers, libraries like FS or PATH, APIs such as the file system ones and the require method, for example.
While the browser provides you DOM objects, the window object, web APIs, etc. So there are different APIs and different functionalities brought in. Let's take console.log as an example. You know what it is, and we all use it in production, so don't lie to me, because I do it all the time. So what if I told you V8, as we said, the engine responsible for both the Chrome implementation and the Node.js environment, doesn't know what console.log is? Yeah, that's right, because this is part of the web APIs in the case of a browser runtime like the Chrome one, and a different implementation in the Node.js source code.
As for Node.js, you can look at a nice implementation of the console in the official Node.js source code, on github.com/nodejs/node at lib/internal/console/constructor.js. So every runtime implements its flavor of the event loop, usually leveraging some key concepts we will discuss in a second. So, one step back: even if I'm mainly a Node developer, this talk will be based on the browser runtime and its implementation of the event loop, because it will be easier to provide a demo and easier to follow. The browser implementation mainly differs from the Node one because of how some queues in the event loop interact with the loop itself. So in Node we could say, oversimplifying it, that we have one event loop per process, while in the browser we have one per agent. By agent, we mean a Chrome tab. So if you have a tab whose event loop is blocked, you can still close that tab and proceed with your navigation in other tabs. So blocking the main thread of a tab doesn't block the other tabs from doing their job.
So that said, how many runtimes are there? We talked about the Chrome one and Node, but there are many others, as I like to say, at least one more than when I started talking. But the most common ones are mainly Node.js, Deno from the same creator, Bun, workerd from Cloudflare, and browsers' own implementations. So let's now go to our main question. Now that we know what a runtime is and that JavaScript is single-threaded, let's try to understand how the event loop allows us to provide non-blocking input-output, so a non-blocking main thread, while still leveraging some functionalities that should only be part of a multi-threaded language. I want to give credit to Lydia Hallie for creating this really empowering gif explaining how the microtask queue and the macrotask queue work and their order of execution. As you can see, we have the call stack and the microtask queue, which is emptied every time a task is being executed, and the macrotask queue following it later. So the microtask queue is usually filled, well, effectively filled, with things like process.nextTick, promise callbacks, async functions, and queueMicrotask, which will be our main topic in a second.
As for the macrotask queue, we have mainly setTimeout, setInterval, and setImmediate. So you can ask, what's the difference between the microtask queue and the macrotask queue? Is it just the order in which tasks are executed? Not at all, and we will talk about it later. So let's consider what the event loop is and what the main parts composing it are. As I like to say, the event loop is a concept rather than an actual thing. It's, as we said, implemented in different ways in different browsers. But the event loop is mainly a concept. Implementations in different runtimes can vary, yet the main parts are usually consistent. So we have mainly the heap, or the memory heap, which is an area in memory used to store objects. And to provide some context, the garbage collector attacks, let's say, this heap when it needs to free up memory. Then we have the stack, or the call stack, which is the ordered list of operations to be executed. So being single-threaded and non-concurrent, JavaScript can grant predictable execution steps, and we will talk about it in a second, following the last-in, first-out model of the call stack. And then the queues. As we said, we have mainly two queues in the implementations we are looking at, the micro and the macro, our lists of callbacks waiting for their time to be executed.
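The scheduling difference between the two queues is easy to see in a tiny snippet. This is my own sketch, not code from the talk, and it runs in any modern browser or in Node:

```javascript
// Ordering demo: synchronous code runs first, then the microtask queue
// is drained completely, then the macrotask (timer) queue gets its turn.
const order = [];

setTimeout(() => order.push("macrotask: setTimeout"), 0);
Promise.resolve().then(() => order.push("microtask: promise callback"));
queueMicrotask(() => order.push("microtask: queueMicrotask"));
order.push("synchronous");

// By the time this later timer fires, everything above has run.
setTimeout(() => console.log(order.join(" -> ")), 10);
// prints: synchronous -> microtask: promise callback -> microtask: queueMicrotask -> macrotask: setTimeout
```

Note that even a 0 ms setTimeout loses to every microtask: the timer callback has to wait for the next turn of the loop, while microtasks run as soon as the current call stack empties.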
So let's simplify it this way. Queues vary between runtimes but are usually, as we saw before, filled with timeouts, asynchronous calls, event listeners. So whatever is not immediate and doesn't have to be executed at this exact moment, so it doesn't have to be put directly on the call stack, or the stack, as we want to call it. So let's talk about the three main concepts of the event loop, three things really worth mentioning because they will be really important for the live demo later. The first thing is run-to-completion operations. Run-to-completion means that every portion of our code, an execution block, a function, etc., is executed till its end before another element of the call stack is executed. OK, so you may ask, what about concurrency? I continuously hear about JavaScript being concurrent, being parallel, being single- or multi-threaded, etc. Well, a lot has been said about this topic in JavaScript, but please be aware that JavaScript has no way of executing concurrent code, on the main thread, of course. This is important. It can do something similar thanks to the worker implementation.
So, as we said before, it can spawn processes and use different threads. Yet the main thread, the one executing the main block of your code, responsible for keeping your application in line and keeping your script alive, isn't concurrent in any way. So please do not confuse asynchronicity with concurrency, because they are two totally different concepts. And this is the real difference between JavaScript and many other languages: leveraging this event loop implementation allows it to be single-threaded but non-blocking. Yet it cannot be concurrent. And finally, leveraging queues to free up the thread. Leveraging queues, as said before, means putting every non-immediately-executable operation in a specific area, some queue, and waiting for it to be ready before executing it. The two most important queues, as we saw, are the macrotask queue and the microtask queue. So let's go to the third thing worth mentioning.
So, the timeout isn't guaranteed. What do we mean by the timeout isn't guaranteed? Let's say we call a setTimeout of half a second in our application and then wait for it to be executed, for our callback to be put back on the stack. This setTimeout just guarantees us that the execution of the callback will be at least 500 milliseconds, so half a second, in the future. Yet it could even take 10 seconds, for example, if our main thread is blocked. So please remember, when using these timeouts, how the event loop works and how a blocked thread behaves. So if you need time correctness, please use timestamp checks instead of timeout calls. What's the difference? A timeout call is, as we said, a setTimeout. A timestamp check is some part of your application, in the lifecycle of your application, which checks some expiration timestamp, for example, before executing a callback. OK. So as we just said, the main concept of the event loop is to allow you to have a non-blocked main thread. So why don't we try to block it? I decided to use a llama, because I love llamas, in this example.
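A timestamp check can be sketched like this. This is my own illustration of the idea, not code from the talk, and the names and delays are mine:

```javascript
// Timestamp check: instead of trusting setTimeout's delay, we record the
// deadline ourselves and verify it against Date.now() before running the
// callback, so a busy main thread can delay the work but never run it early.
const expiresAt = Date.now() + 200; // the moment the work is allowed to run

function runAtDeadline(callback) {
  if (Date.now() >= expiresAt) {
    callback(); // the deadline has truly passed, regardless of timer drift
  } else {
    // Not time yet: re-check on a later tick instead of trusting one timer.
    setTimeout(() => runAtDeadline(callback), 25);
  }
}

runAtDeadline(() => console.log("ran at or after the deadline"));
```

The polling timer here can still be late, but because the callback is gated on the timestamp, correctness no longer depends on setTimeout honoring its delay.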
We have this GIF that will start moving, and then we have two buttons. The first one is the I.O. blocking button, and the second one is the non-I.O. blocking button. The difference, of course, is blocking the main thread or not. So as we see, we can continuously click on the non-I.O. blocking function, and it keeps working. But as soon as we start pressing the I.O. blocking button, everything stops. Everything breaks, and nothing works. So let's do it again because the GIF is really fast. So as you see, I am continuously clicking on the non-I.O. blocking button. And then as soon as I click on the I.O. blocking one, everything is frozen. So what happened and why is that?
Let's look at our blocking code being executed. As you can see, clicking the button calls an unsafeLoop function, which is simply an infinite loop. What does this mean? It means that the call stack stays occupied with work to do. In this case, of course, just an empty execution between curly braces, yet it's still an operation to be put on the stack. So this is effectively an infinite loop, even if it isn't doing anything, because it has to evaluate the condition inside the while parentheses and then execute the empty but still existing code inside the curly braces. So let's now look at our non-blocking loop.
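Why the page freezes can be shown without a UI at all. This is my own bounded sketch (the talk's unsafe loop never terminates): while a synchronous loop occupies the call stack, even a 0 ms timer cannot fire.

```javascript
// While a synchronous loop holds the call stack, the event loop cannot
// tick, so a timer scheduled for 0 ms is forced to wait.
// (Bounded to ~200 ms here so the demo terminates.)
const scheduled = Date.now();
let firedAfter = null;

setTimeout(() => { firedAfter = Date.now() - scheduled; }, 0);

const blockUntil = Date.now() + 200;
while (Date.now() < blockUntil) {
  // busy-wait: nothing else can run while we are here
}

setTimeout(() => console.log(`0 ms timer actually fired after ${firedAfter} ms`), 0);
```

In the talk's demo the while loop has no end condition, so the timer, and every click handler and repaint, waits forever.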
As you can see, we are just calling a setTimeout of zero milliseconds, but still doing the same infinite loop again, calling the safe loop function in this case. So why doesn't it block my UI? Well, simply because a setTimeout is a task scheduled for later in the future, as we said before. So it can be put in what the event loop calls the macrotask queue, a specific queue which will wait for the next time our sphere does a complete turn. So let's imagine this event loop like a circle with a sphere continuously ticking 360 degrees: wait until the stack is empty, and then bring in the first runnable task. So we never block the event loop, because we are effectively continuously moving our safe loop execution to the next step, the next tick of the event loop. So it is effectively a loop that will never end, but still, it is not blocking our main thread. So it is a non-IO-blocking operation. Fun fact: when talking about the macrotask queue in the event loop, we usually talk about a set, so a different data structure from a queue. Why is that?
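The safe loop can be sketched like this. Mine is bounded so it terminates; the talk's version loops forever, yet still never blocks:

```javascript
// "Safe loop": each iteration re-schedules itself through setTimeout(0),
// handing control back to the event loop between iterations, so rendering
// and input handling can happen in between.
let iterations = 0;

function safeLoop() {
  iterations++;
  if (iterations < 5) {       // bounded here so the sketch terminates
    setTimeout(safeLoop, 0);  // next iteration waits for the next loop turn
  }
}

safeLoop();
```

Each setTimeout pushes the next iteration into the macrotask queue instead of keeping it on the call stack, which is exactly why the llama keeps animating.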
Because, as we just said, the event loop grabs the first runnable task in a set instead of dequeuing the first task in a list. So it's a little different. It's really a simple difference, but it's worth mentioning. And this is the macrotask queue. But what about the microtask queue? So what is the microtask queue? After all this talking about the event loop and its queues, or, as we learned, its sets, let's talk about the microtask queue, which is, well, a queue and not a set, unlike what we said before about the usual queues being implemented in an event loop. This is because the microtask queue can only be filled with runnable tasks provided by us, the developers interacting with the event loop. So they are runnable by definition right when we put them into the queue. So there's no need to take the first runnable one, as they all are effectively runnable in the microtask queue. So the microtask queue acts in a specific part of the event loop and is probably one of the most misunderstood parts of the whole event loop thing, because this queue effectively acts as soon as the call stack is empty. So in the context of a browser, it usually happens right before the event loop tick is completed and rendering is about to start.
I say usually because there are certain specific cases when this queue loses some priority, but this is not the topic of this talk. So let's say that in a usual context it actually leverages the moment between when the call stack empties and the rendering time, effectively acting as a last step before the page rendering phase starts. So after all this talking, what are we building today? I promised you a live demo. This will be our third part. We have four steps to victory. The first is the project scaffolding. What do we mean by project scaffolding? A simple HTML page with nothing particular inside it, with a simple title and a couple of things inside it.
A basic signal implementation. So let's take a minute to talk about what I personally mean by a signal, what we are going to build today, and a couple of things we will discover. We will leverage JS classes, or ECMAScript classes, as we want to say, even if they are mainly just syntactic sugar, because they are simpler, of course: everyone not aware of the prototype pattern can still follow this implementation without having questions. And a signal is, in this case, mainly one-way data binding from JavaScript to the HTML. So we will just provide a class whose objects can attach to a DOM element and, after attaching to that DOM element, can leverage some JavaScript functionality to effectively change the HTML markup. The third step will be a bare-bones benchmark, a simple benchmark where we will track the time needed for 100,000 updates, one million updates... we will probably go for a million, just to have some wow effect in the end.
And step four... it's a secret. No, I'm just kidding. The whole talk is about what we do here. So we will implement the microtask queue, and we will try to understand how much of a difference it can make, making our application effectively blazing fast. So if you want to see some code, follow me. We have a simple VS Code setup now. Nothing hard here. We will create an index.html file, simple enough, and we will go, with the help of Emmet, for a simple HTML5 page. My signal implementation will have a GitHub repository later, so please don't worry if you can't follow the code exactly line by line, because there will be a different implementation provided later. So let's start by creating a simple paragraph. Let's put an id on it and call it main paragraph. And, with the help of Emmet, of course, let's do some lorem ipsum. OK, now let's create a script tag.
I know it's not the best way to do this, but please, I want to keep it simple. So let's go by creating our class, which will be called Signal. OK, and now let's start by putting in a couple of things: an initial element, so a DOM element attached to it, and a value. Now, I know we can leverage private properties and private methods for classes, but I wanted to keep it simple, so you will not see something like that; we will just simply prepend an underscore for our, let's say, private properties. So let's go for the constructor. We will, of course, have an element and a value being passed to our constructor, and we will store them: this._el = el and this._value = value. OK, so here we are. Now our simple signal is doing mainly nothing, but we can start instantiating it: signal = new Signal. I don't trust Copilot enough, so I will write it by myself, and we have plenty of time. So let's try to use it. Main paragraph. OK, here we are.
And then we will pass a value, let's say, hello world. Of course, yeah, we don't need this, because we are going to effectively control the content of our main paragraph in a one-way data binding way. So let's remove it. Then we will need a rendering function, of course, because we need to update the innerHTML of this element. Of course, I'm keeping it simple. As I said before, later I will provide a slide with a QR code you can follow to look at a different implementation, but we will talk about it later. So please try to keep it simple with me. OK, so our rendering function just sets the innerHTML to the value being passed, OK, the internal value. So let's call this.render() in our constructor. Now that we have all this scaffolding ready, let's go live. I will move the Chrome browser below. OK. I hope you can see it.
I will zoom in a lot. OK, here we are. So we have a simple implementation. Now we have our hello world being printed. Of course, I can leverage this console to change my text. So let's say ABC, and then we can call the render function. But of course, we want to make it smoother. So we will provide some magic inside this implementation. Let's start by providing an API to the users... to the other developers using our signal. So let's provide a magic getter for our value, which will return this._value, and a magic setter with a val being passed: this._value = val and this.render(). OK, so what changed now?
Let's take a minute to talk about it. So our effect will be this one: I can change the value. Let's see what we are missing here. this._value = val. OK, this.render(). OK, seems good. signal.value = "ABC". OK, you can see that our value is changing and our DOM is being updated. So our DOM node is attached, and we can control it with our signal. We are using some JavaScript magic, some object-oriented programming magic. Of course, as I said, a prototype implementation would be better; we will talk about it later. But we have the scaffolding needed for our application. So now let's move to our third step. We have the scaffolding, we have our basic signal implementation.
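Putting the pieces together, the eager signal at this point looks roughly like this. The exact property names are my reconstruction of the demo's, and a plain object stands in for the DOM node so the sketch also runs outside the browser (in the page it would be document.getElementById("main-paragraph")):

```javascript
// Eager signal: every write to .value re-renders immediately.
class Signal {
  constructor(el, value) {
    this._el = el;       // the attached DOM element (or a stand-in)
    this._value = value; // the "private" internal value
    this.render();       // render once on construction
  }

  get value() {
    return this._value;
  }

  set value(val) {
    this._value = val;
    this.render(); // eager: one render per write
  }

  render() {
    this._el.innerHTML = this._value; // one-way binding: JS -> HTML
  }
}

const el = { innerHTML: "" };                   // stand-in DOM node
const signal = new Signal(el, "Hello world");
signal.value = "ABC";
console.log(el.innerHTML); // prints: ABC
```

This is the version we are about to benchmark: correct, but it pays one full render for every single write.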
And now we can move to the part where we do some benchmarking. So let's try to... OK, const start equals new Date().getTime(). Of course, we can just do this. In a Node.js environment, we could use a better and more accurate API, which is the performance one. But as always, we are keeping it simple. So let's go with const start. Then we will just try to go for 100,000 updates. And of course, we are just moving our signal value. We are just changing it, calculating our end time. And then, let's say, time taken, end minus start, milliseconds. OK, so let's save it. Date.getTime is not a function. Yeah, sure, because it's not... we don't need Date.getTime, we need Date.now. That's what we need. So let's do it also for the end, that Date.now. And here we go. OK, time taken, 135 milliseconds. OK, so now let's try to clutter this up a little. OK, so maybe create our... let's say paragraph one, paragraph two and paragraph three.
So let's go with our signal one, paragraph one, two and three. So we are just trying to bring in some latency, some work for our main thread to do. OK, so that we can better leverage the microtask queue later. So let's say we are moving to this approach. OK, hello world one, hello world two and hello world three. OK, let's go. Time taken, 394, circa 400 milliseconds. So let's grow this number. Let's say we are doing a million operations, three times, so three million operations. So as you can see, my input is... my main thread is blocked. So if I try to update it again, I cannot copy and paste anything inside the page until it's done. So now that it is done, you can see the cursor changing. I will do it again. So as you can see, everything is blocked on my side. And then I can.
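The benchmark loop from the demo can be sketched like this, again with a stand-in element (the timings will of course vary by machine):

```javascript
// Bare-bones benchmark: time N synchronous, eagerly-rendered updates.
// Date.now() is coarse; performance.now() would be more precise.
const el = { innerHTML: "" }; // stand-in DOM node

const signal = {
  set value(v) { el.innerHTML = v; } // eager mini-signal: every write renders
};

const start = Date.now();
for (let i = 0; i < 100000; i++) {
  signal.value = `update ${i}`;
}
const end = Date.now();

console.log(`Time taken: ${end - start} milliseconds`);
```

Because every write renders synchronously, the whole loop runs in one uninterrupted turn of the event loop, which is exactly why the page freezes while it runs.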
OK, so we have four seconds effectively where our event loop is just doing its job and blocking our main thread. So we need to fix this, of course, because we cannot wait four seconds for it to render everything. OK, so let's put something here. Let's go back to our signal implementation and move it from an immediately-rendered mechanism to a queued mechanism. So, is it queued? This will be a flag, of course: this.queued = false. And as soon as we now try to set a new value, we will still keep updating our internal value, because, for example, and you will see this later, we can have some event listeners that have to be triggered when the value changes. Yet our rendering phase shouldn't be bothered with three million operations every time. OK, so: if this is not queued, this.queued = true, and then window.queueMicrotask.
And we can call a function inside here: this.queued = false, this.render(). So before looking at the result of this operation, of course, we have to remove this.render() from here. Let's talk about what we did here. So, as we said, let's zoom in a little. OK, so we have added a queued property, setting it to false, of course, in our constructor. Rendering, of course, didn't change, because the render phase is always the same: we set the innerHTML to the internal value. So that's it. But we changed our setter. So, of course, we just ask: have we already queued this operation for our next rendering phase? If so, don't do anything. We could just do it in a different way, so: if this.queued, return. But we don't care. OK, so if this is not queued, let's put it in our queue. So let's turn this flag on, on true, and then leverage window.queueMicrotask. So this is exactly the API we need. And this is the only thing we are looking for.
So a simple function being executed when our queueMicrotask brings our item in. So this will effectively be our callback. So we put it again into a non-queued state and then we render. So are we ready to look at the difference? As we said before, let's go back to our implementation so we can have a look at how this worked before. OK, let's re-render this one. 3.7 seconds. Let's do it again. So this is mainly the average, 3 point something. 3.8. So now let's go to our new implementation. And if I didn't do anything wrong, 61 milliseconds. So we brought a 50x improvement to our simple application. Of course, you could ask, why should I do 3 million operations inside a single rendering? But that's not the question you should ask. You should ask yourself, in a simple lifecycle of my application, how many times do I need to re-render something on my screen?
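For reference, the queued version described above can be sketched end to end like this. The stand-in element and the renders counter are mine, added to make the batching visible:

```javascript
// Batched signal: writes update the internal value immediately, but
// rendering is deferred to a single microtask, so a burst of N writes
// costs one render instead of N.
class Signal {
  constructor(el, value) {
    this._el = el;
    this._value = value;
    this._queued = false; // is a render already scheduled?
    this.render();
  }

  get value() {
    return this._value;
  }

  set value(val) {
    this._value = val;           // the internal value always stays current
    if (!this._queued) {
      this._queued = true;
      queueMicrotask(() => {     // runs once the call stack is empty
        this._queued = false;
        this.render();           // one render per burst of writes
      });
    }
  }

  render() {
    this._el.renders = (this._el.renders || 0) + 1; // count renders for the demo
    this._el.innerHTML = this._value;
  }
}

const el = { innerHTML: "" };           // stand-in DOM node
const signal = new Signal(el, "start"); // render #1 happens here

for (let i = 0; i < 1000; i++) {
  signal.value = `update ${i}`;         // 1000 writes...
}

queueMicrotask(() => {
  // ...but only one more render, with the final value.
  console.log(el.renders, el.innerHTML); // prints: 2 update 999
});
```

A thousand writes, two renders total: the constructor's and the single batched one. That collapse of redundant renders is where the 50x comes from.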
So how many times will this render function effectively be called? If you look at a simple active application, with some animations, with some interactions with our user, of course, you can see there are dozens of updates being done. And having a 50x improvement in performance is really awesome when these dozens of updates become hundreds, thousands, and maybe millions. Because if your application is really heavy on animations and interactions, you can easily reach a million interactions, so a million updates using this signal implementation. And of course, remember that everything that allows you to improve performance without any disadvantage should be done. This is a rule we all should try to follow.
So we saw some code, and now we can say, wow. So I hope what I said earlier was well explained and this result is what many of you might have expected. If not, please, as I said, go look at Jake Archibald's and Erick Wendel's work. So our last question is, why don't we always use the microtask queue? As we said, this is a 50x improvement in performance. This is awesome. It's easy to implement, and we can do it right away in every application. OK, so why aren't we effectively doing this? Well, all that glitters is not gold. So despite the enormous improvements in performance we can have by leveraging the microtask queue, it doesn't mean it's always the best solution.
As we said earlier, this queue effectively acts before the event loop starts a new tick, OK, effectively acting as a last step. So if we bring in too many tasks, we will still find ourselves blocking the main thread. What do I mean by that? Let's try to look at our safe loop implementation and try to change the behavior by not doing a setTimeout, but a Promise.resolve. Promise.resolve means resolve it immediately, so put it inside the microtask queue, and then call the safe loop function. I need a glass of water, so please excuse me. OK, here we are. So Promise.resolve effectively puts it into the microtask queue, and then it executes it immediately. So as soon as our call stack is empty, we call the microtask queue. So, let's say, the event loop asks the microtask queue, do you have anything to bring into my stack? Yeah, of course, I have the safe loop function. The safe loop function is executed, puts another execution of the safe loop function, the callback, inside the microtask queue, and then empties itself. Now the event loop asks the same question. OK, so I'm done. Can I do a next tick, or is there something inside the microtask queue? Yeah, there is this safe loop. So we continuously clutter our main thread just by doing one single operation. So we effectively created an infinite loop.
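A bounded version of that experiment (mine, so the snippet terminates) makes the starvation visible: microtasks queued from within a microtask run in the same event-loop turn, so a 0 ms timer cannot fire until the whole recursive chain is done.

```javascript
// Microtask starvation, bounded: the recursively queued microtasks all
// drain before the 0 ms timer (a macrotask) gets its turn.
const events = [];

setTimeout(() => events.push("timer"), 0);

let depth = 0;
function microLoop() {
  events.push(`micro ${depth}`);
  depth++;
  if (depth < 3) {
    queueMicrotask(microLoop); // without this bound, the timer would NEVER fire
  }
}
queueMicrotask(microLoop);

setTimeout(() => console.log(events.join(", ")), 10);
// prints: micro 0, micro 1, micro 2, timer
```

Swap the bound for an unconditional queueMicrotask and the timer never runs: that is exactly the Promise.resolve version of the safe loop, an infinite loop rebuilt out of microtasks.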
OK, so as we started with a bad joke, I thought it would be nice to end with an even worse one. So let's say the event loop is the only infinite loop you will ever want in your app. So please be aware of one thing. We will go back a little because we need to do this. Remember, even if in this GIF, this really awesome GIF, the microtask queue acts effectively right before the macrotask queue, this is not exactly the correct order of execution, because one is inside the current tick of the event loop, and the other one, the macrotask queue, is effectively one of the first things being done after the tick of the event loop. So effectively, using setInterval, setTimeout, and setImmediate, we are not creating a block in our main thread. And of course, I can agree with you that process.nextTick and setImmediate are the worst names for these kinds of functions, because effectively process.nextTick acts in the current tick of the event loop, and setImmediate isn't immediate but acts in the next tick of the event loop. Yet this is the specification, so we have to stick with it. So let's go. We have to repeat our animations all over. So I hope it's all clear why you should leverage the macrotask queue and the microtask queue. And if you have some questions, of course, please provide them. But first, if you want to see a complete implementation of a signal leveraging the microtask queue, please find it following this QR code or search Super Simple Signal on my GitHub profile. Remember, I am cadienvan, so C-A-D-I-E-N-V-A-N, and you can find the Super Simple Signal implementation, which doesn't leverage the class implementation, but a different one, a prototype-based one. So, ready for your questions.