The event has taken place


Visualised guide to memory management in JavaScript [eng]

Talk presentation

Memory management can be an overwhelming topic to navigate. This talk is the result of my journey down the rabbit hole to better understand it myself. By drawing on my research and experience, I've put together a comprehensive guide that covers everything from hardware implementation to the inner workings of V8.

We'll cover the basics of memory implementation in the computer and operating system. Then we'll talk about the challenges of managing memory in a dynamically-typed language like JavaScript, and explore how references and memory addresses work with practical examples. Finally, we'll take a deep dive into the intricacies of V8 implementation, including heap organization and garbage collection.

Kateryna Porshnieva
Engineering Manager
  • Front-End Engineer & Engineering Manager from Ukraine, currently living in Estonia
  • Passionate about web accessibility, design systems, and testing
  • Active in the tech community: founder of React Kyiv, now a co-organizer of TallinnJS
  • Organizes board game nights, loves reading fiction, and is a bit of a coffee snob

Talk transcript

Hello, everybody! I am Katerina, and I'm super excited to be joining you virtually from my home in Tallinn today to talk about some memory-related aspects in JavaScript. We know that on a fundamental level, everything we do on the computer eventually gets turned into binary code—essentially just a bunch of zeros and ones, including everything we write in JavaScript. But how exactly does that happen? And what are all the different steps it goes through in between? This talk will try to answer that question and is the result of my own journey down the rabbit hole, trying to understand how variables we create in JavaScript get turned into a bunch of zeros and ones. We have a lot of ground to cover, and I'll try to go through the whole journey, starting from binary code itself and memory architecture to the inner workings of a JavaScript engine, like memory allocation and garbage collection. So, as I said, we have quite a lot of things to go through. I invite you to join me on this journey down the rabbit hole, where I will show you around. Let's start at the beginning with those zeros and ones and understand how computers store and represent information.

So, zero or one is the smallest fraction of information there is. It's like an on and off switch, or true or false. In older computers, there were literal light bulbs that glowed for one and were dark for zero. This is called one bit of information. With combinations of these bits, we can represent all sorts of data: small numbers, big numbers, characters, emoji, and even the answer to the ultimate question of life, the universe, and everything. But how do computers store these bits? There are different types of memory computers use, each storing them in its own unique way. For example, DVDs use small indentations on the surface that reflect light differently when a laser shines on them. And floppy disks, if you remember those, use magnetic encoding. When it comes to the memory our JavaScript applications use, we are primarily dealing with RAM, or random access memory. It's the working memory that the CPU uses while running applications. Let's see in a little more detail how it works.
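To make the "same bits, different data" idea concrete, here is a small sketch in JavaScript (the specific bit pattern and values are illustrative, not from the talk):

```javascript
// One byte (8 bits) can hold 2^8 = 256 distinct values.
// The same pattern of bits means different things depending on interpretation:
const bits = "01000001";

const asNumber = parseInt(bits, 2);           // interpret as an unsigned integer
const asChar = String.fromCharCode(asNumber); // interpret as a character code

console.log(asNumber); // 65
console.log(asChar);   // "A"

// Going the other way: the answer to the ultimate question, in binary.
console.log((42).toString(2)); // "101010"
```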

First of all, it consists of these memory cells. We can think of them as tiny, tiny storage boxes with the main idea that we can write to them and read from them, and they can hold a charge for some time. On the hardware level, they're implemented as an electric circuit with transistors or a combination of a transistor and capacitor. But a single cell isn't much use on its own. So we can arrange the cells in a grid and then use the address system to specify which cell exactly we need. We can think of it like a street and house address in real life, where the first part will specify the column and the second part will specify the row. Then we can read the contents of the memory cell at the intersection. So we just read one bit of information. However, it isn't enough in modern computing. We need a way to manipulate more bits simultaneously. To do that, we can arrange these grids in an array, allowing us to use the same address to access multiple memory cells at once.

Using this 8-bit address, we're able to read one byte of data at a time. It's helpful to think of it as a mapping between an address and the data it holds. To simplify, we can convert the address to decimal format and represent all available memory as an array of addresses and the corresponding values they hold. The RAM in our example uses 8-bit addresses, which means that in total we can store 256 bytes. This is very little by modern standards, but it is an example of the memory older computers used, often referred to as 8-bit memory architecture. Modern computers operate on 32- or 64-bit architectures instead, which means addresses got a lot longer; therefore, they're usually represented in hexadecimal format, as you see on the screen. However, the core principles we just discussed still apply.
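The address-to-value mapping can be sketched as a plain array of bytes; this toy model (addresses and values are made up for illustration) mirrors the 8-bit example from the slides:

```javascript
// A toy model of 8-bit-addressed RAM: 2^8 = 256 addresses, one byte each.
const ram = new Uint8Array(256);

// Write a byte at a decimal address...
ram[42] = 0b11010110;

// ...and read it back.
console.log(ram[42]); // 214

// Modern machines use much longer addresses, usually shown in hexadecimal:
const address = 0x7ffee4a3; // a 32-bit address written in hex
console.log(address.toString(16)); // "7ffee4a3"
```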

While we are talking about physical RAM, it's important to note that applications, when they run on the computer, don't interface with physical memory directly. Instead, the operating system maintains an additional layer of abstraction known as virtual memory. It maps to the physical RAM under the hood but gives the operating system the flexibility to, for example, reroute some of the data to the hard drive when physical RAM is running low. It helps manage memory usage more efficiently and alleviates some security concerns, but also creates the illusion of a large continuous memory space available to the processes that run on the computer.

Now, let's say you want to store something in this memory. More often than not, whatever we want to store in a variable in our code won't fit into one byte, so it occupies a continuous segment of cells in the address space. The first address in that sequence is referred to as a pointer. This address points to the location in memory where the data begins; from there, we can follow along until we cover the size of the data we are storing. In languages like C and C++, developers need to manage memory allocation manually and operate with these pointers within the language.

For example, this piece of code in C allocates memory for an array of 10 integers. What we do here is multiply 10 by the size of the integer type to allocate the memory needed to store it. After we have used the array, we must not forget to explicitly free the memory associated with it. As you can see, this is quite a lot of work to do every time you want to store something, and quite a lot of things to remember when writing code. At the same time, it gives you a lot of power to optimize memory consumption. Now, if we think about JavaScript, we don't need to do any of that, right? We can just go around defining arrays left and right, fill them with whatever we want, and not give a second thought to memory consumption. And that's actually a good thing. JavaScript is a high-level language, which means it does a lot of things for us, so that we, as developers, can focus on building features and delivering value to users. All of the memory allocation, along with some other things, is done behind the scenes for us by a JavaScript engine.

So, a JavaScript engine itself is just a program that runs within a runtime environment, like a browser or Node.js. It takes the JavaScript code that you wrote, which is essentially just a long piece of text, and turns it into machine code that can be executed by the CPU. The code goes through a lot of different transformations that we won't go deep into right now; for now, it's enough to understand that the engine takes your code and turns it into machine code. There are a lot of different engines out there. For example, there's V8, developed by Google; SpiderMonkey, used in Firefox, developed by Mozilla; and JavaScriptCore, which is part of WebKit and is used in Safari, developed by Apple.

In this talk, I will focus on V8, which is arguably the most popular of them all. It is used in Chrome and other Chromium-based browsers like Edge and Opera, in Node.js and Deno, in Electron, and more. V8 itself is written in C++, and it's open source. So if you want, you can just go ahead and check out the code for yourself. It's pretty well documented, and there are a lot of instructions online on how to run the developer version on your own computer. Now, let's take this piece of code and follow it down the rabbit hole, trying to understand how V8 allocates memory for it.

First of all, there are two types of memory V8 uses under the hood: stack memory and heap memory. The stack is the region of memory that stores the local context while executing your code. The heap is a much larger region that stores everything that is allocated dynamically: your objects, functions, arrays, etc. Stack and heap are not unique to V8; these are common ways to structure data, each with its own way of accessing and adding data.

Now, getting back to our code: when the interpreter reaches the line with the variable declaration, it first needs to allocate new memory for this object on the heap. Since JavaScript is a dynamically typed language, it doesn't know in advance how much space something will occupy in the future, so it calculates an approximate amount of space based on its logic and some heuristics. The string itself that we have within the object will be stored at a different address, with the original object pointing to it. It is important to note here that primitive values in JavaScript are actually allocated on the heap, except for small integers. Then, of course, the pointer to the rabbit variable on the stack is updated, and the code can be executed. You can see this structure in the Memory tab if you take a heap snapshot in Chrome DevTools, for example. Here you can see the object that we just created. Under the hood, V8 uses a superclass called HeapObject for everything that is allocated on the heap. It contains pointers to all the internal values, plus a lot of useful metadata that V8 uses for optimizations that we won't go deep into right now.
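The slide's exact code isn't in the transcript; a plausible sketch of the declaration being described (the `rabbit` name comes from the narration, the property and value are assumed):

```javascript
// The object itself lives on the heap; the string "white" lives at a
// separate heap address that the object points to; the stack slot for
// `rabbit` holds a pointer to the object.
const rabbit = { color: "white" };

console.log(typeof rabbit); // "object": heap-allocated
console.log(rabbit.color);  // "white": reached via an internal pointer
```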

Now, to connect back to what we discussed before about RAM, both of these actually map to actual space in the virtual memory dedicated to our process and, by extension, in the physical RAM. As our program runs, more and more things get allocated on the heap. If we zoom out, we can see that the stack and the heap are located on different sides of the virtual memory segment dedicated to our application. As we fill it with more and more data, they grow towards each other into the free space. But it's not like our application is the only thing running on the computer, so we need to share memory with other applications, and in the case of the browser, even with other tabs and browser plugins you might have open. This might give a little more insight into why Chrome can be so memory-hungry sometimes. Yeah, I love this GIF.

So let's continue and explore slightly more complicated examples and see what happens on the heap. From here on out, I'm not going to show stack memory, but remember that it's still there, storing all of the local context as your code continues to execute. Now, let's say we create another variable here, which also has the string value of white. So these two are basically the same string, and that presents an interesting situation for V8, because allocating new memory space to store the same string again wouldn't be efficient. Instead, V8 uses a technique called string interning: it detects that this is the same string, and instead of allocating new memory, both variables will point to the same location. This brings us to an important point: most of the variables in JavaScript are essentially just pointers to places where values are stored. Now, let's see what happens when we reassign foo's value to a new string. Because strings are immutable, we create a new string somewhere, and then foo changes its pointer to point to the new location. I really liked the analogy from Dan Abramov's Just JavaScript course, where he suggested thinking of variables as wires that point somewhere. We can imagine all of the variables we create in JavaScript as wires connected to the locations where their values are stored. When we create foo initially, it just connects to the same place; when we reassign it, the wire changes its end location. Now let's explore another, slightly more complicated example.
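In code, the situation might look like this (the variable names besides `foo` are assumed; interning itself is an engine-internal optimization and isn't directly observable from JavaScript):

```javascript
const color = "white";
let foo = "white"; // same contents: V8 can intern and share one heap string

// Both variables compare equal and conceptually point at the same data.
console.log(color === foo); // true

// Strings are immutable: reassignment creates a new string and re-wires
// `foo` to point at it; the original "white" is untouched.
foo = "black";
console.log(color); // "white"
console.log(foo);   // "black"
```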

So here we have an object with two properties, right? Then we create another variable and assign it to the previous object. And then we change one of its properties. If we try to access the original object's name value, we would see that it got changed as well: when we made an update here, it got changed there too. This is something called modification by reference, and sometimes it can be tricky if you're not careful. To understand why that happens, let's look at the heap. Here on the left we have our code. We already know how to declare an object; the only difference here is that it has two properties. Now, when we create another variable here and assign it to the same object, both of them will point to the same location. So instead of copying the whole object over, these are basically the same pointer. When we make an update here, we allocate space for the new string and update it in the object's original location. Because both variables point to it, it gets updated in both. If we zoom out and get back to our mental model, this is what happens, and you see that the name got updated in both objects here.
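A minimal sketch of modification by reference (property names and values are assumed; the transcript only mentions a `name` property):

```javascript
const rabbit = { name: "Fluffy", color: "white" };

// No copy is made: `pet` and `rabbit` are two pointers to the same object.
const pet = rabbit;
pet.name = "Snowball";

console.log(rabbit.name); // "Snowball": modified by reference
console.log(pet === rabbit); // true: one object, two wires
```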

To avoid this behavior, we could use something like the spread operator, like this. In that case, when we update the name property, the original object keeps its original value. Let's see what happens. The spread operator copies all of the top-level properties of an object; therefore, when you make a change, it won't affect the original object itself. So now they have different names. But interestingly, it wouldn't help with nested objects. Here we have a deeply nested object that describes our rabbit's coat.
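The spread version of the same example might look like this (again, the concrete names and values are assumptions for illustration):

```javascript
const rabbit = { name: "Fluffy", color: "white" };

// The spread operator copies all top-level properties into a new object.
const copy = { ...rabbit };
copy.name = "Snowball";

console.log(rabbit.name); // "Fluffy": the original keeps its value
console.log(copy.name);   // "Snowball"
```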

Now let's see what happens here. When we update the deeply nested property, we see that even though we used the spread operator, the original object got changed as well. Let's see why that happens. The spread operator copied only top-level properties, which is often referred to as shallow copying. Then, when we update the color property, you see that because both objects' internal coat property was pointing to the same location, it got updated in both of them at the same time. This is, again, modification by reference. Because it applies to all of the nested objects, it can be tricky and can result in bugs that are really hard to find.
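A sketch of the shallow-copy pitfall, plus one way to avoid it with a deep copy (the `coat` shape matches the narration; `structuredClone` is an assumption about the runtime, available in modern browsers and Node 17+):

```javascript
const rabbit = { name: "Fluffy", coat: { color: "white" } };

// Shallow copy: only top-level properties are copied, so both `coat`
// properties still point at the same nested object.
const shallow = { ...rabbit };
shallow.coat.color = "black";
console.log(rabbit.coat.color); // "black": changed through the copy!

// A deep copy recreates the nested objects too.
const original = { name: "Fluffy", coat: { color: "white" } };
const deep = structuredClone(original);
deep.coat.color = "brown";
console.log(original.coat.color); // "white": unaffected
```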

For example, if you're using React, and you have a component that receives an object as a prop and then modifies it somewhere by reference within its code, it can result in this object getting updated in the parent component, in the parent of the parent component, and maybe all the way back in your Redux store or whatever other state management solution you're using. This is exactly why immutability became popular in frameworks some time ago: libraries like Immutable.js and others make sure that updates create new references instead of mutating existing ones. So this is just an important thing to keep in mind: when you modify objects, functions, arrays, or anything else that is not a primitive value by reference, it can backfire like that and modify the value everywhere, because all of the references are basically pointing to the same location.

Now, we talked a lot about memory allocation, right? We were creating a lot of things and putting them into memory. But what about freeing the memory that we don't use anymore? For example, here we have the string white. After we change the coat's color, we don't need it anymore; it's not used anywhere. This is something that can be considered garbage: unused memory. In languages like C and C++, as we saw in the example before, you actually need to go and free this memory explicitly yourself as a developer. But JavaScript runs an automated garbage collector. Within V8, the garbage collector is called Orinoco. Let's dive a little deeper into how garbage collection works and how all of the data that we create in JavaScript gets collected after it's no longer used.

But before that, let's understand how V8 can generally detect whether something is garbage or not. We can represent all objects, everything we have in our application, as a chain of references that starts from the root. As our program runs, just as we saw in the example before, some of those references stop existing, which means those parts are no longer reachable from the root. Therefore, the memory associated with them can be freed and used for new allocations. But if we look at the memory array and just free some of the memory associated with it, it can leave our memory fragmented, meaning there are a lot of gaps in there, and it becomes really hard to allocate anything new, because there aren't many big continuous spaces available for new allocations.
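Reachability can be illustrated with a small sketch (the variables are made up; note that collection itself can't be observed directly from JavaScript, though WeakRef hints at it):

```javascript
// Objects stay alive only while they are reachable from a root
// (globals, the stack, closures). Dropping the last reference makes
// the object garbage; the engine may reclaim it at any time.
let rabbit = { color: "white" };
let alias = rabbit;

rabbit = null; // still reachable through `alias`
alias = null;  // now unreachable: eligible for garbage collection

// WeakRef does not keep its target alive; within the same synchronous
// job, though, deref() is guaranteed to still return the object.
const ref = new WeakRef({ temp: true });
console.log(typeof ref.deref()); // "object"
```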

To make more space available for new allocations, the memory can be compacted so that we end up with a longer continuous chunk available. If you remember the old Windows operating systems, there was this Disk Defragmenter utility that would defragment your hard drive, and this is a really similar process. Modern operating systems, of course, do this automatically, so utilities like that are not needed anymore. Now that we understand how V8 can detect whether something is garbage or still used, let's dive a little deeper into how the garbage collection algorithm itself works. The whole heap is split between two main generations: the young generation and the old generation. Let's go through them one by one, starting with the young generation.

This is where all of the new objects get allocated. It is fairly small, usually between one and eight megabytes in size, and it is split into two equal parts; in a moment we'll see why. They are called from-space and to-space. So let's say our program runs, and new objects get allocated into memory. Then at some point we try to allocate something, but there is just not enough space for it. This is what triggers the garbage collection cycle: V8 starts the garbage collection and pauses execution until the cleanup has finished.

First of all, it needs to understand which objects are alive and which are not. It goes through the chain of references that we just talked about, traverses it from the root once, and then copies all of the objects that are still alive from the from-space to the to-space. This is why the spaces are called that. Now everything we had before can be cleared, but before doing that, one really important thing we must not forget is to update the references that point to the copied objects. Now everything can be safely removed, and we can swap the two spaces and continue the execution cycle.
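The copy-and-forward steps can be sketched as a toy simulation; this is a heavily simplified model of semispace copying, not V8's actual implementation (the liveness check and object shapes are made up):

```javascript
// Toy Scavenger: copy live objects from-space -> to-space and record a
// forwarding table (old slot -> new slot) so references can be updated.
function scavenge(fromSpace, isAlive) {
  const toSpace = [];
  const forwarding = new Map();

  fromSpace.forEach((obj, i) => {
    if (isAlive(obj)) {
      forwarding.set(i, toSpace.length); // where the object moved to
      toSpace.push(obj);                 // copying also compacts
    }
  });
  return { toSpace, forwarding };
}

const fromSpace = [{ id: "a" }, { id: "b" }, { id: "c" }];
const { toSpace, forwarding } = scavenge(fromSpace, (o) => o.id !== "b");

console.log(toSpace.map((o) => o.id)); // ["a", "c"]
console.log(forwarding.get(2));        // 1: "c" moved from slot 2 to slot 1
```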

Garbage collection is finished, and we can continue the execution and allocate memory for the object that has been waiting. As I said before, garbage collection stops the execution cycle for some time, which is often referred to as "stop the world", because it blocks the main thread until the garbage collection is finished. In older versions of V8, you could actually see a small lag when, for example, you clicked a button while the garbage collector was doing its job. But these days V8 does a lot of this work in parallel: the garbage collection work is parallelized over up to seven threads, usually takes just a few milliseconds, and doesn't halt the main thread for long.

This algorithm that we just talked about is very fast, right? And it's fairly simple. It's called the Scavenger, and it's often referred to as the minor garbage collector. At the same time, it's very memory-hungry: we always need twice as much memory as the data requires to facilitate the algorithm's work, because we always need to have both the from-space and the to-space so that we can copy objects over and swap the spaces in place. Another important thing to note is that while cleaning the memory, it also compacts it: after a cleanup, there is a long continuous chunk of memory available for new allocations.

The fact that it's so memory-hungry is okay for a small amount of space, but of course it wouldn't be sustainable for everything. This brings us to why the old generation exists. V8 follows what is called the generational hypothesis, which, as depressing as it sounds, says that most objects die young. It means that they get allocated and almost immediately become unreachable. If you look at the way you write your own code, you can even see the patterns of why that happens: you might have a function that creates a temporary object, does some calculations with it, and returns the value, and this object is not needed anymore almost immediately after it was created. This is why the old generation exists.

As garbage collection runs in the young generation, objects that survive the cycle are marked as intermediate. After the young generation fills up again and we run a second cycle, any intermediate objects that are still alive get promoted into the old generation instead of staying in the young generation. Statistically, only around 20% of objects survive into the old generation. Now let's talk in a little more detail about the old generation itself. It takes up most of the heap, it's way bigger, and this is where most of the data is actually stored. For garbage collection, it uses what is called the mark-sweep-compact algorithm. Let's go through it step by step, looking at marking first. Remember that chain of references we talked about? V8 traverses all of those references and marks the ones that are accessible from the root. After that, everything that is not accessible can be swept, which brings us to the sweep part. The way that free memory is managed is actually really interesting. The whole heap is divided into sections called pages, which are basically just segments of memory available for allocations. As some of the memory is freed, gaps appear within those pages. If V8 just went through the whole heap trying to find the next available space, it wouldn't be very efficient.

So the way it works is that V8 maintains a structure called a free list, which is like a dictionary of all the free locations available for allocations, categorized by size. We can think of it as an Airbnb for memory locations. For example, we have three locations of size one available and three locations of size two available; of course, in reality there are many more. Now, when a new object needs to be allocated, we determine its size and find a suitable location in the free list. If the object is size two and there is a location of size two available, the object can be allocated there. After it is allocated, this location needs to be removed from the free list, because it's not available for new allocations anymore. The execution can then continue.
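The free-list idea can be sketched in a few lines; this is a toy model under stated assumptions (exact-size matches only, made-up addresses), whereas a real allocator would also accept any region at least as large as the request, and V8's size categories differ:

```javascript
// A toy free list: free memory regions grouped by size, like the
// "Airbnb for memory locations" in the talk.
class FreeList {
  constructor() {
    this.bySize = new Map(); // size -> array of free start addresses
  }
  addFreeRegion(address, size) {
    if (!this.bySize.has(size)) this.bySize.set(size, []);
    this.bySize.get(size).push(address);
  }
  allocate(size) {
    const regions = this.bySize.get(size);
    if (!regions || regions.length === 0) return null; // nothing suitable
    return regions.pop(); // remove the region: no longer available
  }
}

const freeList = new FreeList();
freeList.addFreeRegion(0x10, 2);
freeList.addFreeRegion(0x40, 2);

console.log(freeList.allocate(2)); // 64 (0x40)
console.log(freeList.allocate(2)); // 16 (0x10)
console.log(freeList.allocate(2)); // null: no size-2 regions left
```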

We've talked a little bit about how memory is freed and how free memory is managed. Let's now talk a bit about compaction. As our program runs and we free more and more memory, some pages become too fragmented. V8 runs an internal heuristic that detects pages on the brink of becoming too fragmented to operate, and then it runs compaction on them. The compaction process itself is fairly simple and similar to what we saw with the young generation. It copies all occupied spaces over to a new page, updates all associated references, and removes everything in the old page; now we have a longer continuous chunk of memory available for new allocations, which is then added to the free list. The execution continues as we discussed before.

Now, let's see how this garbage collection is executed by V8. First of all, as our JavaScript runs and V8 detects that the heap limit is approaching, it starts the marking work. Marking can be done concurrently, without blocking the main thread, split between a number of worker threads. However, there is still a bit of marking finalization on the main thread, where V8 takes the work done in the other threads and puts it all together. Then we go into the sweeping part, where the associated memory is cleaned. Sweeping can be done concurrently, because all it does is make those spaces available in the free list. Compaction, however, which involves updating references to existing objects, is done in parallel over multiple threads. After all of this is done, the execution can continue. It's relatively fast, but of course it depends on how many things you store on the heap and on your heap size.

We've covered the minor garbage collector for the young generation and the major garbage collector for the old generation. This is where I'd like to wrap up this story. Before finishing, let's do a bit of a recap. When we write code in JavaScript, it's essentially just a long piece of text. To be executed, it needs to run in a runtime environment, such as a browser or Node.js. Within that environment, there is a JavaScript engine, like V8 and others, that takes your code and processes it to transform it into machine code that can be executed by the CPU. To store all the values you create, the engine uses two types of memory: stack and heap. The stack is for the execution contexts, and the heap is a much larger segment of memory that stores everything allocated dynamically. It's not just allocation; we also need to free memory, and V8 does that automatically by running garbage collection. There's a minor garbage collector for the young generation and a major garbage collector for the rest of the heap. Everything V8 stores in the heap and stack lives in the virtual memory space dedicated to the process by the operating system. This translates into the actual physical RAM card in your computer, storing zeros and ones in its memory cells. This is just part of the picture; I tried to cover the whole story, but there is a lot of other really interesting stuff happening under the hood. I hope you liked this overview and that it gave you more insight into what happens behind the scenes when you create variables in JavaScript. I'm super happy to hear any questions, ideas, or feedback you might have. You can find me on the social media displayed on the screen. Thank you for your attention. Bye.
