Hussein Nasser
memory allocation is an interesting dilemma.
You can allocate exactly what you need, making allocations compact and efficient. But once you start deallocating, you are left with too many tiny, non-uniform gaps and fragments that cannot be reused effectively (left image).
This is known as external fragmentation. There is enough space, but it’s not contiguously allocated (scattered all over the physical memory).
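A tiny Python sketch can make this concrete. Below is a toy first-fit allocator over a 64-byte "heap" (all names and sizes are illustrative, not any real allocator's API): after freeing every other allocation there are 32 bytes free in total, yet a 16-byte request fails because no single gap is big enough.

```python
# Toy first-fit allocator to illustrate external fragmentation.
HEAP_SIZE = 64

def first_fit(free_list, size):
    """Return an offset carved from the free list, or None if no gap fits."""
    for i, (off, length) in enumerate(free_list):
        if length >= size:
            if length == size:
                free_list.pop(i)          # gap consumed entirely
            else:
                free_list[i] = (off + size, length - size)  # shrink the gap
            return off
    return None

# Start with one big free gap; carve out eight exact 8-byte allocations.
free_list = [(0, HEAP_SIZE)]
allocs = [first_fit(free_list, 8) for _ in range(8)]

# Free every other allocation: 32 bytes free, but as four scattered 8-byte gaps.
for off in allocs[::2]:
    free_list.append((off, 8))

total_free = sum(length for _, length in free_list)
print(total_free)                # 32 bytes free in total...
print(first_fit(free_list, 16))  # ...but no contiguous 16-byte gap: None
```

Enough total space, no usable span: that is external fragmentation in miniature.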
So to make reallocation possible, we can instead allocate fixed-size blocks or pages. Need 8 bytes? Nope, you are getting 4 KiB. This model is what we mainly use today, and it solves external fragmentation. You get a whole block, and subsequent allocations are essentially free (served from the same block) until you run out of space in the block, after which you are given another block.
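A minimal sketch of that model, assuming 4 KiB pages and a simple bump pointer inside each page (class and field names here are hypothetical, not a real OS interface): the first allocation pulls in a whole page, and later small allocations just advance an offset within it.

```python
# Sketch of page-granular allocation with bump allocation inside each page.
PAGE_SIZE = 4096  # assumed page size

class PageAllocator:
    def __init__(self):
        self.pages = 0            # pages requested from the "OS" so far
        self.offset = PAGE_SIZE   # force a new page on the first allocation

    def alloc(self, size):
        assert size <= PAGE_SIZE
        if self.offset + size > PAGE_SIZE:
            self.pages += 1       # current page exhausted: grab a fresh one
            self.offset = 0
        addr = (self.pages - 1, self.offset)  # (page index, offset in page)
        self.offset += size       # bump the pointer; no search, no gaps
        return addr

a = PageAllocator()
print(a.alloc(8))  # (0, 0): asked for 8 bytes, handed space in a 4 KiB page
print(a.alloc(8))  # (0, 8): next allocation is nearly free, same page
print(a.pages)     # 1: still only one page requested
```

Allocation inside a page is just an addition, which is why the follow-up allocations are "essentially free."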
This model is predictable, and it enabled virtual memory, shared memory, and swap. But it is not free either: as deallocations happen, a client process may end up holding 10 allocated blocks while using only 1 byte in each.
This is referred to as internal fragmentation. There is plenty of unused space inside each of those blocks, but the memory allocation software (the OS) doesn't know which parts are actually in use.
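The back-of-the-envelope arithmetic for the scenario above (10 blocks of 4 KiB, 1 byte actually used in each) shows how bad the waste gets:

```python
# Internal fragmentation math for 10 pages with 1 byte used in each.
PAGE_SIZE = 4096
blocks, used_per_block = 10, 1

reserved = blocks * PAGE_SIZE    # what the OS sees as allocated
used = blocks * used_per_block   # what the process actually uses
waste = reserved - used
print(reserved, used, waste)     # 40960 10 40950
```

Over 99.9% of the reserved memory is wasted, and the OS has no way to tell.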
Some garbage collectors are smart enough to detect and compact internal fragmentation, as long as allocation happens through that language's runtime. Ruby comes to mind. Compaction, of course, has a cost.
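A toy sketch of what compaction does (illustrative only, far simpler than a real compacting GC): slide the live objects down so all the free space becomes one contiguous gap. The cost shows up as moving every object and patching every reference to it.

```python
# Toy compaction: relocate live allocations to the start of the heap.
def compact(allocations, heap_size):
    """allocations: list of (offset, size) for live objects."""
    new_offset, moved = 0, []
    for off, size in sorted(allocations):
        moved.append((new_offset, size))  # object relocated; in a real GC,
        new_offset += size                # all pointers to it must be fixed up
    free_gap = (new_offset, heap_size - new_offset)
    return moved, free_gap

# Four live 8-byte objects scattered across a 64-byte heap.
live = [(8, 8), (24, 8), (40, 8), (56, 8)]
moved, gap = compact(live, 64)
print(moved)  # [(0, 8), (8, 8), (16, 8), (24, 8)]
print(gap)    # (32, 32): one contiguous 32-byte free region
```

After compaction the scattered gaps are gone, which is exactly the trade: pay the cost of moving objects to get contiguous free space back.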
No free lunch. Either way you have to give up something, thus the dilemma.