To mitigate the overhead of heap memory allocation and deallocation, memcached organizes its memory into slabs, pages, and chunks. Here is a fictitious example as illustration:
At startup, the specified maximum amount of RAM is allocated for the cache, with chunk sizes determined by a growth factor.
For instance, the command “memcached -m 64 -f 3” allocates 64 MB of RAM with a growth factor of 3. The default smallest chunk size is 96 bytes. If each page is 1 MB, then approximately 64 pages will initially be free and not assigned to any slab class.
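To make this concrete, here is a minimal sketch of how the chunk sizes of successive slab classes could grow under those settings. The function name and rounding are illustrative assumptions, not memcached's exact internals:

```python
def slab_class_sizes(min_chunk=96, growth_factor=3, page_size=1024 * 1024):
    """Illustrative sketch: generate slab-class chunk sizes, starting at
    min_chunk and multiplying by growth_factor, until a single chunk
    would no longer fit inside one page."""
    sizes = []
    size = min_chunk
    while size <= page_size:
        sizes.append(size)
        size = int(size * growth_factor)
    return sizes

print(slab_class_sizes())
# → [96, 288, 864, 2592, 7776, 23328, 69984, 209952, 629856]
```

With -f 3 the class sizes thin out very quickly; the default growth factor is a much gentler 1.25, which yields many more classes and less wasted space per chunk.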
When the application writes data into the cache, one of the free pages is assigned to a slab class. Say we write a 50-byte item: it will be stored in a 96-byte chunk. All cached data – key + value + overhead – is stored inside a chunk. One free page will be assigned to the slab class that holds only 96-byte chunks, and carved up into chunks of that size.
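The selection step above can be sketched as follows. This is a simplified model under the assumptions in this post (96-byte minimum chunk, 1 MB pages); the function names are hypothetical, not memcached's API:

```python
import bisect

PAGE_SIZE = 1024 * 1024          # 1 MB page, as assumed above
CLASS_SIZES = [96, 288, 864]     # sample slab classes from the -f 3 example

def pick_slab_class(item_size, class_sizes=CLASS_SIZES):
    """Return the smallest chunk size that fits key + value + overhead,
    or None if the item is larger than the biggest chunk."""
    i = bisect.bisect_left(class_sizes, item_size)
    return class_sizes[i] if i < len(class_sizes) else None

# A 50-byte item lands in the 96-byte class; the rest of the chunk is wasted.
print(pick_slab_class(50))              # → 96

# Once a free page is assigned to the 96-byte class, it is carved into chunks:
print(PAGE_SIZE // 96)                  # → 10922
```

Note the trade-off this illustrates: a 50-byte item still consumes a full 96-byte chunk, so the gap between an item's size and its chunk size is pure overhead.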
Here is a short blog post by Mike Perham on the concepts of slabs, pages, and chunks in the memcached distributed in-memory cache:
This blog post explains the LRU mechanism in memcached: