memcached - a distributed memory object caching system

Extstore In The Cloud - Dormando (August 15, 2018)

In our introductory post about memcached external storage, we talked about persistent memory, expensive NVMe devices, and terabytes of cache. Is extstore relevant to commodity VMs-with-SSDs from various cloud vendors? In this post we answer that question by installing and configuring extstore on a cheap DigitalOcean VM.

Use Case: Session caching

For better or worse, one of the larger use cases for key/value stores in the cloud is session caching. These objects are ephemeral, frequently updated, largish blobs of data associated with a user’s login session. The unstructured data and time-to-live (TTL) requirements make them a poor fit for a primary database.

Web site owners will often set up a memcached instance just to handle sessions. We will focus on session data here, though the same instance could handle other data at the same time.
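
As a concrete sketch, storing a session blob with a one-week TTL over memcached's text protocol looks like this. The key name, payload, and TTL below are made-up examples, not from a real application:

```shell
# Build a memcached "set" command for a session blob with a 7-day TTL.
# SESSION_ID and PAYLOAD are illustrative placeholders.
SESSION_ID="sess:3f9a"
PAYLOAD='{"user_id":42,"cart":[]}'
TTL=604800                      # 7 days, in seconds
# Protocol: set <key> <flags> <exptime> <bytes>\r\n<data>\r\n
printf 'set %s 0 %d %d\r\n%s\r\n' "$SESSION_ID" "$TTL" "${#PAYLOAD}" "$PAYLOAD"
# Pipe that output to `nc localhost 11211` to store it on a live instance.
```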

The Problem

Let's assume you have a website which has grown from a single droplet. You have a $20 per month instance with 3.5G of RAM dedicated to memcached (though we recommend you run with multiple, in case one fails!). You notice recently that users have been unable to stay logged in from day to day, and your DB usage is rising.

Memory Requirements

For many session implementations, if a session object goes away, the user is logged out or the DB suffers a large hit to regenerate it. Thus, it is important to keep sessions around for as long as they’re useful.

Extstore requires keeping the key and metadata of an object in RAM, while pointing the value to storage. This is a poor fit for small objects. Session data can get quite large, so let's check first.
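
A back-of-the-envelope sketch of that overhead (every number here is an illustrative assumption, not a measurement): even a million sessions only pin a modest amount of RAM for keys and headers, so large values dominate the budget.

```shell
# Rough RAM floor for extstore: each item keeps its key plus a small
# header in RAM even when the value lives on disk.
# All figures below are illustrative assumptions.
items=1000000        # one million cached sessions
key_bytes=40         # average key length
meta_bytes=70        # approximate per-item header overhead
echo "$(( items * (key_bytes + meta_bytes) / 1024 / 1024 )) MB"
```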

$ echo "stats slabs" | nc localhost 11211 | grep -E "total_pages|chunk_size"
[... snip ...]
STAT 26:chunk_size 27120
STAT 26:total_pages 321
STAT 27:chunk_size 33904
STAT 27:total_pages 2936
STAT 28:chunk_size 42384
STAT 28:total_pages 157
[... snip ...]

With a quick check, we can see that most of the assigned memory is ending up in objects using 26k-42k of memory, with most of them at 32k. Keys and metadata in memcached can’t be more than 300 bytes even in extreme cases, so most of the space is the item value. A good (if contrived) fit for extstore!

You can and should look over or graph the counters from stats items and stats slabs. In this case, we find that slab classes 26, 27, and 28 also have high eviction counts. We’re throwing sessions away early!
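
To put numbers on that, a quick pass of awk converts the page counts into approximate megabytes (slab pages are 1MB each by default). The here-doc below stands in for live output from `echo "stats slabs" | nc localhost 11211`:

```shell
# Convert per-class page counts into approximate megabytes.
# Slab pages default to 1MB, so total_pages roughly equals MB in use.
awk -F'[: ]' '/total_pages/ { printf "class %s: ~%d MB\n", $2, $4 }' <<'EOF'
STAT 26:total_pages 321
STAT 27:total_pages 2936
STAT 28:total_pages 157
EOF
```

Class 27 alone holds roughly 2.9G of our 3.5G instance, which matches the eviction pressure we saw above.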

Expanding Cache and Cutting Cost

Currently, we have 3.5G of RAM cache via a $20 droplet. As of writing, a $10 droplet comes with 2G of RAM and 50G of SSD-backed disk space. Let's try tripling the available cache space for half the cost.

Setting up the instance

As extstore is still new, you may have to build memcached yourself to enable it. This might not be necessary by the time you read this, so please check memcached -h to see if extstore is supported before manually building anything.
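
A small sketch of that check, assuming an extstore-enabled build advertises its ext_path option in the help text:

```shell
# Check the binary you already have before compiling from source.
# Extstore-enabled builds list -o ext_path in their help output.
help="$(memcached -h 2>/dev/null || true)"
case "$help" in
    *ext_path*) echo "extstore supported" ;;
    *)          echo "extstore missing; build with --enable-extstore" ;;
esac
```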

We use an Ubuntu 18.04 64-bit droplet from DigitalOcean in this example. Please package, snapshot, or manage these binaries and configurations however you feel most comfortable.

First, grab the latest tarball. We should fix the redirect ;)

d@blogbox:~$ wget
[... snip ...]
latest		       100%[=========================>] 454.27K  --.-KB/s    in 0.005s

2018-08-14 04:46:27 (96.5 MB/s) - ‘latest’ saved [465169/465169]

d@blogbox:~$ tar -zxvf latest
d@blogbox:~$ ls
latest	memcached-1.5.10
d@blogbox:~$ rm latest

Next, we grab some dependencies and compile the daemon. Note that on Red Hat-based distros (CentOS, etc.) you may have to install some Perl modules for the tests to run.

d@blogbox:~$ cd memcached-1.5.10/
d@blogbox:~/memcached-1.5.10$ sudo apt-get install build-essential libevent-dev
Reading package lists... Done
Building dependency tree... 50%
Reading state information... Done
[... snip ...]
Do you want to continue? [Y/n] y
[... snip ...]
d@blogbox:~/memcached-1.5.10$ ./configure --enable-extstore
checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-pc-linux-gnu
[... snip ...]
configure: creating ./config.status
config.status: creating Makefile
config.status: creating doc/Makefile
config.status: creating config.h
config.status: executing depfiles commands
d@blogbox:~/memcached-1.5.10$ make
[... snip ...]
Leaving directory '/home/d/memcached-1.5.10'
make[1]: Leaving directory '/home/d/memcached-1.5.10'

Optionally, run the tests. Note: in 1.5.10 on Ubuntu 18.04, t/lru-maintainer.t can fail with a race condition. This is already patched and will be fixed in 1.5.11.

d@blogbox:~/memcached-1.5.10$ make test
[ might fail on t/lru-maintainer.t ]

Now, we install the binary into /usr/local. You can change this with --prefix in the configure step.

d@blogbox:~/memcached-1.5.10$ sudo make install

The source tarball comes with systemd service scripts and config files. Let's install them. See below for some edits to the files.

d@blogbox:~/memcached-1.5.10$ sudo cp scripts/memcached.service /etc/systemd/system/
d@blogbox:~/memcached-1.5.10$ sudo mkdir /etc/sysconfig
d@blogbox:~/memcached-1.5.10$ sudo cp scripts/memcached.sysconfig /etc/sysconfig/memcached
d@blogbox:~/memcached-1.5.10$ sudo vi /etc/systemd/system/memcached.service
d@blogbox:~/memcached-1.5.10$ sudo vi /etc/sysconfig/memcached

Edit the “ExecStart” line in /etc/systemd/system/memcached.service to look like:

ExecStart=/usr/local/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS

All we’re doing is adding “/local/” into the path.

Then, use an /etc/sysconfig/memcached that looks like:

# These defaults will be used by every memcached instance, unless overridden
# by values in /etc/sysconfig/memcached.
OPTIONS="-l -o ext_path=/tmp/extstore:10G,ext_item_size=512"

# The PORT variable will only be used by memcached.service, not by
# memcached@xxxxx services, which will use the xxxxx

Finally, enable and start the service:

d@blogbox:~/memcached-1.5.10$ sudo systemctl enable memcached
Created symlink /etc/systemd/system/ → 
d@blogbox:~/memcached-1.5.10$ sudo systemctl start memcached
d@blogbox:~/memcached-1.5.10$ ps aux | grep -i memcached
nobody	 10069	0.0  0.3 1723160 7152 ?        Ssl  05:32   0:00
/usr/local/bin/memcached -p 11211 -u nobody -m 1024 -c 1024 -l -o
d	 10100	0.0  0.0  14856  1048 pts/1    S+   05:33   0:00 grep --color=auto -i memcached

and… you’re done! You’ve now configured memcached with 1.5G of RAM and 10G of disk backed storage. We’ve also told it to allow flushing any objects larger than 512 bytes to disk.
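
Once it's running, you can confirm extstore is engaged by filtering its counters out of the stats output. The counter names below are only illustrative of the extstore_ prefix (the exact set varies by version); on a live instance, replace the here-doc with `echo "stats" | nc localhost 11211`:

```shell
# Filter extstore counters out of a stats dump. The here-doc stands in
# for live output; counter names shown are illustrative.
grep '^STAT extstore_' <<'EOF'
STAT curr_connections 10
STAT extstore_objects_written 1200
STAT extstore_bytes_written 39321600
STAT get_hits 531
EOF
```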

You can (and should!) experiment for your own use case. Perhaps you want to spread out the cache to more nodes for reliability reasons, perhaps your sessions are smaller (thus requiring more RAM), etc. There’s no hard rule.

Won’t the disk be slow?

Unlikely. Memcached doesn’t immediately flush large objects to disk: they stay in RAM as long as they can. In practice, the newest objects are also the “hottest”, being read and overwritten frequently. Even with 10x more disk than RAM, 9/10ths of your reads and writes could stick to RAM.

As an added bonus, when objects are overwritten, deleted, or expired by TTL, they don’t cause any disk IO. This should scale very well even for the limited IO available to a cloud instance.


With this walkthrough we were able to expand a theoretical website's session cache by 3x for half the cost. While this won’t cover all use cases, it’s quick to determine whether extstore fits yours, and cheap to experiment with.

More detailed documentation is available on the wiki.