Cache Consistency: Memcached at Facebook


  • We provision hundreds of memcached servers in a cluster to reduce load on databases and other services. Items are distributed across the memcached servers through consistent hashing.

  • Three tiers: front-end servers, databases, and a pool of memcache servers.

  • Multiple regions, with full primary-secondary DB replication between them.

  • Multiple clusters within a region. Each cluster has its own FEs + MC servers; the DB is shared across the region.

  • A regional memcache pool, shared by all clusters, holds the less popular keys.

  • DB is sharded.

  • MC is also sharded, with consistent hashing.
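
The consistent-hashing sharding above can be sketched as a toy ring (server names and the virtual-node count are made up for illustration):

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    # Stable hash so every client maps keys to the same servers.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Toy consistent-hash ring: each server owns many points on the
    ring; a key belongs to the first server point clockwise from it."""

    def __init__(self, servers, vnodes=100):
        self.ring = []  # sorted list of (point, server)
        for server in servers:
            for i in range(vnodes):
                self.ring.append((_hash(f"{server}#{i}"), server))
        self.ring.sort()
        self.points = [p for p, _ in self.ring]

    def server_for(self, key: str) -> str:
        idx = bisect.bisect(self.points, _hash(key)) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing([f"mc{i}" for i in range(8)])
owner = ring.server_for("user:1234:profile")
```

The point of consistent hashing here: adding or removing one MC server remaps only a small fraction of keys, instead of reshuffling everything.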

  • Starting a new cluster is a problem because its cold cache temporarily shifts load onto the DB. They "cold start" new clusters by having them read from a warm cluster's cache until the new cache warms up.
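
The cold-start read path might look like this (a sketch; all parameter names are hypothetical):

```python
def get(key, cold_cache, warm_cache, db):
    """Cold-cluster warmup (sketch): fill the cold cluster's cache
    from a warm cluster's cache instead of hitting the DB on every miss."""
    value = cold_cache.get(key)
    if value is not None:
        return value
    value = warm_cache.get(key)  # borrow the warm cluster's copy
    if value is None:
        value = db.read(key)     # only fall through to the DB on a double miss
    cold_cache.set(key, value)   # warm the new cluster as a side effect
    return value
```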

  • One cluster can't be too big: the MC servers holding the most popular keys would be overloaded. With multiple smaller clusters, each cluster caches its own copy of popular keys, spreading that load.

  • "Thundering herd": a very popular key sits in MC and many FEs read it. Someone deletes the key, invalidating the cache, and suddenly every FE tries to read the DB at the same time. Not good.

    • They use "leases". On a cache miss, memcached gives the first caller a lease; other callers are told to wait and retry.

    • Only the holder of the lease is allowed to put the value back.
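
A minimal single-process sketch of the lease mechanism (class and method names are made up; real memcached does this server-side):

```python
import threading
import uuid

class LeasedCache:
    """Lease sketch: on a miss, the first caller gets a token and may
    fill the cache; concurrent callers get nothing and are expected to
    wait and retry, so only one of them hits the database."""

    def __init__(self):
        self.data = {}
        self.leases = {}
        self.lock = threading.Lock()

    def get(self, key):
        with self.lock:
            if key in self.data:
                return self.data[key], None  # hit: value, no lease needed
            if key in self.leases:
                return None, None            # miss, lease taken: wait and retry
            token = uuid.uuid4().hex
            self.leases[key] = token
            return None, token               # miss: caller should fetch from the DB

    def set(self, key, value, token):
        with self.lock:
            if self.leases.get(key) != token:
                return False                 # lease gone or revoked: reject the set
            del self.leases[key]
            self.data[key] = value
            return True

    def delete(self, key):
        with self.lock:
            self.data.pop(key, None)
            self.leases.pop(key, None)       # revoke any outstanding lease
```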

  • If an MC instance fails, the DB is exposed. The instance can be replaced automatically, but that takes a while.

    • There's a small pool of "Gutter" servers, idle unless an MC server fails.

    • If an MC server fails, requests for its keys are sent to a Gutter server instead.
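
A client-side sketch of the Gutter fallback (function and parameter names are hypothetical):

```python
def mc_get(key, primary, gutter, db):
    """Gutter fallback (sketch): if the key's primary memcached server
    is down, serve from the small pool of otherwise-idle Gutter servers
    so the DB is not exposed to the full read load."""
    try:
        return primary.get(key)
    except ConnectionError:
        value = gutter.get(key)
        if value is None:
            value = db.read(key)
            gutter.set(key, value, ttl=10)  # short TTL: gutter entries expire quickly
        return value
```

The short TTL matters because Gutter entries are not invalidated on writes the way regular MC entries are, so they must age out on their own.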

  • Consistency problem: there are many copies of the same data. The primary DB, each DB replica, many MCs... When a write comes in, all of these copies must be updated or invalidated.

  • Races can leave stale data in the cache, where it stays indefinitely. Also solved with leases: a cache miss returns a lease token that must accompany the later set, and a delete invalidates any outstanding lease, so the stale set is rejected.
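
The stale-set race can be shown concretely with a plain (lease-less) cache; with leases, the writer's delete would revoke the reader's token and the reader's late set would be rejected:

```python
# Stale-set race (sketch): reader R and writer W interleave so the
# cache ends up holding the old value with nothing left to evict it.
cache, db = {}, {"k": "v1"}

# 1. R misses the cache and reads the DB:
r_value = db["k"]          # R sees v1

# 2. Meanwhile W updates the DB and invalidates the cache:
db["k"] = "v2"
cache.pop("k", None)

# 3. R's delayed set now poisons the cache with stale data:
cache["k"] = r_value       # cache holds v1, DB holds v2, forever
```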

  • Caching here is not so much about reducing latency as about shielding a relatively slow DB from very high read load.