Avoiding "mass evictions" in an LRU cache

I've implemented an LRU cache. The process to insert a new item looks like this:

  1. Check if there is enough space in the haystack. If yes, skip to step 4.
  2. Remove the least recently used item.
  3. Check if there is enough space. If not, repeat step 2.
  4. Insert the item into the free space.

Items are ordered effectively randomly in the haystack.
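The insert loop above can be sketched like this. Everything here is an illustrative assumption (the class name, the free-list-of-holes representation, the `insert` signature); the question doesn't specify an implementation:

```python
from collections import OrderedDict

class HaystackCache:
    """Sketch of the question's scheme: items live in one contiguous
    buffer (the "haystack"); insertion evicts LRU items until a
    contiguous hole big enough for the new item exists."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.free = [(0, capacity)]   # sorted list of (offset, length) holes
        self.lru = OrderedDict()      # key -> (offset, length), oldest first

    def _find_hole(self, size):
        for off, length in self.free:
            if length >= size:
                return off
        return None

    def _free_block(self, off, length):
        # return a block to the free list, coalescing adjacent holes
        self.free.append((off, length))
        self.free.sort()
        merged = []
        for o, l in self.free:
            if merged and merged[-1][0] + merged[-1][1] == o:
                merged[-1] = (merged[-1][0], merged[-1][1] + l)
            else:
                merged.append((o, l))
        self.free = merged

    def insert(self, key, size):
        # steps 1-3: evict LRU items until a large-enough hole appears
        while (off := self._find_hole(size)) is None:
            if not self.lru:
                raise MemoryError("item larger than the whole cache")
            _, (old_off, old_len) = self.lru.popitem(last=False)
            self._free_block(old_off, old_len)
        # step 4: carve the item out of the hole
        for i, (o, l) in enumerate(self.free):
            if o == off:
                if l == size:
                    del self.free[i]
                else:
                    self.free[i] = (o + size, l - size)
                break
        self.lru[key] = (off, size)
        return off
```

With a 100-byte cache, inserting items of size 40, 40, then 50 shows the effect: the third insert cannot fit in the remaining 20 bytes, and freeing only the oldest item still leaves no 50-byte hole, so eviction continues until both earlier items are gone and their holes coalesce.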

The problem arises when an item bigger than previous items needs to be inserted. This leads to a "mass eviction": the cache keeps evicting until, by chance, enough contiguous items have been freed to form a hole large enough for the new item.

This "mass eviction" often involves evicting tens of thousands of items.

What can be done to avoid or mitigate this "mass eviction"?

2 Answers: 

I'd look at giving every cache entry a weight that is proportional to both its size and its age.

(E.g. a really big item that is really old gets a big weight, while a really old but small item gets much less weight.)

Then evict based on weight. That favors evicting one large, moderately old item over evicting ten thousand smaller items that are older.
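One possible weighting that matches this idea is `size * age` (the answer only says "proportional to both", so the exact formula and the entry layout below are assumptions):

```python
def eviction_weight(size, age):
    # one possible weighting proportional to both size and age,
    # so a single huge, fairly old item outweighs many small, older ones
    return size * age

def pick_victim(entries, now):
    # entries: hypothetical dict of key -> (size, last_used_timestamp)
    return max(entries,
               key=lambda k: eviction_weight(entries[k][0], now - entries[k][1]))
```

For example, at time 100 a 1000-byte item last used at time 90 (weight 10000) is evicted before a 10-byte item last used at time 0 (weight 1000), even though the small item is much older.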

Perhaps you could look into buddy memory allocation, as used in e.g. the Linux kernel.

This way you'd presumably still evict more items than strictly necessary, but because the buddy method is good at avoiding fragmentation, I'd expect you to find a large enough block much sooner.
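A toy buddy allocator, to illustrate the splitting and buddy-merging that keeps free space coalesced (this is a generic sketch of the technique, not the kernel's implementation; all names are illustrative):

```python
class BuddyAllocator:
    """Toy buddy allocator: the total size is a power of two, and blocks
    are split into and merged with equal-size "buddies", so freed space
    coalesces back into large blocks quickly."""

    def __init__(self, total):
        assert total & (total - 1) == 0, "total must be a power of two"
        self.total = total
        self.free = {total: {0}}   # block size -> set of free offsets

    def alloc(self, size):
        # round the request up to a power of two
        block = 1
        while block < size:
            block *= 2
        # find the smallest free block that fits
        s = block
        while s <= self.total and not self.free.get(s):
            s *= 2
        if s > self.total:
            return None            # would trigger eviction in a cache
        off = self.free[s].pop()
        while s > block:           # split down, keeping each upper half free
            s //= 2
            self.free.setdefault(s, set()).add(off + s)
        return off

    def free_block(self, off, size):
        block = 1
        while block < size:
            block *= 2
        # merge with the buddy as long as the buddy is also free
        while block < self.total:
            buddy = off ^ block
            if buddy in self.free.get(block, set()):
                self.free[block].discard(buddy)
                off = min(off, buddy)
                block *= 2
            else:
                break
        self.free.setdefault(block, set()).add(off)
```

For example, in a 64-byte pool two 10-byte allocations each occupy a 16-byte block; freeing both merges the buddies back into the full 64-byte block, so the next large insert finds a big block immediately instead of hunting for contiguous evicted items.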