submitted 1 day ago* (last edited 1 day ago) by papercache@programming.dev to c/rust@programming.dev

Hi everyone, I recently created an in-memory cache in Rust called PaperCache. It's the first in-memory cache that can switch between eviction policies at runtime, allowing it to reduce its miss ratio by adapting to changing workloads. It currently supports the following eviction policies:

  • LFU
  • FIFO
  • CLOCK
  • SIEVE
  • LRU
  • MRU
  • 2Q
  • ARC
  • S3-FIFO

It typically has lower tail latencies than Redis, though at the cost of higher memory overhead: it has to maintain extra per-entry metadata to be able to switch between policies at runtime.

Feel free to check out the website (https://papercache.io/), which has documentation; a high-level article I wrote on Kudos (https://link.growkudos.com/1f039cqaqrk); or the paper from HotStorage '25 (https://dl.acm.org/doi/abs/10.1145/3736548.3737836).

Here's a direct link to the cache internals: https://github.com/PaperCache/paper-cache

In case you want to test it out, you can find installation instructions here: https://papercache.io/guide/getting-started/installation

There are clients for most of the popular programming languages (https://papercache.io/guide/usage/clients), though some may be a little unpolished (I mainly use the Rust client for my own work, so that one is kept up-to-date).

If you have any feedback, please let me know!

[-] SubArcticTundra@lemmy.ml 4 points 1 day ago
[-] papercache@programming.dev 9 points 1 day ago

So if you consider a database that has billions of rows and typically resides on a relatively slow SSD/HDD, retrieving any piece of data can take a while. An in-memory cache is a small mini-database that saves just the most important data from the database and holds it in memory (DRAM), where accessing it is a lot faster.

The cache has to be small because DRAM is much more expensive than an SSD/HDD, so it can only hold a small subset of the database's data, up to a set capacity. So what happens if you want to add new data to the cache but it's already full? Well, you have to select some data already in the cache to evict, to make room for the new data. The way you select that data is called the eviction policy.

The most common eviction policy is least recently used (LRU), which evicts the piece of data that was accessed least recently. A simple way to implement this is to keep all the data in the cache in LRU order, so it's easy to select the data to evict. But other eviction policies can sometimes outperform LRU, and you can't easily switch to them because the data is already organized in LRU order.
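To make that concrete, here's a minimal LRU sketch (purely illustrative, not PaperCache's actual implementation): a `HashMap` for lookups plus a `VecDeque` of keys kept in recency order. Real caches use an intrusive doubly linked list so the reorder on every access is O(1) instead of the O(n) `retain` used here.

```rust
use std::collections::{HashMap, VecDeque};

struct LruCache {
    capacity: usize,
    map: HashMap<u64, String>,
    order: VecDeque<u64>, // front = least recently used, back = most recent
}

impl LruCache {
    fn new(capacity: usize) -> Self {
        Self { capacity, map: HashMap::new(), order: VecDeque::new() }
    }

    fn get(&mut self, key: u64) -> Option<&String> {
        if self.map.contains_key(&key) {
            // Move the key to the back: it's now the most recently used.
            self.order.retain(|&k| k != key);
            self.order.push_back(key);
            self.map.get(&key)
        } else {
            None
        }
    }

    fn set(&mut self, key: u64, value: String) {
        if self.map.contains_key(&key) {
            self.order.retain(|&k| k != key);
        } else if self.map.len() == self.capacity {
            // Cache is full: evict the least recently used entry.
            if let Some(lru) = self.order.pop_front() {
                self.map.remove(&lru);
            }
        }
        self.order.push_back(key);
        self.map.insert(key, value);
    }
}
```

Note how the eviction decision falls straight out of the ordering: the victim is always whatever sits at the front of the queue.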

That's the problem PaperCache solves: it monitors which eviction policy currently performs best, and it can switch between policies at runtime, reordering the data as necessary.
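One way to sketch the switching idea (again hypothetical, not PaperCache's real internals): if every entry carries the metadata that several policies need, such as an insertion timestamp and a last-access timestamp, then changing the active policy is just changing how the eviction victim is chosen. That per-entry metadata is also the memory overhead mentioned in the post.

```rust
use std::collections::HashMap;

#[derive(Clone, Copy)]
enum Policy {
    Lru,  // evict the entry with the oldest last access
    Fifo, // evict the entry inserted earliest
    Mru,  // evict the entry with the newest last access
}

struct Entry {
    value: String,
    inserted_at: u64,
    last_access: u64,
}

struct SwitchableCache {
    policy: Policy,
    clock: u64, // logical clock, incremented on every operation
    capacity: usize,
    entries: HashMap<u64, Entry>,
}

impl SwitchableCache {
    fn new(capacity: usize, policy: Policy) -> Self {
        Self { policy, clock: 0, capacity, entries: HashMap::new() }
    }

    // Switching is just a field update here, because every entry already
    // carries the metadata all three policies need.
    fn set_policy(&mut self, policy: Policy) {
        self.policy = policy;
    }

    // Lower score = evicted first, under the currently active policy.
    fn score(&self, e: &Entry) -> u64 {
        match self.policy {
            Policy::Lru => e.last_access,
            Policy::Fifo => e.inserted_at,
            Policy::Mru => u64::MAX - e.last_access,
        }
    }

    fn victim(&self) -> Option<u64> {
        self.entries
            .iter()
            .min_by_key(|(_, e)| self.score(e))
            .map(|(&k, _)| k)
    }

    fn get(&mut self, key: u64) -> Option<&String> {
        self.clock += 1;
        let clock = self.clock;
        match self.entries.get_mut(&key) {
            Some(e) => {
                e.last_access = clock;
                Some(&e.value)
            }
            None => None,
        }
    }

    fn set(&mut self, key: u64, value: String) {
        self.clock += 1;
        if !self.entries.contains_key(&key) && self.entries.len() == self.capacity {
            if let Some(v) = self.victim() {
                self.entries.remove(&v);
            }
        }
        let now = self.clock;
        self.entries.insert(key, Entry { value, inserted_at: now, last_access: now });
    }
}
```

A scan over all entries on every eviction is obviously slow; a real implementation keeps per-policy index structures instead, which is exactly where the extra bookkeeping cost comes from.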

Hope that helps!

[-] SubArcticTundra@lemmy.ml 2 points 1 day ago

Great explanation ❤ Sounds useful!

this post was submitted on 07 Aug 2025
46 points (100.0% liked)