Caching is one of the most effective strategies for improving web application performance. Redis, with its in-memory data store, sub-millisecond latency, and rich data structure support, has become the de facto caching layer for modern applications. However, choosing the right caching pattern is just as important as choosing the right caching technology. A poorly implemented cache can introduce stale data, inconsistency, and operational headaches.
Cache-Aside (Lazy Loading)
Cache-aside is the most commonly used caching pattern. The application checks the cache first. On a cache miss, it fetches data from the database, stores it in Redis, and returns the result. On subsequent requests, the cached value is served directly.
async function getUser(userId: string): Promise<User | null> {
  const cacheKey = `user:${userId}`;

  // Check cache first
  const cached = await redis.get(cacheKey);
  if (cached) {
    return JSON.parse(cached);
  }

  // Cache miss: fetch from database
  const user = await db.users.findById(userId);

  // Only cache real results; caching a null would poison the key
  // (the stringified "null" is truthy and would be served forever)
  if (user) {
    await redis.set(cacheKey, JSON.stringify(user), 'EX', 3600);
  }
  return user;
}
The advantages of cache-aside are simplicity and resilience—if Redis goes down, the application falls back to the database. The downside is that the first request for any data always hits the database, and cache invalidation must be handled explicitly when the underlying data changes.
Write-Through Caching
In write-through caching, every write operation updates both the cache and the database synchronously. This ensures the cache is always consistent with the database but adds latency to write operations since both stores must be updated before the operation completes.
async function updateUser(userId: string, data: Partial<User>): Promise<User> {
  // Update the database first so the cache never holds unpersisted data
  const updated = await db.users.update(userId, data);

  // Update the cache immediately
  const cacheKey = `user:${userId}`;
  await redis.set(cacheKey, JSON.stringify(updated), 'EX', 3600);
  return updated;
}
Write-through works well when read-after-write consistency is critical, such as user profile updates where the user expects to see their changes immediately.
Write-Behind (Write-Back) Caching
Write-behind caching writes to the cache first and asynchronously flushes changes to the database. This pattern significantly improves write performance since the application does not wait for the database. However, it introduces the risk of data loss if Redis crashes before the data is persisted. Implementing a reliable write-behind strategy requires a durable queue or Redis Streams to buffer pending writes.
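The buffering described above can be sketched without a real Redis or database connection. In this illustrative sketch, in-memory Maps stand in for Redis and the database, and a plain array stands in for the durable queue (in production this would be Redis Streams or a message broker); all names are hypothetical.

```typescript
// Write-behind sketch: writes land in the cache and a pending queue
// immediately; a background flusher persists them to the database later.
type Pending = { key: string; value: string };

const cache = new Map<string, string>();     // stands in for Redis
const database = new Map<string, string>();  // stands in for the DB
const writeQueue: Pending[] = [];            // stands in for a durable queue

function writeBehind(key: string, value: string): void {
  cache.set(key, value);            // fast path: cache write only
  writeQueue.push({ key, value });  // buffer the write for later persistence
}

async function flushPendingWrites(): Promise<number> {
  let flushed = 0;
  while (writeQueue.length > 0) {
    const { key, value } = writeQueue.shift()!;
    database.set(key, value);       // in production: the slow DB write
    flushed++;
  }
  return flushed;
}
```

The window between `writeBehind` returning and `flushPendingWrites` running is exactly the data-loss exposure the pattern trades for write speed, which is why the queue itself must be durable.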
Cache Invalidation Strategies
Cache invalidation is famously one of the two hard problems in computer science. Common strategies include:
- Time-based expiration (TTL): Set a time-to-live on every cached entry. Simple but allows stale reads until TTL expires.
- Event-based invalidation: Invalidate cache entries when the underlying data changes. This can be triggered by application events or database change data capture (CDC) streams.
- Versioned keys: Append a version number to cache keys. When data changes, increment the version so old cache entries are naturally bypassed.
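The versioned-keys strategy can be sketched as follows. A small version key per entity type drives key construction, so bumping the version orphans every old entry; a Map stands in for Redis here, and the key naming scheme is illustrative.

```typescript
// Versioned-key sketch: readers build cache keys from the current version,
// so incrementing the version makes old entries unreachable (they simply
// age out via TTL rather than being deleted explicitly).
const store = new Map<string, string>(); // stands in for Redis

function currentVersion(entity: string): number {
  return Number(store.get(`${entity}:version`) ?? '1');
}

function versionedKey(entity: string, id: string): string {
  return `${entity}:v${currentVersion(entity)}:${id}`;
}

function bumpVersion(entity: string): void {
  // In Redis this would be a single INCR on the version key
  store.set(`${entity}:version`, String(currentVersion(entity) + 1));
}
```

The appeal of this scheme is that invalidation becomes a single counter increment, at the cost of temporarily holding both old and new entries in memory until the old ones expire.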
Advanced Redis Data Structures for Caching
Redis is not just a key-value store. Its data structures enable sophisticated caching patterns:
- Hashes: Store object fields individually, allowing partial reads and updates without deserializing the entire object.
- Sorted Sets: Implement leaderboards, rate limiting windows, and time-series data with automatic ordering.
- HyperLogLog: Count unique visitors or events with constant memory usage, regardless of cardinality.
- Redis Streams: Build event logs, message queues, and consumer group patterns for processing pipelines.
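To make the sorted-set entry above concrete, here is a sketch of a sliding-window rate limiter using sorted-set semantics: each request is scored by its timestamp, stale entries are trimmed, and the remaining count is compared to the limit. Against real Redis this maps to ZADD, ZREMRANGEBYSCORE, and ZCARD; an array stands in for the sorted set here so the sketch is self-contained.

```typescript
// Sliding-window rate limiter sketch using sorted-set semantics.
const windows = new Map<string, number[]>(); // key -> request timestamps (ms)

function allowRequest(key: string, limit: number, windowMs: number, now: number): boolean {
  const scores = windows.get(key) ?? [];
  // Trim entries outside the window (ZREMRANGEBYSCORE key 0 now-windowMs)
  const fresh = scores.filter((t) => t > now - windowMs);
  if (fresh.length >= limit) {
    windows.set(key, fresh);  // at the limit (ZCARD >= limit): reject
    return false;
  }
  fresh.push(now);            // record this request (ZADD key now member)
  windows.set(key, fresh);
  return true;
}
```

In production the trim, count, and add should run atomically (a Lua script or MULTI/EXEC) so concurrent requests cannot slip past the limit between steps.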
Using Hashes for Partial Object Caching
// Store user fields as a hash
await redis.hset('user:123', {
  name: 'Alice',
  email: '[email protected]',
  role: 'admin'
});

// Read only the fields you need
const email = await redis.hget('user:123', 'email');

// Update a single field without touching the others
await redis.hset('user:123', 'role', 'superadmin');
Cache Stampede Prevention
When a popular cache entry expires, hundreds of concurrent requests may all experience a cache miss simultaneously and hit the database, causing a stampede. Mitigation strategies include locking (only one request fetches from the database while others wait), probabilistic early expiration (refreshing entries before TTL expires based on a probability function), and stale-while-revalidate (serving the stale value while asynchronously refreshing it in the background).
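The locking strategy above can be sketched as follows: on a miss, only the caller that wins a short-lived lock (SET key NX PX against real Redis) rebuilds the entry, while losers wait briefly and re-read the cache. In-memory structures stand in for Redis, and `loadFromDb` is a hypothetical slow loader used to make the effect observable.

```typescript
// Lock-based stampede prevention sketch.
const cacheStore = new Map<string, string>();
const locks = new Set<string>();
let dbLoads = 0; // counts how often the "database" was actually hit

async function loadFromDb(key: string): Promise<string> {
  dbLoads++;
  await new Promise((r) => setTimeout(r, 20)); // simulate a slow query
  return `value-for-${key}`;
}

async function getWithLock(key: string): Promise<string> {
  for (;;) {
    const hit = cacheStore.get(key);
    if (hit !== undefined) return hit;          // cache hit: done
    if (!locks.has(key)) {                      // try to acquire the lock (SET NX)
      locks.add(key);
      try {
        const value = await loadFromDb(key);    // only the lock holder hits the DB
        cacheStore.set(key, value);
        return value;
      } finally {
        locks.delete(key);                      // release even if the load fails
      }
    }
    await new Promise((r) => setTimeout(r, 5)); // lost the race: wait and retry
  }
}
```

With real Redis the lock must carry a TTL so a crashed holder cannot block rebuilds forever, and releasing it safely requires checking lock ownership (typically a compare-and-delete Lua script).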
Redis caching, when implemented thoughtfully, can reduce database load by over 90% and cut response times from hundreds of milliseconds to single-digit milliseconds. The key is selecting the right pattern for each use case and planning for invalidation from the start.