
Caching Strategies + Redis: Make Your Application Lightning Fast

A comprehensive guide to caching strategies and Redis for Java developers — covering Spring Cache annotations, Redis data structures, multi-tier caching (Caffeine + Redis + Database), cache patterns (Cache-Aside, Write-Through, Write-Behind, Read-Through), TTL, eviction policies, and production-ready Spring Boot configuration.

April 8, 2026 · 15 min read

Why Caching Matters

Imagine you are studying for a test. Every time you need to look up a fact, you walk all the way to the library, find the right shelf, pull out the book, flip to the right page, read the answer, then walk all the way back. That is what your application does when it hits the database for every single request.

Now imagine you keep your favorite book right on your desk. Need an answer? Just glance down. That is caching. It keeps frequently used data close by so your application does not waste time fetching it again and again.

Cache is like a cheat sheet during an exam — quick answers without looking through the whole textbook.

Without caching, a popular product page might query the database 10,000 times per minute. With caching, it queries once, stores the result, and serves the next 9,999 requests from memory. The difference? Database query: ~5-50 milliseconds. Cache read: ~0.1-1 millisecond. That is 50x to 500x faster.
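The speedup depends on the cache hit ratio, not just the raw latencies. A back-of-the-envelope calculation, using assumed numbers consistent with the ranges above (20 ms for a database query, 0.5 ms for a cache read, and a 99% hit ratio like the 9,999-of-10,000 example):

```java
// Average request latency with and without a cache.
// All numbers here are illustrative assumptions, not measurements.
public class CacheSpeedup {
    public static void main(String[] args) {
        double dbMs = 20.0;      // assumed average DB query latency
        double cacheMs = 0.5;    // assumed average cache read latency
        double hitRatio = 0.99;  // 9,999 of 10,000 requests served from cache

        double withoutCache = dbMs;
        double withCache = hitRatio * cacheMs + (1 - hitRatio) * dbMs;

        System.out.printf("Avg latency without cache: %.2f ms%n", withoutCache);
        System.out.printf("Avg latency with cache:    %.3f ms%n", withCache);
        System.out.printf("Speedup: ~%.0fx%n", withoutCache / withCache);
        // Speedup: ~29x
    }
}
```

Note how even a 1% miss rate drags the average up: the blended latency (0.695 ms) is dominated by the occasional database trip, which is why a high hit ratio matters so much.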

Spring Cache Annotations

Spring Boot makes caching easy with four annotations. Think of them as simple labels you stick on your methods to tell Spring: "Hey, remember this result."

@Cacheable — "Remember this answer"

The first time someone asks for a product, Spring fetches it from the database. Every time after that, Spring returns the saved answer without touching the database. Like writing the answer on a sticky note.

@Service
public class ProductService {

    @Cacheable(value = "products", key = "#productId")
    public Product getProductById(Long productId) {
        // This only runs the FIRST time for each productId.
        // After that, Spring returns the cached result.
        log.info("Fetching product {} from database...", productId);
        return productRepository.findById(productId)
            .orElseThrow(() -> new ProductNotFoundException(productId));
    }

    // Conditional caching: only cache valid ids, and skip results priced under 10
    @Cacheable(value = "products", key = "#productId",
               condition = "#productId > 0",   // checked BEFORE the method runs
               unless = "#result.price < 10")  // checked AFTER, against the result
    public Product getExpensiveProduct(Long productId) {
        return productRepository.findById(productId).orElseThrow();
    }
}

@CachePut — "Update the sticky note"

When you update a product, you want the cache to have the latest version. @CachePut always runs the method AND updates the cache. Think of it as erasing the old sticky note and writing a new one.

@CachePut(value = "products", key = "#product.id")
public Product updateProduct(Product product) {
    // This ALWAYS runs — and the result replaces what is in the cache.
    log.info("Updating product {} in database and cache", product.getId());
    return productRepository.save(product);
}

@CacheEvict — "Throw away the sticky note"

When data is deleted or you are not sure what changed, just remove it from the cache. Next time someone asks, Spring will fetch fresh data from the database.

// Remove one product from cache
@CacheEvict(value = "products", key = "#productId")
public void deleteProduct(Long productId) {
    productRepository.deleteById(productId);
}

// Nuclear option: clear the ENTIRE cache
@CacheEvict(value = "products", allEntries = true)
public void refreshAllProducts() {
    log.info("Cleared entire products cache");
}

@Caching — "Do multiple cache things at once"

Sometimes one method affects multiple caches. @Caching lets you combine multiple cache operations.

@Caching(
    put = { @CachePut(value = "products", key = "#product.id") },
    evict = {
        @CacheEvict(value = "productList", allEntries = true),
        @CacheEvict(value = "productSearch", allEntries = true)
    }
)
public Product saveProduct(Product product) {
    return productRepository.save(product);
}

Redis Basics — Your Super-Fast Notebook

Redis stands for REmote DIctionary Server. Think of it as a giant notebook that lives in your computer's memory (RAM) instead of on a hard drive. Reading from RAM is like reading a sign right in front of you. Reading from a hard drive is like driving to another city to read the same sign.

Redis is an in-memory key-value store. You give it a name (key) and a value, and it remembers. Simple as a dictionary: look up the word, get the meaning.

# Basic Redis commands — like a super-fast dictionary
SET product:101 "{\"name\": \"Wireless Mouse\", \"price\": 29.99}"
GET product:101
# Returns: {"name": "Wireless Mouse", "price": 29.99}

DEL product:101
# Poof — gone from the notebook

# Set with expiration (TTL) — auto-delete after 300 seconds
SET weather:nyc "72°F, Sunny" EX 300

# Check how long until it expires
TTL weather:nyc
# Returns: 295 (seconds remaining)

Redis Data Structures — More Than Just Key-Value

Redis is not just a simple dictionary. It has five powerful data structures, each perfect for different jobs. Think of them as five different types of containers in your kitchen.

1. String — The Simple Jar

Stores a single value. Perfect for counters, simple values, or serialized objects.

# Simple value
SET user:profile:42 "{\"name\": \"Alice\", \"email\": \"alice@example.com\"}"

# Counter — like a tally counter
INCR page:views:homepage      # 1
INCR page:views:homepage      # 2
INCRBY page:views:homepage 10 # 12

2. Hash — The Filing Cabinet

Like a mini-dictionary inside a key. Perfect for storing objects with multiple fields without serializing to JSON.

# Store a user profile as a hash
HSET user:42 name "Alice" email "alice@example.com" age "30"

# Get one field
HGET user:42 name          # "Alice"

# Get all fields
HGETALL user:42            # name, Alice, email, alice@example.com, age, 30

# Update just one field — no need to rewrite the whole object
HSET user:42 age "31"

3. List — The Queue Line

An ordered list of values. Perfect for message queues, activity feeds, and recent items.

# Add to the left (newest first) — like people joining a line
LPUSH recent:searches "wireless mouse"
LPUSH recent:searches "bluetooth keyboard"
LPUSH recent:searches "usb hub"

# Get the 5 most recent searches
LRANGE recent:searches 0 4
# Returns: usb hub, bluetooth keyboard, wireless mouse

# Remove from the right (oldest first) — process the queue
RPOP recent:searches       # "wireless mouse"

4. Set — The Unique Stamp Collection

A collection where every item is unique. No duplicates allowed. Perfect for tags, unique visitors, and "who liked this."

# Track unique visitors to a page
SADD visitors:homepage "user:1" "user:2" "user:3"
SADD visitors:homepage "user:1"    # ignored — already there!

# How many unique visitors?
SCARD visitors:homepage            # 3

# Find users who visited BOTH pages (intersection)
SADD visitors:about "user:2" "user:3" "user:4"
SINTER visitors:homepage visitors:about
# Returns: user:2, user:3

5. Sorted Set — The Leaderboard

Like a Set, but every item has a score. Items are automatically sorted by score. Perfect for leaderboards, ranking, and top-N queries.

# Movie ratings leaderboard
ZADD movie:ratings 9.3 "The Shawshank Redemption"
ZADD movie:ratings 9.2 "The Godfather"
ZADD movie:ratings 8.8 "The Dark Knight"
ZADD movie:ratings 9.0 "Pulp Fiction"

# Top 3 movies (highest score first)
ZREVRANGE movie:ratings 0 2 WITHSCORES
# 1) The Shawshank Redemption — 9.3
# 2) The Godfather — 9.2
# 3) Pulp Fiction — 9.0

# What rank is "The Dark Knight"?
ZREVRANK movie:ratings "The Dark Knight"   # 3 (0-indexed)

Multi-Tier Caching — The Three-Level Defense

Think of it like this: L1 is the answer on the tip of your tongue (fastest). L2 is the cheat sheet in your pocket (fast). L3 is the textbook in your backpack (slow but has everything).

Level | Technology             | Speed    | Capacity            | Where
L1    | Caffeine (JVM heap)    | ~1 ns    | Small (100s of MBs) | Inside your app
L2    | Redis                  | ~0.5 ms  | Medium (GBs)        | Separate server
L3    | Database (PostgreSQL)  | ~5-50 ms | Large (TBs)         | Disk storage

When your app needs data, it checks L1 first. Not there? Check L2. Still not there? Go to L3 (database), then store the result in L2 and L1 for next time.

@Service
public class MultiTierCacheService {

    private final Cache<String, Product> localCache; // L1: Caffeine
    private final RedisTemplate<String, Product> redisTemplate; // L2: Redis
    private final ProductRepository productRepository; // L3: Database

    public MultiTierCacheService(RedisTemplate<String, Product> redisTemplate,
                                  ProductRepository productRepository) {
        this.redisTemplate = redisTemplate;
        this.productRepository = productRepository;

        // L1: Caffeine — holds 1,000 items, expires after 5 minutes
        this.localCache = Caffeine.newBuilder()
            .maximumSize(1_000)
            .expireAfterWrite(Duration.ofMinutes(5))
            .build();
    }

    public Product getProduct(Long productId) {
        String cacheKey = "product:" + productId;

        // Step 1: Check L1 (Caffeine — in JVM memory)
        Product product = localCache.getIfPresent(cacheKey);
        if (product != null) {
            log.debug("L1 HIT for {}", productId);
            return product;
        }

        // Step 2: Check L2 (Redis — separate server)
        product = redisTemplate.opsForValue().get(cacheKey);
        if (product != null) {
            log.debug("L2 HIT for {}", productId);
            localCache.put(cacheKey, product); // Store in L1 for next time
            return product;
        }

        // Step 3: Fetch from L3 (Database — the source of truth)
        log.debug("L3 FETCH for {}", productId);
        product = productRepository.findById(productId).orElseThrow();

        // Store in both L2 and L1
        redisTemplate.opsForValue().set(cacheKey, product,
            Duration.ofMinutes(30));
        localCache.put(cacheKey, product);

        return product;
    }
}

Cache Patterns — Four Ways to Use Your Cache

There are four main strategies for how your application talks to the cache and database. Each has trade-offs. Pick the right one based on your needs.

1. Cache-Aside (Lazy Loading) — "Only cache what people actually ask for"

The application manages the cache itself. On a read, check the cache first. If it is a miss, fetch from the database and store in the cache. This is the most common pattern.

Analogy: You only write a phone number on your sticky note after someone calls and you had to look it up. If nobody ever asks for Uncle Bob's number, it never goes on the sticky note.

@Service
public class UserProfileService {

    private final RedisTemplate<String, UserProfile> redis;
    private final UserRepository userRepository;

    // Cache-Aside: Read
    public UserProfile getUserProfile(Long userId) {
        String key = "user:profile:" + userId;

        // 1. Check cache
        UserProfile cached = redis.opsForValue().get(key);
        if (cached != null) return cached;

        // 2. Cache miss — fetch from DB
        UserProfile profile = userRepository.findById(userId).orElseThrow();

        // 3. Store in cache for next time (TTL: 1 hour)
        redis.opsForValue().set(key, profile, Duration.ofHours(1));
        return profile;
    }

    // Cache-Aside: Write (invalidate, not update)
    public UserProfile updateProfile(Long userId, UserProfile updated) {
        UserProfile saved = userRepository.save(updated);
        redis.delete("user:profile:" + userId); // Invalidate — next read will re-cache
        return saved;
    }
}

Pros: Only caches data that is actually requested. Simple to understand. Cache failure does not break the app.

Cons: First request for each item is always slow (cache miss). Possible stale data if database is updated outside your app.

2. Write-Through — "Always update the cache AND database together"

Every write goes to the cache AND the database at the same time. The cache is always in sync with the database.

Analogy: Every time you update your address, you update both your phone contacts AND the paper address book at the same time.

@Service
public class WriteThroughProductService {

    private final RedisTemplate<String, Product> redis;
    private final ProductRepository productRepository;

    public Product saveProduct(Product product) {
        // Write to database FIRST
        Product saved = productRepository.save(product);

        // Then write to cache — cache is always up to date
        String key = "product:" + saved.getId();
        redis.opsForValue().set(key, saved, Duration.ofHours(2));

        return saved;
    }

    public Product getProduct(Long productId) {
        String key = "product:" + productId;
        // Cache is always fresh because every write updates it
        Product cached = redis.opsForValue().get(key);
        if (cached != null) return cached;

        // Fallback to DB (only needed for first load or after cache eviction)
        Product product = productRepository.findById(productId).orElseThrow();
        redis.opsForValue().set(key, product, Duration.ofHours(2));
        return product;
    }
}

Pros: Cache is always consistent with the database. No stale data.

Cons: Writes are slower (two writes instead of one). Caches data that may never be read.

3. Write-Behind (Write-Back) — "Update the cache now, database later"

Writes go to the cache immediately, and the cache writes to the database in the background (batched). Ultra-fast writes, but more complex.

Analogy: You jot notes on a whiteboard all day, then copy everything to your notebook at the end of the day.

@Service
public class WriteBehindService {

    private final RedisTemplate<String, Product> redis;
    private final ProductRepository productRepository;
    private final Queue<Product> writeQueue = new ConcurrentLinkedQueue<>();

    // Write goes to cache immediately — super fast
    public Product saveProduct(Product product) {
        String key = "product:" + product.getId();
        redis.opsForValue().set(key, product);

        // Queue the database write for later
        writeQueue.add(product);
        return product;
    }

    // Background job flushes the queue to the database every 5 seconds
    @Scheduled(fixedRate = 5000)
    public void flushToDatabase() {
        List<Product> batch = new ArrayList<>();
        Product item;
        while ((item = writeQueue.poll()) != null) {
            batch.add(item);
        }
        if (!batch.isEmpty()) {
            productRepository.saveAll(batch);
            log.info("Flushed {} products to database", batch.size());
        }
    }
}

Pros: Extremely fast writes. Batching reduces database load.

Cons: Risk of data loss if the cache crashes before flushing. More complex to implement.

4. Read-Through — "The cache fetches from the database for you"

The application only talks to the cache. If the cache does not have the data, the cache itself goes to the database. The application never touches the database directly.

Analogy: You ask your assistant for a file. If the assistant does not have it, they go to the filing cabinet, get it, keep a copy, and give it to you. You never go to the filing cabinet yourself.

// With Caffeine's LoadingCache — it fetches on miss automatically
@Configuration
public class ReadThroughConfig {

    @Bean
    public LoadingCache<Long, Product> productCache(ProductRepository repo) {
        return Caffeine.newBuilder()
            .maximumSize(5_000)
            .expireAfterWrite(Duration.ofMinutes(15))
            .build(productId -> repo.findById(productId).orElse(null));
            // ^ This loader runs automatically on cache miss
    }
}

@Service
public class ReadThroughProductService {

    private final LoadingCache<Long, Product> productCache;

    public Product getProduct(Long productId) {
        // Just ask the cache — it handles everything
        return productCache.get(productId);
        // Cache HIT? Returns from memory.
        // Cache MISS? Calls the loader, stores result, returns it.
    }
}

Pros: Application code is super simple. Cache handles all the fetching logic.

Cons: Tightly couples the cache to the data source. Harder to debug when things go wrong.

TTL and Eviction Policies — When to Throw Away Old Data

Caches cannot grow forever. You need rules for when to remove old data. Think of your desk: if you pile on papers forever, you cannot find anything. You need rules for what to keep and what to toss.

TTL (Time To Live) — "This sticky note expires in 10 minutes"

Set how long data stays in the cache. After the time is up, it disappears automatically.

# Redis: key expires after 3600 seconds (1 hour)
SET weather:london "15°C, Cloudy" EX 3600

# Check remaining time
TTL weather:london    # 3542 (seconds left)

// Java: set TTL with RedisTemplate
redis.opsForValue().set("weather:london", weatherData,
    Duration.ofHours(1));

// Caffeine: expire after write
Caffeine.newBuilder()
    .expireAfterWrite(Duration.ofMinutes(10))
    .build();

Eviction Policies — "The desk is full. What do we remove?"

When the cache is full, Redis needs to decide what to remove. Redis supports these eviction policies:

Policy       | What It Does                         | Best For
allkeys-lru  | Remove the least recently used key   | General purpose (recommended)
allkeys-lfu  | Remove the least frequently used key | When some keys are always popular
volatile-lru | LRU, but only keys with a TTL set    | Mix of persistent and cached data
volatile-ttl | Remove keys closest to expiring      | When TTL reflects importance
noeviction   | Return errors when memory is full    | When data loss is unacceptable

# Set eviction policy in redis.conf
maxmemory 256mb
maxmemory-policy allkeys-lru
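To see what allkeys-lru means mechanically, here is a toy in-process LRU cache built on java.util.LinkedHashMap. This is a sketch of the concept, not how Redis implements it (Redis uses an approximated LRU based on sampling):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A toy LRU cache: when capacity is exceeded,
// the least recently *accessed* entry is dropped.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder=true: get() refreshes recency
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict when over capacity
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");          // "a" is now the most recently used
        cache.put("c", "3");     // evicts "b", the least recently used
        System.out.println(cache.keySet()); // [a, c]
    }
}
```

Notice that reading "a" saved it from eviction: that is exactly the behavior that makes LRU a good default for "keep what people keep asking for."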

Cache Invalidation Strategies — The Hardest Problem in Computer Science

There are only two hard things in Computer Science: cache invalidation and naming things. Here is how to handle the first one.

1. Time-Based (TTL)

The simplest approach. Data expires after a set time. Works great when slightly stale data is acceptable (weather, product listings, blog posts).

@Cacheable(value = "weatherForecast", key = "#city")
public Weather getWeather(String city) {
    return weatherApi.fetchForecast(city);
}

// In application.yml — set TTL per cache
spring:
  cache:
    redis:
      time-to-live: 600000   # 10 minutes in milliseconds

2. Event-Based (Publish/Subscribe)

When data changes, publish an event. All interested services listen and invalidate their caches.

// When a product is updated, publish an event
@Service
public class ProductUpdatePublisher {

    private final RedisTemplate<String, String> redis;

    public void publishProductUpdate(Long productId) {
        redis.convertAndSend("product:updates",
            String.valueOf(productId));
    }
}

// Listener in another service — clears its local cache
@Component
public class ProductCacheListener {

    private final Cache<Long, Product> localCache;

    @EventListener
    public void onProductUpdate(ProductUpdateEvent event) {
        localCache.invalidate(event.getProductId());
        log.info("Cache invalidated for product {}", event.getProductId());
    }
}

3. Version-Based

Include a version number in the cache key. When data changes, bump the version. Old keys naturally expire via TTL.

@Service
public class VersionedCacheService {

    public Product getProduct(Long productId) {
        int version = getCurrentVersion(productId);
        String key = "product:" + productId + ":v" + version;

        Product cached = redis.opsForValue().get(key);
        if (cached != null) return cached;

        Product product = productRepository.findById(productId).orElseThrow();
        redis.opsForValue().set(key, product, Duration.ofHours(1));
        return product;
    }

    public void updateProduct(Long productId, Product updated) {
        productRepository.save(updated);
        incrementVersion(productId); // Old cache key is now orphaned — TTL cleans it up
    }
}

Spring Boot + Redis Configuration

Here is a complete, production-ready setup for Spring Boot with Redis caching.

Step 1: Add Dependencies

<!-- pom.xml -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<dependency>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>caffeine</artifactId>
</dependency>

Step 2: Configure application.yml

spring:
  data:
    redis:
      host: localhost
      port: 6379
      password: ${REDIS_PASSWORD:}
      timeout: 2000ms
      lettuce:
        pool:
          max-active: 16
          max-idle: 8
          min-idle: 2
          max-wait: 500ms
        shutdown-timeout: 200ms

  cache:
    type: redis
    redis:
      time-to-live: 3600000        # 1 hour default TTL
      cache-null-values: false     # Do not cache null results
      key-prefix: "myapp:"        # Prefix all keys
      use-key-prefix: true

logging:
  level:
    org.springframework.cache: TRACE   # Spring logs cache HIT/MISS at TRACE level

Step 3: Redis Configuration Class

@Configuration
@EnableCaching
public class RedisConfig {

    @Bean
    public RedisCacheManager cacheManager(RedisConnectionFactory factory) {

        // Default config: 1 hour TTL, JSON serialization
        RedisCacheConfiguration defaultConfig = RedisCacheConfiguration
            .defaultCacheConfig()
            .entryTtl(Duration.ofHours(1))
            .serializeKeysWith(
                SerializationPair.fromSerializer(new StringRedisSerializer()))
            .serializeValuesWith(
                SerializationPair.fromSerializer(
                    new GenericJackson2JsonRedisSerializer()))
            .disableCachingNullValues();

        // Custom TTLs per cache name
        Map<String, RedisCacheConfiguration> perCacheConfig = Map.of(
            "products",        defaultConfig.entryTtl(Duration.ofMinutes(30)),
            "userProfiles",    defaultConfig.entryTtl(Duration.ofHours(2)),
            "weatherForecast", defaultConfig.entryTtl(Duration.ofMinutes(10)),
            "movieRatings",    defaultConfig.entryTtl(Duration.ofHours(6))
        );

        return RedisCacheManager.builder(factory)
            .cacheDefaults(defaultConfig)
            .withInitialCacheConfigurations(perCacheConfig)
            .transactionAware()
            .build();
    }

    @Bean
    public RedisTemplate<String, Object> redisTemplate(
            RedisConnectionFactory factory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new GenericJackson2JsonRedisSerializer());
        template.setHashKeySerializer(new StringRedisSerializer());
        template.setHashValueSerializer(new GenericJackson2JsonRedisSerializer());
        template.afterPropertiesSet();
        return template;
    }
}

Step 4: Complete Service Example

@Service
@CacheConfig(cacheNames = "products") // Default cache name for this class
public class ProductCatalogService {

    private final ProductRepository productRepository;

    @Cacheable(key = "#productId")
    public Product findById(Long productId) {
        return productRepository.findById(productId)
            .orElseThrow(() -> new NotFoundException("Product not found"));
    }

    @Cacheable(key = "'all:' + #pageable.pageNumber + ':' + #pageable.pageSize")
    public Page<Product> findAll(Pageable pageable) {
        // Note: Page implementations need Jackson hints (or a DTO wrapper)
        // to round-trip cleanly through JSON serialization.
        return productRepository.findAll(pageable);
    }

    @CachePut(key = "#result.id")
    public Product create(CreateProductRequest request) {
        Product product = new Product(request.name(), request.price());
        return productRepository.save(product);
    }

    @CachePut(key = "#productId")
    public Product update(Long productId, UpdateProductRequest request) {
        Product product = productRepository.findById(productId).orElseThrow();
        product.setName(request.name());
        product.setPrice(request.price());
        return productRepository.save(product);
    }

    // Spring cannot evict by wildcard ('all:*' would be treated as a literal key),
    // and the paginated "all:" entries live in this same cache,
    // so clear the whole cache on delete.
    @CacheEvict(allEntries = true)
    public void delete(Long productId) {
        productRepository.deleteById(productId);
    }
}

Key Takeaways

Caching is essential: It can make your application 50x-500x faster for read-heavy workloads. Start with @Cacheable on your most-queried methods.

Redis is your best friend: An in-memory key-value store with rich data structures (String, Hash, List, Set, Sorted Set). It is fast, simple, and battle-tested.

Multi-tier caching: Use L1 (Caffeine) for the hottest data, L2 (Redis) for shared data across instances, L3 (Database) as the source of truth.

Pick the right pattern: Cache-Aside for simplicity, Write-Through for consistency, Write-Behind for speed, Read-Through for clean code.

Invalidation matters: Use TTL as your baseline, event-based for real-time accuracy, and version-based for complex scenarios. Always plan how stale data gets cleaned up.

Configure thoughtfully: Set different TTLs per cache, use JSON serialization for debugging, enable logging to watch HIT/MISS ratios, and set maxmemory-policy in Redis.

Frequently Asked Questions

1. When should I use caching and when should I avoid it?

Use caching when your data is read much more often than it is written (like a product catalog — read 10,000 times, updated once a day). Avoid caching for data that changes constantly (like a live stock ticker) or data that must be 100% accurate in real-time (like account balances during a transaction). A good rule of thumb: if the same query runs more than 10 times per minute with the same result, cache it.

2. What happens if Redis goes down? Does my application crash?

Not if you design it right. Always treat the cache as optional. If Redis is down, your application should fall back to querying the database directly — it will be slower, but it will still work. Spring Boot's @Cacheable handles this gracefully by default. In production, use Redis Sentinel (automatic failover) or Redis Cluster (data across multiple nodes) for high availability.
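The "cache is optional" idea can be sketched without any Spring machinery. In this minimal example the ConcurrentHashMap stands in for a Redis client call that might throw when the server is down; the names (SafeCache, getOrLoad) are illustrative, not from any library:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch: treat the cache as best-effort. If the cache read or write
// fails, fall back to the loader (the source of truth) instead of failing.
public class SafeCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>(); // stand-in for Redis

    public V getOrLoad(K key, Function<K, V> loader) {
        try {
            V cached = cache.get(key); // in real life: a network call that may fail
            if (cached != null) return cached;
        } catch (RuntimeException e) {
            // Cache unavailable: log it and fall through to the database.
        }
        V value = loader.apply(key);
        try {
            cache.put(key, value);     // best-effort: ignore cache write failures
        } catch (RuntimeException ignored) { }
        return value;
    }

    public static void main(String[] args) {
        SafeCache<String, String> cache = new SafeCache<>();
        // First call misses the cache and runs the loader (the "database").
        System.out.println(cache.getOrLoad("user:1", k -> "Alice from DB"));
        // Second call is served from the cache; the loader does not run.
        System.out.println(cache.getOrLoad("user:1", k -> "should not run"));
    }
}
```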

3. How do I decide the right TTL for my cache?

It depends on how stale your data can be. Weather data? 10 minutes is fine. User profile? 1-2 hours. Product listings? 30 minutes. Blog posts? 6-24 hours. Start with a conservative (shorter) TTL, monitor your cache hit ratio, and increase it if your data rarely changes. A good target is 90% or higher cache hit ratio.
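To know whether your TTLs are working, you need to measure the hit ratio. A minimal tracker sketch (in production you would use Micrometer or Redis's own INFO stats instead):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: count hits and misses to compute the cache hit ratio.
public class HitRatio {
    private final AtomicLong hits = new AtomicLong();
    private final AtomicLong misses = new AtomicLong();

    public void recordHit()  { hits.incrementAndGet(); }
    public void recordMiss() { misses.incrementAndGet(); }

    public double ratio() {
        long h = hits.get(), m = misses.get();
        return (h + m) == 0 ? 0.0 : (double) h / (h + m);
    }

    public static void main(String[] args) {
        HitRatio stats = new HitRatio();
        for (int i = 0; i < 90; i++) stats.recordHit();
        for (int i = 0; i < 10; i++) stats.recordMiss();
        System.out.printf("Hit ratio: %.0f%%%n", stats.ratio() * 100); // Hit ratio: 90%
    }
}
```

If the ratio stays well under 90%, the usual suspects are TTLs that are too short, keys that are too fine-grained, or data that simply is not re-read often enough to be worth caching.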

4. What is the difference between Caffeine and Redis caching?

Caffeine runs inside your application's JVM memory — it is incredibly fast (nanoseconds) but each server has its own copy, and data is lost when the app restarts. Redis runs as a separate server — it is slightly slower (sub-millisecond network call) but all your app instances share the same cache, and data survives restarts. Use Caffeine for ultra-hot data that changes rarely, Redis for shared data across instances.

5. Can I use caching with a microservices architecture?

Absolutely. Redis is perfect for microservices because it acts as a shared cache that all services can access. Each service should namespace its keys (e.g., product-service:product:42, user-service:profile:99) to avoid collisions. Use event-based invalidation (Redis Pub/Sub or a message broker like Kafka) so when one service updates data, other services that cached it can invalidate their copies.
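The namespacing convention above can be captured in a small helper so every service builds keys the same way. This is an illustrative sketch; the class and service names are assumptions, not part of any framework:

```java
// Sketch: one shared convention for cache keys across microservices,
// e.g. "product-service:product:42".
public class CacheKeys {
    public static String key(String service, String entity, Object id) {
        return service + ":" + entity + ":" + id;
    }

    public static void main(String[] args) {
        System.out.println(key("product-service", "product", 42));
        // product-service:product:42
        System.out.println(key("user-service", "profile", 99));
        // user-service:profile:99
    }
}
```

Centralizing key construction also makes event-based invalidation easier: the publisher and every subscriber derive the same key from the same inputs.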

