
Redis vs Hazelcast: A Practical Comparison for Java Developers

A practical, tool-by-tool comparison of Redis and Hazelcast for Java developers — architecture, data model, threading, coordination, compute, ops, cost — and clear guidance on which to pick when. Both are good; they are good at different things.

May 6, 2026 · 10 min read

Both speed up reads. Both store key-value data in memory. Both have a Spring Boot starter. Past those three sentences they go in different directions, and the choice between them is mostly about what kind of system you are actually building.

Why This Comparison Comes Up

The question lands the same way every time: a Java team has a slow page, somebody mentions caching, and the next morning a half-formed plan exists on a whiteboard with two boxes — Redis and Hazelcast — and an arrow between them. Both are reasonable answers. They are also genuinely different tools, and picking by accident usually means living with constraints you did not know you were signing up for.

This post is the comparison I wish someone had given me the first time I had to make this choice. It assumes you have read the two earlier posts in this series, on Redis caching strategies and Hazelcast as an in-memory data grid. If not, skim them — the rest will land harder.

The One-Paragraph Answer

If your team needs a fast key-value cache and message broker, runs a polyglot stack, and wants the operational story to stay simple, pick Redis. If your team is mostly on the JVM, needs distributed coordination (locks, atomic counters, leader election), wants compute that runs alongside data, or wants the cache to be embedded in the application itself, pick Hazelcast. Both can do basic caching well. Past basic caching, they reward different bets.

Architecture — Two Different Shapes

Redis is a remote server. It is written in C, runs single-threaded per shard, and your application talks to it over a network protocol (RESP). It is fundamentally a process you connect to. Even when you scale it with Redis Cluster, the model is "an external service the application uses."

Hazelcast is a distributed system written in Java that runs as part of the cluster. In embedded mode, every JVM that starts up joins the same Hazelcast cluster — your application is a member. In client-server mode it looks more like Redis, but the underlying engine is still a JVM-native peer-to-peer grid.

This shape difference is the root of most other differences. Single-threaded vs multi-threaded. Remote vs embedded. Polyglot first vs JVM first. Once you see the shape, the rest of the comparisons stop being a list of features and start feeling inevitable.
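
The shape difference is easiest to see in code. Below is a minimal sketch, assuming Hazelcast 5.x on the classpath; with default configuration the client discovers the embedded member on localhost, so both modes can be shown in one JVM:

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class ShapeDemo {
    public static void main(String[] args) {
        // Embedded mode: this JVM *is* a cluster member. Part of the
        // grid's data lives in this process's heap.
        HazelcastInstance member = Hazelcast.newHazelcastInstance();

        // Client-server mode: a lightweight connection to an existing
        // cluster, the closest analogue to how a Redis client works.
        HazelcastInstance client = HazelcastClient.newHazelcastClient();

        // Both handles expose the same distributed structures.
        member.getMap("cache").put("k", "v");
        System.out.println(client.getMap("cache").get("k")); // v

        client.shutdown();
        member.shutdown();
    }
}
```

With Redis there is no equivalent of the first half: the application is always the client, never the member.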

Data Model and Operations

Both have a rich set of data structures. They overlap and they diverge.

  • Redis: strings, hashes, lists, sets, sorted sets, streams, HyperLogLog, geo, bitmaps, pub/sub. Each comes with dozens of native commands. Sorted sets and streams are particularly differentiated — Hazelcast offers no close equivalent to either.
  • Hazelcast: IMap, IQueue, ITopic, MultiMap, ReplicatedMap, IExecutorService, IAtomicLong, FencedLock, ISet, IList. Each looks like the equivalent java.util interface, but distributed.

What matters in practice: Redis gives you sharper specialised data structures (sorted sets, streams) but its API is a flat list of opaque commands you assemble. Hazelcast gives you familiar Java types backed by distributed engines — comfortable for JVM teams, less so for, say, a Python service.
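
To make the "familiar Java types" point concrete, here is a small sketch (Hazelcast 5.x assumed; the map name `scores` is illustrative). Because IMap extends ConcurrentMap, code written against the JDK interface keeps working when the map becomes distributed:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import java.util.concurrent.ConcurrentMap;

public class FamiliarTypes {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // IMap extends ConcurrentMap: the interface you already
        // program against, now partitioned across the cluster.
        IMap<String, Integer> scores = hz.getMap("scores");
        scores.put("alice", 10);
        scores.merge("alice", 5, Integer::sum); // inherited default method

        // Existing code that takes a ConcurrentMap accepts it as-is.
        ConcurrentMap<String, Integer> plainView = scores;
        System.out.println(plainView.get("alice")); // 15

        hz.shutdown();
    }
}
```

The Redis equivalent would be a ZADD/ZINCRBY command sequence through a client library: just as capable, but a different vocabulary from the one the JVM code around it speaks.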

Threading and Atomicity

Redis is single-threaded per shard. Every command runs to completion before the next one starts. This is the secret of its predictability — there are no race conditions inside Redis itself. The cost is that long-running commands block the whole shard, and CPU saturation hits a single-core ceiling fast.

Hazelcast is multi-threaded across all cores of every node. Operations on different keys run in parallel. The cost is the complexity of distributed concurrency — mitigated by primitives like EntryProcessor, which give you single-key atomicity by sending the code to the data.
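
A sketch of the EntryProcessor idea, assuming Hazelcast 5.x (the map and key names are illustrative). The processor runs on the partition thread that owns the key, so the read-modify-write needs no explicit lock:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.EntryProcessor;
import com.hazelcast.map.IMap;
import java.util.Map;

public class CounterDemo {
    // Shipped to the member that owns the key and executed there
    // atomically; the processor must be serialisable.
    static class Increment implements EntryProcessor<String, Long, Long> {
        @Override
        public Long process(Map.Entry<String, Long> entry) {
            long next = (entry.getValue() == null ? 0L : entry.getValue()) + 1;
            entry.setValue(next);
            return next;
        }
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Long> counters = hz.getMap("counters");
        Long hits = counters.executeOnKey("page:home", new Increment());
        System.out.println(hits); // 1 on a fresh map
        hz.shutdown();
    }
}
```

The naive alternative (get, increment, put) is a race under concurrent callers; the processor closes that window by moving the code to the data.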

For most caching workloads neither difference matters. For workloads that do cluster-wide coordination, Hazelcast's primitives are friendlier. For workloads that need a fast pub/sub system and not much else, Redis is hard to beat.

Persistence and Durability

Redis persists in two ways: snapshots (RDB — periodic point-in-time dumps) and append-only files (AOF — every write logged for replay). They are mature, well-understood, and let you trade durability against write performance.

Hazelcast has Persistence (formerly Hot Restart Store), where each member writes its partitions to local disk so a full cluster restart can recover. For "the database is the truth, the grid is the cache" workloads, you wire a MapStore to read-through and write-through to your real datastore.

Either can survive process restarts. Both have configurable trade-offs between durability and throughput. The difference is mostly about how the persistence story integrates with the rest of your stack — Redis treats it as a server problem; Hazelcast treats it as a per-map configuration.
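
The MapStore wiring mentioned above looks like this in sketch form, assuming Hazelcast 5.x; a ConcurrentHashMap stands in for the real datastore, and the map name `users` is illustrative:

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.MapStoreConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.MapStore;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MapStoreDemo {
    // Stand-in for the real database (JDBC, JPA, ...).
    static final Map<Long, String> database = new ConcurrentHashMap<>();

    static class UserStore implements MapStore<Long, String> {
        public String load(Long key) { return database.get(key); }
        public Map<Long, String> loadAll(Collection<Long> keys) {
            Map<Long, String> out = new HashMap<>();
            for (Long k : keys) { String v = database.get(k); if (v != null) out.put(k, v); }
            return out;
        }
        public Iterable<Long> loadAllKeys() { return null; } // null = no eager preload
        public void store(Long key, String value) { database.put(key, value); }
        public void storeAll(Map<Long, String> map) { database.putAll(map); }
        public void delete(Long key) { database.remove(key); }
        public void deleteAll(Collection<Long> keys) { keys.forEach(database::remove); }
    }

    public static void main(String[] args) {
        Config config = new Config();
        config.getMapConfig("users").setMapStoreConfig(
                new MapStoreConfig()
                        .setImplementation(new UserStore())
                        .setWriteDelaySeconds(0)); // 0 = synchronous write-through

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        hz.getMap("users").put(1L, "ada");
        System.out.println(database.get(1L)); // written through to the "database"
        hz.shutdown();
    }
}
```

A positive write delay turns this into write-behind batching; reads of missing keys call load transparently, which is the read-through half of the pattern.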

Distributed Coordination

This is where the two pull apart.

Redis is not designed as a coordination layer. People use it that way — implementing distributed locks with SET NX and the Redlock algorithm, building leader election out of EXPIRE games, doing transactions with MULTI/EXEC or Lua scripts — but every one of those usages is "you, the developer, building correctness on top of a tool that does not promise it." For weak coordination (an advisory lock that occasionally fails) this is fine. For strong coordination (we must not have two instances thinking they are leader), it is dangerous.
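
The SET NX pattern described above looks roughly like this with the Jedis client. This is a sketch of an advisory lock only, with all the caveats above intact; the key name and TTL are illustrative:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;
import java.util.Collections;
import java.util.UUID;

public class AdvisoryLock {
    // Compare-and-delete in one atomic script, so we never release a
    // lock that expired and was re-acquired by another process.
    static final String RELEASE_SCRIPT =
        "if redis.call('get', KEYS[1]) == ARGV[1] then " +
        "  return redis.call('del', KEYS[1]) " +
        "else return 0 end";

    // Returns a holder token on success, null if someone else holds it.
    static String tryLock(Jedis jedis, String key, long ttlMillis) {
        String token = UUID.randomUUID().toString();
        String reply = jedis.set(key, token, SetParams.setParams().nx().px(ttlMillis));
        return "OK".equals(reply) ? token : null;
    }

    static boolean unlock(Jedis jedis, String key, String token) {
        Object result = jedis.eval(RELEASE_SCRIPT,
                Collections.singletonList(key), Collections.singletonList(token));
        return Long.valueOf(1L).equals(result);
    }
}
```

Note what this does not give you: if the holder's process pauses past the TTL, a second holder can appear, and nothing downstream can tell the two apart. That is exactly the gap fencing tokens exist to close.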

Hazelcast ships with a CP subsystem built on Raft consensus, giving you FencedLock, IAtomicLong, IAtomicReference, and ISemaphore with linearisable semantics. These are the primitives you would otherwise reach for ZooKeeper or etcd to provide. Inside a Java service that already runs Hazelcast, this is a meaningful capability that does not require another piece of infrastructure.
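
A FencedLock sketch, assuming Hazelcast 5.x; the lock name is illustrative. Note that without cpMemberCount >= 3 configured the CP subsystem runs in "unsafe" mode and the linearisability guarantee does not apply, so a single-node demo like this only shows the API shape:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.cp.lock.FencedLock;

public class LeaderWork {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        FencedLock lock = hz.getCPSubsystem().getLock("migration-leader");
        long fence = lock.lockAndGetFence();
        try {
            // The monotonically increasing fence token can be attached to
            // downstream writes so systems can reject work from a stale
            // holder -- the guarantee a TTL-based Redis lock cannot give.
            System.out.println("holding lock with fence " + fence);
        } finally {
            lock.unlock();
        }
        hz.shutdown();
    }
}
```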

Compute Where the Data Lives

Redis can run scripts via Lua and (more recently) Redis Functions. They run on the server, on the data, single-threaded — useful for atomic read-modify-write without round-trips. The constraint is that scripts must be short and non-blocking, and writing in Lua is a context switch from your application language.

Hazelcast ships EntryProcessor for single-entry atomic updates and IExecutorService for arbitrary tasks that can be routed to a specific key's owner, to a specific member, or to all members. The code is plain Java, can be substantial, and the result aggregation is built in. For analytics or bulk updates over a large dataset, this difference is significant — you ship the function to the data, not the data to the function.

Hazelcast Jet (now folded into core) takes this further with full streaming pipelines that read from sources, transform with state lookups against your IMaps, and write to sinks. There is no equivalent in Redis.

Language and Ecosystem Fit

Redis is genuinely polyglot. Every mainstream language has a solid client. The protocol is small and well-documented. If your stack has a Java service, a Python service, a Node service, and a Go service all needing the same cache, Redis is the obvious choice.

Hazelcast is JVM-first. There are clients for .NET, C++, Python, Node, and Go, but they are decidedly second-class compared to the Java experience. Distributed Java types (IMap, IExecutorService) and tasks shipped to data assume the data is, well, Java objects on a Java cluster. For mixed-language stacks, you can use Hazelcast — but you give up much of what makes it interesting.

Operations and Deployment

Two stories here, very different in practice.

Redis is a single binary, well-understood, with mature managed offerings (Amazon ElastiCache, Redis Enterprise, Upstash, Memorystore). Most teams do not run their own Redis — they rent it. Failover, clustering, monitoring are mature problems with mature answers. The operational learning curve is shallow once you have a managed cluster and a client library.

Hazelcast is a JVM, with the operational characteristics of a JVM — heap tuning, GC behaviour, observability through JMX. Hazelcast Cloud is the official managed offering; AWS, Azure, and Kubernetes have first-class discovery plugins. For embedded mode, you do not run Hazelcast separately at all — it lives in your application JVM. That can be a feature (one fewer thing to operate) or a complication (cluster restart is application restart). Pick deliberately.

Performance — A Note on Benchmarks

Both products will out-benchmark each other depending on the workload, the version, the hardware, the configuration, and the bias of the benchmarker. Useful generalisations:

  • For pure GET/SET on small values, raw throughput is in the same order of magnitude. Redis often wins per-instance because of single-threaded efficiency; Hazelcast often wins per-cluster because every core does work.
  • For complex queries against indexed maps, Hazelcast tends to win because the work is parallelised across partitions. For deep work on a single key, Redis tends to win because there is no coordination overhead.
  • For cross-region or cross-AZ deployment, both have replication stories with similar trade-offs.

The honest answer: benchmark with your data, your access pattern, and your hardware. Public numbers are usually one-sided enough to mistrust.

Cost

Both have free open-source editions and paid commercial editions. The dividing lines:

  • Redis OSS is BSD-licensed; Redis Stack (with extended modules) and Redis Enterprise are commercial. The recent license change to RSALv2/SSPL for Redis itself drove the Valkey fork — worth being aware of if "permissive open source forever" matters to you.
  • Hazelcast Community Edition is Apache 2.0 and includes most distributed data structures. Persistence, WAN replication, security features, and Hazelcast Cloud are commercial.

For most teams, the version they actually run is open source for both products. The cost story shows up only at enterprise scale or when specific commercial features matter.

When to Pick Redis

  • You need a key-value cache and pub/sub system, full stop.
  • Your stack is genuinely polyglot — multiple languages talking to the same cache.
  • You want a managed offering on day one (ElastiCache or similar) and not to think about it again.
  • You need sorted sets, streams, or Bloom filter-like data structures that Hazelcast does not match feature-for-feature.
  • The operational simplicity of "a separate cache server, well-understood, with mature managed options" is a hard requirement.

When to Pick Hazelcast

  • You are on the JVM and you need cluster-wide coordination — distributed locks with linearisable guarantees, atomic counters across services, a shared workflow state.
  • You want compute to run where the data lives — atomic updates with EntryProcessor, parallel queries across partitions, streaming pipelines through Jet.
  • You want an embedded grid — each application instance is also a grid member, no separate cache tier to operate.
  • You are building a stateful Java service (workflow engine, gateway, real-time analytics) where the cache and the application have the same lifecycle.
  • You want to retire ZooKeeper or etcd and consolidate coordination into something you already run.

Can You Run Both?

Yes, and many teams do — but think hard before you choose to. Two caches mean two failure modes, two operational stories, two consistency models, and twice as many decisions about where each piece of state lives. The pattern that works in practice: pick the primary tool for your shape (Redis for polyglot caches and pub/sub; Hazelcast for JVM-native coordination and grids) and use the other only if you have a specific narrow need it does not cover. "We use Hazelcast for in-process maps and Redis for cross-language pub/sub" is fine. "We use Redis for some caches and Hazelcast for other caches" is usually a sign someone made a decision twice.

A Last Word

Tool choices look like technology choices and are mostly architecture choices in disguise. Redis pulls you toward a layered architecture with a remote cache. Hazelcast pulls you toward a stateful JVM service with the grid embedded. Both are fine architectures. Picking by feature checklist usually picks the wrong one — picking by what your system already wants to be tends to pick the right one.

For most teams in 2026, the deciding question is not "which is faster" but "what kind of system are we building, and which tool gets out of the way?" Answer that, and the choice usually announces itself.

Frequently Asked Questions

Is one of these going to be obsolete in five years?

Almost certainly not. They have different niches, large user bases, and active development. The recent Redis license change made the Valkey fork interesting, and there are good arguments for switching the open-source community there if licensing matters to you — but the underlying technology is going nowhere. Hazelcast has been steadily evolving as well. Bet on either; do not bet on a binary "one wins" outcome.

Can Hazelcast replace Kafka for event streaming?

Partially. Hazelcast Jet (in core since version 5) gives you streaming pipelines with stateful transforms over grid data. For high-throughput event log workloads where retention, replay, and consumer-group semantics matter, Kafka is still the dedicated tool. For latency-sensitive enrichment pipelines that need state lookups against grid data, Jet is genuinely competitive.

How does this comparison change if I am evaluating Valkey instead of Redis?

For practical purposes, very little — Valkey is a fork of Redis 7.2 maintained by the Linux Foundation, with the same API and protocol. Everything in this post about Redis applies to Valkey, with the differences being licensing, governance, and (over time) whichever new features each project chooses to add. If license-purity matters to you, Valkey is the answer that lets you keep using everything you know.

Does the choice change if I am running on Kubernetes?

It changes the operational story but not the architectural one. Redis on Kubernetes is mature with operators and managed offerings. Hazelcast on Kubernetes is also well supported, with first-class discovery via the Kubernetes plugin and an official Helm chart. Either runs cleanly. The deciding factors are still the same — coordination needs, embedded vs separate tier, polyglot vs JVM-only.

I already use Spring Boot — does that push me toward one?

Both have first-class Spring Boot integration. Spring Cache works with either via @Cacheable and friends. Spring Session has implementations for both. The Spring Boot starter for Hazelcast configures an embedded cluster automatically; the Spring Data Redis starter configures a Redis client connection. Spring is not the deciding factor — the architecture is.

