Using different event journal configurations in one actor system

If you use multiple persistent actors with Akka, you normally configure a single event journal. If you want different event journals for different persistent actors, however, you need to be careful to configure everything correctly; otherwise the events are persisted, but when the persistent actor is re-instantiated, its state isn’t recovered because the events aren’t replayed.

One use case for different configurations is using Cassandra as the event store with a separate keyspace for each persistent actor.

To do this, you write the configuration as usual, overriding the cassandra-journal and cassandra-query-journal settings in the application.conf if necessary.

In the section of the application.conf for your persistent actor (in this example named user), you add the following definitions:

user {
  event-journal = ${cassandra-journal}
  event-journal.keyspace = "user"
  event-journal.query-plugin = "user.custom-cassandra-query-plugin"

  custom-cassandra-query-plugin = ${cassandra-query-journal}
  custom-cassandra-query-plugin.write-plugin = "user.event-journal"
}

The configuration of the query plugin is essential: without it, the events are stored but not replayed. Note that the values of these settings must be the full paths of the settings in the application.conf file (for example "user.event-journal", not just "event-journal").

In the cassandra-journal section of the application.conf, the keyspace setting must point to an existing keyspace.
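As a sketch, the shared cassandra-journal settings could look like this (the contact point and keyspace name are placeholders; depending on your version of the plugin, keyspace auto-creation may also be available):

```hocon
cassandra-journal {
  contact-points = ["127.0.0.1"]
  # Must refer to an existing keyspace (unless auto-creation is enabled).
  keyspace = "akka"
}
```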

You can now make another configuration with a different keyspace for another persistent actor.
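For example, a second persistent actor (here hypothetically named order) could get its own keyspace in the same way:

```hocon
order {
  event-journal = ${cassandra-journal}
  event-journal.keyspace = "order"
  event-journal.query-plugin = "order.custom-cassandra-query-plugin"

  custom-cassandra-query-plugin = ${cassandra-query-journal}
  custom-cassandra-query-plugin.write-plugin = "order.event-journal"
}
```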

In the persistent actor, you override journalPluginId and set it to the full path of the event-journal settings. In the example above, it would be:

override def journalPluginId = "user.event-journal"
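In context, a minimal persistent actor using this journal could look like the following sketch (the messages and handlers are illustrative, not taken from a real project):

```scala
import akka.persistence.PersistentActor

// Illustrative command and event; not part of the configuration above.
final case class AddItem(item: String)
final case class ItemAdded(item: String)

class UserActor extends PersistentActor {
  private var items: List[String] = Nil

  override def persistenceId: String = "user-1"

  // Full path of the journal configuration defined in application.conf.
  override def journalPluginId: String = "user.event-journal"

  // Replayed events rebuild the state when the actor is re-instantiated.
  override def receiveRecover: Receive = {
    case ItemAdded(item) => items = item :: items
  }

  override def receiveCommand: Receive = {
    case AddItem(item) =>
      persist(ItemAdded(item)) { event =>
        items = event.item :: items
      }
  }
}
```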

With these settings, different persistent actors use different settings within one actor system.

Using Akka Streams with Kafka and Avro

The easiest way to use Akka Streams with Kafka is the Alpakka Kafka Connector. Its documentation page describes in detail how to use Akka Streams both as a Kafka producer and as a consumer.
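As a minimal sketch (topic name, group id, and bootstrap server are placeholders; serialization and error handling are kept trivial), producing and consuming plain String messages with Alpakka Kafka looks roughly like this:

```scala
import akka.actor.ActorSystem
import akka.kafka.scaladsl.{Consumer, Producer}
import akka.kafka.{ConsumerSettings, ProducerSettings, Subscriptions}
import akka.stream.scaladsl.{Sink, Source}
import org.apache.kafka.clients.producer.ProducerRecord
import org.apache.kafka.common.serialization.{StringDeserializer, StringSerializer}

object KafkaSketch {
  // Placeholder topic; replace with your own.
  val Topic = "test-topic"

  def main(args: Array[String]): Unit = {
    implicit val system: ActorSystem = ActorSystem("kafka-sketch")

    val producerSettings =
      ProducerSettings(system, new StringSerializer, new StringSerializer)
        .withBootstrapServers("localhost:9092")

    // Produce a few numbered messages to the topic.
    Source(1 to 10)
      .map(n => new ProducerRecord[String, String](Topic, n.toString))
      .runWith(Producer.plainSink(producerSettings))

    val consumerSettings =
      ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
        .withBootstrapServers("localhost:9092")
        .withGroupId("sketch-group")

    // Consume and print every message value on the topic.
    Consumer
      .plainSource(consumerSettings, Subscriptions.topics(Topic))
      .map(_.value())
      .runWith(Sink.foreach(println))
  }
}
```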

Even though Kafka is agnostic about the message format, the preferred format is Avro, ideally in combination with a Schema Registry.

Because Avro has a schema, it is known which fields a message contains, unlike with JSON. Normally the schema is included with the data itself. Including the schema with every message adds significant overhead; with a Schema Registry, the schema is registered once in the registry and each message only carries a reference to it.
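For illustration, a minimal Avro schema for a hypothetical user-created event (record and field names are made up, not taken from the sample project) looks like this:

```json
{
  "type": "record",
  "name": "UserCreated",
  "namespace": "com.example.user",
  "fields": [
    {"name": "id", "type": "string"},
    {"name": "name", "type": "string"}
  ]
}
```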

You can configure backward compatibility for your schemas: messages with a new schema that isn’t backward compatible will then be rejected. This gives you the guarantee that consumers can keep reading messages even though they expect an older schema version.
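For example, with backward compatibility enabled, adding a field with a default value is an allowed change, while adding a required field without a default would be rejected by the registry. A sketch of such an evolved schema (names are hypothetical; the nullable email field with a default is the newly added one):

```json
{
  "type": "record",
  "name": "UserCreated",
  "namespace": "com.example.user",
  "fields": [
    {"name": "id", "type": "string"},
    {"name": "name", "type": "string"},
    {"name": "email", "type": ["null", "string"], "default": null}
  ]
}
```

Consumers on the new schema read null for messages written before the field existed, so old data stays readable.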

Since I couldn’t find a complete example on how to use Akka Streams with Kafka and a Schema Registry for messages in Avro format, I created a sample project which you can find at

The README of the project contains information on how to run it.