There will not be anything revolutionary in this post; even the Akka documentation on Actor Systems describes the pattern:

The first possibility is especially well-suited for resources which are single-threaded in nature, like database handles which traditionally can only execute one outstanding query at a time and use internal synchronization to ensure this. A common pattern is to create a router for N actors, each of which wraps a single DB connection and handles queries as sent to the router. The number N must then be tuned for maximum throughput, which will vary depending on which DBMS is deployed on what hardware.

This is exactly what the Asyncpools library does, except that it is a more generic worker pool in which the pooled resource is configurable. The library also ships with a db pool implementation based on Slick, Typesafe's database library. The pooled resources in this case are Slick sessions, and db queries (jobs) can be sent to the pool for asynchronous execution. The pool immediately returns Futures, mimicking non-blocking behaviour (the main thread is never blocked) and providing a nice way to contain and hide JDBC's blocking nature from the rest of the application, which can then be written in a reactive fashion. Let's look at a simple example.

First, let's create a "read" and a "write" db pool.
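Asyncpools' actual pool-creation API is not shown here, so as an illustration of the underlying idea only, here is a plain-Scala sketch in which each "pool" is modelled as a fixed-size thread pool wrapped in an `ExecutionContext` (in the real library the workers are actors wrapping Slick sessions; the object and value names below are made up):

```scala
import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext

// Illustrative stand-in for two Asyncpools db pools: a larger pool for
// reads and a smaller one for writes. The pool sizes are the "N" that
// must be tuned for maximum throughput.
object DbPools {
  val readPool: ExecutionContext =
    ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(8))

  val writePool: ExecutionContext =
    ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(2))
}
```

Splitting reads and writes into separate pools means a burst of slow writes cannot starve the read path, and each pool's size can be tuned independently.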

Now, let's create a repository and use these two pools to read from and write to the database.
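The repository shape can be sketched like this (again illustrative, not Asyncpools' API: the two `ExecutionContext` parameters stand in for the read and write pools, an in-memory map stands in for the database, and the `Future { ... }` bodies are where blocking Slick/JDBC calls would run):

```scala
import scala.collection.concurrent.TrieMap
import scala.concurrent.{ExecutionContext, Future}

// Hypothetical repository: every public method returns a Future, so
// callers never see the blocking work that happens inside the pools.
class UserRepository(readPool: ExecutionContext, writePool: ExecutionContext) {
  private val users = TrieMap.empty[Long, String] // stands in for a db table

  def findName(id: Long): Future[Option[String]] =
    Future(users.get(id))(readPool) // a blocking SELECT would go here

  def saveName(id: Long, name: String): Future[Unit] =
    Future(users.update(id, name))(writePool) // a blocking INSERT/UPDATE here
}
```

The key property is that the blocking work is confined to the pools' threads; the rest of the application only ever composes the returned Futures.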

And finally, let's use this repository in a Play controller method, which actually expects a Future.
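In Play the controller body would sit inside `Action.async`, which accepts a `Future[Result]`. To keep the sketch self-contained, a plain `String` stands in for Play's `Result`, and the repository lookup is passed in as a function; only the Future composition is the point:

```scala
import scala.concurrent.{ExecutionContext, Future}
import ExecutionContext.Implicits.global

// Sketch of the controller shape: in Play this would be
//   def show(id: Long) = Action.async { findName(id).map { ... } }
def showUser(findName: Long => Future[Option[String]])(id: Long): Future[String] =
  findName(id).map {
    case Some(name) => s"200 OK: $name" // Ok(name) in Play
    case None       => "404 Not Found"  // NotFound in Play
  }
```

Because the repository already returns a Future, the controller never blocks: it just maps the eventual result into a response.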

As you can see, everything in the code apart from the repository either maps or flatMaps over a Future, or is a callback, or something similarly reactive.


The last missing part of this example is the configuration that lets us actually create these two pools. So let's add the config for our pools to application.conf (Asyncpools uses Typesafe Config).
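The key names below are illustrative guesses, not Asyncpools' documented configuration keys (check the library's README for the real ones); the point is only that each pool gets its own block with a tunable size:

```hocon
# Hypothetical key names, for illustration only.
read-db-pool {
  size = 8   # number of pooled Slick sessions (the "N" to tune)
}
write-db-pool {
  size = 2
}
```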

Asyncpools has its own actor system, so its behaviour can be further customized with standard Akka configuration. The following example shows how to configure a different dispatcher for one of the pools we just created.
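The dispatcher definition itself is standard Akka configuration (the `BalancingDispatcher` type from Akka 2.2-era releases); how the dispatcher is then attached to a given pool's workers depends on Asyncpools' own configuration keys, so treat the name and its placement as an assumption:

```hocon
# A standard Akka dispatcher definition that a pool could be pointed at.
balancing-db-dispatcher {
  type = BalancingDispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    core-pool-size-min = 8
    core-pool-size-max = 8
  }
  throughput = 1   # hand off after each message for fairer redistribution
}
```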

The balancing dispatcher, by the way, redistributes work within a pool from busy workers to idle ones by sharing a single mailbox among all workers: instead of each worker having its own mailbox, the pool itself has a single one. It's an interesting technicality, but it highlights another aspect of Asyncpools: it is a pool with queues, which means it handles load peaks more gracefully.


Here at Kinja, we already use Asyncpools in production, currently in only one component of our system, but it may well become one of our more important tools for refactoring our code in a reactive fashion. I will follow up on our observations and experience with Asyncpools in another post.