Creating an ORM-less framework

Years ago, I enjoyed working with Durable Functions on Azure. There were some aspects that I fell in love with: how “close” I felt to pure DDD, how simple it was to scale, and how little boilerplate was required. I just had one “problem”[1]: I wanted to use it in PHP.

I’ve been a fan of Domain Driven Design since 2007, and one of the things I enjoyed about Durable Functions was the ability to focus on the domain instead of so much boilerplate code in the API/db layers. I wanted this in my PHP life, where most frameworks are quite heavy or opinionated. For years, though, I lacked the drive to work on something like that. Then, about a month ago, I started on “Durable PHP.”

Durable PHP is nowhere near production ready, but it has fascinated me to no end. In this post, I want to share some of the main components of the framework, some challenges I overcame during implementation, and a brief overview of how it works.


Essentially, state is calculated from events, so the entire backend is event-sourced CQRS. The current state is simply a projection of all processed events. Initially, building such a system in a shared-nothing architecture seemed impossible. I started with Redis Streams, but getting Redis set up in a scalable way is not as straightforward as I anticipated.
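The projection idea can be sketched in a few lines of plain PHP. The event names here are made up for illustration and are not part of Durable PHP:

```php
<?php
// A minimal sketch of state-as-a-projection: the current state is just a
// left fold over the ordered event stream. Event names are illustrative.
$events = [
    ['type' => 'UserRegistered', 'email' => 'a@example.com'],
    ['type' => 'EmailChanged',   'email' => 'b@example.com'],
    ['type' => 'UserLocked'],
];

$state = array_reduce($events, function (array $state, array $event): array {
    return match ($event['type']) {
        'UserRegistered' => ['email' => $event['email'], 'locked' => false],
        'EmailChanged'   => [...$state, 'email' => $event['email']],
        'UserLocked'     => [...$state, 'locked' => true],
    };
}, []);

// $state is now ['email' => 'b@example.com', 'locked' => true]
```

Replaying the same stream always yields the same state, which is what makes a change-feed-driven backend viable in the first place.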

In the end, after some soul-searching, I settled on using RethinkDB for the initial database implementation for several reasons:

  1. I already use RethinkDB for some stuff.
  2. Change feeds are simply powerful and scalable.
  3. I maintain a modern fork of the PHP SDK.
  4. RethinkDB is still maintained and “just works.”

From there, the next hardest issue was figuring out how to handle messages asynchronously. PHP is notorious for being single-threaded and shared-nothing. I needed to break this model in my own code while providing a single-threaded experience in user code. PHP Fibers provided a nice abstraction for that, while the Parallel extension could provide native threading in PHP while allowing user code to block without consequence.
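As a rough sketch of the Fiber half of this, user code appears to block while the runtime regains control and can schedule other work. The names and the “scheduler” below are illustrative, not the framework’s actual API:

```php
<?php
// Sketch: user code "blocks" on suspend; the scheduler regains control,
// does other work, then resumes the Fiber with the awaited value.
$userCode = new Fiber(function (): string {
    $code = Fiber::suspend('awaiting: GenerateCode'); // looks like a blocking call
    return "entered code matches: $code";
});

$request = $userCode->start();  // runs until the first suspend
// ... the runtime is free to process other orchestrations here ...
$userCode->resume('123456');    // deliver the awaited result
echo $userCode->getReturn();    // entered code matches: 123456
```

From the user's perspective the function ran straight through; from the runtime's perspective it yielded at every await point.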

Oh boy, was that fun to implement! It was, naturally, a disaster. There are a ton of gotchas. While researching an issue I ran into, I discovered AMPHP and its Parallel Worker library. From there, I started rewriting everything to take advantage of Amp, Futures, and Tasks.

Once threading was properly implemented, the resulting architecture turned out to be fairly scalable for n-node deployments. After that, I implemented the ability to create logical partitions and to pin certain types of work to them. Thus, with a little foresight on the RethinkDB and Kubernetes side, partitions can execute physically near their data and move around the cluster freely.
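Pinning work to a logical partition can be as simple as hashing the identity. A minimal sketch, where the `partitionFor` helper is hypothetical and not part of Durable PHP:

```php
<?php
// Sketch: deterministically pin an entity id to one of N logical partitions,
// so the same entity always lands on the same partition. crc32 is illustrative;
// any stable hash works.
function partitionFor(string $entityId, int $partitionCount): int
{
    return crc32($entityId) % $partitionCount;
}

$p1 = partitionFor('user:42', 8);
$p2 = partitionFor('user:42', 8);
assert($p1 === $p2); // stable: every event for user:42 routes to one partition
```

Because routing is a pure function of the id, any node can compute where an event belongs without coordination.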

Implementing Orchestrations

In Durable Functions, Orchestrations are the lifeblood of applications. They allow you to (literally) orchestrate things — in DDD parlance, they are Sagas or Process Managers. They’re essentially idempotent methods that determine what needs to happen next.

As an example, here’s an Orchestration implementing magic links:

```php
function (OrchestrationContextInterface $context) {
	$input = Serializer::deserialize($context->getInput(), LoginUserInput::class);
	$user = $context->createEntityProxy(UserInterface::class, $input->userId->toEntityId());
	$lock = $context->lockEntity($input->userId->toEntityId());

	$code = $context->callActivity(GenerateCode::class, [$input->forceCode]);

	$timer = $context->createTimer($context->getCurrentTime()->add($context->createInterval(minutes: 20)));
	$context->setCustomStatus('waiting for attempt');
	$attempt = $context->waitForExternalEvent('loginAttempt');
	$result = $context->waitAny($timer, $attempt);
	if ($result === $timer) {
		return Serializer::serialize(new LoginResult(LoginResultEnum::Timeout, []));
	}

	[$enteredCode] = $attempt->getResult();
	if ($enteredCode === $context->waitOne($code)) {
		return Serializer::serialize(new LoginResult(LoginResultEnum::Success, $user->getRoles()));
	}

	return Serializer::serialize(new LoginResult(LoginResultEnum::Failure, []));
}
```

This uses distributed locks to prevent multiple concurrent login sessions. It isn’t ideal; it’s just an advanced example.

But wow, was it so awesome to implement this using event sourcing. Essentially, the above method would run multiple times, but execution can be interrupted and serialized while paused, with results memoized. This allows the server to process other tasks. It has the same constraints as Orchestrations in Durable Functions: no I/O or non-deterministic code should run in them. Instead, that should be delegated to Activities (we’ll get to that in a second).
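To make the replay idea concrete, here’s a toy replay loop under my own simplified assumptions. `ReplayContext` and `PauseExecution` are illustrative stand-ins, not the real Durable PHP API:

```php
<?php
// Sketch of replay: the orchestrator function runs from the top on every event,
// but previously-completed results come straight from history, so already-seen
// calls return instantly and deterministically.
final class PauseExecution extends Exception
{
    public function __construct(public string $activity, public int $taskId)
    {
        parent::__construct("paused waiting on $activity");
    }
}

final class ReplayContext
{
    private int $sequence = 0;

    public function __construct(private array $history) {}

    public function callActivity(string $name): mixed
    {
        $id = $this->sequence++;
        if (array_key_exists($id, $this->history)) {
            return $this->history[$id];       // memoized: replay without re-executing
        }
        throw new PauseExecution($name, $id); // not yet completed: schedule and suspend
    }
}

$orchestration = fn (ReplayContext $ctx) =>
    $ctx->callActivity('GenerateCode') . '/' . $ctx->callActivity('SendEmail');

// First replay: nothing in history, so execution pauses at GenerateCode.
try {
    $orchestration(new ReplayContext([]));
} catch (PauseExecution $p) {
    assert($p->activity === 'GenerateCode');
}

// Later replay: both results are in history, so the function runs to completion.
echo $orchestration(new ReplayContext([0 => 'abc123', 1 => 'sent'])); // abc123/sent
```

This is also why orchestrations must be deterministic: each replay must take the exact same path so that memoized results line up with the calls that produced them.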


I ended up getting stuck on locking for a while, especially multiple concurrent locks. I eventually came up with a solution so simple I called it ‘cooperative locking’: a locking request is sent to the first entity, which locks itself and then sends the request to the next entity, and so on, until the last entity to lock tells the orchestration the lock was successful.

At first, I thought this was pretty novel (I didn’t really pay attention in this part of my CS classes, so I honestly didn’t know), so I took a peek at how it was implemented in Durable Functions… it’s basically the same implementation, give or take.

That gave me some confidence that this is the best solution, regardless of whether or not it was novel.
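A toy, in-memory version of the idea might look like this. A real implementation sends events between entities and rolls back a partial chain on failure; sorting the ids into a canonical order is what keeps two overlapping chains from deadlocking:

```php
<?php
// Sketch of 'cooperative locking': sort the entity ids into a canonical order,
// lock the first, "forward" the request to the next, and so on; only the last
// entity in the chain reports success back to the orchestration.
function acquireLocks(array $entityIds, array &$locked): bool
{
    sort($entityIds);             // canonical order prevents deadlock between chains
    foreach ($entityIds as $id) { // forwarding the request down the chain
        if ($locked[$id] ?? false) {
            return false;         // someone else holds a lock in the chain
        }
        $locked[$id] = true;
    }
    return true;                  // the last entity confirms the whole chain
}

$locked = [];
assert(acquireLocks(['user:42', 'account:7'], $locked) === true);
assert(acquireLocks(['account:7'], $locked) === false); // already held
```

The in-memory `$locked` array stands in for each entity’s own lock state; in the framework that state lives inside the entities themselves.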

Memoizing Task Results

The next challenge was to memoize task results and handle out-of-order events. For example, in the example above, it would be possible to receive the user’s code before we’re actually waiting on it. We need to be able to handle that case and immediately resolve the Future (there’s no point in pausing execution in this case).

My first implementation of this was fairly complex, and I ended up rewriting it several times before settling on a simple and elegant-ish solution.

At this point, it is worth talking a little about how events work in this system. There are only a few core events (like TaskCompleted, TaskFailed, RaiseEvent, etc.), but each event may be ‘decorated’ at runtime with addresses, reply-tos, lock requests, and so on. This provides a great deal of flexibility, such as sending the same event to multiple targets or replying to multiple targets. It’s pretty neat.

Back to how task results work: when we send an event with a reply-to, we register the event’s id. When we receive an event, we check whether we’re expecting it; if we are, we enqueue it using the original id as the key. When we receive an event we aren’t expecting, we simply enqueue it using the received event id.

When we create a Future, we also add a subscriber to the queue that knows how to recognize an event. This leans pretty heavily on V5 GUIDs to create an id that is stable.
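A v5 GUID is just a SHA-1 hash of a namespace plus a name, so the same inputs always produce the same id. A minimal sketch, not Durable PHP’s actual implementation:

```php
<?php
// Sketch of a v5 (name-based, SHA-1) GUID: deterministic, so ids stay
// stable across replays of the same orchestration.
function uuidV5(string $namespaceUuid, string $name): string
{
    $ns  = hex2bin(str_replace('-', '', $namespaceUuid));
    $raw = substr(sha1($ns . $name, true), 0, 16);
    $raw[6] = chr((ord($raw[6]) & 0x0f) | 0x50); // set version 5
    $raw[8] = chr((ord($raw[8]) & 0x3f) | 0x80); // set RFC 4122 variant
    $hex = bin2hex($raw);
    return sprintf('%s-%s-%s-%s-%s',
        substr($hex, 0, 8), substr($hex, 8, 4), substr($hex, 12, 4),
        substr($hex, 16, 4), substr($hex, 20, 12));
}

$a = uuidV5('6ba7b810-9dad-11d1-80b4-00c04fd430c8', 'orchestration:login:reply-to');
$b = uuidV5('6ba7b810-9dad-11d1-80b4-00c04fd430c8', 'orchestration:login:reply-to');
assert($a === $b); // stable across processes and replays
```

Because the id is derived rather than random, a replayed orchestration regenerates the same id and can match an event that arrived while it was suspended.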

When we actually attempt to resolve an unresolved Future, we process the queue, and any subscriber may ‘take’ an event as a match. Once the queue is processed, we put any unmatched events back in the queue, and then see if we can actually resolve the Future (i.e., waitForAny will resolve if at least one Future resolves, while waitForAll waits for all Futures).

Order is preserved so that waitForAny will always return the first result.
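The take-or-requeue behavior can be sketched like this; class and event names are illustrative, not the framework’s:

```php
<?php
// Sketch of the matching queue: subscribers may 'take' an event; unmatched
// events go back in the queue for a future pass.
final class EventQueue
{
    private array $events = [];
    /** @var array<int, callable(array): bool> */
    private array $subscribers = [];

    public function enqueue(array $event): void { $this->events[] = $event; }

    public function subscribe(callable $takes): void { $this->subscribers[] = $takes; }

    public function process(): void
    {
        $unmatched = [];
        foreach ($this->events as $event) {
            $taken = false;
            foreach ($this->subscribers as $subscriber) {
                if ($subscriber($event)) { $taken = true; break; }
            }
            if (!$taken) { $unmatched[] = $event; } // put it back for later
        }
        $this->events = $unmatched;
    }
}

// An event can arrive before anyone waits on it: it just sits in the queue.
$queue = new EventQueue();
$queue->enqueue(['name' => 'loginAttempt', 'payload' => '123456']);
$queue->process(); // no subscriber yet: the event is retained

$received = null;
$queue->subscribe(function (array $e) use (&$received): bool {
    if ($e['name'] !== 'loginAttempt') return false;
    $received = $e['payload'];
    return true; // 'take' the event
});
$queue->process();
assert($received === '123456');
```

This is also how out-of-order delivery falls out for free: an early event simply waits in the queue until the Future that wants it subscribes.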


Activities

In Durable Functions, and in this framework, Activities run exactly once and return a result. Blocking and I/O are allowed. Activities have an identity, but they are not pinned to a specific partition and may even be migrated to another partition that can process them sooner. However, if too many activities take too long, workers can starve, so it’s worth using asynchronous libraries like AMPHP to process things and yield work.

There is still a lot to do to improve Activities, but these were the simplest to implement.


Entities

Entities are essentially Virtual Actors. Like Activities, they are free to block and do I/O, and they hold OOP-like state. Just like in Durable Functions, though, they have a few constraints.

You may signal an entity (essentially ‘fire-and-forget’ a method), or you may call an entity (and wait for a response). However, you may only call an entity from within an orchestration; from everywhere else, you may only signal it.

Entities are ‘single-threaded,’ meaning that if two Orchestrations call the same entity at the same time, one Orchestration will have to wait for the other’s call to complete.

Given an interface, Orchestrations can provide an Entity Proxy that decides whether to call or signal a method based on its return type. This gives you quite a bit of flexibility to have Entities implement multiple interfaces, or whatever makes sense in your domain.
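One way such a proxy can work is via reflection on the interface’s return types: `void` methods become signals, everything else becomes a call that waits for a reply. A sketch with a stubbed transport; none of these names are Durable PHP’s real API:

```php
<?php
// Sketch of an entity proxy: per-method dispatch chosen from the interface's
// declared return type. The transport is a plain callable standing in for the
// framework's event machinery.
function makeEntityProxy(string $interface, callable $transport): object
{
    return new class($interface, $transport) {
        public function __construct(private string $interface, private $transport) {}

        public function __call(string $method, array $args): mixed
        {
            $returns = (new ReflectionMethod($this->interface, $method))
                ->getReturnType()?->getName();
            $mode = ($returns === 'void') ? 'signal' : 'call';
            return ($this->transport)($mode, $method, $args);
        }
    };
}

interface CounterInterface
{
    public function increment(int $by): void; // void return => signal
    public function current(): int;           // value return => call
}

$proxy = makeEntityProxy(CounterInterface::class,
    fn (string $mode, string $method, array $args) => $mode === 'call' ? 42 : null);

$proxy->increment(1);             // dispatched as a fire-and-forget signal
assert($proxy->current() === 42); // dispatched as a call, waits for the reply
```

Because dispatch is driven by the interface alone, one Entity can expose several interfaces and each caller only sees the slice it needs.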

Current State

I call this project Durable PHP, and you can check it out on GitHub. It’s nowhere near production ready, and there are still a lot of problems to solve. If this is your cup of tea, please give it a try with a simple application.

I’d love to provide some test helpers to make unit testing Orchestrations and Entities easier. Right now, it is a very manual process that requires intricate knowledge of how things work.


I’ve run some basic benchmarks (based on Netherite’s). Keep in mind that this project is nowhere near done or optimized, yet I’m already beating the non-Netherite implementation on a single node. The PHP implementation can process ~1k events per second, and I only expect it to get faster. I don’t expect it to beat Netherite + FASTER anytime soon.

Not The End

There’s still so much to build in this framework, with entire features yet to implement. However, every day that I work on it, I imagine more and more code that can be deleted in my other projects: a world where serialization is a thing of the past, where unit testing is easy, and, most importantly, where databases no longer live.

[1]: I came to love modern PHP more than C#
