Hi! This post is pretty old (from back when FrankenPHP was alpha software) and needs to be redone! All the memory leaks have been squashed, and as of 1.0.3, FrankenPHP can do about 15k+ requests per second and completely saturate a network/CPU package.
The original post is below.
If you live in the PHP world, you’ve probably heard of FrankenPHP. As soon as I saw the project, I fell in love with it. I tend to go back and forth between several languages, even mixing them when the problem calls for it. But here’s FrankenPHP, a beautiful mix of Go, C, and PHP.
FrankenPHP is a new SAPI for PHP. Since it’s so new, it has some rough edges, but I’ve been quite blown away with the performance. I wanted to understand how to make the performance even better, and what follows is an unofficial benchmark, a guide to getting and keeping FrankenPHP performant, and some other misc. notes.
First of all, let’s start with some benchmarks. I’ve deployed MySQL and a WordPress installation for Apache, PHP-FPM, and FrankenPHP to a Kubernetes environment. There are 40 cores available and 192 GB of RAM total (~50% is being used for other tasks). In each of the following tests, I’ve deployed 10 instances of each SAPI with the goal of saturating my pitiful network.
Let’s first deploy Apache in its default configuration (according to the WordPress image).
In finding the maximum, I pretty much maxed out the available bandwidth of my load balancer. So this isn’t the theoretical maximum of PHP, but the maximum I can expect to serve per second with this configuration: ~266 requests per second.
The total memory utilized by Apache to serve these connections peaked at 15 GB, and ended up settling around 3.4 GB.
The performance is quite amazing, as long as we’re reusing connections. When we use a new connection for each transfer, it looks more like this:
Memory usage went up to 12.6 GB total.
Now let’s take a look at PHP-FPM. I’ll go ahead and note that the FPM configuration shipped with the WordPress image sucks. It isn’t tuned, like, at all, and is overly conservative. So I don’t really trust these numbers, such as they are.
In this case, I was able to achieve 300 requests per second when reusing connections, utilizing only 2.6 GB of RAM total.
And when creating a new connection on each request, 300 requests per second is unsustainable. I was only able to handle about 209 requests per second:
Memory usage maxed out at 2.65 GB.
Measuring FrankenPHP was far more interesting than the other two. It was able to sustain a burst of up to 300 requests per second for ~45 seconds without impacting latency too much (this might also be my ISP being lenient, though it didn’t happen with the other two servers). However, I could really only sustain ~220 requests per second when reusing connections:
Just like with PHP-FPM, when creating a new connection on each request, I was only able to handle about 209 requests per second:
Memory usage in both cases peaked at 6.89 GB total.
| SAPI | Peak Memory Usage (GB) |
| --- | --- |
| Apache | 15 |
| PHP-FPM | 2.65 |
| FrankenPHP | 6.89 |
These numbers probably don’t mean much on their own. As I mentioned earlier, my load balancer isn’t the best, my home network is limited to a single gigabit, and we’re sending an entire HTML page. Perhaps a better test would be to perform a HEAD request to understand the true maximum requests per second.
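To illustrate why HEAD helps: the response carries only headers, no body, so per-request bandwidth drops to a few hundred bytes and the network stops being the bottleneck. A quick local sketch (using a `python3` stand-in server, since my benchmark target isn’t public):

```shell
# Start a throwaway local server to play the role of the real target.
python3 -m http.server 8080 >/dev/null 2>&1 &
SRV=$!
sleep 1
# curl -I issues a HEAD request: headers come back, but no HTML body.
STATUS=$(curl -s -I http://127.0.0.1:8080/ | head -n 1)
echo "$STATUS"
kill $SRV
```

A real run would point a proper load generator (wrk, hey, etc.) at the server with HEAD requests instead of a single curl.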
As you may have noticed, the performance of FrankenPHP is not exactly the best. I would put it somewhere between Apache and PHP-FPM. However, you don’t get JUST a SAPI; you also get Caddy, Mercure, HTTP/3, and Early Hints (something PHP-FPM cannot support, btw). Not to mention that FrankenPHP hasn’t even been thoroughly optimized yet, so I only expect it to get better.
For me, I generally don’t need to worry about this sort of load, so getting HTTP/3 through Caddy is more important than surviving an insane load. Note that we didn’t test HTTP/3 here, though I may test that in the future.
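For reference, getting HTTP/3 this way requires no special configuration: Caddy negotiates it automatically whenever it serves HTTPS. A minimal Caddyfile sketch (domain and docroot are placeholders, not from my deployment):

```caddyfile
{
	# Enable the FrankenPHP module globally.
	frankenphp
}

example.com {
	root * /var/www/html
	# Serve PHP files and static assets; HTTP/3 is advertised
	# automatically alongside the HTTPS listener.
	php_server
}
```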
Other Things Tested
I also tested how this performs when dealing with bad networks, particularly ones that require a lot of TCP retransmissions. I didn’t take a screenshot or record those test results as I didn’t really find them that remarkable.
Out of the box, FrankenPHP will not instantly be this performant, at least under a sustained load. For most workloads, you probably won’t notice any issues. However, if you are worried about a fully loaded server, it is probably worth tuning GOMEMLIMIT for your workload. Additionally, if you are experiencing GC pauses in something like Kubernetes, a simple HTTP readiness probe can direct traffic to healthy pods while one is collecting garbage. Be aware that you run a risk of every single pod GCing at the same time, but if there are enough pods, that should be relatively unlikely.
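Here’s a sketch of what that looks like in Kubernetes. The names, ports, and limits are assumptions for illustration, not from my deployment; the idea is just to keep GOMEMLIMIT comfortably below the container’s memory limit so the Go GC kicks in before the pod gets OOM-killed, and to let the readiness probe pull a slow pod out of rotation:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frankenphp
spec:
  replicas: 10
  selector:
    matchLabels: { app: frankenphp }
  template:
    metadata:
      labels: { app: frankenphp }
    spec:
      containers:
        - name: frankenphp
          image: dunglas/frankenphp
          env:
            # Assumed values: cap the Go runtime's heap at ~80% of the
            # container limit so GC runs before Kubernetes kills the pod.
            - name: GOMEMLIMIT
              value: "800MiB"
          resources:
            limits:
              memory: 1Gi
          readinessProbe:
            # A pod stalled in a GC pause fails the probe and stops
            # receiving traffic until it recovers.
            httpGet:
              path: /
              port: 80
            periodSeconds: 5
            timeoutSeconds: 2
            failureThreshold: 2
```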