feat: multiple workers with same file (#1856)

* Allow multiple workers with the same file.

* Fix formatting of duplicate filename check

* Adds docs.

* suggestions by @alexandre-daubois.

* Update performance.md

---------

Co-authored-by: Kévin Dunglas <kevin@dunglas.fr>
Authored by Alexander Stecher on 2025-09-09 14:27:00 +02:00, committed by GitHub
parent 984f0a0211
commit 960dd209f7
2 changed files with 37 additions and 1 deletions


@@ -264,7 +264,10 @@ func (f *FrankenPHPModule) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {
 		if _, ok := fileNames[w.FileName]; ok {
 			return fmt.Errorf(`workers in a single "php_server" block must not have duplicate filenames: %q`, w.FileName)
 		}
-		fileNames[w.FileName] = struct{}{}
+		if len(w.MatchPath) == 0 {
+			fileNames[w.FileName] = struct{}{}
+		}
 	}
 	return nil
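The change above can be sketched as a standalone check: workers sharing a filename are rejected only when a worker has no path matcher, since path-matched workers may legitimately reuse the same script. This is a minimal sketch; the `worker` struct and `validateWorkers` helper here are hypothetical simplifications of FrankenPHP's actual types.

```go
package main

import "fmt"

// worker is a simplified stand-in for FrankenPHP's worker config.
type worker struct {
	FileName  string
	MatchPath []string
}

// validateWorkers rejects duplicate filenames, but only registers a
// filename as "taken" when the worker has no match path, so several
// path-matched workers can share one script (e.g. index.php).
func validateWorkers(workers []worker) error {
	fileNames := map[string]struct{}{}
	for _, w := range workers {
		if _, ok := fileNames[w.FileName]; ok {
			return fmt.Errorf(`workers in a single "php_server" block must not have duplicate filenames: %q`, w.FileName)
		}
		if len(w.MatchPath) == 0 {
			fileNames[w.FileName] = struct{}{}
		}
	}
	return nil
}

func main() {
	// Two workers on the same file, both with match paths: allowed.
	shared := []worker{
		{FileName: "index.php", MatchPath: []string{"/slow-endpoint/*"}},
		{FileName: "index.php", MatchPath: []string{"*"}},
	}
	fmt.Println(validateWorkers(shared) == nil)

	// Two unmatched workers on the same file: rejected.
	dup := []worker{
		{FileName: "index.php"},
		{FileName: "index.php"},
	}
	fmt.Println(validateWorkers(dup) != nil)
}
```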


@@ -155,3 +155,36 @@ In particular:
For more details, read [the dedicated Symfony documentation entry](https://symfony.com/doc/current/performance.html)
(most tips are useful even if you don't use Symfony).

## Splitting The Thread Pool

It is common for applications to interact with slow external services, such as an
API that becomes unreliable under high load or consistently takes 10+ seconds to respond.
In such cases, it can be beneficial to split the thread pool into dedicated "slow" pools.
This prevents slow endpoints from consuming all server resources/threads and
limits the concurrency of requests going to the slow endpoint, similar to a
connection pool.

```caddyfile
{
	frankenphp {
		max_threads 100 # max 100 threads shared by all workers
	}
}

example.com {
	php_server {
		root /app/public # the root of your application

		worker index.php {
			match /slow-endpoint/* # all requests with path /slow-endpoint/* are handled by this thread pool
			num 10 # minimum 10 threads for requests matching /slow-endpoint/*
		}

		worker index.php {
			match * # all other requests are handled separately
			num 20 # minimum 20 threads for other requests, even if the slow endpoints start hanging
		}
	}
}
```

It is generally also advisable to handle very slow endpoints asynchronously, using mechanisms such as message queues.