
CCM19 in the cluster - Highly available

In principle, CCM19 can be operated in a cluster without any problems. The CCM19 Cloud itself runs in a high-availability cluster with a load balancer and several servers behind it that share the load. We generally recommend Ansible for managing the servers.

This setup can also be replicated with the standard agency version, but it requires a few adjustments.

First, you have to decide which storage backend you want to use: CCM19 runs either with a file-based approach using JSON files or with MongoDB.

If you plan to manage more than 1,000 domains at some point, you should work with MongoDB from the start, even if you begin with only one server. Migrating from JSON to MongoDB is not trivial and involves considerable work.

For installation and administration, we assume that the servers run an Apache web server with PHP. If MongoDB is used, the database must of course also be installed, together with the corresponding PHP extensions. Please note that the traffic requirements are high, so optimize access on your servers accordingly.

It makes sense to use:

  • php-fpm
  • a caching module for Apache, with the cache memory in RAM (e.g. tmpfs)
  • only the PHP and Apache modules you actually need; disable the rest
  • optimized kernel settings, e.g. the maximum number of processes and the number of connections the network interface can handle
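On a Debian/Ubuntu-style system, the tuning steps above could look roughly like the following sketch; the exact module names, paths, and limit values are assumptions and must be adapted to your setup:

```shell
# Enable php-fpm support and caching modules for Apache (module names are examples)
a2enmod proxy_fcgi mpm_event cache cache_disk
# Disable modules that are not needed (examples; check your own dependencies first)
a2dismod status autoindex

# Keep the Apache disk cache in RAM via tmpfs (path and size are examples)
mount -t tmpfs -o size=512m tmpfs /var/cache/apache2/mod_cache_disk

# Raise kernel limits for many parallel connections (values are examples)
sysctl -w net.core.somaxconn=4096
sysctl -w net.ipv4.ip_local_port_range="1024 65000"
```

For persistence, the mount belongs in /etc/fstab and the sysctl values in /etc/sysctl.d/.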

File-based cluster

A file-based cluster is possible with distributed file systems such as GlusterFS.
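As a sketch, such a shared directory could be mounted on each web server like this; the GlusterFS host, volume name, and target path are assumptions:

```shell
# Mount a GlusterFS volume for the shared CCM19 data (requires glusterfs-client)
mount -t glusterfs gluster-a.example.com:/ccm19-shared /var/www/ccm19/var/config-bundle
```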

For this, a .env.local file must be added to the CCM19 zip file before installation:

First, the cluster members. This is necessary so that the instances in the cluster can, for example, synchronize their caches. For instance, with two cluster members where the CCM19 installation sits directly in the HTTP root:
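A sketch of such an .env.local entry, using a hypothetical variable name and example hostnames; check the exact key against the documentation for your CCM19 version:

```shell
# Hypothetical key and hosts; on each server, list the URL(s) of the OTHER member(s)
CCM19_CLUSTER_MEMBERS="https://ccm19-b.example.com/"
```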


Several URLs are separated by spaces, but a single URL also works. In principle, only the URLs of the other instances in the cluster are required.


This indicates that the central data is stored in the directory var/config-bundle. This directory must then be kept in sync between all instances, e.g. via GlusterFS.
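A hypothetical .env.local entry for this setting might look like the following; the variable name is an assumption, while the path matches the directory described above:

```shell
# Hypothetical key; stores the shared CCM19 data in var/config-bundle
CCM19_CONFIG_BUNDLE_PATH=var/config-bundle
```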

The session_save_path() should then point to the directory /var/config-bundle/tmp/. This allows logging in, and the login state is retained even when requests are routed to different cluster members.
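With php-fpm, the session path can be set per pool, for example as follows; the pool file location varies by distribution and PHP version:

```ini
; In the php-fpm pool configuration, e.g. /etc/php/8.2/fpm/pool.d/www.conf
php_value[session.save_path] = /var/config-bundle/tmp/
```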

Cluster with MongoDB

The installation with MongoDB is carried out via the installation interface and does not require any special settings. We recommend setting up the MongoDB replicas directly on the web servers on which CCM19 is installed. Read access can then always go through Unix domain sockets, which speeds up access significantly.

The .env.local must also be added to all instances in the cluster for this variant (see above for details):


For maximum reliability, the addresses of all MongoDB replicas should also be listed in order of preference in the .env.local of each cluster instance. For example, for two MongoDB replicas, each on port 27017 and additionally reachable via the Unix domain socket "/var/run/mongodb/mongod.sock" on the respective web server:
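As a sketch, assuming the connection string lives in a MONGODB_URL-style variable (the variable name, hostnames, database name, and replica set name are assumptions; note that Unix socket paths in a MongoDB URI must be percent-encoded):

```shell
# Local socket first (preferred), then the remote replica as fallback
MONGODB_URL="mongodb://%2Fvar%2Frun%2Fmongodb%2Fmongod.sock,ccm19-b.example.com:27017/ccm19?replicaSet=rs0"
```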


The initial configuration of a MongoDB replica set is explained in detail in the MongoDB documentation and is therefore not covered further here.

Server-side cronjobs

By default, regularly executed tasks (cronjobs) are triggered by randomly selected frontend requests. In a cluster installation, depending on how requests are distributed across the servers, this can lead to some cronjobs being triggered too rarely.

We therefore recommend setting up server-side cron jobs in clusters (e.g. in the crontab or as a systemd timer) that execute bin/console app:cron:run --timeout 15 on each instance every 2-5 minutes. The timeout parameter specifies the maximum runtime of each run. The cron jobs should be staggered so that they preferably do not run at exactly the same time on all servers.
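A sketch of corresponding entries in /etc/cron.d/ (the install path and user are assumptions); the start minutes are offset so the two servers do not run at exactly the same time:

```shell
# /etc/cron.d/ccm19 on server A: every 5 minutes, offset by 1 minute
1-59/5 * * * * www-data cd /var/www/ccm19 && bin/console app:cron:run --timeout 15
# ... and on server B: every 5 minutes, offset by 3 minutes
3-59/5 * * * * www-data cd /var/www/ccm19 && bin/console app:cron:run --timeout 15
```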

As soon as the server-side cronjobs have been set up and tested, the frontend triggering of the cronjobs can be deactivated by an entry in .env.local:
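Such an entry might look like the following; the variable name is a hypothetical placeholder, so check the documentation for your CCM19 version for the exact key:

```shell
# Hypothetical key: stop triggering cronjobs from frontend requests
CCM19_FRONTEND_CRON_ENABLED=0
```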