Backup CPU and Memory Footprint Config

Simon Bennett · Dec 18, 2020

A number of our customers have been requesting a way to limit the amount of CPU a backup process consumes. This is important to make sure that, during a backup, the server does not dedicate too many resources to the backup and that other processes don't suffer; for example, you would not want your WordPress sites to become slow.

We are proud to release a solution to this issue with an improvement to our new backup engine. Customers can now set an upload bandwidth limit, as well as a chunked upload limit, at the server level.

Backup Bandwidth Limit

By setting the backup bandwidth upload limit at the server level, each backup job under that server will stay within this limit. This, in turn, limits the speed at which processes like compression can run, preventing the CPU from maxing out. Each server has different performance levels, so customers are free to specify the limit in MB/s. By default, each server has no upload limit.

Most customers will see an effect in the 1-10MB/s range.
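
To give a feel for how this works, here is a minimal sketch of the kind of command involved, assuming an rclone-based streaming upload; the remote name and paths are placeholders, not our exact internal command:

    # Stream a compressed backup and cap the upload at 10 MB/s.
    # Because tar/gzip write into a pipe, the bandwidth cap back-pressures
    # the compression step, which is what keeps CPU usage in check.
    tar -czf - /var/www | rclone rcat --bwlimit 10M remote:backups/site.tar.gz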

Backup Memory Limit

All of our backups currently use a streaming upload technique and do not use local disk space during the creation of a backup; instead, we use a chunked upload method with Rclone. As the backup process pipes its output to Rclone, Rclone stores the data in memory and uploads it in chunks.

You can now set the chunk size per server. By default it is set to 60MB, but you can reduce it to as little as 10MB if you want a smaller server memory footprint; this will slow the backup down and lead to more HTTP requests.

You can also set it to a maximum of 1000MB. A larger chunk size will significantly improve the upload speed; however, if a chunk fails to upload, that whole chunk has to be uploaded again.

Our default of 60MB is a good starting point for most servers.
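
For illustration, if you were running a streaming upload by hand against an S3-compatible remote, the chunk size would map onto rclone's per-backend chunk-size flags; the exact flag name depends on the storage provider, and the remote and paths below are placeholders:

    # Stream a backup with 60 MB multipart chunks to an S3-compatible remote.
    # Peak upload memory is roughly chunk size x upload concurrency, so a
    # smaller chunk shrinks the footprint at the cost of more HTTP requests.
    tar -czf - /var/www | rclone rcat --s3-chunk-size 60M --s3-upload-concurrency 2 remote:backups/site.tar.gz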

It's worth noting that this limit applies only to the upload process; the backup process itself will still consume memory.

-Simon