So normally I don’t share. But I figured something like this would be helpful to others.
Now, this assumes you have a fair bit of knowledge of the Linux command line and the programs available to you. I won't be offering much support with this. There's always Google and ChatGPT, so there ya go.
The long and skinny of it…
As a server owner, I enjoy the backend development just as much as the shiny frontend development of scripts for the server users. And even though I have copies of all my work scattered across a dozen folders on my PC, sometimes the code gets…lost.
But when it comes to the server, well that’s something else entirely. I believe in doing backups. I live by the 3-2-1 Backup Rule. What’s that?
In layman’s terms:
- 3 copies: your live data plus two backups of it, one of which goes to the cloud.
- 2 media types: save one copy to an external drive and another on a provider's server.
- 1 off-site backup: keep one copy in a different physical location entirely, even if it's on a thumb drive in a safe-deposit box at the bank.
Seems paranoid, but hardware fails.
So back to the script I wrote to back up my database and server files. I created this bash script to dump the database to a compressed file, then copy the server files to a temporary location on the server sans (without) the server cache. Why would you back up the cache? The server will just rebuild it anyway.
So here's my server architecture. I host on Linode, but I would assume you can do this elsewhere. Linode is easiest for me because they literally have everything I need. You could try to do this on Zap, but without actual backend SSH access, it's probably best to just let them worry about your data.
My main server is on a dedicated box; can't have other stuff hogging all the system resources. Then I have a bucket (think off-site storage) in a different geographic location from the server. The server is running Ubuntu 24.04.
Here’s what I need to be able to do my backup.
- rclone (syncs files to remote/cloud storage; install it with apt)
- bash (comes standard)
- nano (text-based editor; you could use vim, or if you're feeling froggy, just use VS Code/Sublime on your local machine. They all work)
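On a stock Ubuntu 24.04 box, grabbing everything in one shot looks roughly like this (package names are the usual Ubuntu ones; mysqldump ships in mysql-client, or mariadb-client if you run MariaDB):
Code:
sudo apt update
sudo apt install rclone rsync mysql-client gzip tar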
I’m going to post the entire script here. It’s pretty well commented. I’ll sanitize any sensitive info.
Code:
#!/usr/bin/env bash
set -Eeuo pipefail
IFS=$'\n\t'
# ===== CONFIG =====
BACKUP_DIR="/usr/fxserver/backups"
DATA_DIR="/usr/fxserver/txData"
DB_NAME="Qbox_D6D9B4"
DB_USER="[redacted]"
DB_PASS="[redacted]"
DB_HOST="localhost"
RETAIN=2
LOG_FILE="/var/log/fivem-backup.log"
SNAP_PREFIX="snapshot"
# ===== SAFETY / LOGGING =====
mkdir -p "$BACKUP_DIR"
touch "$LOG_FILE"
chmod 640 "$LOG_FILE"
log() { printf '%s %s\n' "$(date -Iseconds)" "$*" | tee -a "$LOG_FILE" >&2; }
trap 'log "ERROR: backup failed on line $LINENO"; exit 1' ERR
# prevent concurrent runs
exec 9>"$BACKUP_DIR/.backup.lock"
if ! flock -n 9; then
log "Another backup is already running. Exiting."
exit 0
fi
# required tools
for bin in rsync mysqldump gzip tar rclone; do
command -v "$bin" >/dev/null || { log "Missing dependency: $bin"; exit 1; }
done
NOW="$(date +"%Y-%m-%d_%H-%M-%S")"
SNAP_TMP="$(mktemp -d "${BACKUP_DIR}/.${SNAP_PREFIX}-${NOW}.XXXXXX")"
SNAP_FINAL="${BACKUP_DIR}/${SNAP_PREFIX}-${NOW}"
log "Starting backup → $SNAP_FINAL"
# 1) Copy txData (excluding caches)
log "Rsync txData → $SNAP_TMP/txData (excluding caches)"
mkdir -p "$SNAP_TMP/txData"
# Exclude caches anywhere inside txData (root or nested profiles);
# rsync patterns with no leading slash match at any depth, so two rules cover it
rsync -aHAX --delete \
--exclude='cache' \
--exclude='cache_priv' \
"$DATA_DIR/" "$SNAP_TMP/txData/"
# 2) Dump MySQL (compressed)
log "Dumping MySQL → ${SNAP_TMP}/fivem_db.sql.gz"
MYSQL_AUTH=(-u "$DB_USER" -p"$DB_PASS" -h "$DB_HOST")
mysqldump "${MYSQL_AUTH[@]}" \
--single-transaction --quick --routines --events --triggers \
"$DB_NAME" | gzip -9 > "${SNAP_TMP}/fivem_db.sql.gz"
# 3) Safety sweep: ensure no cache dirs slipped through (use TEMP path)
log "Sweeping any residual cache directories before publish"
if [[ -d "$SNAP_TMP/txData" ]]; then
find "$SNAP_TMP/txData" -type d \( -name 'cache' -o -name 'cache_priv' \) -print0 \
| xargs -0 -r rm -rf -- || true
fi
# 4) Atomically publish snapshot
mv "$SNAP_TMP" "$SNAP_FINAL"
log "Snapshot complete: $SNAP_FINAL"
# 5) Compress to .tar.gz
ARCHIVE="${SNAP_FINAL}.tar.gz"
log "Compressing → $ARCHIVE"
tar -czf "$ARCHIVE" -C "$BACKUP_DIR" "$(basename "$SNAP_FINAL")"
# 6a) Verify archive integrity before deleting uncompressed folder
log "Verifying archive integrity..."
if tar -tzf "$ARCHIVE" >/dev/null 2>&1; then
log "Verification OK — removing uncompressed snapshot."
rm -rf -- "$SNAP_FINAL"
else
log "ERROR: Archive verification failed. Keeping uncompressed directory for safety."
fi
# 6b) Upload to object storage
REMOTE="linodecrypt:$(hostname -s)/"
log "Uploading $ARCHIVE to $REMOTE"
if rclone copy --transfers 4 --checkers 8 --immutable --s3-no-check-bucket "$ARCHIVE" "$REMOTE"; then
log "Upload complete: $REMOTE$(basename "$ARCHIVE")"
else
log "ERROR: Upload failed; leaving local archive in place."
fi
# 7) Retention — keep only the last $RETAIN archives
log "Pruning old snapshots (keeping $RETAIN)"
shopt -s nullglob
# timestamped names sort lexicographically, so a reverse name sort is newest-first
snaps=( "${BACKUP_DIR}/${SNAP_PREFIX}-"*.tar.gz )
if ((${#snaps[@]})); then
mapfile -t snaps < <(printf '%s\n' "${snaps[@]}" | sort -r)
fi
if ((${#snaps[@]} > RETAIN)); then
for ((i=RETAIN; i<${#snaps[@]}; i++)); do
log "Deleting old snapshot: ${snaps[$i]}"
rm -f -- "${snaps[$i]}"
done
else
log "No old snapshots to prune."
fi
log "Backup finished successfully."
So there’s a lot going on here. I’ll step you through it.
First we set some variables. Most of them should be self-explanatory. A couple of them I’ll explain.
RETAIN=2 – This one sets how many previous copies (including the latest) you want to keep on the server, you know, in case you really bork something up. 2 is a good number; with a daily cron job that's a two-day archive. You can set this to whatever you wish, but be aware that space is an issue: figure on however much raw temp space your txData folder takes up (minus the cache), plus the compressed archive itself. The temp directory gets deleted later on anyway.
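If you want a rough idea of the space involved before committing, GNU du can skip the caches the same way the script does (the path matches DATA_DIR above):
Code:
# estimate txData's size without the cache directories
du -sh --exclude='cache' --exclude='cache_priv' /usr/fxserver/txData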
SNAP_PREFIX="snapshot" – This one is merely what you want to call the backup. It'll append a date and time to the end of the prefix.
LOG_FILE="/var/log/fivem-backup.log" – This is important, as the script runs in the background as a cron job. If something goes amiss, it'll be in the log.
BACKUP_DIR – this one is self-explanatory. It's the folder the script works inside of and where it keeps the on-site backups.
DATA_DIR – this one is the txData directory, or if you don’t care about your server.cfg, permission.cfg and other root files, simply point it to your resources directory.
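Speaking of the cron job mentioned under LOG_FILE: a nightly crontab entry might look like this (the script path and the 4:30 AM time are placeholders, adjust to taste):
Code:
# run the backup every night at 04:30; the script does its own logging
30 4 * * * /usr/fxserver/backup.sh >/dev/null 2>&1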
Safety/logging is just that. It’s logging what it does as it goes.
"prevent concurrent runs" – this just makes sure the script isn't running multiple times at once on the server. No need for extra load on the CPU.
"required tools" – this just checks for rsync, mysqldump, gzip, tar, and rclone. If you don't have any of them, simply install them with apt or your package manager of choice.
We set some other variables: a timestamp in NOW, a temp folder, and the final folder name. The temp folder will get renamed to SNAP_FINAL.
Step 1 (copy txData) → does what it says and logs as it goes. It excludes all the cache files and folders to keep the backup small.
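If you want to sanity-check the exclude rules before trusting them with real data, rsync's dry-run flag shows what would be copied without writing anything (the destination here is just a scratch path I made up):
Code:
# -n = dry run (nothing is written), -v = list the files it would transfer
rsync -aHAXn -v --delete --exclude='cache' --exclude='cache_priv' \
/usr/fxserver/txData/ /tmp/txData-test/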
Step 2 (dump mysql) → Dumps the database to a compressed file. This way the data will be current for the backup.
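And for the day you actually need it: restoring that dump back into MySQL is a one-liner (user and database names taken from the config above; you'll be prompted for the password):
Code:
# decompress and feed the dump straight back into the database
gunzip -c fivem_db.sql.gz | mysql -u [redacted] -p -h localhost Qbox_D6D9B4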
Step 3 (Safety sweep) → simply makes sure we didn't accidentally copy the cache folders.
Step 4 (Publish snapshot) → simply moves the temp directory to the final directory we’ll work from now. Seems redundant, but the backup can happen while the server is live, so best to work with a static set of files.
Step 5 (Compress it to a gzipped tarball (.tar.gz)) → makes the big clump of files smaller and into a single file.
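Unpacking a snapshot later is standard tar fare (the timestamped filename and restore path are just examples):
Code:
# peek inside first, then extract to a scratch directory
tar -tzf snapshot-2025-01-01_04-30-00.tar.gz | less
mkdir -p /tmp/restore
tar -xzf snapshot-2025-01-01_04-30-00.tar.gz -C /tmp/restore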
Step 6a (yeah…I added stuff along the way and didn’t want to renumber the comments) → This one just verifies the integrity of the compressed file before we remove the working directory.
Step 6b (we copy it offsite) → I use Linode buckets. You can skip this part if you want to manually download the archives to your desktop. You can also configure rclone to upload to Google Drive, OneDrive, Amazon S3, and plenty of other providers.
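Pulling an archive back down from the bucket is the same rclone command in reverse (the remote name matches the script; the filename is an example):
Code:
# copy one archive from the bucket into the current directory
rclone copy "linodecrypt:$(hostname -s)/snapshot-2025-01-01_04-30-00.tar.gz" .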
Step 7 (archive pruning) → we just remove the oldest archives to save space (the uncompressed working directories were already cleaned up in step 6a).
Here's a ChatGPT-fueled tutorial on rclone: https://chatgpt.com/share/68ec892c-63b4-8010-a042-f10024721168. While you're there, you can ask it for help with your particular setup.
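For reference, the linodecrypt remote in my script is an rclone crypt remote wrapped around an S3-style remote. A config sketch might look like the below; every value is a placeholder you'd generate with rclone config, and provider = Linode needs a reasonably recent rclone (older builds can use provider = Other with the same endpoint):
Code:
# ~/.config/rclone/rclone.conf (sketch only; generate yours with 'rclone config')
[linode]
type = s3
provider = Linode
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = us-east-1.linodeobjects.com

[linodecrypt]
type = crypt
remote = linode:your-backup-bucket
password = YOUR_OBSCURED_PASSWORD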
I know this was long winded, but I wanted to share it, just in case someone could find it useful.
Peace, Love, and Axle-Grease.
–HT