Backups of the MySQL databases and MongoDB are performed by scheduled Kubernetes CronJobs. Before applying the backup jobs, an NFS export must already be configured to hold the backup data.
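For reference, the PersistentVolumes below expect the NFS export at 10.0.0.100:/mnt/data. A minimal export entry and a quick check from a cluster node could look like this sketch (the client network 10.0.0.0/24 and the export options are assumptions, adjust them to the environment):

# /etc/exports on the NFS server (assumed example)
/mnt/data 10.0.0.0/24(rw,sync,no_subtree_check,no_root_squash)

# verify the export is visible from a cluster node
showmount -e 10.0.0.100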
MySQL Backup
The MySQL backup runs daily. On the first day of each month the dump is stored as a monthly backup; on all other days it is stored as a regular daily backup.
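With backup_id set to myapp as in the ConfigMap below, the dumps land on the NFS share in a layout roughly like this (dates are illustrative only):

/mnt/data/Backup/MySQL/myapp/daily/20240115/db1.sql.7z      # at most 7 daily dumps are kept
/mnt/data/Backup/MySQL/myapp/daily/20240116/db1.sql.7z
/mnt/data/Backup/MySQL/myapp/monthly/20240201/db1.sql.7z    # written on the first day of the month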
- Review MySQL backup configuration
vi myapp/mysql-backup.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mysql-backup
  namespace: myapp
  labels:
    app: myapp-backup-job
spec:
  schedule: "55 23 * * *"
  timeZone: Asia/Jakarta
  successfulJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: mysql-backup
              image: docker.io/debian:bookworm-slim
              command:
                - /bin/bash
                - -c
                - |
                  cp /backup/mysql-backup.sh ~/mysql-backup.sh
                  chmod +x ~/mysql-backup.sh
                  ~/mysql-backup.sh
              env:
                - name: BACKUP_ID
                  valueFrom:
                    configMapKeyRef:
                      name: mysql-backup-data
                      key: backup_id
                - name: APP_TIMEZONE
                  value: "Asia/Jakarta"
              volumeMounts:
                - name: backup-data
                  mountPath: /backup
                - name: backup-repository
                  mountPath: /backup/storage
          volumes:
            - name: backup-data
              configMap:
                name: mysql-backup-data
                items:
                  - key: mysql-backup.sh
                    path: mysql-backup.sh
                  - key: mysql-backup.var
                    path: mysql-backup.var
            - name: backup-repository
              persistentVolumeClaim:
                claimName: mysql-backup-pvc
          restartPolicy: Never
      backoffLimit: 1
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-backup-data
  namespace: myapp
data:
  backup_id: myapp
  mysql-backup.sh: |
    #!/bin/bash

    . /backup/mysql-backup.var

    # set timezone
    if [ -n "$APP_TIMEZONE" ]; then
      ln -sf /usr/share/zoneinfo/${APP_TIMEZONE} /etc/localtime
      dpkg-reconfigure -f noninteractive tzdata
    fi

    # configure apt
    [ -f /etc/apt/sources.list.d/debian.sources ] && \
      sed -i -e "s/deb.debian.org/kartolo.sby.datautama.net.id/g" /etc/apt/sources.list.d/debian.sources
    apt-get update>/dev/null
    apt-get install -y curl gnupg p7zip-full>/dev/null

    # import MySQL GPG public key
    mkdir -p /etc/apt/keyrings
    curl -fsSL https://repo.mysql.com/RPM-GPG-KEY-mysql-2023 | gpg --dearmor -o /etc/apt/keyrings/mysql.gpg

    # bootstrap mysql-community-client
    cat <<EOF > /etc/apt/sources.list.d/mysql.list
    ### THIS FILE IS AUTOMATICALLY CONFIGURED ###
    # You may comment out entries below, but any other modifications may be lost.
    # Use command 'dpkg-reconfigure mysql-apt-config' as root for modifications.
    deb [signed-by=/etc/apt/keyrings/mysql.gpg] http://repo.mysql.com/apt/debian/ bookworm mysql-apt-config
    deb [signed-by=/etc/apt/keyrings/mysql.gpg] http://repo.mysql.com/apt/debian/ bookworm mysql-8.0
    deb [signed-by=/etc/apt/keyrings/mysql.gpg] http://repo.mysql.com/apt/debian/ bookworm mysql-tools
    #deb [signed-by=/etc/apt/keyrings/mysql.gpg] http://repo.mysql.com/apt/debian/ bookworm mysql-tools-preview
    deb-src [signed-by=/etc/apt/keyrings/mysql.gpg] http://repo.mysql.com/apt/debian/ bookworm mysql-8.0
    EOF
    apt-get update>/dev/null
    apt-get install -y mysql-community-client>/dev/null

    MYSQLDUMP=$(which mysqldump)
    if [ -n "$MYSQLDUMP" ]; then
      # prepare backup storage
      BACKUPDIR=/backup/storage/Backup/MySQL/${BACKUP_ID}
      if [ $(date +%d) = "01" ]; then
        DAILY="false"
        BACKUPDIR=$BACKUPDIR/monthly
      else
        DAILY="true"
        BACKUPDIR=$BACKUPDIR/daily
      fi
      BACKUPDIR=$BACKUPDIR/$(date +%Y%m%d)
      mkdir -p $BACKUPDIR

      # execute backup
      for DB in $DB_BACKUPS; do
        DB_BACKUP=$BACKUPDIR/$DB.sql.7z
        echo "Creating MySQL database dump for $DB..."
        [ -f "$DB_BACKUP" ] && mv "$DB_BACKUP" "$DB_BACKUP~"
        $MYSQLDUMP --single-transaction --routines --quick --set-gtid-purged=OFF -h $DB_HOST -P $DB_PORT -u $DB_USER -p$DB_PASSWORD $DB | \
          7z a -si "$DB_BACKUP"
      done

      # cleanup daily backup
      if [ "$DAILY" = "true" ]; then
        BACKUPDIR=$(dirname $BACKUPDIR)
        DIRS=$(ls $BACKUPDIR)
        if [ -n "$DIRS" ]; then
          echo "Found daily backup entries [$DIRS]..."
          MAXBACKUP=7
          N=0
          for DIR in $DIRS; do
            if [ -d "$BACKUPDIR/$DIR" ]; then
              ((N++))
            fi
          done
          for DIR in $DIRS; do
            if [ -d "$BACKUPDIR/$DIR" -a $N -gt $MAXBACKUP ]; then
              echo "Cleaning up backup $DIR..."
              rm -rf $BACKUPDIR/$DIR
              ((N--))
            fi
          done
        fi
      fi
    fi
    sleep 10
  mysql-backup.var: |
    DB_HOST=mysql-instances
    DB_PORT=3306
    DB_USER=user
    DB_PASSWORD=password
    DB_BACKUPS="db1 db2 db3"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-mysql-backup-pv
spec:
  capacity:
    storage: 1000Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.100
    path: /mnt/data
  mountOptions:
    - nfsvers=4.2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-backup-pvc
  namespace: myapp
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1000Gi
  volumeName: myapp-mysql-backup-pv
- Apply backup job
kubectl apply -f myapp/mysql-backup.yaml
cronjob.batch/mysql-backup created
configmap/mysql-backup-data created
persistentvolume/myapp-mysql-backup-pv created
persistentvolumeclaim/mysql-backup-pvc created
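To test the backup without waiting for the 23:55 schedule, the CronJob can be triggered manually; the job name mysql-backup-manual below is arbitrary:

kubectl get cronjob -n myapp mysql-backup
kubectl create job -n myapp --from=cronjob/mysql-backup mysql-backup-manual
kubectl logs -n myapp -f job/mysql-backup-manual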
MongoDB Backup
The MongoDB backup runs weekly. If no full backup exists yet, a full dump is taken; otherwise a delta dump covering changes since the last backup date is performed.
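With backup_id set to myapp as in the ConfigMap below, the weekly dumps end up on the NFS share roughly as follows (dates are illustrative; delta runs only dump the GridFS fs.files/fs.chunks documents uploaded since the last backup date):

/mnt/data/Backup/MongoDB/myapp/full/20240107/db1/<collection>.bson.gz
/mnt/data/Backup/MongoDB/myapp/delta/20240114/db1/fs.files.bson.gz
/mnt/data/Backup/MongoDB/myapp/delta/20240114/db1/fs.chunks.bson.gz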
- Review MongoDB backup configuration
vi myapp/mongodb-backup.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mongodb-backup
  namespace: myapp
  labels:
    app: myapp-backup-job
spec:
  schedule: "30 23 * * 0"
  timeZone: Asia/Jakarta
  successfulJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: mongodb-backup
              image: docker.io/debian:bookworm-slim
              command:
                - /bin/bash
                - -c
                - |
                  cp /backup/mongodb-backup.sh ~/mongodb-backup.sh
                  chmod +x ~/mongodb-backup.sh
                  ~/mongodb-backup.sh
              env:
                - name: BACKUP_ID
                  valueFrom:
                    configMapKeyRef:
                      name: mongodb-backup-data
                      key: backup_id
                - name: APP_TIMEZONE
                  value: "Asia/Jakarta"
              volumeMounts:
                - name: backup-data
                  mountPath: /backup
                - name: backup-repository
                  mountPath: /backup/storage
          volumes:
            - name: backup-data
              configMap:
                name: mongodb-backup-data
                items:
                  - key: mongodb-backup.sh
                    path: mongodb-backup.sh
                  - key: mongodb-backup.var
                    path: mongodb-backup.var
            - name: backup-repository
              persistentVolumeClaim:
                claimName: mongodb-backup-pvc
          restartPolicy: Never
      backoffLimit: 1
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-backup-data
  namespace: myapp
data:
  backup_id: myapp
  mongodb-backup.sh: |
    #!/bin/bash

    . /backup/mongodb-backup.var

    # set timezone
    if [ -n "$APP_TIMEZONE" ]; then
      ln -sf /usr/share/zoneinfo/${APP_TIMEZONE} /etc/localtime
      dpkg-reconfigure -f noninteractive tzdata
    fi

    # configure apt
    [ -f /etc/apt/sources.list.d/debian.sources ] && \
      sed -i -e "s/deb.debian.org/kartolo.sby.datautama.net.id/g" /etc/apt/sources.list.d/debian.sources
    apt-get update>/dev/null
    apt-get install -y curl gnupg>/dev/null

    # setup mongodb repository, see https://www.mongodb.com/docs/manual/tutorial/install-mongodb-on-debian/
    curl -fsSL https://pgp.mongodb.com/server-7.0.asc | \
      gpg -o /usr/share/keyrings/mongodb-server-7.0.gpg --dearmor
    echo "deb [ signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] http://repo.mongodb.org/apt/debian bullseye/mongodb-org/7.0 main" | \
      tee /etc/apt/sources.list.d/mongodb-org-7.0.list
    apt-get update>/dev/null
    apt-get install -y mongodb-org-tools mongodb-mongosh>/dev/null

    MONGODUMP=$(which mongodump)
    if [ -n "$MONGODUMP" ]; then
      # prepare backup storage
      TOPDIR=/backup/storage/Backup/MongoDB/${BACKUP_ID}
      BACKUP="delta"
      if [ -d $TOPDIR/full ]; then
        DIRS=$(ls $TOPDIR/full)
        [ -z "$DIRS" ] && BACKUP="full"
      else
        BACKUP="full"
      fi
      BACKUPDIR=$TOPDIR/$BACKUP/$(date +%Y%m%d)

      # get backup last date
      LASTDATE=""
      if [ "$BACKUP" != "full" ]; then
        # get last date from delta backup
        XDIR=$(dirname $BACKUPDIR)
        DIRS=$(ls $XDIR)
        if [ -n "$DIRS" ]; then
          echo "Found backup entries [$DIRS]..."
          for DIR in $DIRS; do
            LASTDATE=$DIR
          done
        fi
        # get last date from full backup
        if [ -z "$LASTDATE" ]; then
          DIRS=$(ls $TOPDIR/full)
          if [ -n "$DIRS" ]; then
            for DIR in $DIRS; do
              LASTDATE=$DIR
            done
          fi
        fi
      fi

      # create backup directory
      mkdir -p $BACKUPDIR

      # execute backup
      if [ "$BACKUP" = "full" ]; then
        echo "Doing mongodb database full dump..."
        $MONGODUMP --host $DB_HOST:$DB_PORT -u $DB_USER -p $DB_PASSWORD --authenticationDatabase=admin -j 1 --gzip --out=$BACKUPDIR
      elif [ -n "$LASTDATE" ]; then
        DATE1="${LASTDATE:0:4}-${LASTDATE:4:2}-${LASTDATE:6:2}T23:59:59.999Z"
        DATE2="$(date +%Y-%m-%d)T23:59:59.999Z"
        echo "Last backup date $LASTDATE..."
        echo "Delta backup performed from $DATE1 to $DATE2..."
        for DB in $DB_BACKUPS; do
          echo "Creating mongodb database delta dump for $DB..."
          QUERY=/tmp/query.json
          SCRIPT=/tmp/ids.js
          # backup fs.files collection
          cat << EOF > $QUERY
    {"uploadDate":{"\$gt":{"\$date":"$DATE1"},"\$lte":{"\$date":"$DATE2"}}}
    EOF
          $MONGODUMP -h $DB_HOST:$DB_PORT -u $DB_USER -p $DB_PASSWORD --authenticationDatabase=admin -j 1 --gzip --out=$BACKUPDIR \
            --db=$DB --collection=fs.files --queryFile=$QUERY
          # backup fs.chunks collection
          cat << EOF > $SCRIPT
    db = db.getSiblingDB('$DB');
    res = [];
    c = db.fs.files.find({"uploadDate":{"\$gt":ISODate("$DATE1"),"\$lte":ISODate("$DATE2")}});
    while (c.hasNext()) {
      res.push('{"\$oid":"_ID_"}'.replace(/_ID_/, c.next()._id));
    }
    print(res.join(','));
    EOF
          IDS=$(mongosh -u $DB_USER -p $DB_PASSWORD --authenticationDatabase=admin --quiet mongodb://$DB_HOST:$DB_PORT/$DB $SCRIPT)
          if [ -n "$IDS" ]; then
            cat << EOF > $QUERY
    {"files_id":{"\$in":[$IDS]}}
    EOF
            $MONGODUMP -h $DB_HOST:$DB_PORT -u $DB_USER -p $DB_PASSWORD --authenticationDatabase=admin -j 1 --gzip --out=$BACKUPDIR \
              --db=$DB --collection=fs.chunks --queryFile=$QUERY
          fi
        done
      fi
    fi
    sleep 10
  mongodb-backup.var: |
    DB_HOST=mongodb-svc
    DB_PORT=27017
    DB_USER=user
    DB_PASSWORD=password
    DB_BACKUPS="db1 db2 db3"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-mongodb-backup-pv
spec:
  capacity:
    storage: 1000Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.100
    path: /mnt/data
  mountOptions:
    - nfsvers=4.2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-backup-pvc
  namespace: myapp
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1000Gi
  volumeName: myapp-mongodb-backup-pv
- Apply backup job
kubectl apply -f myapp/mongodb-backup.yaml
cronjob.batch/mongodb-backup created
configmap/mongodb-backup-data created
persistentvolume/myapp-mongodb-backup-pv created
persistentvolumeclaim/mongodb-backup-pvc created
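As with the MySQL job, the MongoDB backup can be triggered manually to verify it end to end; the job name mongodb-backup-manual below is arbitrary:

kubectl get cronjob -n myapp mongodb-backup
kubectl create job -n myapp --from=cronjob/mongodb-backup mongodb-backup-manual
kubectl logs -n myapp -f job/mongodb-backup-manual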