rsync application example

Reference: https://www.cnblogs.com/noxy/p/8986164.html    detailed explanation of rsync usage

 

1. Small test parameters

# rsync -avzP [email protected]::web1 /data/test/ --prompts for the password (123); synchronizes the server's web1 module to /data/test. Parameter description:

-a --archive mode, equivalent to -rlptgoD

-r --recurse into directories

-l --copy symlinks as symlinks

-i --itemize changes; when no destination is given, rsync simply lists the files on the server

-p --preserve file permissions

-t --preserve file modification times

-g --preserve file group

-o --preserve file owner

-D --preserve device files and special files

-z --compress during transfer

-P --show transfer progress and keep partially transferred files

-v --verbose output during transfer, overlaps somewhat with -P

# rsync -avzP --delete [email protected]::web1 /data/test/ --make the client an exact mirror of the server; --delete removes local files that no longer exist on the server

# rsync -avzP --delete /data/test/ [email protected]::web1 --upload client files to the server

# rsync -avzP --delete /data/test/ [email protected]::web1/george --upload client files to the george directory on the server

# rsync -ir --password-file=/tmp/rsync.password [email protected]::web1 --recursively list the files of the server's web1 module

# rsync -avzP --exclude="*3*" --password-file=/tmp/rsync.password [email protected]::web1 /data/test/ --synchronize all files except those whose path or name contains "3"

2. Synchronize via password file

# echo "123" > /tmp/rsync.password --the client-side password file contains only the password

# chmod 600 /tmp/rsync.password

# rsync -avzP --delete --password-file=/tmp/rsync.password [email protected]::web1 /data/test/ --read the password from the file instead of prompting
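For reference, the `::web1` syntax connects to an rsync daemon, which needs a module definition on the server side. A minimal sketch of /etc/rsyncd.conf (the module path, user name and secrets-file location are illustrative; note the server-side secrets file holds `user:password` pairs such as `user1:123`, unlike the client file, and must also be mode 600):

```ini
uid = root
gid = root
use chroot = no
port = 873

[web1]
    path = /data/web1
    read only = no
    auth users = user1
    secrets file = /etc/rsync.password
```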

3. Automatic synchronization on the client (cron)

# crontab -e

10 0 * * * rsync -avzP --delete --password-file=/tmp/rsync.password [email protected]::web1 /data/test/

# crontab -l

 

Real-time data synchronization

Environment: Rsync + Inotify-tools

1. inotify-tools

inotify-tools is a set of C library interfaces to the inotify file-monitoring facility of the Linux kernel; a set of command-line tools is also provided. These tools can be used to monitor file system events.

inotify-tools is written in C and depends on nothing except a kernel with inotify support.

inotify-tools provides two tools: inotifywait, which waits for changes to files or directories, and inotifywatch, which collects statistics on file system accesses.

2. Install inotify-tools

Download link: http://github.com/downloads/rvoicilas/inotify-tools/inotify-tools-3.14.tar.gz

# yum install -y gcc --install build dependencies

# mkdir /usr/local/inotify

# tar -xf inotify-tools-3.14.tar.gz

# cd inotify-tools-3.14

# ./configure --prefix=/usr/local/inotify/

# make && make install

3. Set environment variables

# vim /root/.bash_profile

export PATH=/usr/local/inotify/bin/:$PATH

# source /root/.bash_profile

# echo '/usr/local/inotify/lib' >> /etc/ld.so.conf --register the library path

# ldconfig

# ln -s /usr/local/inotify/include /usr/include/inotify

4. Common parameters

-m --monitor continuously (by default inotifywait exits after the first event)

-r --watch directories recursively

-q --quiet; print only the monitored events

-e --events to watch for; available events: access (file read), modify (file modified), attrib (attributes changed), open (file opened), delete (file deleted), create (file created), move (file moved)

--fromfile --read the files to watch (or to exclude) from a file, one per line; excluded entries start with @

--timefmt --time format (strftime-style, e.g. '%y%m%d %H%M' for year, month, day and time)

--format --output format; %T, %w, %f and %e expand to the time (as given by --timefmt), watched path, file name and event

--exclude --exclude files matching a case-sensitive regular expression

--excludei --exclude files matching a case-insensitive regular expression

5. Test one

Watch the source directory for the following events: modify, create, move, delete, attrib; when one occurs, publish the changes to the target machine. The transport is ssh.

src: 192.168.22.11(Rsync + Inotify-tools) dest: 192.168.22.12

The two machines need passwordless SSH login set up between them.

# mkdir /data/test/dest/ --on the dest machine

# mkdir /data/test/src/ --on the src machine

# rsync -av --delete /data/test/src/ 192.168.22.12:/data/test/dest --test command

# vim /data/test/test.sh

#!/bin/bash

/usr/local/inotify/bin/inotifywait -mrq -e modify,create,move,delete,attrib /data/test/src | while read events

do

rsync -a --delete /data/test/src/ 192.168.22.12:/data/test/dest

echo "`date +'%F %T'` Event occurred: $events" >> /tmp/rsync.log 2>&1

done

# chmod 755 /data/test/test.sh

# /data/test/test.sh &

# echo '/data/test/test.sh &' >> /etc/rc.local --start it automatically at boot

*******A similar script can also run on the target machine (rsync -a --delete /data/test/dest/ 192.168.22.11:/data/test/src) to achieve two-way synchronization

 

Use rsync to mirror

Using rsync to mirror a directory is in effect a full backup with no historical archive. Below is an example of mirroring a remote Web site.
The author maintains 3 Dokuwiki sites on DreamHost. To back up these 3 sites, rsync is used for mirroring. The directory structure of the remote site is as follows:

~
|-- sinosmond.com
| `-- dokuwiki
|-- smartraining.cn
| `-- dokuwiki
`-- symfony-project.cn
`-- dokuwiki

The directory structure of each Dokuwiki is as follows:

dokuwiki
|-- bin
|-- inc
|-- conf --- configuration files
|   |-- acl.auth.php --- access-control configuration ★
|   |-- local.php --- local configuration ★
|   |-- users.auth.php --- user password file ★
|   `-- ………………
|-- data --- data files
|   |-- attic --- wiki page revision history ★
|   |-- cache --- data cache
|   |-- index --- site index
|   |-- locks --- lock files created while pages are edited
|   |-- media --- images and other media ★
|   |-- meta --- metadata the system reads to generate pages ★
|   `-- pages --- wiki pages ★
`-- lib
    |-- plugins --- plugin directory ☆
    |-- tpl --- template directory ☆
    `-- ………………

To reduce network traffic, only the directories and files marked ★ are synchronized. If a plugin is installed or a template is changed while the site is in operation, the directories marked ☆ should also be synchronized. To this end, write the following rule file /root/bin/backup/dw-exclude.txt:

- dokuwiki/bin/
- dokuwiki/inc/
- dokuwiki/data/cache/
- dokuwiki/data/locks/
- dokuwiki/data/index/
+ dokuwiki/conf/acl.auth.php
+ dokuwiki/conf/local.php
+ dokuwiki/conf/users.auth.php
- dokuwiki/conf/*
+ dokuwiki/lib/plugins/

# Do not synchronize the plugins installed by default
- dokuwiki/lib/plugins/acl/
- dokuwiki/lib/plugins/config/
- dokuwiki/lib/plugins/importoldchangelog/
- dokuwiki/lib/plugins/importoldindex/
- dokuwiki/lib/plugins/info/
- dokuwiki/lib/plugins/plugin/
- dokuwiki/lib/plugins/revert/
- dokuwiki/lib/plugins/usermanager/
- dokuwiki/lib/plugins/action.php
- dokuwiki/lib/plugins/admin.php
- dokuwiki/lib/plugins/syntax.php
+ dokuwiki/lib/tpl 

# Do not synchronize the default templates installed by the system 
- dokuwiki/lib/tpl/default/
- dokuwiki/lib/*
- dokuwiki/COPYING
- dokuwiki/doku.php
- dokuwiki/feed.php
- dokuwiki/index.php
- dokuwiki/install*
- dokuwiki/README
- dokuwiki/VERSION
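Rule ordering matters: rsync applies the first pattern that matches, which is why the "+" include lines must precede the "- *" catch-alls in a file like the one above. A minimal local sketch of this first-match behavior (throwaway /tmp paths and hypothetical file names):

```shell
# Throwaway source, destination and rule file
mkdir -p /tmp/filter-demo/src /tmp/filter-demo/dst
touch /tmp/filter-demo/src/keep.php /tmp/filter-demo/src/skip.txt
cat > /tmp/filter-demo/rules.txt << 'EOF'
+ keep.php
- *
EOF

# keep.php hits the "+" rule first and is transferred;
# skip.txt falls through to "- *" and is excluded
rsync -av --exclude-from=/tmp/filter-demo/rules.txt \
    /tmp/filter-demo/src/ /tmp/filter-demo/dst/
```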

Below is the synchronization script /root/bin/backup/rsync-dw.sh

#!/bin/bash
#####################################
# mirror dokuwiki website
# $1 --- domain (ex: smartraining.cn)
# $2 --- full or update
#####################################
# declare some variable
RmtUser=osmond
RmtIP=208.113.163.110
RmtPath=$1/dokuwiki
BackupRoot=/backups/$1
Excludes="--exclude-from=/root/bin/backup/dw-exclude.txt"

# use rsync for mirror
if [ "$2" == "full" ]
then

[ -d /backups/$1 ] || mkdir -p /backups/$1
excludesfile="/tmp/first-excludes"
cat > ${excludesfile} << EOF
+ dokuwiki/data/cache/_dummy
- dokuwiki/data/cache/*
+ dokuwiki/data/locks/_dummy
- dokuwiki/data/locks/*
+ dokuwiki/data/index/_dummy
- dokuwiki/data/index/*
EOF
/usr/bin/rsync -avzP --exclude-from=${excludesfile} \
$RmtUser@$RmtIP:$RmtPath $BackupRoot

else
/usr/bin/rsync -avzP --delete $Excludes \
$RmtUser@$RmtIP:$RmtPath $BackupRoot

fi

For the first backup, you can use a command similar to the following (in order to keep a complete copy locally):

# /root/bin/backup/rsync-dw.sh smartraining.cn full
# /root/bin/backup/rsync-dw.sh sinosmond.com full
# /root/bin/backup/rsync-dw.sh symfony-project.cn full

You can schedule cron tasks for future updates:

# crontab -e
05 1 * * * /root/bin/backup/rsync-dw.sh smartraining.cn
25 1 * * * /root/bin/backup/rsync-dw.sh sinosmond.com
45 1 * * * /root/bin/backup/rsync-dw.sh symfony-project.cn

Ordinary incremental backup

rsync can do incremental backups via the -b and --backup-dir options: changed files are updated in the destination, and their old versions are saved in the specified directory, achieving an incremental backup. The following describes an incremental backup of /home step by step:

# Backup 0
# First copy the contents of /home to the backup directory /backups/daily/home.0.
# /backups/daily/home.0 is always synchronized to the latest state; at regular
# intervals (e.g. weekly) its contents can be packed and compressed into an
# archive file (a full backup) saved under /backups/archive/.
# rsync -a /home/ /backups/daily/home.0

# Backup 1 (this is the core operation)
# Synchronize the contents of /home to /backups/daily/home.0,
# saving the old versions of changed files to /backups/daily/home.1.
# If executed once a day, /backups/daily/home.1 holds the state of the
# changed files as of one day ago.
# rsync -a --delete -b --backup-dir=/backups/daily/home.1 /home/ /backups/daily/home.0

# Backup 2
# Rename the backup directory /backups/daily/home.1 to /backups/daily/home.2
# mv /backups/daily/home.1 /backups/daily/home.2
# Then perform the core operation of backup 1

# Backup n
# Rename the previous backup directories in turn: /backups/daily/home.n
# to /backups/daily/home.(n+1), ..., /backups/daily/home.1 to /backups/daily/home.2
# Then perform the core operation of backup 1

An example script for incremental backup is given below.

#!/bin/bash
#========================
# You can schedule this script as a cron task:
# > crontab -e
#
# daily : 1 1 * * * /path/to/script/rsync-backup.sh
#========================
mydate="`date '+%Y%m%d.%H%M'`"

# Define rmt location
RmtUser=root
RmtHost=192.168.0.55
RmtPath=/home/
BackupSource="${RmtUser}@${RmtHost}:${RmtPath}"
#BackupSource="/home/"             # for a local backup, replace the line above with a local path
# Define location of backup
BackupRoot="/backups/$RmtHost/"
# BackupRoot="/backups/localhost/" # for a local backup, replace the line above with a local path
LogFile="${BackupRoot}/backup.log"
ExcludeList="/root/backup/backup-exclude-list.txt"
BackupName='home'
BackupNum="7"                      # number of incremental backups to keep (for weekly archives)
#BackupNum="31"                    # number of incremental backups to keep (for monthly archives)

# Check whether directory $1 exists; create it if not
checkDir() {
    if [ ! -d "${BackupRoot}/$1" ] ; then
        mkdir -p "${BackupRoot}/$1"
    fi
}
# Rotate the backup directories
# $1 -> backup path
# $2 -> backup name
# $3 -> number of incremental backups
rotateDir() {
    for i in `seq $(($3 - 1)) -1 1`
    do
        if [ -d "$1/$2.$i" ] ; then
            /bin/rm -rf "$1/$2.$((i + 1))"
            mv "$1/$2.$i" "$1/$2.$((i + 1))"
        fi
    done
}
# Call the function checkDir to ensure that the directory exists 
checkDir "archive" 
checkDir "daily"

#======= Backup Begin =================
# S1: Rotate daily.
rotateDir "${BackupRoot}/daily" "$BackupName" "$BackupNum"

checkDir "daily/${BackupName}.0/"
checkDir "daily/${BackupName}.1/"

mv ${LogFile} ${BackupRoot}/daily/${BackupName}.1/

cat >> ${LogFile} <<_EOF
===========================================
    Backup done on: $mydate
===========================================
_EOF

# S2: Do the backup and save difference in ${BackupName}.1
rsync -av --delete \
    -b --backup-dir=${BackupRoot}/daily/${BackupName}.1 \
    --exclude-from=${ExcludeList} \
    $BackupSource ${BackupRoot}/daily/${BackupName}.0 \
    1>> ${LogFile} 2>&1

# S3: Create an archive backup every week
if [ `date +%w` == "0" ]  # archive every Sunday
# if [ `date +%d` == "01" ]  # archive on the 1st of each month
then
    tar -cjf ${BackupRoot}/archive/${BackupName}-${mydate}.tar.bz2 \
      -C ${BackupRoot}/daily/${BackupName}.0 .
fi

By slightly modifying the variables in the above script:

RmtPath="$1/"
#BackupSource="$1/"
BackupName="$1"

other directories can be backed up by passing them as script arguments. For example, to back up /www, use:

./rsync-backup.sh /www

Snapshot incremental backup

rsync can also do snapshot-style incremental backups, where each snapshot is equivalent to a full backup. The core idea: copy the files that changed, and create hard links for the files that did not, to reduce disk usage.
The following is a step-by-step description of the snapshot incremental backup of /home:

# Backup 0
# First copy the contents of /home to the backup directory /backups/home.0
# rsync -a /home/ /backups/home.0

# Backup 1 (this is the core operation)
# Copy /backups/home.0 to /backups/home.1 as hard links
# cp -al /backups/home.0 /backups/home.1
# Synchronize the contents of /home to /backups/home.0
# (when rsync finds a changed file it deletes the old one first and then
# creates the new file, so the hard-linked copy in home.1 is unaffected)
# rsync -a --delete /home/ /backups/home.0

# Backup 2
# Rename the backup directory /backups/home.1 to /backups/home.2
# mv /backups/home.1 /backups/home.2
# Then perform the core operation of backup 1

# Backup n
# Rename the previous backup directories in turn: /backups/home.n
# to /backups/home.(n+1), ..., /backups/home.1 to /backups/home.2
# Then perform the core operation of backup 1

Since rsync version 2.5.6, the --link-dest option is provided, so the two core operation commands:

cp -al /backups/home.0 /backups/home.1
rsync -a --delete /home/ /backups/home.0

can be simplified to the following single command:

rsync -a --delete --link-dest=/backups/home.1 /home/ /backups/home.0

Here is an example snapshot incremental backup script from http://www.mikerubel.org/computers/rsync_snapshots/contributed/peter_schneider-kamp:

#!/bin/bash
# ----------------------------------------------------------------------
# mikes handy rotating-filesystem-snapshot utility
# ----------------------------------------------------------------------
# RCS info: $Id: make_snapshot.sh,v 1.6 2002/04/06 04:20:00 mrubel Exp $
# ----------------------------------------------------------------------
# this needs to be a lot more general, but the basic idea is it makes
# rotating backup-snapshots of /home whenever called
# ----------------------------------------------------------------------

# ------------- system commands used by this script --------------------
ID='/usr/bin/id';
ECHO='/bin/echo';

MOUNT='/bin/mount';
RM='/bin/rm';
MV='/bin/mv';
CP='/bin/cp';
TOUCH='/usr/bin/touch';

RSYNC='/usr/bin/rsync';

# ------------- file locations -----------------------------------------

MOUNT_DEVICE=/dev/hdb1;
SNAPSHOT_RW=/root/snapshots;
EXCLUDES=/etc/snapshot_exclude;

# ------------- backup configuration------------------------------------

BACKUP_DIRS="/etc /home"
NUM_OF_SNAPSHOTS=3
BACKUP_INTERVAL=hourly

# ------------- the script itself --------------------------------------

# make sure we're running as root
if (( `$ID -u` != 0 )); then { $ECHO "Sorry, must be root. Exiting..."; exit; } fi

echo "Starting snapshot on "`date`

# attempt to remount the RW mount point as RW; else abort
$MOUNT -o remount,rw $MOUNT_DEVICE $SNAPSHOT_RW ;
if (( $? )); then
{
	$ECHO "snapshot: could not remount $SNAPSHOT_RW readwrite";
	exit;
}
fi;

# rotating snapshots
for BACKUP_DIR in $BACKUP_DIRS
do
	NUM=$NUM_OF_SNAPSHOTS
	# step 1: delete the oldest snapshot, if it exists:
	if [ -d ${SNAPSHOT_RW}${BACKUP_DIR}/${BACKUP_INTERVAL}.$NUM ] ; then \
	$RM -rf ${SNAPSHOT_RW}${BACKUP_DIR}/${BACKUP_INTERVAL}.$NUM ; \
	fi ;
	NUM=$(($NUM-1))
	# step 2: shift the middle snapshots(s) back by one, if they exist
	while [[ $NUM -ge 1 ]]
	do
		if [ -d ${SNAPSHOT_RW}${BACKUP_DIR}/${BACKUP_INTERVAL}.$NUM ] ; then \
			$MV ${SNAPSHOT_RW}${BACKUP_DIR}/${BACKUP_INTERVAL}.$NUM ${SNAPSHOT_RW}${BACKUP_DIR}/${BACKUP_INTERVAL}.$(($NUM+1)) ;
		fi;
		NUM=$(($NUM-1))
	done

	# step 3: make a hard-link-only (except for dirs) copy of the latest snapshot,
	# if that exists
	if [ -d ${SNAPSHOT_RW}${BACKUP_DIR}/${BACKUP_INTERVAL}.0 ] ; then \
		$CP -al ${SNAPSHOT_RW}${BACKUP_DIR}/${BACKUP_INTERVAL}.0 ${SNAPSHOT_RW}${BACKUP_DIR}/${BACKUP_INTERVAL}.1 ;
	fi;
	# step 4: rsync from the system into the latest snapshot (notice that
	# rsync behaves like cp --remove-destination by default, so the destination
	# is unlinked first. If it were not so, this would copy over the other
	# snapshot(s) too!)
	$RSYNC \
		-va --delete --delete-excluded \
		--exclude-from="$EXCLUDES" \
		${BACKUP_DIR}/ ${SNAPSHOT_RW}${BACKUP_DIR}/${BACKUP_INTERVAL}.0 ;
	# step 5: update the mtime of ${BACKUP_INTERVAL}.0 to reflect the snapshot time
	$TOUCH ${SNAPSHOT_RW}${BACKUP_DIR}/${BACKUP_INTERVAL}.0 ;
done

# now remount the RW snapshot mountpoint as readonly

$MOUNT -o remount,ro $MOUNT_DEVICE $SNAPSHOT_RW ;
if (( $? )); then
{
	$ECHO "snapshot: could not remount $SNAPSHOT_RW readonly";
	exit;
} fi;

 


Origin blog.csdn.net/JineD/article/details/111872003