Redis Learning Notes: Master-Slave Replication

Configuring Redis master-slave replication is remarkably simple: only two lines are needed. Replication lets multiple slave servers hold a copy of the same dataset as the master server.

The benefit is better read throughput, and master-slave communication is quite efficient. As I understand it from the configuration file: when a slave connects, the master starts a background save process that dumps a snapshot of the dataset to a file, while continuing to collect any new write commands arriving in the meantime. The master then sends the snapshot file, followed by the buffered commands, to the slave; the slave saves the file to disk and loads it into memory.
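For reference, the master side needs almost nothing beyond its normal settings. A sketch of the relevant redis.conf lines on the master (the IP and password match the slave config shown below; `requirepass` is the standard directive the slave's `masterauth` answers to):

```conf
# redis.conf on the master (10.5.110.239) -- sketch
port 6379
# password that slaves must present via masterauth
requirepass chen
```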

For this experiment I configured replication directly on the slave:

# slaveof <masterip> <masterport>
slaveof 10.5.110.239 6379
# If the master is password protected (using the requirepass configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>
masterauth chen
After starting the slave server, a quick check showed that it already held every data item stored on the master. I then looked at the master's log file:

[18343] 08 Jul 19:36:19 - Accepted 10.5.110.234:41968
[18343] 08 Jul 19:36:19 * Slave ask for synchronization
[18343] 08 Jul 19:36:19 * Starting BGSAVE for SYNC
[18343] 08 Jul 19:36:19 * Background saving started by pid 22405
[22405] 08 Jul 19:36:19 * DB saved on disk
[18343] 08 Jul 19:36:19 * Background saving terminated with success
[18343] 08 Jul 19:36:19 * Synchronization with slave succeeded
[18343] 08 Jul 19:36:23 - DB 0: 5 keys (0 volatile) in 8 slots HT.
[18343] 08 Jul 19:36:23 - DB 1: 6 keys (0 volatile) in 8 slots HT.
[18343] 08 Jul 19:36:23 - 1 clients connected (1 slaves), 565440 bytes in use
[18343] 08 Jul 19:36:28 - DB 0: 5 keys (0 volatile) in 8 slots HT.
[18343] 08 Jul 19:36:28 - DB 1: 6 keys (0 volatile) in 8 slots HT.
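Every line in this log follows the same shape: pid in brackets, a timestamp, a level marker (`*` for notice, `-` for verbose), then the message. To poke at the log programmatically I wrote a small parser of my own (nothing Redis-specific; the field names are my choice):

```python
import re
from datetime import datetime

# Matches old-style redis-server log lines, e.g.
# "[18343] 08 Jul 19:36:19 * Starting BGSAVE for SYNC"
LOG_RE = re.compile(r"\[(\d+)\] (\d{2} \w{3} \d{2}:\d{2}:\d{2}) ([*\-#.]) (.*)")

def parse_log_line(line):
    """Split one redis log line into pid, time, level marker, and message."""
    m = LOG_RE.match(line)
    if not m:
        return None
    pid, ts, level, msg = m.groups()
    return {
        "pid": int(pid),
        "time": datetime.strptime(ts, "%d %b %H:%M:%S"),
        "level": level,
        "message": msg,
    }

rec = parse_log_line("[18343] 08 Jul 19:36:19 * Starting BGSAVE for SYNC")
print(rec["pid"], rec["message"])  # prints: 18343 Starting BGSAVE for SYNC
```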
My impression is that once the slave starts, it actively contacts the master and pulls the data, much like replication in MySQL, where an IO thread fetches events from the master's binlog and writes them to the slave's relay log, and an SQL thread then replays that relay log to disk. Sure enough, the moment I started the slave server, the master's log reacted.

Reading the log more carefully, I noticed that the master writes a status line every 5 seconds, reporting its key counts and connections. Here is the log after I deleted one key from a client on the master:

[18343] 08 Jul 19:53:44 - 0 clients connected (1 slaves), 557064 bytes in use
[18343] 08 Jul 19:53:45 - Accepted 127.0.0.1:21599
[18343] 08 Jul 19:53:49 - DB 0: 6 keys (0 volatile) in 8 slots HT.
[18343] 08 Jul 19:53:49 - DB 1: 6 keys (0 volatile) in 8 slots HT.
[18343] 08 Jul 19:53:49 - 1 clients connected (1 slaves), 565520 bytes in use
[18343] 08 Jul 19:53:54 - DB 0: 6 keys (0 volatile) in 8 slots HT.
[18343] 08 Jul 19:53:54 - DB 1: 6 keys (0 volatile) in 8 slots HT.
[18343] 08 Jul 19:53:54 - 1 clients connected (1 slaves), 565520 bytes in use
[18343] 08 Jul 19:53:59 - DB 0: 5 keys (0 volatile) in 8 slots HT.
[18343] 08 Jul 19:53:59 - DB 1: 6 keys (0 volatile) in 8 slots HT.
[18343] 08 Jul 19:53:59 - 1 clients connected (1 slaves), 565424 bytes in use
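The 5-second rhythm of these status lines is easy to confirm by diffing the timestamps (a throwaway check of my own, nothing from Redis itself):

```python
from datetime import datetime

# Timestamps of consecutive status lines from the master log above.
stamps = ["19:53:44", "19:53:49", "19:53:54", "19:53:59"]
times = [datetime.strptime(s, "%H:%M:%S") for s in stamps]
# Gap in seconds between each pair of consecutive status lines.
gaps = [int((b - a).total_seconds()) for a, b in zip(times, times[1:])]
print(gaps)  # prints: [5, 5, 5]
```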
As you can see, a status line appears every 5 seconds, and at 19:53:59 the log reflects the key I deleted. A client connection had already been opened before that, and looking further on I saw that by 19:59:59 the client count was back to 0. So whatever writes happen on the master, the replication link carries them across to keep the data consistent, while an idle client connection lasts about 5 minutes before being dropped automatically (which matches the default client idle `timeout` of 300 seconds in redis.conf). Let's run one more experiment:

I added and then deleted a key on the slave, and watched how the master's log and the slave's log changed:

Adding and deleting a key on the slave produced no change at all on the master, not even a connection, which fits the pull-based model above. Here is what happened on the slave side:

[2539] 08 Jul 20:13:23 - Accepted 127.0.0.1:44902
[2539] 08 Jul 20:13:23 - DB 0: 5 keys (0 volatile) in 8 slots HT.
[2539] 08 Jul 20:13:23 - DB 1: 6 keys (0 volatile) in 8 slots HT.
[2539] 08 Jul 20:13:23 - 2 clients connected (0 slaves), 565440 bytes in use
[2539] 08 Jul 20:13:28 - DB 0: 5 keys (0 volatile) in 8 slots HT.
[2539] 08 Jul 20:13:28 - DB 1: 6 keys (0 volatile) in 8 slots HT.
[2539] 08 Jul 20:13:28 - 2 clients connected (0 slaves), 565440 bytes in use
[2539] 08 Jul 20:13:33 - DB 0: 5 keys (0 volatile) in 8 slots HT.
[2539] 08 Jul 20:13:33 - DB 1: 6 keys (0 volatile) in 8 slots HT.
[2539] 08 Jul 20:13:33 - 2 clients connected (0 slaves), 565440 bytes in use
[2539] 08 Jul 20:13:38 - DB 0: 5 keys (0 volatile) in 8 slots HT.
[2539] 08 Jul 20:13:38 - DB 1: 6 keys (0 volatile) in 8 slots HT.
[2539] 08 Jul 20:13:38 - 2 clients connected (0 slaves), 565440 bytes in use
[2539] 08 Jul 20:13:43 - DB 0: 5 keys (0 volatile) in 8 slots HT.
[2539] 08 Jul 20:13:43 - DB 1: 6 keys (0 volatile) in 8 slots HT.
[2539] 08 Jul 20:13:43 - 2 clients connected (0 slaves), 565440 bytes in use
[2539] 08 Jul 20:13:48 - DB 0: 6 keys (0 volatile) in 8 slots HT.
[2539] 08 Jul 20:13:48 - DB 1: 6 keys (0 volatile) in 8 slots HT.
[2539] 08 Jul 20:13:48 - 2 clients connected (0 slaves), 565512 bytes in use
There is still plenty here I don't understand; the replication internals deserve a closer look.