Deploying a redis-cluster with docker-compose
Reference: https://www.jianshu.com/p/b7dea62bcd8b
Reference image: https://github.com/publicisworldwide/docker-stacks/tree/master/oracle-linux/environments/storage/redis-cluster
1. Create the data directories
```
root@rede:/data/redis# mkdir -p 7001 7002 7003 7004 7005 7006
root@rede:/data/redis# ls
7001  7002  7003  7004  7005  7006  docker-compose.yml
```
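The layout above can also be created in one shot, including a per-instance `data` subdirectory for each node (this sketch assumes the same `/data/redis` root used throughout this article; adjust `ROOT` if yours differs):

```shell
# Create one data directory per Redis node (ports 7001-7006).
ROOT=/data/redis
for port in 7001 7002 7003 7004 7005 7006; do
  mkdir -p "${ROOT}/${port}/data"
done
ls "${ROOT}"
```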
2. Create a docker-compose.yml file with the following content:
```yaml
version: '3'
services:
  redis1:
    image: publicisworldwide/redis-cluster
    network_mode: host
    restart: always
    volumes:
      - /data/redis/7001/data:/data
    environment:
      - REDIS_PORT=7001

  redis2:
    image: publicisworldwide/redis-cluster
    network_mode: host
    restart: always
    volumes:
      - /data/redis/7002/data:/data
    environment:
      - REDIS_PORT=7002

  redis3:
    image: publicisworldwide/redis-cluster
    network_mode: host
    restart: always
    volumes:
      - /data/redis/7003/data:/data
    environment:
      - REDIS_PORT=7003

  redis4:
    image: publicisworldwide/redis-cluster
    network_mode: host
    restart: always
    volumes:
      - /data/redis/7004/data:/data
    environment:
      - REDIS_PORT=7004

  redis5:
    image: publicisworldwide/redis-cluster
    network_mode: host
    restart: always
    volumes:
      - /data/redis/7005/data:/data
    environment:
      - REDIS_PORT=7005

  redis6:
    image: publicisworldwide/redis-cluster
    network_mode: host
    restart: always
    volumes:
      - /data/redis/7006/data:/data
    environment:
      - REDIS_PORT=7006
```
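The six service definitions differ only in the port number, so the file can also be generated with a loop instead of copy-pasting. A minimal sketch, assuming the same image, volume layout, and ports as the file above (`docker-compose.generated.yml` is a hypothetical output name, chosen to avoid clobbering an existing file):

```shell
# Emit a compose file with six redis services (ports 7001-7006).
outfile=docker-compose.generated.yml
{
  printf "version: '3'\nservices:\n"
  for i in 1 2 3 4 5 6; do
    port=$((7000 + i))
    printf '  redis%s:\n' "$i"
    printf '    image: publicisworldwide/redis-cluster\n'
    printf '    network_mode: host\n'
    printf '    restart: always\n'
    printf '    volumes:\n'
    printf '      - /data/redis/%s/data:/data\n' "$port"
    printf '    environment:\n'
    printf '      - REDIS_PORT=%s\n' "$port"
  done
} > "$outfile"
grep -c 'REDIS_PORT=' "$outfile"   # 6
```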
This uses someone else's ready-made image, publicisworldwide/redis-cluster. You can build your own (the build process will be published later), but it isn't recommended; if you need changes, modify a container based on the existing image and commit it. You can pull the image yourself, or let it download automatically when the services start.
Host networking is used here, and each Redis instance's data is mounted under the directories created earlier.
If you don't want to use host mode, see the link at the top of this article; I won't cover it here. Likewise, installing docker-compose is documented in plenty of articles online, so it is not repeated here.
3. After creating the file, start the services
```
root@rede:/data/redis# docker-compose up -d
Creating redis_redis1_1 ... done
Creating redis_redis5_1 ... done
Creating redis_redis4_1 ... done
Creating redis_redis3_1 ... done
Creating redis_redis6_1 ... done
Creating redis_redis2_1 ... done
root@rede:/data/redis# docker-compose ps
     Name                    Command            State   Ports
---------------------------------------------------------------
redis_redis1_1   /usr/local/bin/entrypoint. ...   Up
redis_redis2_1   /usr/local/bin/entrypoint. ...   Up
redis_redis3_1   /usr/local/bin/entrypoint. ...   Up
redis_redis4_1   /usr/local/bin/entrypoint. ...   Up
redis_redis5_1   /usr/local/bin/entrypoint. ...   Up
redis_redis6_1   /usr/local/bin/entrypoint. ...   Up
```
If every container's State is Up, all services started normally. If any container is in an abnormal state, check whether the corresponding port on your server is already occupied.
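A quick way to spot an occupied port is to try connecting to each one before starting the stack. A bash sketch using the `/dev/tcp` pseudo-device (tools like `ss` or `netstat` work just as well if installed):

```shell
# Probe each cluster port on localhost. "busy" before `docker-compose up`
# means something else is already listening there and that container will fail.
report=""
for port in 7001 7002 7003 7004 7005 7006; do
  if (exec 3<>"/dev/tcp/127.0.0.1/${port}") 2>/dev/null; then
    report="${report}${port}:busy "
  else
    report="${report}${port}:free "
  fi
done
echo "$report"
```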
4. Configure the redis-cluster
The steps above only start six Redis containers; they do not yet form a cluster. Enter any one of the Redis containers and run the command below, replacing the IP with your own host's IP.
```
root@rede:/data/redis# docker ps
CONTAINER ID   IMAGE                             COMMAND                  CREATED         STATUS        PORTS                                        NAMES
a0c7047c2a0e   publicisworldwide/redis-cluster   "/usr/local/bin/entr…"   2 minutes ago   Up 2 minutes                                               redis_redis3_1
f165a7c962ca   publicisworldwide/redis-cluster   "/usr/local/bin/entr…"   2 minutes ago   Up 2 minutes                                               redis_redis1_1
8ce8ec23974e   publicisworldwide/redis-cluster   "/usr/local/bin/entr…"   2 minutes ago   Up 2 minutes                                               redis_redis4_1
1c37f7b71a21   publicisworldwide/redis-cluster   "/usr/local/bin/entr…"   2 minutes ago   Up 2 minutes                                               redis_redis6_1
810b0ff90710   publicisworldwide/redis-cluster   "/usr/local/bin/entr…"   2 minutes ago   Up 2 minutes                                               redis_redis2_1
e5a38e1cc280   publicisworldwide/redis-cluster   "/usr/local/bin/entr…"   2 minutes ago   Up 2 minutes                                               redis_redis5_1
df94ddfb5d1f   zookeeper:3.4.11                  "/docker-entrypoint.…"   20 hours ago    Up 20 hours   2888/tcp, 3888/tcp, 0.0.0.0:2182->2181/tcp   zookeeper_2
5eb43213b27a   zookeeper:3.4.11                  "/docker-entrypoint.…"   20 hours ago    Up 20 hours   2888/tcp, 3888/tcp, 0.0.0.0:2183->2181/tcp   zookeeper_3
d17a6180907e   zookeeper:3.4.11                  "/docker-entrypoint.…"   20 hours ago    Up 20 hours   2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp   zookeeper_1

root@rede:/data/redis# docker exec -ti d3a904cc0d5a bash
root@rede:/data# redis-cli --cluster create 192.168.1.100:7001 192.168.1.100:7002 192.168.1.100:7003 192.168.1.100:7004 192.168.1.100:7005 192.168.1.100:7006 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.1.100:7004 to 192.168.1.100:7001
Adding replica 192.168.1.100:7005 to 192.168.1.100:7002
Adding replica 192.168.1.100:7006 to 192.168.1.100:7003
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 7237efa105ff4246c02da08c0e65c18246755f2f 192.168.1.100:7001
   slots:[0-5460] (5461 slots) master
M: 241d6eb2abcd9a6d41db5231fcd80fac1d452bee 192.168.1.100:7002
   slots:[5461-10922] (5462 slots) master
M: 493083f2733bc181295f5b9a9fa9bc6fdf821958 192.168.1.100:7003
   slots:[10923-16383] (5461 slots) master
S: b60ef6cbea0ae06566d17db4a1b4452e56543419 192.168.1.100:7004
   replicates 241d6eb2abcd9a6d41db5231fcd80fac1d452bee
S: b48e546e963f694eac91d49b285040bb2bec5513 192.168.1.100:7005
   replicates 493083f2733bc181295f5b9a9fa9bc6fdf821958
S: 259de2d6be917e64cdc01604521da45cc1f8d842 192.168.1.100:7006
   replicates 7237efa105ff4246c02da08c0e65c18246755f2f
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 192.168.1.100:7001)
M: 7237efa105ff4246c02da08c0e65c18246755f2f 192.168.1.100:7001
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 241d6eb2abcd9a6d41db5231fcd80fac1d452bee 192.168.1.100:7002
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 493083f2733bc181295f5b9a9fa9bc6fdf821958 192.168.1.100:7003
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: b60ef6cbea0ae06566d17db4a1b4452e56543419 192.168.1.100:7004
   slots: (0 slots) slave
   replicates 241d6eb2abcd9a6d41db5231fcd80fac1d452bee
S: 259de2d6be917e64cdc01604521da45cc1f8d842 192.168.1.100:7006
   slots: (0 slots) slave
   replicates 7237efa105ff4246c02da08c0e65c18246755f2f
S: b48e546e963f694eac91d49b285040bb2bec5513 192.168.1.100:7005
   slots: (0 slots) slave
   replicates 493083f2733bc181295f5b9a9fa9bc6fdf821958
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
```
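Once the create step reports that all 16384 slots are covered, it is worth a quick sanity check from the host. A sketch, assuming host networking so the nodes are reachable on localhost (`demo_key` is just an illustrative key; the check is skipped when redis-cli is not installed locally or the cluster is not reachable):

```shell
# Verify cluster health and push one write/read through the cluster.
if command -v redis-cli >/dev/null 2>&1 && redis-cli -p 7001 ping >/dev/null 2>&1; then
  redis-cli -p 7001 cluster info | grep cluster_state   # expect cluster_state:ok
  redis-cli -c -p 7001 set demo_key hello               # -c follows MOVED redirects
  redis-cli -c -p 7001 get demo_key
  checked=done
else
  echo "redis-cli unavailable or node unreachable; run the check inside a container instead"
  checked=skipped
fi
```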
At this point, the Docker-based redis-cluster deployment is complete.