# CentOS 7 GlusterFS Configuration
## Installing GlusterFS on CentOS is straightforward

Add every node to `/etc/hosts` (the single `echo` is kept as a commented-out alternative to the heredoc):

```bash
# echo -e "172.16.3.232 h232\n172.16.3.233 h233\n172.16.3.234 h234\n172.16.3.235 h235" >>/etc/hosts
cat >/etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.3.232 h232
172.16.3.233 h233
172.16.3.234 h234
172.16.3.235 h235
172.16.3.236 h236
EOF
```

On all nodes, stop the firewall and disable SELinux:

```bash
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
```

Install GlusterFS on every node:

```bash
yum install -y centos-release-gluster
yum install -y glusterfs-server
# Alternatively, install the individual packages:
# yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
```

## Configuring the GlusterFS cluster

Start glusterd on every node:

```bash
systemctl restart glusterd.service
systemctl enable glusterd.service
```

On the swarm-manager node, add the nodes to the cluster (probing the local node itself is unnecessary; it simply reports that the node is already in the pool):

```bash
gluster peer probe h232
gluster peer probe h233
gluster peer probe h234
gluster peer probe h235
```

Check the cluster status:

```bash
gluster peer status
```

Create the data directory (on every node that will host a brick):

```bash
mkdir -p /gluster/gv1
```

Create the GlusterFS volume. `force` is needed here because the bricks sit on the root partition; in production, put bricks on a dedicated filesystem. With `replica 2` and four bricks this becomes a 2 x 2 distributed-replicate volume (note that replica 2 is prone to split-brain; replica 3 or an arbiter brick is generally safer):

```bash
gluster volume create gv1 replica 2 transport tcp \
  h232:/gluster/gv1 h233:/gluster/gv1 h234:/gluster/gv1 h235:/gluster/gv1 force
# Two-brick variant:
# gluster volume create models replica 2 h232:/gluster/gv1 h233:/gluster/gv1 force
```

Start gv1:

```
[root@swarm-manager ~]# gluster volume start gv1
volume start: gv1: success
```

Check the volume info again:

```
[root@swarm-manager ~]# gluster volume info

Volume Name: gv1
Type: Distributed-Replicate
Volume ID: 5cb382aa-4438-4743-b8f3-b5152dd997ef
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: h232:/gluster/gv1
Brick2: h233:/gluster/gv1
Brick3: h234:/gluster/gv1
Brick4: h235:/gluster/gv1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
```

Mount the volume locally and make the mount persistent across reboots:

```bash
mkdir -p /gfsmnt_gv1
mount -t glusterfs localhost:gv1 /gfsmnt_gv1/
echo 'localhost:/gv1 /gfsmnt_gv1/ glusterfs _netdev,rw,acl 0 0' >>/etc/fstab
```

## GlusterFS performance tuning

First, enable the quota feature on the volume:
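After probing the peers, it can be handy to verify programmatically that every peer is connected before creating the volume. The sketch below assumes a hypothetical helper `connected_count` that counts `Connected` entries in `gluster peer status` output; the sample transcript (hostnames aside, the UUIDs are made up) stands in for live output:

```shell
# Hypothetical helper: count peers reported as connected.
# Reads `gluster peer status` output on stdin.
connected_count() {
  grep -c 'State: Peer in Cluster (Connected)'
}

# Sample captured output; on a live node you would instead run:
#   gluster peer status | connected_count
sample='Number of Peers: 3

Hostname: h233
Uuid: 26cd44a1-0000-0000-0000-000000000001
State: Peer in Cluster (Connected)

Hostname: h234
Uuid: 26cd44a1-0000-0000-0000-000000000002
State: Peer in Cluster (Connected)

Hostname: h235
Uuid: 26cd44a1-0000-0000-0000-000000000003
State: Peer in Cluster (Connected)'

printf '%s\n' "$sample" | connected_count   # prints 3
```

On a four-node pool, each node should see the other three peers as connected; a lower count means a probe failed or glusterd is down somewhere.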
```bash
gluster volume quota gv1 enable
# Limit / (i.e. the whole volume) to 10GB (must not exceed free disk space, or mount will fail)
gluster volume quota gv1 limit-usage / 10GB
# Set the cache to 4GB (must not exceed free memory, or mount will fail)
gluster volume set gv1 performance.cache-size 4GB
# Enable flush-behind (asynchronous, background flushing)
gluster volume set gv1 performance.flush-behind on
# Enable read-ahead
gluster volume set gv1 performance.read-ahead on
# Use 32 IO threads
gluster volume set gv1 performance.io-thread-count 32
# Enable write-behind (data is written to the cache first, then to disk)
gluster volume set gv1 performance.write-behind on
```

The volume info now shows the reconfigured options:

```
[root@h232 ~]# gluster volume info

Volume Name: gv1
Type: Distributed-Replicate
Volume ID: 61b624a0-29a3-490c-81fb-bc07e1af4b86
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: h232:/gluster/gv1
Brick2: h233:/gluster/gv1
Brick3: h234:/gluster/gv1
Brick4: h235:/gluster/gv1
Options Reconfigured:
performance.io-thread-count: 32
performance.write-behind: on
performance.flush-behind: on
performance.cache-size: 4GB
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
```

## Mounting the volume from a client

```bash
yum install -y centos-release-gluster
yum install -y glusterfs glusterfs-fuse
mkdir -p /gfsmnt_gv1
# On a standalone client, mount from one of the server nodes
# (localhost only works on nodes that run glusterd themselves):
mount -t glusterfs h232:/gv1 /gfsmnt_gv1/
```
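To make the client mount persistent, the same `_netdev,rw,acl` fstab pattern used on the servers applies. The sketch below uses a hypothetical helper `gluster_fstab_line` that assembles such an entry from a server host, volume name, and mount point:

```shell
# Hypothetical helper: emit an /etc/fstab line for a GlusterFS mount.
# $1 = server host, $2 = volume name, $3 = mount point
gluster_fstab_line() {
  printf '%s:/%s %s glusterfs _netdev,rw,acl 0 0\n' "$1" "$2" "$3"
}

gluster_fstab_line h232 gv1 /gfsmnt_gv1
# prints: h232:/gv1 /gfsmnt_gv1 glusterfs _netdev,rw,acl 0 0
```

On a real client you would append the emitted line to `/etc/fstab`; `_netdev` tells systemd to delay the mount until the network is up.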