MooseFS Setup and Usage
Introduction: MooseFS is a fault-tolerant, distributed network file system. It spreads data across separate disks or partitions on multiple physical servers and keeps several replicas of every file, which makes it a solid choice for distributed storage. In this article we build a MooseFS cluster step by step and take a look at how it is used.
Environment (CentOS 7): the setup uses five hosts

node1   172.25.0.29   mfsmaster
node2   172.25.0.30   metalogger
node3   172.25.0.31   chunk server
node4   172.25.0.32   chunk server
node5   172.25.0.33   mount client
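Name resolution between the nodes makes the later steps easier to follow. One possible /etc/hosts addition, run on every node; the hostnames come from the table above, and the mfsmaster alias is only a convenience assumption, nothing below requires it:

```shell
# Append on every node (as root); IPs and hostnames are from the table above.
cat >> /etc/hosts <<'EOF'
172.25.0.29  node1 mfsmaster
172.25.0.30  node2
172.25.0.31  node3
172.25.0.32  node4
172.25.0.33  node5
EOF
```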
I. Installing mfsmaster
On node1:
1. Download the 3.0 source package
# yum install zlib-devel -y    ## build dependency
# wget https://github.com/moosefs/moosefs/archive/v3.0.96.tar.gz
2. Build and install the master:
# useradd mfs
# tar -xf v3.0.96.tar.gz
# cd moosefs-3.0.96/
# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount
# make && make install
# ls /usr/local/mfs/
bin  etc  sbin  share  var
Note: the etc and var directories hold the configuration files and MFS's metadata structures, so keep them backed up to protect against disaster. Setting up a dual Master Server later on also removes this risk.
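Since losing these directories means losing the file system's metadata, a scheduled archive is cheap insurance. A minimal sketch as a cron entry; the paths follow this install's --prefix, while the /backup destination is a hypothetical example:

```shell
# /etc/cron.d/mfs-meta-backup -- nightly archive of master config + metadata
# (destination /backup is an assumption; adjust to taste)
0 2 * * * root tar -czf /backup/mfs-meta-$(date +\%F).tar.gz /usr/local/mfs/etc /usr/local/mfs/var
```

Note the escaped `\%` — a literal `%` has special meaning in crontab lines.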
3. Configure the master
# pwd
/usr/local/mfs/etc/mfs
# ls
mfsexports.cfg.sample  mfsmaster.cfg.sample  mfsmetalogger.cfg.sample  mfstopology.cfg.sample
## Copy the samples you need to .cfg files:
# cp mfsexports.cfg.sample mfsexports.cfg
# cp mfsmaster.cfg.sample mfsmaster.cfg
4. Edit the exports file:
# vim mfsexports.cfg
*   /   rw,alldirs,mapall=mfs:mfs,password=xiaoluo
*   .   rw
Note: every entry in mfsexports.cfg is one access rule with three fields: the first is the MFS client's IP address or address range, the second is the directory being exported, and the third sets the access options granted to that client.
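As an illustration, a more locked-down exports file might allow read-write access only from one subnet and read-only access elsewhere. The addresses below are made up; `rw`, `ro`, `alldirs`, `maproot` and the special `.` entry are standard mfsexports options:

```shell
# mfsexports.cfg -- illustrative entries
192.168.10.0/24   /   rw,alldirs,maproot=0    # trusted subnet: full access
*                 /   ro                      # everyone else: read-only
*                 .   rw                      # "." exports the special MFSMETA (trash) filesystem
```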
5. Create the initial metadata file (it ships as an empty template):
# cp /usr/local/mfs/var/mfs/metadata.mfs.empty /usr/local/mfs/var/mfs/metadata.mfs
6. Start the master:
# /usr/local/mfs/sbin/mfsmaster start
open files limit has been set to: 16384
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmaster modules ...
exports file has been loaded
mfstopology configuration file (/usr/local/mfs/etc/mfstopology.cfg) not found - using defaults
loading metadata ...
metadata file has been loaded
no charts data file - initializing empty charts
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly
7. Check that the process started:
# ps -ef | grep mfs
mfs    8109     1  5 18:40 ?     00:00:02 /usr/local/mfs/sbin/mfsmaster start
root   8123 13070  0 18:41 pts/0 00:00:00 grep --color=auto mfs
8. Check the listening ports:
# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:9419 0.0.0.0:* LISTEN 8109/mfsmaster
tcp 0 0 0.0.0.0:9420 0.0.0.0:* LISTEN 8109/mfsmaster
tcp 0 0 0.0.0.0:9421 0.0.0.0:* LISTEN 8109/mfsmaster
II. Installing the Metalogger Server:
As mentioned earlier, the Metalogger Server is the Master Server's backup, so its installation steps are the same as the master's, and ideally it runs on hardware configured identically to the master. That way, if the master fails, we only need to import the changelog backups into the metadata file and the backup server can take over from the failed master and keep serving.
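The takeover described above can be sketched roughly as follows. This is an untested outline, not the article's procedure: the file names follow what the metalogger writes into /usr/local/mfs/var/mfs, and in MooseFS 3.x the `mfsmaster -a` flag replays the changelogs automatically:

```shell
# On the metalogger, after the master has died (assumes mfsmaster was also
# built on this host, which the identical install above provides):
cd /usr/local/mfs/var/mfs
# 1. Use the replicated metadata snapshot as the recovery base
cp metadata_ml.mfs.back metadata.mfs.back
# 2. Replay the changelog_ml* files and start serving as the new master
/usr/local/mfs/sbin/mfsmaster -a    # -a: auto-recover metadata from back file + changelogs
```

Clients and chunk servers would still need to be pointed at the new master's address (or a floating IP/hostname) for this to be a full failover.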
1. Copy the source package over from the master:
# scp /usr/local/src/v3.0.96.tar.gz node2:/usr/local/src/
v3.0.96.tar.gz

Then build on node2:
# useradd mfs
# yum install zlib-devel -y
# cd /usr/local/src/
# tar zxvf v3.0.96.tar.gz
# cd moosefs-3.0.96/
# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount
# make && make install
2. Configure the Metalogger Server:
# cd /usr/local/mfs/etc/mfs/
# ls
mfsexports.cfg.sample  mfsmaster.cfg.sample  mfsmetalogger.cfg.sample  mfstopology.cfg.sample
# cp mfsmetalogger.cfg.sample mfsmetalogger.cfg
# vim mfsmetalogger.cfg
MASTER_HOST = 172.25.0.29
3. Start the Metalogger Server:
# /usr/local/mfs/sbin/mfsmetalogger start
open files limit has been set to: 4096
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmetalogger modules ...
mfsmetalogger daemon initialized properly
# netstat -lantp | grep metalogger
tcp 0 0 172.25.0.30:45620 172.25.0.29:9419 ESTABLISHED 1751/mfsmetalogger
4. Check the replicated log files:
# ls /usr/local/mfs/var/mfs/
changelog_ml_back.0.mfs  changelog_ml_back.1.mfs  metadata.mfs.empty  metadata_ml.mfs.back
III. Installing the chunk servers:
Do this on both node3 and node4; node3 is shown here:
1. Copy the source package over from the master:
# scp /usr/local/src/v3.0.96.tar.gz node3:/usr/local/src/
v3.0.96.tar.gz

Then build on node3:
# useradd mfs
# yum install zlib-devel -y
# cd /usr/local/src/
# tar zxvf v3.0.96.tar.gz
# cd moosefs-3.0.96/
# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfsmount
# make && make install
2. Configure the chunk server:
# cd /usr/local/mfs/etc/mfs/
# cp mfschunkserver.cfg.sample mfschunkserver.cfg
# vim mfschunkserver.cfg
MASTER_HOST = 172.25.0.29
3. Configure mfshdd.cfg, the storage path file
mfshdd.cfg tells the Chunk Server which directory to hand over to the Master Server for storing chunks. Any directory works, but ideally it should sit on a dedicated partition.
# cp /usr/local/mfs/etc/mfs/mfshdd.cfg.sample /usr/local/mfs/etc/mfs/mfshdd.cfg
# vim /usr/local/mfs/etc/mfs/mfshdd.cfg
/mfsdata
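If you do dedicate a partition, mount it at the exported path before starting the chunk server. A sketch, where the device name /dev/sdb1 is purely an assumption for illustration; mfshdd.cfg can also reserve free space with a leading minus size:

```shell
# Hypothetical dedicated data partition for the chunk server
mkfs.xfs /dev/sdb1                      # assumes /dev/sdb1 is the spare partition
mkdir -p /mfsdata
echo '/dev/sdb1 /mfsdata xfs defaults 0 0' >> /etc/fstab
mount /mfsdata
chown mfs:mfs /mfsdata
# In mfshdd.cfg, a "-10GiB" suffix keeps 10GiB of the partition unused by MFS:
#   /mfsdata -10GiB
```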
4. Start the chunk server:
# mkdir /mfsdata
# chown mfs:mfs /mfsdata/
# /usr/local/mfs/sbin/mfschunkserver start
open files limit has been set to: 16384
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
setting glibc malloc arena max to 4
setting glibc malloc arena test to 4
initializing mfschunkserver modules ...
hdd space manager: path to scan: /mfsdata/
hdd space manager: start background hdd scanning (searching for available chunks)
main server module: listen on *:9422
no charts data file - initializing empty charts
mfschunkserver daemon initialized properly
### Check the connection to the master (the chunk server itself listens on 9422):
# netstat -lantp | grep 9420
tcp 0 0 172.25.0.31:45904 172.25.0.29:9420 ESTABLISHED 9896/mfschunkserver
IV. Installing the mount client:
On node5:
1. Install FUSE:
# lsmod | grep fuse
# yum install fuse fuse-devel -y
# modprobe fuse
# lsmod | grep fuse
fuse                   91874  0
2. Build the mount client
# yum install zlib-devel -y
# useradd mfs
# tar -zxvf v3.0.96.tar.gz
# cd moosefs-3.0.96/
# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfschunkserver --enable-mfsmount
# make && make install
3. On the client, create the mount point and mount the file system:
# mkdir /mfsdata
# chown -R mfs:mfs /mfsdata/
# /usr/local/mfs/bin/mfsmount /mfsdata -H 172.25.0.29 -p
MFS Password:
mfsmaster accepted connection with parameters: read-write,restricted_ip,map_all ; root mapped to mfs:mfs ; users mapped to mfs:mfs
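For scripts or boot-time mounts, the interactive prompt from `-p` is inconvenient; mfsmount also accepts the password as a mount option (storing it on disk is a trade-off to weigh). The `mfspassword`/`mfsmaster` options are standard mfsmount options; the fstab line is one common way to wire this up and assumes mfsmount is reachable on the mount helper path (e.g. symlinked into /sbin):

```shell
# Non-interactive mount; -o mfspassword= replaces the prompt from -p
/usr/local/mfs/bin/mfsmount /mfsdata -H 172.25.0.29 -o mfspassword=xiaoluo

# Or via /etc/fstab (FUSE helper), so it mounts at boot:
#   mfsmount /mfsdata fuse defaults,mfsmaster=172.25.0.29,mfspassword=xiaoluo 0 0
```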
# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cl-root   18G  1.9G   17G  11% /
devtmpfs             226M     0  226M   0% /dev
tmpfs                237M     0  237M   0% /dev/shm
tmpfs                237M  4.6M  232M   2% /run
tmpfs                237M     0  237M   0% /sys/fs/cgroup
/dev/sda1           1014M  139M  876M  14% /boot
tmpfs                 48M     0   48M   0% /run/user/0
172.25.0.29:9421      36G  4.2G   32G  12% /mfsdata
4. Write a local file to test:
# cd /mfsdata/
# touch xiaozhang.txt
# echo "test" > xiaozhang.txt
# cat xiaozhang.txt
test

The write succeeds, which confirms our MFS cluster is up and working.
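With two chunk servers up, you can also check how many copies of a file actually exist using the client-side tools installed under /usr/local/mfs/bin. These are standard MooseFS utilities; their exact output varies by version, so none is shown here:

```shell
cd /mfsdata
/usr/local/mfs/bin/mfsgetgoal xiaozhang.txt      # desired number of copies (the "goal")
/usr/local/mfs/bin/mfssetgoal 2 xiaozhang.txt    # request 2 copies, one per chunk server
/usr/local/mfs/bin/mfsfileinfo xiaozhang.txt     # which chunk servers hold each chunk
/usr/local/mfs/bin/mfscheckfile xiaozhang.txt    # chunk counts per copy level
```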
Summary: as you can see, this setup has only one master, which is an obvious single point of failure. I consulted a good deal of material on this and eventually solved the single-master problem as well.