
Some MooseFS mechanisms and internals [complete]

  error: no pkg-config - can't check FUSE version
# yum install -y fuse-devel

  error: checking for zlibVersion in -lz... no
# yum install -y zlib-devel

  Package:
wget http://sourceforge.net/projects/moosefs/files/moosefs/1.6.27/mfs-1.6.27-1.tar.gz/download

  Relevant configure switches:
  --disable-mfsmaster
  --disable-mfschunkserver
  --disable-mfsmount
  

  MfsMaster/Metalogger install:
yum install -y fuse-devel zlib-devel
useradd mfs -s /sbin/nologin
./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmount --disable-mfschunkserver
make && make install  
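  A minimal first-run sketch for the master, assuming the 1.6 layout in which sample configs ship as *.dist under etc/mfs and an empty metadata file ships as metadata.mfs.empty (the rename requirement is confirmed by the startup error further down in this post):
# cd /usr/local/mfs/etc/mfs
# cp mfsmaster.cfg.dist mfsmaster.cfg
# cp mfsexports.cfg.dist mfsexports.cfg
# cp /usr/local/mfs/var/mfs/metadata.mfs.empty /usr/local/mfs/var/mfs/metadata.mfs   # master refuses to start without a metadata file
# /usr/local/mfs/sbin/mfsmaster start
# /usr/local/mfs/sbin/mfscgiserv start   # optional web status UI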

  MfsChunkserver install (the flags below disable master and mount, leaving the chunkserver):
yum install -y fuse-devel zlib-devel
useradd mfs -s /sbin/nologin
./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfsmount
make && make install  
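  A minimal chunkserver bootstrap sketch, assuming the same etc/mfs layout and the /mfs/mfschunks1 data path that appears later in this post:
# cd /usr/local/mfs/etc/mfs
# cp mfschunkserver.cfg.dist mfschunkserver.cfg   # point MASTER_HOST at the master here
# cp mfshdd.cfg.dist mfshdd.cfg
# echo /mfs/mfschunks1 >> mfshdd.cfg              # one storage path per line
# chown -R mfs:mfs /mfs/mfschunks1
# /usr/local/mfs/sbin/mfschunkserver start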

  MfsMount install:
yum install -y fuse-devel zlib-devel
useradd mfs -s /sbin/nologin
./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfschunkserver
make && make install  
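  And a client mount sketch; the master address is illustrative, and port 9421 matches the "main master server module" line in the logs below:
# mkdir -p /www
# /usr/local/mfs/bin/mfsmount /www -H 192.168.0.1 -P 9421
# df -h /www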

  Notes:
  

  Directory layout:
  # tree
  .
  ├── etc                          # config files (only the master is installed here; no mfsmount/mfschunkserver)
  │   └── mfs
  │       ├── mfsexports.cfg       # export / access-control file
  │       ├── mfsmaster.cfg        # main master config file
  │       ├── mfsmetalogger.cfg    # metalogger config file
  │       └── mfstopology.cfg      # network topology file
  ├── sbin
  │   ├── mfscgiserv
  │   ├── mfsmaster
  │   ├── mfsmetadump
  │   ├── mfsmetalogger
  │   └── mfsmetarestore
  ├── share                        # man pages plus the CGI web interface
  │   ├── man
  │   │   ├── man1
  │   │   ├── man5
  │   │   │   ├── mfsexports.cfg.5
  │   │   │   ├── mfsmaster.cfg.5
  │   │   │   ├── mfsmetalogger.cfg.5
  │   │   │   └── mfstopology.cfg.5
  │   │   ├── man7
  │   │   │   ├── mfs.7
  │   │   │   └── moosefs.7
  │   │   └── man8
  │   │       ├── mfscgiserv.8
  │   │       ├── mfsmaster.8
  │   │       ├── mfsmetalogger.8
  │   │       └── mfsmetarestore.8
  │   └── mfscgi
  │       ├── chart.cgi
  │       ├── err.gif
  │       ├── favicon.ico
  │       ├── index.html
  │       ├── logomini.png
  │       ├── mfs.cgi
  │       └── mfs.css
  └── var                          # changelogs and metadata
      └── mfs
          ├── changelog.0.mfs
          ├── changelog.1.mfs
          ├── metadata.mfs.back
          ├── metadata.mfs.back.1
          ├── sessions.mfs
          └── stats.mfs
  

  ./var/mfs
  # ls
  changelog.0.mfs changelog.1.mfs metadata.mfs.back metadata.mfs.back.1 sessions.mfs stats.mfs
  

  # head -n 10 changelog.1.mfs
0: 1383311888|SESSION():1
1: 1383311894|ACCESS(1)
2: 1383311972|ACCESS(1)
3: 1383311975|ACCESS(1)
4: 1383312013|CREATE(1,www.img,f,420,999,999,0):2
5: 1383312013|ACQUIRE(2,1)
6: 1383312013|WRITE(2,0,1):1
7: 1383312013|LENGTH(2,1556480)
8: 1383312013|UNLOCK(1)
9: 1383312013|WRITE(2,0,0):1

  # head -n 10 changelog.0.mfs
84495: 1383315313|ACCESS(1)
84496: 1383315316|ACCESS(1)
84497: 1383315328|CREATE(1,190.txt,f,420,999,999,0):3
84498: 1383315328|ACQUIRE(3,1)
84499: 1383315328|WRITE(3,0,1):314
84500: 1383315328|LENGTH(3,11)
84501: 1383315328|UNLOCK(314)
84502: 1383315346|ACCESS(1)
84503: 1383315349|ACCESS(1)
84504: 1383315351|ACCESS(1)  

  # tail -n 1 changelog.1.mfs
  84494: 1383314326|ACCESS(1)
  # head -n 1 changelog.0.mfs
  84495: 1383315313|ACCESS(1)
  

  煮酒品茶: You can see that the end of changelog.1.mfs is exactly where changelog.0.mfs begins, and the log shows the file-creation sequence step by step. The client can mount and list directories whether or not the chunkservers are down, because the directory tree is held in the master's memory.
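  Since the directory tree lives in the master's memory and is persisted only through these files, mfsmetadump (installed in sbin above) can dump the on-disk metadata to text for inspection; a minimal sketch:
# /usr/local/mfs/sbin/mfsmetadump /usr/local/mfs/var/mfs/metadata.mfs.back | head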
  

  Chunkserver storage directory:
  # ls
00 08 10 18 20 28 30 38 40 48 50 58 60 68 70 78 80 88 90 98 A0 A8 B0 B8 C0 C8 D0 D8 E0 E8 F0 F8
01 09 11 19 21 29 31 39 41 49 51 59 61 69 71 79 81 89 91 99 A1 A9 B1 B9 C1 C9 D1 D9 E1 E9 F1 F9
02 0A 12 1A 22 2A 32 3A 42 4A 52 5A 62 6A 72 7A 82 8A 92 9A A2 AA B2 BA C2 CA D2 DA E2 EA F2 FA
03 0B 13 1B 23 2B 33 3B 43 4B 53 5B 63 6B 73 7B 83 8B 93 9B A3 AB B3 BB C3 CB D3 DB E3 EB F3 FB
04 0C 14 1C 24 2C 34 3C 44 4C 54 5C 64 6C 74 7C 84 8C 94 9C A4 AC B4 BC C4 CC D4 DC E4 EC F4 FC
05 0D 15 1D 25 2D 35 3D 45 4D 55 5D 65 6D 75 7D 85 8D 95 9D A5 AD B5 BD C5 CD D5 DD E5 ED F5 FD
06 0E 16 1E 26 2E 36 3E 46 4E 56 5E 66 6E 76 7E 86 8E 96 9E A6 AE B6 BE C6 CE D6 DE E6 EE F6 FE
07 0F 17 1F 27 2F 37 3F 47 4F 57 5F 67 6F 77 7F 87 8F 97 9F A7 AF B7 BF C7 CF D7 DF E7 EF F7 FF

  # ll
  total 65544
  -rw-r----- 1 mfs mfs 67113984 Nov 1 22:04 chunk_00000000000000EF_00000001.mfs
  # pwd
  /mfs/mfschunks1/EF
  

  煮酒品茶: Directories are named with two-digit hex numbers, one flat level; chunks are stored as chunk_<16-digit hex chunk id>_<8-digit hex version>.mfs.
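  Judging from the EF example above, the two-digit directory looks like the low byte of the chunk id (an assumption, but it matches the listing); so a chunk id reported by mfsfileinfo can be located like this:
# ls /mfs/mfschunks1/F7/chunk_00000000000004F7_*.mfs   # id ...04F7 -> low byte F7 -> dir F7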
  

  # ll
total 25067521
-rw-r--r-- 1 999 999   33 Nov 1 22:16 190.txt
-rw-r--r-- 1 999 999 4697620480 Nov 1 22:24 20g1.img
-rw-r--r-- 1 999 999 20971520000 Nov 1 21:26 www.img  

  # ll
total 25067521
-rw-r--r-- 1 mfs mfs   33 Nov 1 22:16 190.txt
-rw-r--r-- 1 mfs mfs 4697620480 Nov 1 22:24 20g1.img
-rw-r--r-- 1 mfs mfs 20971520000 Nov 1 21:26 www.img  

  煮酒品茶: The application's UID and GID have to match up across machines.
  煮酒品茶: MooseFS really cannot have its master process killed; if you kill it you have to run a recovery. Try /usr/local/mfs/sbin/mfsmetarestore -a. Some say replaying just the latest changelog is enough.
  

  The procedure below is based on: http://sery.blog.运维网.com/10037/263515
  1. Install a new MFS master server.
  2. Copy the backup file metadata.mfs.back / metadata_ml.mfs.back from the current master or from the metalogger to the new master's data directory (metadata.mfs.back should be backed up on a schedule with crontab; see the cron sketch after this list).
  3. Copy the master's data directory (/usr/local/mfs/var/mfs) from the current master or from the metalogger to the new master.
  4. Stop the original master (shut the machine down or stop its network service).
  5. Change the new master's IP to the old server's IP.
  6. Run the recovery: mfsmetarestore -m metadata.mfs.back -o metadata.mfs changelog_ml.*.mfs. Once it succeeds, start the new master service.
  7. Start the new master: /usr/local/mfs/sbin/mfsmaster start
  8. On an MFS client, check that the data stored in MFS matches what was there before the recovery, that it is accessible, and so on.
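  A minimal cron sketch for the backup mentioned in step 2; the /backup path and hourly schedule are illustrative:
# crontab -e
# keep hourly copies of the master metadata (percent signs must be escaped inside crontab)
0 * * * * cp /usr/local/mfs/var/mfs/metadata.mfs.back /backup/metadata.mfs.back.$(date +\%H)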
  

  Test:
# tar cvzf mfs.tar.gz mfs
mfs/
mfs/metadata.mfs.back
mfs/sessions.mfs
mfs/.mfsmaster.lock
mfs/changelog.0.mfs
# ls
mfs mfs.tar.gz
# pwd
/usr/local/mfs/var/mfs
# /sbin/pidof mfsmaster
5778
# kill -9 `/sbin/pidof mfsmaster`
# ps aux |grep mfsmaster |grep -v grep
#
# /usr/local/mfs/sbin/mfsmaster start
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmaster modules ...
loading sessions ... ok
sessions file has been loaded
exports file has been loaded
mfstopology: incomplete definition in line: 7
mfstopology: incomplete definition in line: 7
mfstopology: incomplete definition in line: 22
mfstopology: incomplete definition in line: 22
mfstopology: incomplete definition in line: 28
mfstopology: incomplete definition in line: 28
topology file has been loaded
loading metadata ...
can't open metadata file
if this is new instalation then rename /usr/local/mfs/var/mfs/metadata.mfs.empty as /usr/local/mfs/var/mfs/metadata.mfs
init: file system manager failed !!!
error occured during initialization - exiting  

  Recovery:
# mv mfs mfs.back
# tar zxvf mfs.tar.gz
mfs/
mfs/metadata.mfs.back
mfs/sessions.mfs
mfs/.mfsmaster.lock
mfs/changelog.0.mfs
# ls
mfs mfs.back mfs.tar.gz
# /usr/local/mfs/sbin/mfsmetarestore -a
loading objects (files,directories,etc.) ... ok
loading names ... ok
loading deletion timestamps ... ok
loading chunks data ... ok
checking filesystem consistency ... ok
connecting files and chunks ... L
C
ok
store metadata into file: /usr/local/mfs/var/mfs/metadata.mfs
# /usr/local/mfs/sbin/mfsmaster start
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmaster modules ...
loading sessions ... ok
sessions file has been loaded
exports file has been loaded
mfstopology: incomplete definition in line: 7
mfstopology: incomplete definition in line: 7
mfstopology: incomplete definition in line: 22
mfstopology: incomplete definition in line: 22
mfstopology: incomplete definition in line: 28
mfstopology: incomplete definition in line: 28
topology file has been loaded
loading metadata ...
loading objects (files,directories,etc.) ... ok
loading names ... ok
loading deletion timestamps ... ok
loading chunks data ... ok
checking filesystem consistency ... ok
connecting files and chunks ... ok
all inodes: 1
directory inodes: 1
file inodes: 0
chunks: 0
metadata file has been loaded
no charts data file - initializing empty charts
mastermetaloggers module: listen on *:9419
masterchunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly
#  

  

  How data is stored:
  The chunkservers are what actually store user data. When a file is written it is first split into chunks, and those chunks are replicated between the chunkservers. The chunkservers also stay connected to the master, follow its scheduling, and transfer data to clients. There can be any number of chunkservers, and the more there are, the higher the reliability and the more disk space MFS can use.
  

  Writing data on the client after mounting:
  # dd if=/dev/zero of=OneBig.img bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 23.7381 s, 44.2 MB/s  

  Client side (strace of mfsmount talking to the chunkservers):
read(10, "\0", 1)      = 1
read(16, "\0\0\0\324\0\1\0\30", 8)= 8
read(16, "\0\0\0\0\0\0\1\201\0\0\2&\2%\0\0\0\1\0\0\327\227\216\353\0\0\0\0\0\0\0\0"..., 65560) = 65560
futex(0x852c248, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x852c244, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x852c2a4, FUTEX_WAKE_PRIVATE, 1) = 1
write(16, "\0\0\0\323\0\0\0\r\0\0\0\0\0\0\1\201\0\0\2$\0", 21) = 21
write(16, "\0\0\0\323\0\0\0\r\0\0\0\0\0\0\1\201\0\0\2%\0", 21) = 21
poll([{fd=5, events=POLLIN}, {fd=13, events=POLLIN}, {fd=15, events=POLLIN}, {fd=8, events=POLLIN}, {fd=10, events=POLLIN}, {fd=16, events=POLLIN}], 6, 50) = 2 ([{fd=10, revents=POLLIN}, {fd=16, revents=POLLIN}])
gettimeofday({1383559161, 232654}, NULL) = 0
read(10, "\0", 1)      = 1
read(16, "\0\0\0\324\0\1\0\30", 8)= 8
read(16, "\0\0\0\0\0\0\1\201\0\0\2'\2&\0\0\0\1\0\0\327\227\216\353\0\0\0\0\0\0\0\0"..., 65560) = 65560
futex(0x852c248, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x852c244, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x852c2a4, FUTEX_WAKE_PRIVATE, 1) = 1
poll([{fd=5, events=POLLIN}, {fd=13, events=POLLIN}, {fd=15, events=POLLIN}, {fd=8, events=POLLIN}, {fd=10, events=POLLIN}, {fd=16, events=POLLIN|POLLOUT}], 6, 50) = 2 ([{fd=10, revents=POLLIN}, {fd=16, revents=POLLIN|POLLOUT}])
gettimeofday({1383559161, 233732}, NULL) = 0
read(10, "\0", 1)      = 1
read(16, "\0\0\0\324\0\1\0\30", 8)= 8
read(16, "\0\0\0\0\0\0\1\201\0\0\2(\2'\0\0\0\1\0\0\327\227\216\353\0\0\0\0\0\0\0\0"..., 65560) = 65560
futex(0x852c248, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x852c244, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x852c2a4, FUTEX_WAKE_PRIVATE, 1^C   

  ChunkServer1:
poll([{fd=5, events=POLLIN}, {fd=14, events=POLLIN}, {fd=16, events=POLLIN}, {fd=8, events=POLLIN}, {fd=11, events=POLLIN}, {fd=18, events=POLLIN}], 6, 50) = 1 ([{fd=18, revents=POLLIN}])
read(18, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 3448) = 1448
poll([{fd=5, events=POLLIN}, {fd=14, events=POLLIN}, {fd=16, events=POLLIN}, {fd=8, events=POLLIN}, {fd=11, events=POLLIN}, {fd=18, events=POLLIN}], 6, 50) = 1 ([{fd=18, revents=POLLIN}])
read(18, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 2000) = 1448
poll([{fd=5, events=POLLIN}, {fd=14, events=POLLIN}, {fd=16, events=POLLIN}, {fd=8, events=POLLIN}, {fd=11, events=POLLIN}, {fd=18, events=POLLIN}], 6, 50) = 1 ([{fd=18, revents=POLLIN}])
read(18, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 552) = 552
futex(0xd4f16c, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0xd4f168, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0xd4f1c8, FUTEX_WAKE_PRIVATE, 1) = 1
poll([{fd=5, events=POLLIN}, {fd=14, events=POLLIN}, {fd=16, events=POLLIN}, {fd=8, events=POLLIN}, {fd=11, events=POLLIN}, {fd=18, events=POLLIN}], 6, 50) = 1 ([{fd=11, revents=POLLIN}])
read(11, "\0", 1)      = 1
poll([{fd=5, events=POLLIN}, {fd=14, events=POLLIN}, {fd=16, events=POLLIN}, {fd=8, events=POLLIN}, {fd=11, events=POLLIN}, {fd=18, events=POLLIN|POLLOUT}], 6, 50) = 1 ([{fd=18, revents=POLLOUT}])  

  写单个小文件:
poll([{fd=5, events=POLLIN}, {fd=14, events=POLLIN}, {fd=16, events=POLLIN}, {fd=8, events=POLLIN}, {fd=11, events=POLLIN}], 5, 50) = 0 (Timeout)
poll([{fd=5, events=POLLIN}, {fd=14, events=POLLIN}, {fd=16, events=POLLIN}, {fd=8, events=POLLIN}, {fd=11, events=POLLIN}], 5, 50) = 0 (Timeout)
poll([{fd=5, events=POLLIN}, {fd=14, events=POLLIN}, {fd=16, events=POLLIN|POLLOUT}, {fd=8, events=POLLIN}, {fd=11, events=POLLIN}], 5, 50) = 1 ([{fd=16, revents=POLLOUT}])
write(16, "\0\0\0\0\0\0\0\0", 8)= 8

  煮酒品茶: This shows that the data replication is asynchronous.
  For the concepts, see: http://cwtea.blog.运维网.com/4500217/1318037
  

  So the MFS mechanism seems to be as follows (for the read/write flow, see the official diagrams):
  Write path:
  Client (writes locally into the mounted directory) --> Master (splits the file into chunks, which are replicated among the chunkservers; the count is configurable)
  

品茶 10:44:17
So does the slicing split one file into several chunks, submit the chunks to different chunkservers separately, and then sync them asynchronously between the chunkservers?
牧野清风(260114787) 10:45:22
Right now all the chunks of a file go to a single chunkserver, and that server asynchronously replicates them to another chunkserver as the backup

  Read path:
  Client (reads from the mounted directory) --> Master (hands out the chunkserver addresses to read from) --> Chunkservers (exchange the data)
  

  煮酒品茶: Because it is only semi-distributed, the files are still correct even when just one node remains; but reads and writes each take one extra hop, so a single file is a bit slower than on a traditional filesystem. The advantage shows with many files, which can be spread across different chunkservers for reading.
  

  

  A quick test:
  Writing data locally vs. writing to MFS (everything runs inside one VM, so it all contends for the same physical disk I/O and the numbers are not accurate; using poll rather than epoll also has some impact.)
  Analysis (throughput dropped to roughly a third):
  Physical disk I/O: V1-master writes the changelog and metadata
  V2-chunks has to write here as well
  V3-mountandchunks: the write() lands here too
  

  煮酒品茶: Part of the view above is wrong: a file is split into chunks, and the chunks are written out to different chunkservers. With only 3 chunkservers and a goal of 3, it is unclear whether the chunks are spread according to the copy count or written to all servers at once; presumably they are not written separately to all of them.
  Second: if they were written separately to all servers, then during mutual sync each server would only need to sync one of the copies (three copies in total would suffice).
  See the attachment:
  

  Changelogs:
  master:
  # ls
changelog.5.mfs changelog.7.mfs metadata.mfs.back.1 stats.mfs
changelog.6.mfs metadata.mfs.back sessions.mfs  

  logger:
  # ls
changelog_ml_back.0.mfs csstats.mfs   metadata_ml.mfs.back
changelog_ml_back.1.mfs metadata.mfs.empty sessions_ml.mfs  

  Starting the metalogger immediately performs one sync:
# pwd
/usr/local/mfs/var/mfs
# cd ..
# mv mfs mfs.back
# mkdir mfs
# chown mfs.mfs mfs
# ll
total 8
drwxr-xr-x 2 mfs mfs 4096 Nov 5 00:10 mfs
drwxr-xr-x 2 mfs mfs 4096 Nov 5 00:10 mfs.back
# /usr/local/mfs/sbin/mfsmetalogger start
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmetalogger modules ...
mfsmetalogger daemon initialized properly
# ls
mfs mfs.back
# cd mfs
# ls
changelog_ml_back.0.mfs metadata_ml.mfs.back sessions_ml.mfs
changelog_ml_back.1.mfs metadata_ml.mfs.back.1  

  Compare sizes:
# du -sh metadata.mfs.back metadata.mfs.back.1
24K     metadata.mfs.back
24K     metadata.mfs.back.1
# du -sh metadata_ml.mfs.back metadata_ml.mfs.back.1
24K     metadata_ml.mfs.back
24K     metadata_ml.mfs.back.1

  The metalogger does not back up the changelogs, only the data, so the master's changelogs have to be kept safe. The master's .back file is finalized when the master shuts down: shutdown is a rename process (metadata.mfs.back becomes metadata.mfs, as shown below), so you must not kill the master; let it terminate on its own.
# ll metadata.mfs.back*
-rw-r----- 1 mfs mfs 21771 Nov 5 00:00 metadata.mfs.back
-rw-r----- 1 mfs mfs 21771 Nov 4 23:00 metadata.mfs.back.1
# ps aux |grep mfs
root      1428  0.0  1.5 185544  15280 ?     S    Nov04   0:11 python /usr/local/mfs/sbin/mfscgiserv start
mfs       5817  0.1 10.0 118428 101972 ?     S<   Nov04   0:33 /usr/local/mfs/sbin/mfsmaster start
root      6017  0.0  0.0 103236    852 pts/0 S+   00:13   0:00 grep mfs
# /usr/local/mfs/sbin/mfsmaster stop
sending SIGTERM to lock owner (pid:5817)
waiting for termination ... terminated
# ps aux |grep mfs
root      1428  0.0  1.5 185544  15280 ?     S    Nov04   0:11 python /usr/local/mfs/sbin/mfscgiserv start
root      6020  0.0  0.0 103236    848 pts/0 S+   00:14   0:00 grep mfs
# ll metadata.mfs*
-rw-r----- 1 mfs mfs 21771 Nov 5 00:14 metadata.mfs
-rw-r----- 1 mfs mfs 21771 Nov 5 00:00 metadata.mfs.back.1  

  Try recovering with the changelog:
# cp -rfp mfs mfs.back
# ls
mfs mfs.back
# ps aux |grep mfs
mfs      15845  0.1  9.3 109800  94720 ?     S<   00:44   0:00 /usr/local/mfs/sbin/mfsmaster start
root     15848  0.1  1.4 184784  14648 ?     S    00:45   0:00 python /usr/local/mfs/sbin/mfscgiserv start
root     15854  0.0  0.0 103236    852 pts/0 S+   00:48   0:00 grep mfs
# kill -9 15845
# pwd
/usr/local/mfs/var  

  The client hangs at this point.
  Recovery:
# mv mfs mfs.revo
# ls
mfs.back mfs.revo
# mkdir mfs
# chown -R mfs.mfs mfs
# ll mfs -d
drwxr-xr-x 2 mfs mfs 4096 Nov 5 00:50 mfs  

  metadata:
# scp root@192.168.0.200:/usr/local/mfs/var/mfs/metadata_ml.mfs.back ./
root@192.168.0.200's password:
metadata_ml.mfs.back                    100%   95     0.1KB/s   00:00
# ls
metadata_ml.mfs.back
# chown mfs.mfs *
# ll
total 4
-rw-r----- 1 mfs mfs 95 Nov 5 00:55 metadata_ml.mfs.back  

  changelog:
# cp -rfp ../mfs.revo/changelog.0.mfs ./
# ll
total 8
-rw-r----- 1 mfs mfs 235 Nov 5 00:47 changelog.0.mfs
-rw-r----- 1 mfs mfs 95 Nov 5 00:55 metadata_ml.mfs.back  

  恢复:
# /usr/local/
bin/      games/    include/  lib64/    mfs/      sbin/     src/
etc/      gateone/  lib/      libexec/  nagios/   share/
# /usr/local/mfs/
etc/ sbin/ share/ var/
# /usr/local/mfs/
etc/ sbin/ share/ var/
# /usr/local/mfs/sbin/mfs
mfscgiserv      mfsmaster       mfsmetadump     mfsmetalogger   mfsmetarestore
# /usr/local/mfs/sbin/mfsmetarestore -a
file 'metadata.mfs.back' not found - will try 'metadata_ml.mfs.back' instead
loading objects (files,directories,etc.) ... ok
loading names ... ok
loading deletion timestamps ... ok
loading chunks data ... ok
checking filesystem consistency ... ok
connecting files and chunks ... L
C
ok
store metadata into file: /usr/local/mfs/var/mfs/metadata.mfs  

  Start it:
# /usr/local/mfs/sbin/mfsmaster start
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmaster modules ...
loading sessions ... file not found
if it is not fresh installation then you have to restart all active mounts !!!
exports file has been loaded
mfstopology: incomplete definition in line: 7
mfstopology: incomplete definition in line: 7
mfstopology: incomplete definition in line: 22
mfstopology: incomplete definition in line: 22
mfstopology: incomplete definition in line: 28
mfstopology: incomplete definition in line: 28
topology file has been loaded
loading metadata ...
loading objects (files,directories,etc.) ... ok
loading names ... ok
loading deletion timestamps ... ok
loading chunks data ... ok
checking filesystem consistency ... ok
connecting files and chunks ... ok
all inodes: 1
directory inodes: 1
file inodes: 0
chunks: 0
metadata file has been loaded
no charts data file - initializing empty charts
mastermetaloggers module: listen on *:9419
masterchunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly  

  Some data is lost after a recovery like this: whatever happened within the changelog-sync interval.
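  The size of that window can be tuned on the metalogger side; a minimal sketch of mfsmetalogger.cfg, assuming the 1.6 option names (META_DOWNLOAD_FREQ is hours between full metadata downloads from the master):
# vi /usr/local/mfs/etc/mfs/mfsmetalogger.cfg
META_DOWNLOAD_FREQ = 1
BACK_LOGS = 50    # number of changelog files to keep; value assumed from this build's defaults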
  

  Other tools:
  mfsgetgoal: get the goal (number of copies) of an MFS directory or file
  mfssetgoal: set the goal of an MFS directory or file
  mfscheckfile: a simple view of the copy counts
  mfsfileinfo: detailed copy information, per chunk/slice
  mfsdirinfo: mfsfileinfo-style information aggregated as counts
  mfsgettrashtime: get the trash retention interval
  mfssettrashtime: set the trash retention interval (similar to a memcached expiry)
  mfsmakesnapshot: snapshots
  and so on (a short usage sketch follows)
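  A short usage sketch for the trash and snapshot tools, run inside a client mount at /www (the path and values are illustrative):
# mfssettrashtime 86400 /www               # keep deleted files for 24h; the argument is in seconds
# mfsgettrashtime /www
# mfsmakesnapshot /www/bigfile.img /www/bigfile.snap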
  

  mfsgetgoal:
# touch good
# ll
total 0
-rw-r--r-- 1 502 502 0 Nov 5 01:03 good
# ln /usr/local/mfs/bin/mfs
mfsappendchunks   mfsfileinfo      mfsgettrashtime   mfsrgettrashtime   mfssetgoal
mfscheckfile      mfsfilerepair    mfsmakesnapshot   mfsrsetgoal        mfssettrashtime
mfsdeleattr       mfsgeteattr      mfsmount          mfsrsettrashtime   mfssnapshot
mfsdirinfo        mfsgetgoal       mfsrgetgoal       mfsseteattr        mfstools
# ln /usr/local/mfs/bin/mfs* /usr/local/bin
# mfsget
mfsgeteattr      mfsgetgoal       mfsgettrashtime
# mfsgetgoal
get objects goal (desired number of copies)
usage: mfsgetgoal [-nhHr] name
-n - show numbers in plain format
-h - "human-readable" numbers using base 2 prefixes (IEC 60027)
-H - "human-readable" numbers using base 10 prefixes (SI)
-r - do it recursively
# mfsgetgoal good
good: 1  

  # dd if=/dev/zero of=bigfile.img bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 209.794 s, 50.0 MB/s

  煮酒品茶: It looks like a single copy gets spread across the chunkservers.
# mfsgetgoal /www/bigfile.img
/www/bigfile.img: 1
# mfsfileinfo /www/bigfile.img
/www/bigfile.img:
chunk 0: 00000000000004F7_00000001 / (id:1271 ver:1)
copy 1: 192.168.0.200:9422
chunk 1: 00000000000004F8_00000001 / (id:1272 ver:1)
copy 1: 192.168.0.190:9422
chunk 2: 00000000000004F9_00000001 / (id:1273 ver:1)
copy 1: 192.168.0.200:9422
chunk 3: 00000000000004FA_00000001 / (id:1274 ver:1)
copy 1: 192.168.0.190:9422
chunk 4: 00000000000004FB_00000001 / (id:1275 ver:1)
copy 1: 192.168.0.200:9422
chunk 5: 00000000000004FC_00000001 / (id:1276 ver:1)
copy 1: 192.168.0.190:9422
chunk 6: 00000000000004FD_00000001 / (id:1277 ver:1)
copy 1: 192.168.0.200:9422
chunk 7: 00000000000004FE_00000001 / (id:1278 ver:1)
copy 1: 192.168.0.190:9422
chunk 8: 00000000000004FF_00000001 / (id:1279 ver:1)
...........

  煮酒品茶: Set it to three copies:
  # mfssetgoal 3 /www/bigfile.img
  /www/bigfile.img: 3
  # mfsgetgoal /www/bigfile.img
  /www/bigfile.img: 3
  煮酒品茶: Waiting for the copies to be generated; strace shows it steadily working through the chunks. The three chunkservers are at about 22% usage. Waiting.....
  

  Three minutes later the slicing/replication of this 10 GB file was done: the three chunkservers are at about 28%, 31% and 30% usage. So a file can appear to exist yet fail to open simply because its chunks are still being processed; what we see is only the information the master presents to us.
  

  Try it on a directory (recursive with -r):
# mfssetgoal -r 4 testgoaltime/
testgoaltime/:
inodes with goal changed:    1
inodes with goal not changed:   0
inodes with permission denied:   0
# touch goal4 testgoaltime/
# ls testgoaltime/
# ll
total 10240000
-rw-r--r-- 1 502 502 10485760000 Nov 5 01:08 bigfile.img
-rw-r--r-- 1 502 502   0 Nov 5 01:22 goal4
-rw-r--r-- 1 502 502   0 Nov 5 01:03 good
drwxr-xr-x 2 502 502   0 Nov 5 01:22 testgoaltime
# touch testgoaltime/goal4.1
# ll testgoaltime/
total 0
-rw-r--r-- 1 502 502 0 Nov 5 01:23 goal4.1
# mfsgetgoal testgoaltime/goal4.1
testgoaltime/goal4.1: 4  

  

  mfscheckfile
# mfscheckfile bigfile.img
bigfile.img:
chunks with 2 copies:   157
Empty files show nothing:
# mfscheckfile good
good:
# echo "1" >good
# mfscheckfile good
good:
chunks with 1 copy:    1  

  mfsfileinfo: (likewise, empty files show nothing; the entries here map to the files in the chunkserver directories)
# mfsfileinfo testgoaltime/goal4.1
testgoaltime/goal4.1:
# echo "111" testgoaltime/goal4.1
111 testgoaltime/goal4.1
# mfsfileinfo testgoaltime/goal4.1
testgoaltime/goal4.1:
# cat testgoaltime/goal4.1
# echo "111" >testgoaltime/goal4.1
#
#
#
# mfsfileinfo testgoaltime/goal4.1
testgoaltime/goal4.1:
chunk 0: 0000000000000595_00000001 / (id:1429 ver:1)
copy 1: 192.168.0.190:9422
copy 2: 192.168.0.200:9422  

  mfsdirinfo:
# mfsdirinfo good
good:
inodes:       1
directories:   0
files:       1
chunks:       1
length:       2
size:      70656
realsize:   70656
# ls
bigfile.img goal4 good testgoaltime
# mfsdirinfo testgoaltime
testgoaltime:
inodes:       2
directories:   1
files:       1
chunks:       1
length:       4
size:      70656
realsize:   282624  

  mfsgettrashtime: the trash
  1. A deleted file can be recovered within the trash retention interval.
  2. Delete the file.
  3. Unmount the normal mount.
  4. Mount again with -m (the meta mount).
  5. Find the hex-named entry + filename under trash.
  6. Move it into the undel directory. (A sketch of steps 3-6 follows this list.)
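  A minimal sketch of steps 3-6, assuming the master host is mfsmaster; the mount points and the example entry name are illustrative:
# umount /www
# /usr/local/mfs/bin/mfsmount -m /mnt/mfsmeta -H mfsmaster
# ls /mnt/mfsmeta/trash            # entries look like <hex inode>|<path>, e.g. 00000003|testgoaltime|goal4.1
# mv "/mnt/mfsmeta/trash/00000003|testgoaltime|goal4.1" /mnt/mfsmeta/trash/undel/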
  

  

# rm testgoaltime/goal4.1
# rm -rf good
# ps aux |grep mfsmount
root      6341  5.1  3.1 204960 60764 ?     S