0. Topology Planning

This walkthrough adds a block storage node to the two-node OpenStack deployment and installs the OpenStack Block Storage service. The Block Storage service works with the Compute service to provide volumes for virtual machine instances, and manages volumes, volume snapshots, and volume types. OpenStack provides block storage through the Cinder component.

The figure below shows the three-node topology. It includes the controller node and the block storage node; the compute node is not drawn because its hardware and software plan is identical to the two-node deployment. The block storage node has one NIC, eth0, on the management network.

The Cinder components are added on the controller node. The block storage node runs the iSCSI target server and the Cinder components, serving volumes from LVM logical volumes.

1. Installing the Block Storage Node

In VMware Workstation, create a new 64-bit CentOS virtual machine with 1 GB of RAM and a 100 GB virtual disk, using CentOS-6.6-x86_64-bin-DVD1.iso as the installation media. Give the VM one NIC in NAT mode.

Install the operating system from the ISO. Set the hostname to block and manually configure the IP address, netmask, default gateway, and DNS server on eth0 so the VM can reach the Internet. Here eth0 is assigned 192.168.8.33.

On the partitioning screen, choose custom partitioning. Create a 500 MB primary partition mounted at /boot, then a 50 GB partition used as an LVM physical volume. From that physical volume create a volume group vg_block containing two logical volumes: a 4 GB volume for swap, and a second volume using all remaining space in vg_block, mounted at /. The remaining 50699 MB of the disk is left unpartitioned for Cinder volumes later.

On the package selection screen, choose the Minimal install. After installation, install VMware Tools the same way as on the controller node.

2. Basic Environment Configuration

2.1 Configure hosts on the controller and compute nodes

On the controller and compute nodes, edit /etc/hosts and add the entry for the block node.

[root@controller ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.8.11 controller
192.168.8.22 compute
192.168.8.33 block

[root@compute ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.8.11 controller
192.168.8.22 compute
192.168.8.33 block

2.2 Test connectivity from the controller and compute nodes

[root@controller ~]# ping -c 4 block
PING block (192.168.8.33) 56(84) bytes of data.
64 bytes from block (192.168.8.33): icmp_seq=1 ttl=64 time=1.09 ms
64 bytes from block (192.168.8.33): icmp_seq=2 ttl=64 time=0.531 ms
64 bytes from block (192.168.8.33): icmp_seq=3 ttl=64 time=0.570 ms
64 bytes from block (192.168.8.33): icmp_seq=4 ttl=64 time=0.499 ms

--- block ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3011ms
rtt min/avg/max/mdev = 0.499/0.672/1.090/0.243 ms

[root@compute ~]# ping -c 4 block
PING block (192.168.8.33) 56(84) bytes of data.
64 bytes from block (192.168.8.33): icmp_seq=1 ttl=64 time=0.464 ms
64 bytes from block (192.168.8.33): icmp_seq=2 ttl=64 time=0.537 ms
64 bytes from block (192.168.8.33): icmp_seq=3 ttl=64 time=0.492 ms
64 bytes from block (192.168.8.33): icmp_seq=4 ttl=64 time=0.492 ms

--- block ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 0.464/0.496/0.537/0.030 ms

2.3 Configure the network on the block storage node

(1) Edit the NIC configuration file and confirm the IP settings so that CentOS can reach the Internet.

[root@block ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
UUID=440ae27d-54d3-49be-b4a8-48768085a31e
ONBOOT=yes
NM_CONTROLLED=no        # do not let NetworkManager manage this NIC; usually this is the only line that needs changing
BOOTPROTO=none
HWADDR=00:0C:29:14:BE:68
IPADDR=192.168.8.33
PREFIX=24
GATEWAY=192.168.8.2
DNS1=8.8.8.8
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"

[root@block ~]# service network restart
Shutting down interface eth0:                              [  OK  ]
Shutting down loopback interface:                          [  OK  ]
Bringing up loopback interface:                            [  OK  ]
Bringing up interface eth0:  Determining if ip address 192.168.8.33 is already in use for device eth0...
                                                           [  OK  ]

(2) Configure local name resolution for the controller, compute, and block nodes.

[root@block ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.8.11 controller
192.168.8.22 compute
192.168.8.33 block

(3) Test network connectivity.

[root@block ~]# ping -c 4 controller
PING controller (192.168.8.11) 56(84) bytes of data.
64 bytes from controller (192.168.8.11): icmp_seq=1 ttl=64 time=0.416 ms
64 bytes from controller (192.168.8.11): icmp_seq=2 ttl=64 time=0.525 ms
64 bytes from controller (192.168.8.11): icmp_seq=3 ttl=64 time=0.574 ms
64 bytes from controller (192.168.8.11): icmp_seq=4 ttl=64 time=0.602 ms

--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3007ms
rtt min/avg/max/mdev = 0.416/0.529/0.602/0.072 ms

[root@block ~]# ping -c 4 compute
PING compute (192.168.8.22) 56(84) bytes of data.
64 bytes from compute (192.168.8.22): icmp_seq=1 ttl=64 time=0.343 ms
64 bytes from compute (192.168.8.22): icmp_seq=2 ttl=64 time=0.481 ms
64 bytes from compute (192.168.8.22): icmp_seq=3 ttl=64 time=0.472 ms
64 bytes from compute (192.168.8.22): icmp_seq=4 ttl=64 time=0.485 ms

--- compute ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3011ms
rtt min/avg/max/mdev = 0.343/0.445/0.485/0.061 ms

[root@block ~]# ping -c 4 block
PING block (192.168.8.33) 56(84) bytes of data.
64 bytes from block (192.168.8.33): icmp_seq=1 ttl=64 time=0.015 ms
64 bytes from block (192.168.8.33): icmp_seq=2 ttl=64 time=0.054 ms
64 bytes from block (192.168.8.33): icmp_seq=3 ttl=64 time=0.050 ms
64 bytes from block (192.168.8.33): icmp_seq=4 ttl=64 time=0.050 ms

--- block ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3001ms
rtt min/avg/max/mdev = 0.015/0.042/0.054/0.016 ms

2.4 Configure the firewall and SELinux on the block storage node

(1) Stop the iptables firewall and keep it from starting at boot.

[root@block ~]# service iptables stop
[root@block ~]# chkconfig iptables off

(2) Set SELinux to permissive mode.

[root@block ~]# setenforce 0
[root@block ~]# vi /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=permissive        # set SELinux to permissive mode
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

[root@block ~]# shutdown -r now

After the reboot, run getenforce to verify the SELinux mode.

[root@block ~]# getenforce
Permissive

2.5 Configure the NTP service on the block storage node

(1) Install the NTP server.

[root@block ~]# yum install ntp

(2) Edit the NTP server's main configuration file.

[root@block ~]# vi /etc/ntp.conf

Make the following changes:

restrict 192.168.8.11                  # allow the NTP server (controller) as a time source
#server 0.centos.pool.ntp.org iburst   # comment out the default upstream Internet NTP servers
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
server 192.168.8.11                    # use the controller's IP address as the upstream NTP server

(3) Start the NTP service and enable it at boot.

[root@block ~]# service ntpd start
Starting ntpd:                                             [  OK  ]
[root@block ~]# chkconfig ntpd on

(4) Check the result. Wait 5-15 minutes, then check whether block has synchronized with the controller's NTP server.

[root@block ~]# ntpstat
synchronised to NTP server (192.168.8.11) at stratum 4
   time correct to within 197 ms
   polling server every 128 s
[root@block ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*controller      202.112.29.82    3 u   39  128  377    0.470    7.401  19.945

2.6 Install MySQL-python on the block storage node

[root@block ~]# yum install MySQL-python

2.7 Configure the OpenStack package repositories on the block storage node

(1) Install the yum-plugin-priorities plugin.

[root@block ~]# yum install yum-plugin-priorities

(2) Configure the OpenStack repositories.

Download http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm and copy it to /root on the CentOS machine via SFTP or any other means.

Browse to https://repos.fedorapeople.org/repos/openstack/openstack-icehouse, download rdo-release-icehouse-4.noarch.rpm, and copy it to /root as well.

Browse to http://mirrors.aliyun.com, click "help" next to epel in the file list, and read the instructions. Then browse to http://mirrors.aliyun.com/repo, download epel-6.repo, and copy it to /root.

[root@block ~]# rpm -ivh rdo-release-icehouse-4.noarch.rpm
warning: rdo-release-icehouse-4.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID 0e4fbd28: NOKEY
Preparing...
                ########################################### [100%]
   1:rdo-release            ########################################### [100%]
[root@block ~]# rpm -ivh epel-release-6-8.noarch.rpm
warning: epel-release-6-8.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Preparing...    ########################################### [100%]
   1:epel-release           ########################################### [100%]
[root@block ~]# cd /etc/yum.repos.d/
[root@block yum.repos.d]# mv epel.repo epel.repo.bak
[root@block yum.repos.d]# cp /root/epel-6.repo .
[root@block yum.repos.d]# ls
CentOS-Base.repo       CentOS-fasttrack.repo  CentOS-Vault.repo  epel.repo.bak      foreman.repo    rdo-release.repo
CentOS-Debuginfo.repo  CentOS-Media.repo      epel-6.repo        epel-testing.repo  puppetlabs.repo

(3) Rebuild the package list cache.

[root@block yum.repos.d]# yum makecache

(4) Install the openstack-utils package.

[root@block yum.repos.d]# yum install openstack-utils

(5) Install the openstack-selinux package.

[root@block yum.repos.d]# yum install openstack-selinux

(6) Upgrade the system packages, then reboot once the upgrade finishes.

[root@block yum.repos.d]# yum upgrade

3. Configuring the Cinder Block Storage Service

3.1 Install and configure Cinder on the controller node

(1) Install the cinder packages.

[root@controller ~]# yum install openstack-cinder

(2) Configure the Block Storage service to use the controller's MySQL database. Replace CINDER_DBPASS with the password of the MySQL user cinder.

[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf database connection mysql://cinder:CINDER_DBPASS@controller/cinder

(3) Create the user cinder in the MySQL database.

[root@controller ~]# mysql -u root -p
Enter password:        # the MySQL root password, 123456
mysql> CREATE DATABASE cinder;
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
mysql> exit

(4) Create the database tables for the Block Storage service.

[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
/usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57: PowmInsecureWarning: Not using mpz_powm_sec.  You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.
  _warn("Not using mpz_powm_sec.  You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.", PowmInsecureWarning)
# a security warning that can be ignored for now

(5) Load the admin user's environment variables.

[root@controller ~]# source admin-openrc.sh

(6) Create the user cinder in keystone and associate it with the service tenant and the admin role. Replace CINDER_PASS with the password for the user cinder.

[root@controller ~]# keystone user-create --name=cinder --pass=CINDER_PASS --email=cinder@localhost
[root@controller ~]# keystone user-role-add --user=cinder --tenant=service --role=admin

(7) Configure cinder to authenticate through keystone. Replace CINDER_PASS with the password for the user cinder.

# openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host controller
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_protocol http
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_port 35357
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name service
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password CINDER_PASS

(8) Configure the Block Storage service to use the Qpid message broker.

[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend qpid
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_hostname controller

(9) Register the Block Storage service with keystone and create its API endpoints, for both the v1 and v2 APIs.

[root@controller ~]# keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"
[root@controller ~]# keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ volume / {print $2}') \
  --publicurl=http://controller:8776/v1/%\(tenant_id\)s \
  --internalurl=http://controller:8776/v1/%\(tenant_id\)s \
  --adminurl=http://controller:8776/v1/%\(tenant_id\)s
[root@controller ~]# keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"
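Both API versions get the same three endpoint URLs, differing only in the version path segment. As a quick offline sanity check before registering them, the URL construction can be sketched in plain shell; make_url is a hypothetical helper for illustration, not a keystone command.

```shell
#!/bin/sh
# Sketch: assemble the Cinder endpoint URLs before registering them.
make_url() {
    # $1 = API version (v1 or v2). The %(tenant_id)s placeholder is stored
    # literally by keystone; the backslashes in the endpoint-create commands
    # only keep the unquoted parentheses from being parsed as shell syntax.
    echo "http://controller:8776/$1/%(tenant_id)s"
}

for ver in v1 v2; do
    url=$(make_url "$ver")
    echo "--publicurl=$url --internalurl=$url --adminurl=$url"
done
```

The same URL is used for the public, internal, and admin endpoints here because all three live on the management network in this deployment.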
[root@controller ~]# keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ volumev2 / {print $2}') \
  --publicurl=http://controller:8776/v2/%\(tenant_id\)s \
  --internalurl=http://controller:8776/v2/%\(tenant_id\)s \
  --adminurl=http://controller:8776/v2/%\(tenant_id\)s

(10) Start the Block Storage services and enable them at boot.

[root@controller ~]# service openstack-cinder-api start
Starting openstack-cinder-api:                             [  OK  ]
[root@controller ~]# service openstack-cinder-scheduler start
Starting openstack-cinder-scheduler:                       [  OK  ]
[root@controller ~]# chkconfig openstack-cinder-api on
[root@controller ~]# chkconfig openstack-cinder-scheduler on

3.2 Install and configure cinder on the block storage node

(1) Partition the disk with fdisk, creating one partition of type Linux LVM; this walkthrough uses /dev/sda3.

[root@block ~]# fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): p        # print the current partition table

Disk /dev/sda: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c1bda

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        6591    52428800   8e  Linux LVM

Command (m for help): n        # create a new partition
Command action
   e   extended
   p   primary partition (1-4)
p                              # primary partition
Partition number (1-4): 3      # partition number 3
First cylinder (6591-13054, default 6591):    # press Enter to accept the default first cylinder
Using default value 6591
Last cylinder, +cylinders or +size{K,M,G} (6591-13054, default 13054):    # press Enter to accept the default last cylinder
Using default value 13054

Command (m for help): t        # change the partition type
Partition number (1-4): 3      # partition number 3
Hex code (type L to list codes): 8e    # change to Linux LVM
Changed system type of partition 3 to 8e (Linux LVM)

Command (m for help): w        # write the table and exit
The partition table has been altered!

Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

[root@block ~]# partx -a /dev/sda       # make the kernel re-read the partition table
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 2
[root@block ~]# fdisk -l

Disk /dev/sda: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c1bda

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        6591    52428800   8e  Linux LVM
/dev/sda3            6591       13054    51914431   8e  Linux LVM

(2) Create a physical volume and put it into a volume group named cinder-volumes.

[root@block ~]# pvcreate /dev/sda3
  Physical volume "/dev/sda3" successfully created
[root@block ~]# vgcreate cinder-volumes /dev/sda3
  Volume group "cinder-volumes" successfully created

(3) Edit /etc/lvm/lvm.conf and, in the devices section, configure the devices that LVM will scan.

devices {
...
filter = [ "a/sda2/", "a/sda3/", "r/.*/" ]      # add this line
...
}

Entries beginning with a accept (scan) a device; entries beginning with r reject it. Here "a/sda2/" and "a/sda3/" scan /dev/sda2 and /dev/sda3, while "r/.*/" rejects every other device. sda2 is the physical volume holding the operating system, and sda3 is the physical volume used for cinder volumes.

(4) Install cinder and the iSCSI target server.

[root@block ~]# yum install openstack-cinder scsi-target-utils

(5) Configure cinder to authenticate through keystone. Replace CINDER_PASS with the password for the user cinder.

# openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host controller
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_protocol http
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_port 35357
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name service
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password CINDER_PASS

(6) Configure cinder to use the Qpid message broker.

[root@block ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend qpid
[root@block ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_hostname controller

(7) Configure the Block Storage service to use the controller's MySQL database. Replace CINDER_DBPASS with the password of the MySQL user cinder.

[root@block ~]# openstack-config --set /etc/cinder/cinder.conf database connection mysql://cinder:CINDER_DBPASS@controller/cinder

(8) Configure the management IP address of the block storage node, 192.168.8.33 in this walkthrough.

[root@block ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.8.33

(9) Configure cinder to use the glance image service on the controller.

[root@block ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_host controller

(10) Configure cinder to use the tgtadm iSCSI helper.

[root@block ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT iscsi_helper tgtadm

(11) Edit the iSCSI target server's configuration file, /etc/tgt/targets.conf, and add the following line so the iSCSI target service can discover cinder volumes:

include /etc/cinder/volumes/*

(12) Start the Block Storage services and enable them at boot.

[root@block ~]# service openstack-cinder-volume start
Starting openstack-cinder-volume:                          [  OK  ]
[root@block ~]# service tgtd start
Starting SCSI target daemon:                               [  OK  ]
[root@block ~]# chkconfig openstack-cinder-volume on
[root@block ~]# chkconfig tgtd on

4. Creating a Volume in the Dashboard and Attaching It to an Instance

4.1 Launch an instance

(1) Log in to the Dashboard as the demo user and, under Project → Compute → Instances, click "Launch Instance" at the top right.
(2) Enter the instance name cirros, choose the flavor m1.tiny, set the instance boot source to "Boot from image", choose the image "cirros-0.3.3-x86_64", pick a key pair under "Access & Security", and click Launch.
(3) Wait for the instance to finish starting.
(4) Log in to the instance from the local console or over SSH.
(5) Run fdisk -l to inspect the instance's disks and partitions. The instance currently has one disk, /dev/vda, 1 GB in size.

$ sudo fdisk -l

Disk /dev/vda: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *       16065     2088449     1036192+  83  Linux

4.2 Create a volume

(1) Under Project → Compute → Volumes, click "Create Volume" at the top right.
(2) Enter a volume name.
(3) After a moment, the volume is ready once its status changes to Available.

4.3 Attach the volume to the instance

(1) Under Project → Compute → Volumes, click "Edit Attachments" in the Actions column.
(2) Choose cirros as the instance to attach the volume to, then click "Attach Volume".
(3) After a moment, the volume's status changes to In-Use, and its attachment shows it attached to cirros on device /dev/vdb.
(4) In the cirros instance, run fdisk -l again. The instance now has a second disk, /dev/vdb, which contains no partitions.

$ sudo fdisk -l

Disk /dev/vda: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *       16065     2088449     1036192+  83  Linux

Disk /dev/vdb: 1073 MB, 1073741824 bytes
16 heads, 63 sectors/track, 2080 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/vdb doesn't contain a valid partition table

(5) The volume can be partitioned with fdisk just like a local disk.

$ sudo fdisk /dev/vdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x7df570ca.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-2097151, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-2097151, default 2097151):
Using default value 2097151

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

$ sudo fdisk -l /dev/vdb

Disk /dev/vdb: 1073 MB, 1073741824 bytes
9 heads, 8 sectors/track, 29127 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x7df570ca

   Device Boot      Start         End      Blocks   Id  System
/dev/vdb1            2048     2097151     1047552   83  Linux

(6) Format the partition with an EXT4 file system and mount it.

$ sudo mkfs -t ext4 /dev/vdb1
$ sudo mkdir /media/vdb1
$ sudo mount -t ext4 /dev/vdb1 /media/vdb1/
$ mount
rootfs on / type rootfs (rw)
/dev on /dev type devtmpfs (rw,relatime,size=248160k,nr_inodes=62040,mode=755)
/dev/vda1 on / type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=1,data=ordered)
/proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /dev/shm type tmpfs (rw,relatime,mode=777)
tmpfs on /run type tmpfs (rw,nosuid,relatime,size=200k,mode=755)
/dev/vdb1 on /media/vdb1 type ext4 (rw,relatime,user_xattr,barrier=1,data=ordered)
$ cd /media/vdb1/
$ ls
lost+found

(7) Unmount the partition /dev/vdb1.

$ cd
$ sudo umount /dev/vdb1
$ mount
rootfs on / type rootfs (rw)
/dev on /dev type devtmpfs (rw,relatime,size=248160k,nr_inodes=62040,mode=755)
/dev/vda1 on / type ext3
(rw,relatime,errors=continue,user_xattr,acl,barrier=1,data=ordered)
/proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /dev/shm type tmpfs (rw,relatime,mode=777)
tmpfs on /run type tmpfs (rw,nosuid,relatime,size=200k,mode=755)

(8) Detach the volume: under Project → Compute → Volumes, click "Edit Attachments" in the Actions column.
(9) Click "Detach Volume" and confirm. The status of the volume cloud-disk changes back to Available.
(10) Delete the volume: under Project → Compute → Volumes, click "Delete Volume" in the Actions column.

5. Creating a Volume from the Command Line and Attaching It to an Instance

5.1 Create a volume

(1) Load the demo user's environment variables.

[root@controller ~]# source demo-openrc.sh

(2) Create a volume with cinder create, named cinder-disk and 1 GB in size.

[root@controller ~]# cinder create --display-name cinder-disk 1

(3) List the volumes.

[root@controller ~]# cinder list

5.2 Attach the volume to an instance

(1) List the instances.

[root@controller ~]# nova list

(2) Attach the volume to the instance with nova volume-attach, where cirros is the instance name (the instance ID e8cfdd25-45bb-406c-a9f6-23e6d51af094 works as well) and f4dcfa82-f280-4e5f-8ca3-cc891402042a is the volume ID shown by cinder list.

[root@controller ~]# nova volume-attach cirros f4dcfa82-f280-4e5f-8ca3-cc891402042a /dev/vdb

(3) Show the volume's details, where f4dcfa82-f280-4e5f-8ca3-cc891402042a is the volume ID shown by cinder list. The volume is now attached to the instance cirros, whose ID is e8cfdd25-45bb-406c-a9f6-23e6d51af094.

[root@controller ~]# cinder show f4dcfa82-f280-4e5f-8ca3-cc891402042a
+------------------------------+------------------------------------------------------------------+
|           Property           |                              Value                               |
+------------------------------+------------------------------------------------------------------+
|         attachments          | [{u'device': u'/dev/vdb', u'server_id': u'e8cfdd25-45bb-406c-a9f6-23e6d51af094', u'id': u'f4dcfa82-f280-4e5f-8ca3-cc891402042a', u'host_name': None, u'volume_id': u'f4dcfa82-f280-4e5f-8ca3-cc891402042a'}] |
|      availability_zone       |                               nova                               |
|           bootable           |                              false                               |
|          created_at          |                    2015-04-23T00:38:43.000000                    |
|     display_description      |                               None                               |
|         display_name         |                           cinder-disk                            |
|          encrypted           |                              False                               |
|              id              |               f4dcfa82-f280-4e5f-8ca3-cc891402042a               |
|           metadata           |         {u'readonly': u'False', u'attached_mode': u'rw'}         |
| os-vol-tenant-attr:tenant_id |                 7b7130b52f834455b1a71e133dcf6369                 |
|             size             |                                1                                 |
|         snapshot_id          |                               None                               |
|         source_volid         |                               None                               |
|            status            |                              in-use                              |
|         volume_type          |                               None                               |
+------------------------------+------------------------------------------------------------------+

(4) Show the volume's status again; it is now attached to the instance.

[root@controller ~]# cinder list

(5) Inside the instance, the volume appears as /dev/vdb.

$ sudo fdisk -l

Disk /dev/vda: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *       16065     2088449     1036192+  83  Linux

Disk /dev/vdb: 1073 MB, 1073741824 bytes
16 heads, 63 sectors/track, 2080 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/vdb doesn't contain a valid partition table

5.3 Extend the volume

(1) Detach the volume with nova volume-detach, where cirros is the instance name (the instance ID e8cfdd25-45bb-406c-a9f6-23e6d51af094 works as well) and f4dcfa82-f280-4e5f-8ca3-cc891402042a is the volume ID shown by cinder list.

[root@controller ~]# nova volume-detach cirros f4dcfa82-f280-4e5f-8ca3-cc891402042a

(2) Show the volume's status.

[root@controller ~]# cinder list

(3) Extend the volume cinder-disk to 2 GB.

[root@controller ~]# cinder extend cinder-disk 2

(4) Show the volume's status.

[root@controller ~]# cinder list

(5) Reattach the volume to the instance to confirm that it has grown to 2 GB.

[root@controller ~]# nova volume-attach cirros f4dcfa82-f280-4e5f-8ca3-cc891402042a /dev/vdb

$ sudo fdisk -l /dev/vdb

Disk /dev/vdb: 2147 MB, 2147483648 bytes
16 heads, 63 sectors/track, 4161 cylinders, total 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/vdb doesn't contain a valid partition table

(6) Detach and delete the volume.

[root@controller ~]# nova volume-detach cirros f4dcfa82-f280-4e5f-8ca3-cc891402042a
[root@controller ~]# cinder delete cinder-disk
[root@controller ~]# cinder list        # the volume is being deleted
[root@controller ~]# cinder list        # the volume is gone
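Copying volume IDs out of the cinder list table by hand is error-prone. A small awk sketch that extracts the ID by display name; the table below is an inlined sample so the extraction can be tried offline, and in practice you would pipe cinder list straight into the awk command.

```shell
#!/bin/sh
# Sketch: pull a volume's ID out of `cinder list` output by display name.
# The sample table mirrors the output shown earlier in this walkthrough.
sample='+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| f4dcfa82-f280-4e5f-8ca3-cc891402042a | available | cinder-disk  |  1   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+'

# With whitespace splitting, field 2 is the ID and field 6 the display name
# on each data row; border rows never match.
vol_id=$(printf '%s\n' "$sample" | awk -v name=cinder-disk '$6 == name { print $2 }')
echo "$vol_id"
```

The extracted ID can then be passed to nova volume-attach or cinder show without copy/paste.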
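A closing note on section 3: the long runs of openstack-config invocations on both nodes are easy to mistype. They can instead be driven from a table, as in this dry-run sketch, which only prints each command so the list can be reviewed first (drop the echo to apply the settings for real); the sections, keys, and values are exactly those used in the block storage node steps above.

```shell
#!/bin/sh
# Sketch: table-driven generation of the openstack-config commands for
# /etc/cinder/cinder.conf on the block storage node (dry run: prints only).
CONF=/etc/cinder/cinder.conf

apply() {
    # $1 = section, $2 = key, $3 = value; echo makes this a dry run
    echo openstack-config --set "$CONF" "$1" "$2" "$3"
}

apply DEFAULT            auth_strategy     keystone
apply keystone_authtoken auth_uri          http://controller:5000
apply keystone_authtoken auth_host         controller
apply keystone_authtoken auth_protocol     http
apply keystone_authtoken auth_port         35357
apply keystone_authtoken admin_user        cinder
apply keystone_authtoken admin_tenant_name service
apply keystone_authtoken admin_password    CINDER_PASS
apply DEFAULT            rpc_backend       qpid
apply DEFAULT            qpid_hostname     controller
apply DEFAULT            my_ip             192.168.8.33
apply DEFAULT            glance_host       controller
apply DEFAULT            iscsi_helper      tgtadm
apply database           connection        mysql://cinder:CINDER_DBPASS@controller/cinder
```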