Installing Multi-Node OpenStack on CentOS 6.6

0. Topology planning

This walkthrough deploys OpenStack on two nodes: one controller node and one compute node. The controller node has a single NIC, eth0, attached to the management network. The compute node has two NICs: eth0 on the management network and eth1 on the external network. The planned two-node topology is illustrated in the accompanying diagram. This deployment uses legacy networking, that is, nova-network, to provide OpenStack network connectivity, so the Neutron component does not need to be configured.

(Note: every configurable password in this walkthrough is 123456.)

I. Installing the CentOS 6.6 operating system

1. Installing the controller node

In VMware Workstation, create a new 64-bit CentOS virtual machine. Give it 2.5 GB of memory and a 100 GB virtual disk, and use CentOS-6.6-x86_64-bin-DVD1.iso as the installation disc. Configure one NIC with its network connection set to NAT.

Install the operating system from the disc. Set the hostname to controller and manually configure the IP address, netmask, default gateway and DNS server on eth0 so that the virtual machine can reach the Internet. Here the IP address is set to 192.168.8.11.

On the partitioning screen, use automatic partitioning.

On the package selection screen, choose the Minimal installation.

After the operating system is installed, install VMware Tools to improve virtual machine performance. In the VMware Workstation menu choose "VM" → "Install VMware Tools"; the installation proceeds as follows.

The VMware Tools installer requires perl, which a Minimal CentOS installation does not include, so install perl first. Make sure the machine can reach the Internet, or use the local installation DVD as a yum source, then run the commands below. If you install from the local DVD, mount it and follow the local-mount procedure exactly, and set enabled=1 in the corresponding yum repository file so that it is actually used as a yum source.

[iyunv@controller ~]# yum install perl

Answer y to all prompts.

[iyunv@controller ~]# mkdir /media/dvd
[iyunv@controller ~]# mount -t iso9660 /dev/sr0 /media/dvd
[iyunv@controller ~]# cd /media/dvd/
[iyunv@controller dvd]# cp VMwareTools-9.9.2-2496486.tar.gz ~
[iyunv@controller dvd]# cd ~
[iyunv@controller ~]# tar -zxvf VMwareTools-9.9.2-2496486.tar.gz
[iyunv@controller ~]# cd vmware-tools-distrib/
[iyunv@controller vmware-tools-distrib]# ./vmware-install.pl

Press Enter at every prompt to accept the defaults. Reboot the virtual machine when the installation finishes.

2. Installing the compute node

In VMware Workstation, create a new 64-bit CentOS virtual machine. Give it 2.5 GB of memory and, in the processor settings, enable "Virtualize Intel VT-x/EPT or AMD-V/RVI". Use a 100 GB virtual disk and CentOS-6.6-x86_64-bin-DVD1.iso as the installation disc. Configure two NICs: the first with its network connection set to NAT, the second set to host-only.

Install the operating system from the disc. Set the hostname to compute and manually configure the IP address, netmask, default gateway and DNS server on eth0 so that the virtual machine can reach the Internet; eth1 does not need to be configured. Here the eth0 IP address is set to 192.168.8.22.

On the partitioning screen, use automatic partitioning. On the package selection screen, choose the Minimal installation. After the operating system is installed, install VMware Tools in the same way as on the controller node.

II. Basic OpenStack environment configuration

1. Network configuration

---------------------------------------------- controller node: configuration start ----------------------------------------------

(1) Edit the NIC configuration file and confirm the IP settings so that CentOS can reach the Internet.

[iyunv@controller ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
UUID=2bbafbcb-4250-4356-a26f-f56d4b37f086   # unique ID of the NIC, do not change
ONBOOT=yes
NM_CONTROLLED=no                            # do not let NetworkManager manage this NIC; usually this is the only line that needs changing
BOOTPROTO=none
IPADDR=192.168.8.11
PREFIX=24
GATEWAY=192.168.8.2
DNS1=8.8.8.8
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
HWADDR=00:0C:29:DE:EE:BF                    # MAC address of the NIC, do not change
LAST_CONNECT=1427285903

[iyunv@controller ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:DE:EE:BF
          inet addr:192.168.8.11  Bcast:192.168.8.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fede:eebf/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:400 errors:0 dropped:0 overruns:0 frame:0
          TX packets:274 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:34862 (34.0 KiB)  TX bytes:29839 (29.1 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

(2) Configure local name resolution so that the controller and compute hostnames resolve locally.

[iyunv@controller ~]# hostname controller
[iyunv@controller ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=controller
GATEWAY=192.168.8.2

[iyunv@controller ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.8.11 controller
192.168.8.22 compute

---------------------------------------------- controller node: configuration end ----------------------------------------------
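Since NM_CONTROLLED=no is used in the configuration above, it is also worth making sure the NetworkManager service is not managing the interfaces and that the static configuration has actually been applied. The following optional commands are only a minimal sketch; on a Minimal installation the NetworkManager package may not be present at all, in which case the first command can simply be skipped.

[iyunv@controller ~]# chkconfig NetworkManager off    # only if the NetworkManager package is installed
[iyunv@controller ~]# service network restart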
---------------------------------------------- compute node: configuration start ----------------------------------------------

(1) Edit the NIC configuration files and confirm the IP settings so that CentOS can reach the Internet.

[iyunv@compute ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
UUID=251c00d2-2a0f-461b-b9d1-e6692769e290
ONBOOT=yes
NM_CONTROLLED=no                            # do not let NetworkManager manage this NIC; usually this is the only line that needs changing
BOOTPROTO=none
HWADDR=00:0C:29:48:B3:10
IPADDR=192.168.8.22
PREFIX=24
GATEWAY=192.168.8.2
DNS1=8.8.8.8
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"

[iyunv@compute ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=00:0C:29:48:B3:1A
TYPE=Ethernet
UUID=834d90f3-ae1e-4f7f-b6a2-0e5a470218da
ONBOOT=yes          # bring this NIC up at boot
NM_CONTROLLED=no    # do not let NetworkManager manage this NIC
BOOTPROTO=none      # do not obtain an IP address via DHCP

(2) Restart the network service.

[iyunv@compute ~]# service network restart
Shutting down interface eth0:                              [  OK  ]
Shutting down loopback interface:                          [  OK  ]
Bringing up loopback interface:                            [  OK  ]
Bringing up interface eth0:  Determining if ip address 192.168.8.22 is already in use for device eth0...
                                                           [  OK  ]
Bringing up interface eth1:                                [  OK  ]

[iyunv@compute ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:48:B3:10
          inet addr:192.168.8.22  Bcast:192.168.8.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe48:b310/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:421 errors:0 dropped:0 overruns:0 frame:0
          TX packets:270 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:35753 (34.9 KiB)  TX bytes:33620 (32.8 KiB)

eth1      Link encap:Ethernet  HWaddr 00:0C:29:48:B3:1A
          inet6 addr: fe80::20c:29ff:fe48:b31a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:468 (468.0 b)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

(3) Configure local name resolution so that the controller and compute hostnames resolve locally.

[iyunv@compute ~]# hostname compute
[iyunv@compute ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=compute
GATEWAY=192.168.8.2

[iyunv@compute ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.8.11 controller
192.168.8.22 compute

---------------------------------------------- compute node: configuration end ----------------------------------------------
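Optionally, before the ping tests that follow, you can confirm on each node that the /etc/hosts entries resolve as expected. getent is part of glibc, so it is available even on a Minimal installation, and it should print the two address lines added above.

[iyunv@compute ~]# getent hosts controller compute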
Before doing any further configuration, test network connectivity and name resolution by pinging both hostnames from the controller node and from the compute node.

[iyunv@controller ~]# ping -c 4 controller
PING controller (192.168.8.11) 56(84) bytes of data.
64 bytes from controller (192.168.8.11): icmp_seq=1 ttl=64 time=0.049 ms
64 bytes from controller (192.168.8.11): icmp_seq=2 ttl=64 time=0.045 ms
64 bytes from controller (192.168.8.11): icmp_seq=3 ttl=64 time=0.048 ms
64 bytes from controller (192.168.8.11): icmp_seq=4 ttl=64 time=0.056 ms

--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2997ms
rtt min/avg/max/mdev = 0.045/0.049/0.056/0.008 ms

[iyunv@controller ~]# ping -c 4 compute
PING compute (192.168.8.22) 56(84) bytes of data.
64 bytes from compute (192.168.8.22): icmp_seq=1 ttl=64 time=27.5 ms
64 bytes from compute (192.168.8.22): icmp_seq=2 ttl=64 time=0.493 ms
64 bytes from compute (192.168.8.22): icmp_seq=3 ttl=64 time=0.504 ms
64 bytes from compute (192.168.8.22): icmp_seq=4 ttl=64 time=0.419 ms

--- compute ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3015ms
rtt min/avg/max/mdev = 0.419/7.248/27.579/11.738 ms

[iyunv@compute ~]# ping -c 4 controller
PING controller (192.168.8.11) 56(84) bytes of data.
64 bytes from controller (192.168.8.11): icmp_seq=1 ttl=64 time=0.539 ms
64 bytes from controller (192.168.8.11): icmp_seq=2 ttl=64 time=0.486 ms
64 bytes from controller (192.168.8.11): icmp_seq=3 ttl=64 time=0.483 ms
64 bytes from controller (192.168.8.11): icmp_seq=4 ttl=64 time=0.500 ms

--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3024ms
rtt min/avg/max/mdev = 0.483/0.502/0.539/0.022 ms

[iyunv@compute ~]# ping -c 4 compute
PING compute (192.168.8.22) 56(84) bytes of data.
64 bytes from compute (192.168.8.22): icmp_seq=1 ttl=64 time=0.041 ms
64 bytes from compute (192.168.8.22): icmp_seq=2 ttl=64 time=0.050 ms
64 bytes from compute (192.168.8.22): icmp_seq=3 ttl=64 time=0.050 ms
64 bytes from compute (192.168.8.22): icmp_seq=4 ttl=64 time=0.050 ms

--- compute ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3020ms
rtt min/avg/max/mdev = 0.041/0.047/0.050/0.009 ms

2. Flushing the firewall rules

Perform the following steps on both the controller and compute nodes.

(1) Flush all rules in the filter table.

[iyunv@controller ~]# iptables -F

(2) Confirm that the default policy of every chain is ACCEPT and that the rules have been cleared.

[iyunv@controller ~]# iptables-save
# Generated by iptables-save v1.4.7 on Wed Mar 25 15:26:11 2015
*filter
:INPUT ACCEPT [25:1624]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [13:1212]
COMMIT
# Completed on Wed Mar 25 15:26:11 2015

(3) Save the firewall configuration.

[iyunv@controller ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables: [  OK  ]

3. Setting SELinux to permissive mode

Perform the following steps on both the controller and compute nodes.

(1) Switch SELinux to permissive mode.

[iyunv@controller ~]# setenforce 0

(2) Edit the SELinux configuration file.

[iyunv@controller ~]# vi /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=permissive        # set SELinux to permissive mode
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

(3) Reboot the system.

[iyunv@controller ~]# shutdown -r now

After the reboot, check the SELinux mode.

[iyunv@controller ~]# getenforce
Permissive
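If you prefer a non-interactive way of making the same SELinux change on both nodes, the SELINUX= line can also be edited with sed; this is simply an equivalent of the manual edit above (on CentOS 6, /etc/sysconfig/selinux is a symbolic link to /etc/selinux/config).

[iyunv@controller ~]# sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config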
4. Configuring the NTP service

---------------------------------------------- controller node: configuration start ----------------------------------------------

(1) Install the NTP server.

[iyunv@controller ~]# yum install ntp

(2) Edit the main NTP configuration file.

[iyunv@controller ~]# vi /etc/ntp.conf

Add the following line:

restrict 192.168.8.0 mask 255.255.255.0 nomodify    # allow the 192.168.8.0/24 network to use this server

(3) Start the NTP server and configure the service to start automatically at boot.

[iyunv@controller ~]# service ntpd start
Starting ntpd:                                             [  OK  ]
[iyunv@controller ~]# chkconfig ntpd on

(4) Check the result of the NTP configuration.

[iyunv@controller ~]# ntpstat        # check whether this server has obtained time from an upstream NTP server
synchronised to NTP server (202.112.29.82) at stratum 3
   time correct to within 332 ms
   polling server every 64 s

[iyunv@controller ~]# ntpq -p        # list the upstream NTP servers; * marks the one currently in use
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*dns1.synet.edu. 202.118.1.46     2 u   52   64   17   97.339   40.437   9.299
+gus.buptnet.edu 202.112.10.60    3 u  123   64   16   98.632   35.008   8.558

---------------------------------------------- controller node: configuration end ----------------------------------------------

---------------------------------------------- compute node: configuration start ----------------------------------------------

(1) Install the NTP client program ntpdate.

Install the NTP client on the compute node so that it can obtain time information from the NTP server on the controller.

[iyunv@compute ~]# yum install ntpdate

(2) Use ntpdate to fetch the time from the NTP server.

[iyunv@compute ~]# ntpdate 192.168.8.11        # synchronize manually against the controller's NTP server
25 Mar 17:43:51 ntpdate[1690]: adjust time server 192.168.8.11 offset 0.048645 sec

(3) Install the NTP server.

ntpdate only synchronizes the clock manually. Although you could set up a scheduled task to run ntpdate periodically (a sample cron entry is shown after this section), the recommended approach is to run the NTP server on the non-controller nodes as well and let them obtain time from the controller.

[iyunv@compute ~]# yum install ntp

(4) Edit the main NTP configuration file.

[iyunv@compute ~]# vi /etc/ntp.conf

Make the following changes:

restrict 192.168.8.11                    # allow the controller as an NTP source
#server 0.centos.pool.ntp.org iburst     # comment out the default upstream Internet NTP servers
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
server 192.168.8.11                      # use the controller's IP address as the upstream NTP server

(5) Start the NTP server and configure the service to start automatically at boot.

[iyunv@compute ~]# service ntpd start
Starting ntpd:                                             [  OK  ]
[iyunv@compute ~]# chkconfig ntpd on

(6) Check the result of the NTP configuration.

Wait 5-15 minutes, then check whether the compute node has synchronized with the controller's NTP server.

[iyunv@compute ~]# ntpstat
synchronised to NTP server (192.168.8.11) at stratum 4
   time correct to within 1135 ms
   polling server every 64 s

[iyunv@compute ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*controller      202.112.29.82    3 u   40   64   17    0.519  -20.792   0.565

---------------------------------------------- compute node: configuration end ----------------------------------------------
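For reference, the cron-based alternative mentioned in step (3) above could look like the entry below in root's crontab on the compute node; the 30-minute interval is only an example, and running ntpd as described above remains the recommended approach.

[iyunv@compute ~]# crontab -e
*/30 * * * * /usr/sbin/ntpdate 192.168.8.11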
5. Configuring the database service

---------------------------------------------- controller node: configuration start ----------------------------------------------

(1) On the controller node, install the MySQL client and server as well as the MySQL Python library.

[iyunv@controller ~]# yum install mysql mysql-server MySQL-python

(2) Edit the main MySQL configuration file.

[iyunv@controller ~]# vi /etc/my.cnf

In the [mysqld] section, add the following settings:

bind-address = 192.168.8.11
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

(3) Start the MySQL server.

[iyunv@controller ~]# service mysqld start

(4) Configure the MySQL server to start automatically at boot.

[iyunv@controller ~]# chkconfig mysqld on

(5) Initialize the MySQL databases.

[iyunv@controller ~]# mysql_install_db

(6) Set the password of the MySQL root user to 123456.

[iyunv@controller ~]# /usr/bin/mysqladmin -u root password '123456'

(7) Use mysql_secure_installation to harden the MySQL server.

[iyunv@controller ~]# mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MySQL
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MySQL to secure it, we'll need the current
password for the root user.  If you've just installed MySQL, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):        # enter the MySQL root password
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MySQL
root user without the proper authorisation.

You already have a root password set, so you can safely answer 'n'.

Change the root password? [Y/n] n
 ... skipping.

By default, a MySQL installation has an anonymous user, allowing anyone
to log into MySQL without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
 ... Success!

By default, MySQL comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MySQL
installation should now be secure.

Thanks for using MySQL!

---------------------------------------------- controller node: configuration end ----------------------------------------------

---------------------------------------------- compute node: configuration start ----------------------------------------------

Install the MySQL Python library.

[iyunv@compute ~]# yum install MySQL-python

---------------------------------------------- compute node: configuration end ----------------------------------------------
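As an optional sanity check on the controller, you can log in with the new root password and confirm that the character-set and storage-engine settings from /etc/my.cnf took effect; the exact variable names and values shown depend on the MySQL version shipped with CentOS 6.

[iyunv@controller ~]# mysql -u root -p -e "SHOW VARIABLES LIKE 'character_set_server'; SHOW VARIABLES LIKE '%storage_engine%';"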
6. Adding the OpenStack software repositories and updating the system

Perform the following steps on both the controller and compute nodes.

(1) Install the yum-plugin-priorities plugin.

The yum-plugin-priorities plugin lets yum assign relative priorities to the configured repositories; it is required by the OpenStack RDO distribution packages.

[iyunv@controller ~]# yum install yum-plugin-priorities

(2) Add the EPEL and OpenStack RDO repositories.

Install wget.

In a browser, go to https://repos.fedorapeople.org/repos/openstack/openstack-icehouse, download rdo-release-icehouse-4.noarch.rpm, and transfer the file to the /root directory on CentOS via SFTP or another method.

In a browser, go to http://mirrors.aliyun.com, click "help" next to the epel entry in the file list, and read the instructions there.

In a browser, go to http://mirrors.aliyun.com/repo, download epel-6.repo, and transfer the file to the /root directory on CentOS via SFTP or another method.

[iyunv@controller ~]# rpm -ivh rdo-release-icehouse-4.noarch.rpm
warning: rdo-release-icehouse-4.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID 0e4fbd28: NOKEY
Preparing...                ########################################### [100%]
   1:rdo-release            ########################################### [100%]
[iyunv@controller ~]# rpm -ivh epel-release-6-8.noarch.rpm
warning: epel-release-6-8.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Preparing...                ########################################### [100%]
   1:epel-release           ########################################### [100%]
[iyunv@controller ~]# cd /etc/yum.repos.d/
[iyunv@controller yum.repos.d]# ls
CentOS-Base.repo       CentOS-Media.repo  epel-testing.repo  rdo-release.repo
CentOS-Debuginfo.repo  CentOS-Vault.repo  foreman.repo
CentOS-fasttrack.repo  epel.repo          puppetlabs.repo
[iyunv@controller yum.repos.d]# mv epel.repo epel.repo.bak
[iyunv@controller yum.repos.d]# cp /root/epel-6.repo .
[iyunv@controller yum.repos.d]# ls
CentOS-Base.repo       CentOS-Media.repo  epel.repo.bak      puppetlabs.repo
CentOS-Debuginfo.repo  CentOS-Vault.repo  epel-testing.repo  rdo-release.repo
CentOS-fasttrack.repo  epel-6.repo        foreman.repo

(3) Refresh the package lists.

[iyunv@controller yum.repos.d]# yum makecache

(4) Install the openstack-utils package.

openstack-utils makes installing and configuring OpenStack easier.

[iyunv@controller yum.repos.d]# yum install openstack-utils

Answer y to all prompts.

(5) Install the openstack-selinux package.

openstack-selinux contains the policy files needed to configure SELinux when installing OpenStack on RHEL and CentOS.

[iyunv@controller yum.repos.d]# yum install openstack-selinux

(6) Upgrade the system packages.

[iyunv@controller yum.repos.d]# yum upgrade

Answer y to all prompts, and reboot the system when the upgrade finishes.

7. Configuring the message broker

The message broker only needs to be installed on the controller node.

(1) Install qpid-cpp-server.

[iyunv@controller ~]# yum install qpid-cpp-server

(2) Edit the main Qpid configuration file.

To simplify the OpenStack installation, it is recommended to disable authentication on the message broker.

[iyunv@controller ~]# vi /etc/qpidd.conf

Change the following setting:

auth=no

(3) Start the Qpid message broker and configure the service to start automatically at boot.

[iyunv@controller ~]# service qpidd start
Starting Qpid AMQP daemon:                                 [  OK  ]
[iyunv@controller ~]# chkconfig qpidd on
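To confirm that the message broker is actually accepting connections, you can check that qpidd is listening on the default AMQP port, 5672; netstat comes from the net-tools package, which is normally present on CentOS 6.

[iyunv@controller ~]# netstat -tnlp | grep 5672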