Posted by zi663227 on 2017-6-26 17:38:05

install openstack by apt-get

  1. Install the dashboard
  Install the packages:
  # apt-get install openstack-dashboard
  Edit the /etc/openstack-dashboard/local_settings.py file and complete the following actions:
  Configure the dashboard to use OpenStack services on the controller node:
  OPENSTACK_HOST = "controller"
  Allow all hosts to access the dashboard:
  ALLOWED_HOSTS = ['*', ]
  Configure the memcached session storage service:
  SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
  CACHES = {
  'default': {
  'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
  'LOCATION': 'controller:11211',
  }
  }
  Note
  Comment out any other session storage configuration.
  Enable the Identity API version 3:
  OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
  Enable support for domains:
  OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
  Configure API versions:
  OPENSTACK_API_VERSIONS = {
  "identity": 3,
  "image": 2,
  "volume": 2,
  }
  Configure default as the default domain for users that you create via the dashboard:
  OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
  Configure user as the default role for users that you create via the dashboard:
  OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
  If you chose networking option 1, disable support for layer-3 networking services:
  OPENSTACK_NEUTRON_NETWORK = {
  ...
  'enable_router': False,
  'enable_quotas': False,
  'enable_distributed_router': False,
  'enable_ha_router': False,
  'enable_lb': False,
  'enable_firewall': False,
  'enable_vpn': False,
  'enable_fip_topology_check': False,
  }
  Optionally, configure the time zone:
  TIME_ZONE = "TIME_ZONE"
  Replace TIME_ZONE with an appropriate time zone identifier. For more information, see the list of time zones.
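Since local_settings.py is ordinary Python, the settings above can be sanity-checked outside Apache. A minimal sketch using only the values from this guide (nothing here talks to a real OpenStack service):

```python
# local_settings.py is plain Python: the Identity URL is built with
# %-formatting when Django imports the file.
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

# Same memcached endpoint as configured for SESSION_ENGINE above.
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

print(OPENSTACK_KEYSTONE_URL)   # http://controller:5000/v3
```

If the dashboard later fails to reach keystone, printing OPENSTACK_KEYSTONE_URL this way is a quick check that the host substitution came out as expected.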
  Finalize installation
  Reload the web server configuration:
  # service apache2 reload
  2. Install manila-api and manila-scheduler
  Before you install and configure the Share File System service, you must create a database, service credentials, and API endpoints.
  To create the database, complete these steps:
  Use the database access client to connect to the database server as the root user:
  $ mysql -u root -p
  Create the manila database:
  CREATE DATABASE manila;
  Grant proper access to the manila database:
  GRANT ALL PRIVILEGES ON manila.* TO 'manila'@'localhost' \
  IDENTIFIED BY 'MANILA_DBPASS';
  GRANT ALL PRIVILEGES ON manila.* TO 'manila'@'%' \
  IDENTIFIED BY 'MANILA_DBPASS';
  Replace MANILA_DBPASS with a suitable password.
  Exit the database access client.
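If you prefer to script the database step, the two GRANT statements can be rendered with the real password substituted before feeding them to the mysql client. A sketch (render_grants is a hypothetical helper, and 'MANILA_DBPASS' stands in for your actual password):

```python
# Hypothetical helper: render the GRANT statements for both hosts
# ('localhost' and '%') with the chosen password substituted.
def render_grants(db, user, password):
    tmpl = ("GRANT ALL PRIVILEGES ON {db}.* TO '{user}'@'{host}' "
            "IDENTIFIED BY '{password}';")
    return [tmpl.format(db=db, user=user, host=h, password=password)
            for h in ('localhost', '%')]

# 'MANILA_DBPASS' is the guide's placeholder, not a real password.
statements = render_grants('manila', 'manila', 'MANILA_DBPASS')
for s in statements:
    print(s)
```

The same helper works unchanged for the cinder database later in this guide.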
  Source the admin credentials to gain access to admin-only CLI commands:
  $ . admin-openrc
  To create the service credentials, complete these steps:
  Create a manila user:
  $ openstack user create --domain default --password-prompt manila
  User Password:
  Repeat User Password:
  +-----------+----------------------------------+
  | Field     | Value                            |
  +-----------+----------------------------------+
  | domain_id | e0353a670a9e496da891347c589539e9 |
  | enabled   | True                             |
  | id        | 83a3990fc2144100ba0e2e23886d8acc |
  | name      | manila                           |
  +-----------+----------------------------------+
  Add the admin role to the manila user:
  $ openstack role add --project service --user manila admin
  Note
  This command provides no output.
  Create the manila and manilav2 service entities:
  $ openstack service create --name manila \
  --description "OpenStack Shared File Systems" share
  +-------------+----------------------------------+
  | Field       | Value                            |
  +-------------+----------------------------------+
  | description | OpenStack Shared File Systems    |
  | enabled     | True                             |
  | id          | 82378b5a16b340aa9cc790cdd46a03ba |
  | name        | manila                           |
  | type        | share                            |
  +-------------+----------------------------------+
  $ openstack service create --name manilav2 \
  --description "OpenStack Shared File Systems" sharev2
  +-------------+----------------------------------+
  | Field       | Value                            |
  +-------------+----------------------------------+
  | description | OpenStack Shared File Systems    |
  | enabled     | True                             |
  | id          | 30d92a97a81a4e5d8fd97a32bafd7b88 |
  | name        | manilav2                         |
  | type        | sharev2                          |
  +-------------+----------------------------------+
  Note
  The Share File System services require two service entities.
  Create the Shared File Systems service API endpoints:
  $ openstack endpoint create --region RegionOne \
  share public http://controller:8786/v1/%\(tenant_id\)s
  +--------------+-----------------------------------------+
  | Field        | Value                                   |
  +--------------+-----------------------------------------+
  | enabled      | True                                    |
  | id           | 0bd2bbf8d28b433aaea56a254c69f69d        |
  | interface    | public                                  |
  | region       | RegionOne                               |
  | region_id    | RegionOne                               |
  | service_id   | 82378b5a16b340aa9cc790cdd46a03ba        |
  | service_name | manila                                  |
  | service_type | share                                   |
  | url          | http://controller:8786/v1/%(tenant_id)s |
  +--------------+-----------------------------------------+
  $ openstack endpoint create --region RegionOne \
  share internal http://controller:8786/v1/%\(tenant_id\)s
  +--------------+-----------------------------------------+
  | Field        | Value                                   |
  +--------------+-----------------------------------------+
  | enabled      | True                                    |
  | id           | a2859b5732cc48b5b083dd36dafb6fd9        |
  | interface    | internal                                |
  | region       | RegionOne                               |
  | region_id    | RegionOne                               |
  | service_id   | 82378b5a16b340aa9cc790cdd46a03ba        |
  | service_name | manila                                  |
  | service_type | share                                   |
  | url          | http://controller:8786/v1/%(tenant_id)s |
  +--------------+-----------------------------------------+
  $ openstack endpoint create --region RegionOne \
  share admin http://controller:8786/v1/%\(tenant_id\)s
  +--------------+-----------------------------------------+
  | Field        | Value                                   |
  +--------------+-----------------------------------------+
  | enabled      | True                                    |
  | id           | f7f46df93a374cc49c0121bef41da03c        |
  | interface    | admin                                   |
  | region       | RegionOne                               |
  | region_id    | RegionOne                               |
  | service_id   | 82378b5a16b340aa9cc790cdd46a03ba        |
  | service_name | manila                                  |
  | service_type | share                                   |
  | url          | http://controller:8786/v1/%(tenant_id)s |
  +--------------+-----------------------------------------+
  $ openstack endpoint create --region RegionOne \
  sharev2 public http://controller:8786/v2/%\(tenant_id\)s
  +--------------+-----------------------------------------+
  | Field        | Value                                   |
  +--------------+-----------------------------------------+
  | enabled      | True                                    |
  | id           | d63cc0d358da4ea680178657291eddc1        |
  | interface    | public                                  |
  | region       | RegionOne                               |
  | region_id    | RegionOne                               |
  | service_id   | 30d92a97a81a4e5d8fd97a32bafd7b88        |
  | service_name | manilav2                                |
  | service_type | sharev2                                 |
  | url          | http://controller:8786/v2/%(tenant_id)s |
  +--------------+-----------------------------------------+
  $ openstack endpoint create --region RegionOne \
  sharev2 internal http://controller:8786/v2/%\(tenant_id\)s
  +--------------+-----------------------------------------+
  | Field        | Value                                   |
  +--------------+-----------------------------------------+
  | enabled      | True                                    |
  | id           | afc86e5f50804008add349dba605da54        |
  | interface    | internal                                |
  | region       | RegionOne                               |
  | region_id    | RegionOne                               |
  | service_id   | 30d92a97a81a4e5d8fd97a32bafd7b88        |
  | service_name | manilav2                                |
  | service_type | sharev2                                 |
  | url          | http://controller:8786/v2/%(tenant_id)s |
  +--------------+-----------------------------------------+
  $ openstack endpoint create --region RegionOne \
  sharev2 admin http://controller:8786/v2/%\(tenant_id\)s
  +--------------+-----------------------------------------+
  | Field        | Value                                   |
  +--------------+-----------------------------------------+
  | enabled      | True                                    |
  | id           | e814a0cec40546e98cf0c25a82498483        |
  | interface    | admin                                   |
  | region       | RegionOne                               |
  | region_id    | RegionOne                               |
  | service_id   | 30d92a97a81a4e5d8fd97a32bafd7b88        |
  | service_name | manilav2                                |
  | service_type | sharev2                                 |
  | url          | http://controller:8786/v2/%(tenant_id)s |
  +--------------+-----------------------------------------+
  Note
  The Share File System services require endpoints for each service entity.
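The backslashes in %\(tenant_id\)s only keep the shell from interpreting the parentheses; what keystone stores is the literal template, which clients expand per request with Python %-formatting. A sketch with a made-up tenant id:

```python
# What keystone actually stores for the endpoint URL:
template = "http://controller:8786/v1/%(tenant_id)s"

# At request time the client substitutes the caller's project id.
# The tenant id below is a made-up example, not one from this guide.
url = template % {"tenant_id": "83a3990fc2144100ba0e2e23886d8acc"}
print(url)
```

The same expansion applies to the cinder endpoints created later (port 8776).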
  Install and configure components
  Install the packages:
  # apt-get install manila-api manila-scheduler \
  python-manilaclient
  Edit the /etc/manila/manila.conf file and complete the following actions:
  In the [database] section, configure database access:
  [database]
  ...
  connection = mysql+pymysql://manila:MANILA_DBPASS@controller/manila
  Replace MANILA_DBPASS with the password you chose for the Share File System database.
  In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:
  [DEFAULT]
  ...
  rpc_backend = rabbit

  [oslo_messaging_rabbit]
  ...
  rabbit_host = controller
  rabbit_userid = openstack
  rabbit_password = RABBIT_PASS
  Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
  In the [DEFAULT] section, set the following config values:
  [DEFAULT]
  ...
  default_share_type = default_share_type
  rootwrap_config = /etc/manila/rootwrap.conf
  In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
  [DEFAULT]
  ...
  auth_strategy = keystone

  [keystone_authtoken]
  ...
  memcached_servers = controller:11211
  auth_uri = http://controller:5000
  auth_url = http://controller:35357
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = manila
  password = MANILA_PASS
  Replace MANILA_PASS with the password you chose for the manila user in the Identity service.
  In the [DEFAULT] section, configure the my_ip option to use the management interface IP address of the controller node:
  [DEFAULT]
  ...
  my_ip = 10.0.0.11
  In the [oslo_concurrency] section, configure the lock path:
  [oslo_concurrency]
  ...
  lock_path = /var/lib/manila/tmp
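The scattered options above land in three different sections of manila.conf. The sketch below uses configparser only to illustrate which option belongs where; the real file is maintained by hand and contains many more options, and MANILA_DBPASS is the guide's placeholder:

```python
import configparser
import io

# Minimal illustration of the manila.conf layout from this guide.
cfg = configparser.ConfigParser()
cfg['DEFAULT'] = {
    'rpc_backend': 'rabbit',
    'auth_strategy': 'keystone',
    'default_share_type': 'default_share_type',
    'rootwrap_config': '/etc/manila/rootwrap.conf',
    'my_ip': '10.0.0.11',
}
cfg['database'] = {
    'connection': 'mysql+pymysql://manila:MANILA_DBPASS@controller/manila',
}
cfg['oslo_concurrency'] = {'lock_path': '/var/lib/manila/tmp'}

buf = io.StringIO()
cfg.write(buf)
text = buf.getvalue()
print(text)
```

A misplaced option (for example lock_path under [DEFAULT]) is silently ignored by the service, so keeping the section layout straight matters.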
  Populate the Share File System database:
  # su -s /bin/sh -c "manila-manage db sync" manila
  Note
  Ignore any deprecation messages in this output.
  Finalize installation
  Restart the Share File Systems services:
  # service manila-scheduler restart
  # service manila-api restart
  2.1 Install the manila-share service
  Install the packages:
  # apt-get install manila-share python-pymysql
  Edit the /etc/manila/manila.conf file and complete the following actions:
  In the [database] section, configure database access:
  [database]
  ...
  connection = mysql+pymysql://manila:MANILA_DBPASS@controller/manila
  Replace MANILA_DBPASS with the password you chose for the Share File System database.
  In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:
  [DEFAULT]
  ...
  rpc_backend = rabbit

  [oslo_messaging_rabbit]
  ...
  rabbit_host = controller
  rabbit_userid = openstack
  rabbit_password = RABBIT_PASS
  Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
  In the [DEFAULT] section, set the following config values:
  [DEFAULT]
  ...
  default_share_type = default_share_type
  rootwrap_config = /etc/manila/rootwrap.conf
  In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
  [DEFAULT]
  ...
  auth_strategy = keystone

  [keystone_authtoken]
  ...
  memcached_servers = controller:11211
  auth_uri = http://controller:5000
  auth_url = http://controller:35357
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = manila
  password = MANILA_PASS
  Replace MANILA_PASS with the password you chose for the manila user in the Identity service.
  In the [DEFAULT] section, configure the my_ip option:
  [DEFAULT]
  ...
  my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
  Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your share node, typically 10.0.0.41 for the first node in the example architecture.
  In the [oslo_concurrency] section, configure the lock path:
  [oslo_concurrency]
  ...
  lock_path = /var/lib/manila/tmp
  Configure share server management support options
  The share node can support two modes, with and without the handling of share servers. The mode depends on driver support.
  Option 1 deploys the service without driver support for share server management. In this mode, the service does nothing related to networking; the operator must ensure network connectivity between instances and the NFS server. This option uses the LVM driver, which requires the LVM and NFS packages as well as an additional disk for the manila-share LVM volume group.
  Option 2 deploys the service with driver support for share server management. In this mode, the service requires the Compute (nova), Networking (neutron), and Block Storage (cinder) services for managing share servers. The information used for creating share servers is configured as share networks. This option uses the generic driver with the handling of share servers and requires attaching the selfservice network to a router.
  Warning
  A bug prevents using both driver options on the same share node. For more information, see the LVM Driver section of the Configuration Reference.
  Choose one of the following options to configure the share driver. Afterwards, return here and proceed to Finalize installation.
  Shared File Systems Option 1: No driver support for share servers management
  Shared File Systems Option 2: Driver support for share servers management
  Finalize installation
  Start the Share File Systems service including its dependencies:
  Option 2 (driver support for share server management) is chosen here.
  Install the Networking service components:
  # apt-get install neutron-plugin-linuxbridge-agent
  Configure components
  Note
  Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets indicates potential default configuration options that you should retain.
  Edit the /etc/manila/manila.conf file and complete the following actions:
  In the [DEFAULT] section, enable the generic driver and the NFS/CIFS protocols:
  [DEFAULT]
  ...
  enabled_share_backends = generic
  enabled_share_protocols = NFS,CIFS
  Note
  Back end names are arbitrary. As an example, this guide uses the name of the driver.
  In the [neutron], [nova], and [cinder] sections, enable authentication for those services:
  [neutron]
  ...
  url = http://controller:9696
  auth_uri = http://controller:5000
  auth_url = http://controller:35357
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  region_name = RegionOne
  project_name = service
  username = neutron
  password = NEUTRON_PASS

  [nova]
  ...
  auth_uri = http://controller:5000
  auth_url = http://controller:35357
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  region_name = RegionOne
  project_name = service
  username = nova
  password = NOVA_PASS

  [cinder]
  ...
  auth_uri = http://controller:5000
  auth_url = http://controller:35357
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  region_name = RegionOne
  project_name = service
  username = cinder
  password = CINDER_PASS
  In the [generic] section, configure the generic driver:
  [generic]
  share_backend_name = GENERIC
  share_driver = manila.share.drivers.generic.GenericShareDriver
  driver_handles_share_servers = True
  service_instance_flavor_id = 100
  service_image_name = manila-service-image
  service_instance_user = manila
  service_instance_password = manila
  interface_driver = manila.network.linux.interface.BridgeInterfaceDriver
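configparser can also show how manila-share will interpret these options, for example that driver_handles_share_servers is read as a boolean. A sketch using the guide's example values (the snippet below is a fragment, not the whole file):

```python
import configparser

# A fragment of the [generic] section configured above.
SNIPPET = """
[generic]
share_backend_name = GENERIC
share_driver = manila.share.drivers.generic.GenericShareDriver
driver_handles_share_servers = True
service_instance_flavor_id = 100
"""

cfg = configparser.ConfigParser()
cfg.read_string(SNIPPET)

# oslo.config applies similar type coercion when the service starts.
dhss = cfg.getboolean('generic', 'driver_handles_share_servers')
flavor = cfg.getint('generic', 'service_instance_flavor_id')
print(dhss, flavor)
```

Typos such as driver_handles_share_servers = Ture would fail this kind of boolean parse, which is an easy pre-restart check.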
  Note
  You can also use SSH keys instead of password authentication for service instance credentials.
  Finally, restart manila-share:
  # service manila-share restart
  3. Install cinder
  Before you install and configure the Block Storage service, you must create a database, service credentials, and API endpoints.
  To create the database, complete these steps:
  Use the database access client to connect to the database server as the root user:
  $ mysql -u root -p
  Create the cinder database:
  CREATE DATABASE cinder;
  Grant proper access to the cinder database:
  GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
  IDENTIFIED BY 'CINDER_DBPASS';
  GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
  IDENTIFIED BY 'CINDER_DBPASS';
  Replace CINDER_DBPASS with a suitable password.
  Exit the database access client.
  Source the admin credentials to gain access to admin-only CLI commands:
  $ . admin-openrc
  To create the service credentials, complete these steps:
  Create a cinder user:
  $ openstack user create --domain default --password-prompt cinder
  User Password:
  Repeat User Password:
  +-----------+----------------------------------+
  | Field     | Value                            |
  +-----------+----------------------------------+
  | domain_id | e0353a670a9e496da891347c589539e9 |
  | enabled   | True                             |
  | id        | bb279f8ffc444637af38811a5e1f0562 |
  | name      | cinder                           |
  +-----------+----------------------------------+
  Add the admin role to the cinder user:
  $ openstack role add --project service --user cinder admin
  Note
  This command provides no output.
  Create the cinder and cinderv2 service entities:
  $ openstack service create --name cinder \
  --description "OpenStack Block Storage" volume
  +-------------+----------------------------------+
  | Field       | Value                            |
  +-------------+----------------------------------+
  | description | OpenStack Block Storage          |
  | enabled     | True                             |
  | id          | ab3bbbef780845a1a283490d281e7fda |
  | name        | cinder                           |
  | type        | volume                           |
  +-------------+----------------------------------+
  $ openstack service create --name cinderv2 \
  --description "OpenStack Block Storage" volumev2
  +-------------+----------------------------------+
  | Field       | Value                            |
  +-------------+----------------------------------+
  | description | OpenStack Block Storage          |
  | enabled     | True                             |
  | id          | eb9fd245bdbc414695952e93f29fe3ac |
  | name        | cinderv2                         |
  | type        | volumev2                         |
  +-------------+----------------------------------+
  Note
  The Block Storage services require two service entities.
  Create the Block Storage service API endpoints:
  $ openstack endpoint create --region RegionOne \
  volume public http://controller:8776/v1/%\(tenant_id\)s
  +--------------+-----------------------------------------+
  | Field        | Value                                   |
  +--------------+-----------------------------------------+
  | enabled      | True                                    |
  | id           | 03fa2c90153546c295bf30ca86b1344b        |
  | interface    | public                                  |
  | region       | RegionOne                               |
  | region_id    | RegionOne                               |
  | service_id   | ab3bbbef780845a1a283490d281e7fda        |
  | service_name | cinder                                  |
  | service_type | volume                                  |
  | url          | http://controller:8776/v1/%(tenant_id)s |
  +--------------+-----------------------------------------+
  $ openstack endpoint create --region RegionOne \
  volume internal http://controller:8776/v1/%\(tenant_id\)s
  +--------------+-----------------------------------------+
  | Field        | Value                                   |
  +--------------+-----------------------------------------+
  | enabled      | True                                    |
  | id           | 94f684395d1b41068c70e4ecb11364b2        |
  | interface    | internal                                |
  | region       | RegionOne                               |
  | region_id    | RegionOne                               |
  | service_id   | ab3bbbef780845a1a283490d281e7fda        |
  | service_name | cinder                                  |
  | service_type | volume                                  |
  | url          | http://controller:8776/v1/%(tenant_id)s |
  +--------------+-----------------------------------------+
  $ openstack endpoint create --region RegionOne \
  volume admin http://controller:8776/v1/%\(tenant_id\)s
  +--------------+-----------------------------------------+
  | Field        | Value                                   |
  +--------------+-----------------------------------------+
  | enabled      | True                                    |
  | id           | 4511c28a0f9840c78bacb25f10f62c98        |
  | interface    | admin                                   |
  | region       | RegionOne                               |
  | region_id    | RegionOne                               |
  | service_id   | ab3bbbef780845a1a283490d281e7fda        |
  | service_name | cinder                                  |
  | service_type | volume                                  |
  | url          | http://controller:8776/v1/%(tenant_id)s |
  +--------------+-----------------------------------------+
  $ openstack endpoint create --region RegionOne \
  volumev2 public http://controller:8776/v2/%\(tenant_id\)s
  +--------------+-----------------------------------------+
  | Field        | Value                                   |
  +--------------+-----------------------------------------+
  | enabled      | True                                    |
  | id           | 513e73819e14460fb904163f41ef3759        |
  | interface    | public                                  |
  | region       | RegionOne                               |
  | region_id    | RegionOne                               |
  | service_id   | eb9fd245bdbc414695952e93f29fe3ac        |
  | service_name | cinderv2                                |
  | service_type | volumev2                                |
  | url          | http://controller:8776/v2/%(tenant_id)s |
  +--------------+-----------------------------------------+
  $ openstack endpoint create --region RegionOne \
  volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
  +--------------+-----------------------------------------+
  | Field        | Value                                   |
  +--------------+-----------------------------------------+
  | enabled      | True                                    |
  | id           | 6436a8a23d014cfdb69c586eff146a32        |
  | interface    | internal                                |
  | region       | RegionOne                               |
  | region_id    | RegionOne                               |
  | service_id   | eb9fd245bdbc414695952e93f29fe3ac        |
  | service_name | cinderv2                                |
  | service_type | volumev2                                |
  | url          | http://controller:8776/v2/%(tenant_id)s |
  +--------------+-----------------------------------------+
  $ openstack endpoint create --region RegionOne \
  volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
  +--------------+-----------------------------------------+
  | Field        | Value                                   |
  +--------------+-----------------------------------------+
  | enabled      | True                                    |
  | id           | e652cf84dd334f359ae9b045a2c91d96        |
  | interface    | admin                                   |
  | region       | RegionOne                               |
  | region_id    | RegionOne                               |
  | service_id   | eb9fd245bdbc414695952e93f29fe3ac        |
  | service_name | cinderv2                                |
  | service_type | volumev2                                |
  | url          | http://controller:8776/v2/%(tenant_id)s |
  +--------------+-----------------------------------------+
  Note
  The Block Storage services require endpoints for each service entity.
  Install and configure components
  Install the packages:
  # apt-get install cinder-api cinder-scheduler
  Edit the /etc/cinder/cinder.conf file and complete the following actions:
  In the [database] section, configure database access:
  [database]
  ...
  connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
  Replace CINDER_DBPASS with the password you chose for the Block Storage database.
  In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:
  [DEFAULT]
  ...
  rpc_backend = rabbit

  [oslo_messaging_rabbit]
  ...
  rabbit_host = controller
  rabbit_userid = openstack
  rabbit_password = RABBIT_PASS
  Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
  In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
  [DEFAULT]
  ...
  auth_strategy = keystone

  [keystone_authtoken]
  ...
  auth_uri = http://controller:5000
  auth_url = http://controller:35357
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = cinder
  password = CINDER_PASS
  Replace CINDER_PASS with the password you chose for the cinder user in the Identity service.
  Note
  Comment out or remove any other options in the [keystone_authtoken] section.
  In the [DEFAULT] section, configure the my_ip option to use the management interface IP address of the controller node:
  [DEFAULT]
  ...
  my_ip = 10.0.0.11
  In the [oslo_concurrency] section, configure the lock path:
  [oslo_concurrency]
  ...
  lock_path = /var/lib/cinder/tmp
  Populate the Block Storage database:
  # su -s /bin/sh -c "cinder-manage db sync" cinder
  Note
  Ignore any deprecation messages in this output.
  Configure Compute to use Block Storage
  Edit the /etc/nova/nova.conf file and add the following to it:
  [cinder]
  os_region_name = RegionOne
  Finalize installation
  Restart the Compute API service:
  # service nova-api restart
  Restart the Block Storage services:
  # service cinder-scheduler restart
  # service cinder-api restart
  3.1 Install the cinder storage node
  Before you install and configure the Block Storage service on the storage node, you must prepare the storage device.
  Note
  Perform these steps on the storage node.
  Install the supporting utility packages:
  # apt-get install lvm2
  Note
  Some distributions include LVM by default.
  Create the LVM physical volume /dev/sdb:
  # pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
  Create the LVM volume group cinder-volumes:
  # vgcreate cinder-volumes /dev/sdb
  Volume group "cinder-volumes" successfully created
  The Block Storage service creates logical volumes in this volume group.
  Only instances can access Block Storage volumes. However, the underlying operating system manages the devices associated with the volumes. By default, the LVM volume scanning tool scans the /dev directory for block storage devices that contain volumes. If projects use LVM on their volumes, the scanning tool detects these volumes and attempts to cache them, which can cause a variety of problems with both the underlying operating system and project volumes. You must reconfigure LVM to scan only the devices that contain the cinder-volumes volume group. Edit the /etc/lvm/lvm.conf file and complete the following actions:
  In the devices section, add a filter that accepts the /dev/sdb device and rejects all other devices:
  devices {
  ...
  filter = [ "a/sdb/", "r/.*/"]
  Each item in the filter array begins with "a" for accept or "r" for reject and includes a regular expression for the device name. The array must end with "r/.*/" to reject any remaining devices. You can use the vgs -vvvv command to test filters.
  Warning
  If your storage nodes use LVM on the operating system disk, you must also add the associated device to the filter. For example, if the /dev/sda device contains the operating system:
  filter = [ "a/sda/", "a/sdb/", "r/.*/"]
  Similarly, if your compute nodes use LVM on the operating system disk, you must also modify the filter in the /etc/lvm/lvm.conf file on those nodes to include only the operating system disk. For example, if the /dev/sda device contains the operating system:
  filter = [ "a/sda/", "r/.*/"]
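The accept/reject semantics described above can be emulated in a few lines: the first pattern that matches a device name decides, and the trailing r/.*/ rejects everything else. A sketch (lvm_filter is a hypothetical helper, not part of LVM, and real lvm.conf patterns support anchors this sketch ignores):

```python
import re

def lvm_filter(device, rules):
    """First matching rule wins: 'a/…/' accepts, 'r/…/' rejects."""
    for rule in rules:
        action, pattern = rule[0], rule[2:-1]   # "a/sdb/" -> ('a', 'sdb')
        if re.search(pattern, device):
            return action == 'a'
    return True  # LVM accepts by default when no rule matches

rules = ["a/sdb/", "r/.*/"]
print(lvm_filter("/dev/sdb", rules))   # True: accepted
print(lvm_filter("/dev/sda", rules))   # False: rejected by r/.*/
```

With the warning's example filter [ "a/sda/", "a/sdb/", "r/.*/" ], both the OS disk and the data disk are accepted while everything else is still rejected.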
  Install and configure components
  Install the packages:
  # apt-get install cinder-volume
  Edit the /etc/cinder/cinder.conf file and complete the following actions:
  In the [database] section, configure database access:
  [database]
  ...
  connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
  Replace CINDER_DBPASS with the password you chose for the Block Storage database.
  In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:
  [DEFAULT]
  ...
  rpc_backend = rabbit

  [oslo_messaging_rabbit]
  ...
  rabbit_host = controller
  rabbit_userid = openstack
  rabbit_password = RABBIT_PASS
  Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
  In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
  [DEFAULT]
  ...
  auth_strategy = keystone

  [keystone_authtoken]
  ...
  auth_uri = http://controller:5000
  auth_url = http://controller:35357
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = cinder
  password = CINDER_PASS
  Replace CINDER_PASS with the password you chose for the cinder user in the Identity service.
  Note
  Comment out or remove any other options in the [keystone_authtoken] section.
  In the [DEFAULT] section, configure the my_ip option:
  [DEFAULT]
  ...
  my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
  Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your storage node, typically 10.0.0.41 for the first node in the example architecture.
  In the [lvm] section, configure the LVM back end with the LVM driver, the cinder-volumes volume group, the iSCSI protocol, and the appropriate iSCSI service:
  [lvm]
  ...
  volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
  volume_group = cinder-volumes
  iscsi_protocol = iscsi
  iscsi_helper = tgtadm
  In the [DEFAULT] section, enable the LVM back end:
  [DEFAULT]
  ...
  enabled_backends = lvm
  Note
  Back-end names are arbitrary. As an example, this guide uses the name of the driver as the name of the back end.
  In the [DEFAULT] section, configure the location of the Image service API:
  [DEFAULT]
  ...
  glance_api_servers = http://controller:9292
  In the [oslo_concurrency] section, configure the lock path:
  [oslo_concurrency]
  ...
  lock_path = /var/lib/cinder/tmp
  Finalize installation
  Restart the Block Storage volume service including its dependencies:
  # service tgt restart
  # service cinder-volume restart
  3.2 Verify cinder
  Perform these commands on the controller node.
  Source the admin credentials to gain access to admin-only CLI commands:
  $ . admin-openrc
  List service components to verify successful launch of each process:
  $ cinder service-list
  +------------------+------------+------+---------+-------+----------------------------+-----------------+
  |      Binary      |    Host    | Zone |  Status | State |         Updated_at         | Disabled Reason |
  +------------------+------------+------+---------+-------+----------------------------+-----------------+
  | cinder-scheduler | controller | nova | enabled |   up  | 2014-10-18T01:30:54.000000 |       None      |
  | cinder-volume    | block1@lvm | nova | enabled |   up  | 2014-10-18T01:30:57.000000 |       None      |
  +------------------+------------+------+---------+-------+----------------------------+-----------------+
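If you script the verification, the table can be parsed and every State checked. A sketch against the sample rows above (states is a hypothetical helper; a real check would feed it the live command output instead of this embedded sample):

```python
# Data rows copied from the sample `cinder service-list` output above.
SAMPLE = """\
| cinder-scheduler | controller | nova | enabled | up | 2014-10-18T01:30:54.000000 | None |
| cinder-volume    | block1@lvm | nova | enabled | up | 2014-10-18T01:30:57.000000 | None |
"""

def states(table):
    """Return (Binary, State) pairs from the table's data rows."""
    rows = []
    for line in table.splitlines():
        cells = [c.strip() for c in line.strip('|').split('|')]
        rows.append((cells[0], cells[4]))   # column 0: Binary, column 4: State
    return rows

for binary, state in states(SAMPLE):
    print(binary, state)
```

A service shown as "down" here usually means the corresponding node cannot reach RabbitMQ or the database, so this is a useful first check after the restarts above.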