Posted by lshboo on 2019-2-2 06:17:01

OpenStack, Ceph RBD and QoS

  Date: 2013-12-23 07:19:00 · Planet OpenStack · Original: http://www.sebastien-han.fr/blog/2013/12/23/openstack-ceph-rbd-and-qos/
  The Havana cycle introduced a QoS feature on both Cinder and Nova. Here is a quick tour of this excellent implementation.
  Originally, both QEMU and KVM support rate limitation. This is implemented through libvirt and available as extra XML flags within the <disk> section, called iotune (see the sketch after this list):
  - total_bytes_sec: the total allowed bandwidth for the guest per second
  - read_bytes_sec: sequential read limitation
  - write_bytes_sec: sequential write limitation
  - total_iops_sec: the total allowed IOPS for the guest per second
  - read_iops_sec: random read limitation
  - write_iops_sec: random write limitation
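  For illustration only, here is a minimal sketch of how these flags sit inside a guest's libvirt disk definition; the target device and the limit values are made up for the example:

  <disk type='network' device='disk'>
    ...
    <target dev='vdb' bus='virtio'/>
    <!-- per-disk I/O caps enforced at the hypervisor layer (example values) -->
    <iotune>
      <read_iops_sec>2000</read_iops_sec>
      <write_iops_sec>1000</write_iops_sec>
    </iotune>
  </disk>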
  It is wonderful that OpenStack implemented such an (easy?) feature in both Nova and Cinder. It is also a sign that OpenStack is getting more featured and complete in the existing core projects. Having such a facility is extremely useful for several reasons. First of all, not all storage backends support QoS; for instance, Ceph doesn't have any built-in QoS feature whatsoever. Moreover, the limitation is applied directly at the hypervisor layer, so your storage solution doesn't even need to offer such a feature. Another good point is that, from an operator's side, it is quite nice to be able to offer different levels of service. Operators can now offer different types of volumes based on a certain QoS, and customers will then be charged accordingly.
  II. Test it!
  First create the QoS in Cinder:
  $ cinder qos-create high-iops consumer="front-end" read_iops_sec=2000 write_iops_sec=1000
  +----------+----------------------------------------------------------+
  | Property |                          Value                           |
  +----------+----------------------------------------------------------+
  | consumer |                        front-end                         |
  |    id    |           c38d72f8-f4a4-4999-8acd-a17f34b040cb           |
  |   name   |                        high-iops                         |
  |  specs   | {u'write_iops_sec': u'1000', u'read_iops_sec': u'2000'}  |
  +----------+----------------------------------------------------------+
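  A quick aside on consumer: "front-end" means the limit is enforced on the hypervisor side (libvirt/QEMU), which is what you want with Ceph since RBD has no built-in QoS; the client also accepts "back-end" (and "both") for storage backends that can throttle themselves. A hypothetical back-end variant would look like:

  $ cinder qos-create high-iops-backend consumer="back-end" read_iops_sec=2000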
  Create a new volume type:
  $ cinder type-create high-iops
  +--------------------------------------+-----------+
  |                  ID                  |    Name   |
  +--------------------------------------+-----------+
  | 9c746ca5-eff8-40fe-9a96-1cdef7173bd0 | high-iops |
  +--------------------------------------+-----------+
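  As an optional sanity check (a small aside, not strictly needed), the new type should now show up in the type listing:

  # Optional: confirm the volume type exists before associating it
  $ cinder type-list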
  Then associate the volume type with the QoS:
  $ cinder qos-associate c38d72f8-f4a4-4999-8acd-a17f34b040cb 9c746ca5-eff8-40fe-9a96-1cdef7173bd0
  $ cinder create --display-name high-iops --volume-type high-iops 1
  +---------------------+--------------------------------------+
  |       Property      |                Value                 |
  +---------------------+--------------------------------------+
  |     attachments     |                  []                  |
  |  availability_zone  |                 nova                 |
  |       bootable      |                false                 |
  |      created_at     |      2013-12-02T12:59:33.177875      |
  | display_description |                 None                 |
  |     display_name    |              high-iops               |
  |          id         | 743549c1-c7a3-4e86-8e99-b51df4cf7cdc |
  |       metadata      |                  {}                  |
  |         size        |                  1                   |
  |     snapshot_id     |                 None                 |
  |     source_volid    |                 None                 |
  |        status       |               creating               |
  |     volume_type     |              high-iops               |
  +---------------------+--------------------------------------+
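  If you want to double-check that the type really picked up the QoS spec, the client exposes an association lookup (an optional aside; output omitted here):

  # Lists the volume types bound to the given QoS spec id
  $ cinder qos-get-association c38d72f8-f4a4-4999-8acd-a17f34b040cb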
  Finally, attach the volume to an instance:
  $ nova volume-attach cirrOS 743549c1-c7a3-4e86-8e99-b51df4cf7cdc /dev/vdc
  +----------+--------------------------------------+
  | Property | Value                                |
  +----------+--------------------------------------+
  | device   | /dev/vdc                             |
  | serverId | 7fff1d37-efc4-46b9-8681-3e6b1086c453 |
  | volumeId | 743549c1-c7a3-4e86-8e99-b51df4cf7cdc |
  +----------+--------------------------------------+
  While attaching the device you should see the following XML being generated in the nova-compute debug log. Dumping the virsh XML works as well; a sketch of that check follows the log excerpt below.
  2013-12-11 14:12:05.874 DEBUG nova.virt.libvirt.config Generated XML
  <disk type="network" device="disk">
    <!-- source/driver details were lost in the copy; serial and iotune survived -->
    <serial>2e589abc-a008-4433-89ae-1bb142b139e3</serial>
    <iotune>
      <read_iops_sec>20</read_iops_sec>
      <write_iops_sec>5</write_iops_sec>
    </iotune>
  </disk>
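  As mentioned above, dumping the XML with virsh shows the same thing. A rough sketch, assuming shell access to the compute node (the domain name instance-0000000f is made up; virsh list gives the real one):

  # Nova names libvirt domains instance-000000xx, not by display name
  $ virsh list
  $ virsh dumpxml instance-0000000f | grep -A 3 iotune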
  Important note: rate-limiting is currently broken in Havana; however, the bug has already been reported and a fix has been submitted and accepted. The same patch has also been proposed as a potential backport to Havana.
