The Cinder services you will usually see are cinder-api, cinder-scheduler, and cinder-volume. An important point: Cinder itself is not the backend storage; it is the service that schedules and manages the backend storage.
lihui@MacBook ~/server/source_txt cinder service-list
+----+------------------+------------------+------+---------+-------+----------------------------+
| Id | Binary           | Host             | Zone | Status  | State | Updated_at                 |
+----+------------------+------------------+------+---------+-------+----------------------------+
| 4  | cinder-scheduler | 10-111-2-52      | pub  | enabled | up    | 2016-11-12T15:01:40.000000 |
| 5  | cinder-scheduler | 10-111-2-51      | pub  | enabled | up    | 2016-11-12T15:01:39.000000 |
| 7  | cinder-volume    | 10-111-0-39@hui  | pub  | enabled | up    | 2016-11-12T15:01:46.000000 |
| 10 | cinder-volume    | 10-111-0-48@hui  | pub  | enabled | up    | 2016-11-12T15:01:39.000000 |
| 11 | cinder-volume    | 10-111-0-39@lvm  | pub  | enabled | up    | 2016-11-12T15:01:44.000000 |
| 12 | cinder-volume    | 10-111-0-47@ceph | pub  | enabled | up    | 2016-11-12T15:01:38.000000 |
| 13 | cinder-volume    | 10-111-0-48@fake | pub  | enabled | down  | 2016-11-10T10:01:02.000000 |
+----+------------------+------------------+------+---------+-------+----------------------------+
When a user sends an API request to create a volume, cinder-api receives the request and handles the response, and cinder-scheduler filters the available cinder-volume services to pick a suitable one to do the work. cinder-volume runs on the storage nodes; each cinder-volume service manages a storage device, and a single node can even manage several storage types at once, such as Ceph and LVM.
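This multi-backend setup is exactly what produces the host@backend entries in the service list above (e.g. 10-111-0-39@lvm and 10-111-0-39@hui on the same node): one backend section in cinder.conf per storage type, each spawning its own cinder-volume service. A minimal sketch of such a cinder.conf, where the LVM volume group name is assumed and the HUI driver path is a hypothetical placeholder:

[DEFAULT]
# one cinder-volume service per backend listed here, registered as <host>@<backend>
enabled_backends = lvm,hui

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = LVMLocal
volume_group = cinder-volumes          # assumed VG name

[hui]
volume_driver = cinder.volume.drivers.hui.HUIDriver   # hypothetical vendor driver path
volume_backend_name = HUI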
volume-type is an abstract concept: you can create and modify these entries freely.
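For example, creating and deleting a type is a pure metadata operation against the Cinder database and touches no actual storage (the name hui_test here is made up for illustration):

lihui@MacBook ~/server/source_txt cinder type-create hui_test
lihui@MacBook ~/server/source_txt cinder type-delete hui_test

The types currently defined in this environment: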
lihui@MacBook ~/server/source_txt cinder type-list
+--------------------------------------+----------+
|                  ID                  |   Name   |
+--------------------------------------+----------+
| 19c43c55-5f97-46a2-9fb5-52ac5bc90737 |   fake   |
| 21b3508c-6234-4f00-9e45-cf4c3a6ca660 |   lvm    |
| 2a8c9a6c-b436-4197-8513-259f70f56612 | ceph_sas |
| 35199fad-c3bf-4a20-999a-001fd351386d | ceph_ssd |
| 6604eedd-68bc-4f3c-a64a-ffd9f36aa6b6 | hui_sas  |
| a3288263-70e4-43c5-8831-34385e2cd7ba |   hui    |
| bd8f62eb-e6bb-492b-93e1-f905bd2d940f |   ceph   |
| cf23ddf8-d540-43aa-8603-f79e10977642 | hui_ssd  |
+--------------------------------------+----------+
Each volume-type here, however, does map to a concrete storage backend:
lihui@MacBook ~/server/source_txt cinder extra-specs-list
+--------------------------------------+----------+---------------------------------------+
| ID                                   | Name     | extra_specs                           |
+--------------------------------------+----------+---------------------------------------+
| 19c43c55-5f97-46a2-9fb5-52ac5bc90737 | fake     | {u'volume_backend_name': u'FAKE'}     |
| 21b3508c-6234-4f00-9e45-cf4c3a6ca660 | lvm      | {u'volume_backend_name': u'LVMLocal'} |
| 2a8c9a6c-b436-4197-8513-259f70f56612 | ceph_sas | {u'volume_backend_name': u'CEPH_SAS'} |
| 35199fad-c3bf-4a20-999a-001fd351386d | ceph_ssd | {u'volume_backend_name': u'CEPH'}     |
| 6604eedd-68bc-4f3c-a64a-ffd9f36aa6b6 | hui_sas  | {u'volume_backend_name': u'HUI'}      |
| a3288263-70e4-43c5-8831-34385e2cd7ba | hui      | {u'volume_backend_name': u'HUI'}      |
| bd8f62eb-e6bb-492b-93e1-f905bd2d940f | ceph     | {u'volume_backend_name': u'CEPH'}     |
| cf23ddf8-d540-43aa-8603-f79e10977642 | hui_ssd  | {u'volume_backend_name': u'HUI'}      |
+--------------------------------------+----------+---------------------------------------+
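The mapping is nothing more than an extra spec on the type, managed with cinder type-key; the ceph_sas entry above, for instance, would have been wired up (or unwired) like this:

lihui@MacBook ~/server/source_txt cinder type-key ceph_sas set volume_backend_name=CEPH_SAS
lihui@MacBook ~/server/source_txt cinder type-key ceph_sas unset volume_backend_name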
Each vendor's storage has its own extra_specs, so cinder-scheduler uses the volume-type to dispatch requests to the matching storage backend. Whether the volume can actually be created, though, depends on whether that backend is deployed somewhere in the environment. For example, here is an unofficial interface that reports the backend storage status of each physical node:
+------------------+------------------+-------------------+---------------------+----------------------+----------------------------+
| Host             | Free_capacity_gb | Total_capacity_gb | Volume_backend_name | Storage_pool         | Timestamp                  |
+------------------+------------------+-------------------+---------------------+----------------------+----------------------------+
| 10-111-0-39@lvm  | 2137.49          | 2234.49           | LVMLocal            | None                 | 2016-11-12T15:04:01.914634 |
| 10-111-0-39@hui  | infinite         | infinite          | HUI                 | None                 | 2016-11-12T15:04:40.689381 |
| 10-111-0-47@ceph | 10799            | 12571             | CEPH                | switch01_sas_volumes | 2016-11-12T15:04:27.975057 |
| 10-111-0-48@hui  | infinite         | infinite          | HUI                 | None                 | 2016-11-12T15:04:55.762181 |
+------------------+------------------+-------------------+---------------------+----------------------+----------------------------+
The columns to check are Host and Volume_backend_name. Matching these names against the extra_specs above shows that only these four backends are deployed in the whole environment; CEPH_SAS does not appear, meaning its corresponding volume-type, ceph_sas, has no backend behind it. A simple test makes the point: call the Cinder API to create a volume of that type.
lihui@MacBook ~/server/source_txt cinder create --volume-type ceph_sas 1
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | pub                                  |
| bootable            | false                                |
| created_at          | 2016-11-12T15:05:57.364355           |
| display_description | None                                 |
| display_name        | None                                 |
| id                  | 9498a3b7-fa2d-490a-a580-46053af14839 |
| metadata            | {}                                   |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| volume_type         | ceph_sas                             |
+---------------------+--------------------------------------+

lihui@MacBook ~/server/source_txt cinder show 9498a3b7-fa2d-490a-a580-46053af14839
+---------------------------------------------+--------------------------------------------------------------------------------------------------+
| Property                                    | Value                                                                                            |
+---------------------------------------------+--------------------------------------------------------------------------------------------------+
| attachments                                 | []                                                                                               |
| availability_zone                           | pub                                                                                              |
| bootable                                    | false                                                                                            |
| created_at                                  | 2016-11-12T15:05:57.000000                                                                       |
| display_description                         | None                                                                                             |
| display_name                                | None                                                                                             |
| id                                          | 9498a3b7-fa2d-490a-a580-46053af14839                                                             |
| metadata                                    | {}                                                                                               |
| os-vol-host-attr:host                       | None                                                                                             |
| os-vol-mig-status-attr:migstat              | None                                                                                             |
| os-vol-mig-status-attr:name_id              | None                                                                                             |
| os-vol-provider-attr:provider_auth          | None                                                                                             |
| os-vol-provider-attr:provider_geometry      | None                                                                                             |
| os-vol-provider-attr:provider_location      | None                                                                                             |
| os-vol-provider-attr:provider_pool_location | None                                                                                             |
| os-vol-tenant-attr:tenant_id                | 9bd69066f3bd4c9e8c56d4ca7e314330                                                                 |
| size                                        | 1                                                                                                |
| snapshot_id                                 | None                                                                                             |
| source_volid                                | None                                                                                             |
| status                                      | error                                                                                            |
| volume_qos                                  | {u'read_bps': u'18648364', u'write_bps': u'18648364', u'read_iops': u'12', u'write_iops': u'20'} |
| volume_type                                 | ceph_sas                                                                                         |
+---------------------------------------------+--------------------------------------------------------------------------------------------------+
The volume went straight to the error state. Checking the logs, the cause shows up in the scheduler log:
$ grep 9498a3b7-fa2d-490a-a580-46053af14839 cinder-* -R --color
cinder-scheduler.log:2016-11-12 23:05:57.413 27671 DEBUG cinder.openstack.common.rpc.amqp [-] received {u'_context_roles': [u'admin', u'_member_'], u'_context_request_id': u'req-d87a5755-35ef-493f-b489-7d60652ae97b', u'_context_quota_class': None, u'_context_project_name': u'Project_qa_admin', u'_context_service_catalog': [{u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://xxxxxxxxxxxxxxxxxxxx.org:8774/v2/9bd69066f3bd4c9e8c56d4ca7e314330', u'region': u'RegionOne', u'publicURL': u'http://xxxxxxxxxxxxxxxxxxxx.org:8774/v2/9bd69066f3bd4c9e8c56d4ca7e314330', u'id': u'0c4d67ccc5be4db2a9083f23167e2902', u'internalURL': u'http://xxxxxxxxxxxxxxxxxxxx.org:8774/v2/9bd69066f3bd4c9e8c56d4ca7e314330'}, {u'adminURL': u'http://xxxxxxxxxxxxxxxxxxxx.org:8774/v2/9bd69066f3bd4c9e8c56d4ca7e314330', u'region': u'RegionTwo', u'publicURL': u'http://xxxxxxxxxxxxxxxxxxxx.org:8774/v2/9bd69066f3bd4c9e8c56d4ca7e314330', u'id': u'106a4ce4eb4f404e8a47780affcc1cd4', u'internalURL': u'http://xxxxxxxxxxxxxxxxxxxx.org:8774/v2/9bd69066f3bd4c9e8c56d4ca7e314330'}], u'type': u'compute', u'name': u'nova'}], u'_context_tenant': u'9bd69066f3bd4c9e8c56d4ca7e314330', u'args': {u'request_spec': {u'volume_properties': {u'status': u'creating', u'volume_type_id': u'2a8c9a6c-b436-4197-8513-259f70f56612', u'user_id': u'691423d9a07843f4be0005f7f98e21f5', u'availability_zone': u'pub', u'reservations': [u'8c85049c-dd1a-429c-b6ca-ab3f51a852d8', u'64a938a5-7642-483c-9c81-505a36512323', u'e06ea46f-aea3-4aaa-a4d9-0c7a48747512', u'a7527276-4270-4ebf-85d3-b0e51b4bc53d'], u'snapshot_id': None, u'attach_status': u'detached', u'display_description': None, u'volume_metadata': [], u'encryption_key_id': None, u'source_volid': None, u'volume_admin_metadata': [], u'display_name': None, u'project_id': u'9bd69066f3bd4c9e8c56d4ca7e314330', u'id': u'9498a3b7-fa2d-490a-a580-46053af14839', u'size': 1, u'metadata': {}}, u'source_volid': None, u'image_id': None, u'snapshot_id': None, u'volume_type': {u'name': u'ceph_sas', u'qos_specs_id': None, u'deleted': False, u'created_at': u'2016-08-25T07:47:20.000000', u'updated_at': None, u'extra_specs': {u'volume_backend_name': u'CEPH_SAS'}, u'deleted_at': None, u'id': u'2a8c9a6c-b436-4197-8513-259f70f56612'}, u'volume_id': u'9498a3b7-fa2d-490a-a580-46053af14839'}, u'volume_id': u'9498a3b7-fa2d-490a-a580-46053af14839', u'filter_properties': {}, u'topic': u'cinder-volume', u'image_id': None, u'snapshot_id': None}, u'namespace': None, u'_context_auth_token': '', u'_context_timestamp': u'2016-11-12T15:05:57.297047', u'_context_is_admin': True, u'version': u'1.2', u'_context_project_id': u'9bd69066f3bd4c9e8c56d4ca7e314330', u'_context_user': u'691423d9a07843f4be0005f7f98e21f5', u'_unique_id': u'fa7e7af8959a41b8886dccb34d73d61d', u'_context_read_deleted': u'no', u'_context_user_id': u'691423d9a07843f4be0005f7f98e21f5', u'method': u'create_volume', u'_context_remote_address': u'10.180.0.200'} _safe_log /usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/common.py:279
cinder-scheduler.log:2016-11-12 23:05:57.436 27671 ERROR cinder.volume.flows.create_volume [req-d87a5755-35ef-493f-b489-7d60652ae97b 691423d9a07843f4be0005f7f98e21f5 9bd69066f3bd4c9e8c56d4ca7e314330] Updating volume: 9498a3b7-fa2d-490a-a580-46053af14839 with {'status': 'error'} due to: No valid host was found.
The cause is perfectly clear: No valid host was found. No node has deployed the backend storage for this volume-type.
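To make ceph_sas usable, some cinder-volume node would have to run a backend whose volume_backend_name is CEPH_SAS. A sketch of the cinder.conf addition on a Ceph node such as 10-111-0-47 (the section name and rbd_pool value are assumptions for illustration); after a cinder-volume restart the scheduler would start receiving CEPH_SAS capability reports:

[DEFAULT]
enabled_backends = ceph,ceph_sas

[ceph_sas]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = CEPH_SAS
rbd_pool = sas_volumes                 # assumed pool name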
Conversely, the same storage type may be deployed on many backend nodes, and several storage nodes may share the same storage pool. In that case cinder-scheduler filters candidates just as nova-scheduler does, for example by the capacity of the backend storage each cinder-volume manages: a filter first finds the backends with more free capacity and sends the request there.
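In a stock Cinder of this era that pipeline is configurable in cinder.conf, and the upstream defaults already behave this way: CapacityFilter drops backends without enough free space for the request, and CapacityWeigher then ranks the survivors so the backend with the most free capacity wins:

[DEFAULT]
scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
scheduler_default_weighers = CapacityWeigher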