Tempest really is a great guide for onlookers like us getting up close with OpenStack, not only because it ships a complete test framework, but also because of how its test cases are designed:
```python
@attr(type=['negative', 'gate'])
def test_resize_server_using_overlimit_ram(self):
    flavor_name = data_utils.rand_name("flavor-")
    flavor_id = self._get_unused_flavor_id()
    resp, quota_set = self.quotas_client.get_quota_set(
        self.tenant_id)
    ram = int(quota_set['ram']) + 1
    vcpus = 8
    disk = 10
    resp, flavor_ref = self.flavors_client.create_flavor(flavor_name,
                                                         ram, vcpus, disk,
                                                         flavor_id)
    self.addCleanup(self.flavors_client.delete_flavor, flavor_id)
    specs = {"ecus_per_vcpu:": "1"}
    set_resp, set_body = \
        self.flavors_client.set_flavor_extra_spec(flavor_ref['id'], specs)
    self.assertEqual(set_resp.status, 200)
    self.assertEqual(set_body, specs)
    self.assertRaises(exceptions.OverLimit,
                      self.client.resize,
                      self.servers[0]['id'],
                      flavor_ref['id'])
```
From the name you can guess that the case verifies resizing a server with a flavor whose RAM is over the limit; whether "over the limit" means the tenant's quota or the remaining physical resources depends on the implementation. A rough scan of the flow: first fetch the tenant's current quota, add 1 to its ram value, and create a flavor with that ram; resizing the VM to this flavor should then fail because the quota is exceeded.
Run it first. Errors everywhere, starting with this traceback:
```
Traceback (most recent call last):
  File "/Users/lihui/work/cloud/openstack/tempest-ci/tempest/api/compute/admin/test_servers_negative.py", line 72, in test_resize_server_using_overlimit_ram
    flavor_id)
  File "/Users/lihui/work/cloud/openstack/tempest-ci/tempest/services/compute/json/flavors_client.py", line 72, in create_flavor
    resp, body = self.post('flavors', post_body, self.headers)
  File "/Users/lihui/work/cloud/openstack/tempest-ci/tempest/common/rest_client.py", line 317, in post
    return self.request('POST', url, headers, body)
  File "/Users/lihui/work/cloud/openstack/tempest-ci/tempest/common/rest_client.py", line 451, in request
    resp, resp_body)
  File "/Users/lihui/work/cloud/openstack/tempest-ci/tempest/common/rest_client.py", line 501, in _error_checker
    raise exceptions.BadRequest(resp_body)
tempest.exceptions.BadRequest: Bad request
Details: {u'badRequest': {u'message': u"Invalid input received: 'memory_mb' argument must be a positive integer", u'code': 400}}
```
Execution fails at create_flavor with an error saying memory_mb must be a positive integer. That is a little odd, since the ram value comes from this line:
```python
ram = int(quota_set['ram']) + 1
```
In other words, the tenant's own ram quota plus 1 is somehow still not a positive integer. It's easy to guess that the quota is -1 (unlimited), so ram becomes 0 and the flavor cannot be created, but it's worth walking through the case anyway, because creating a flavor requires the admin role while the tenant actually under test should not be admin:
```python
resp, quota_set = self.quotas_client.get_quota_set(
    self.tenant_id)
```
The tenant whose quota is fetched is self.tenant_id, which comes from:
```python
@classmethod
def setUpClass(cls):
    super(ServersAdminNegativeTestJSON, cls).setUpClass()
    cls.client = cls.os_adm.servers_client
    cls.non_adm_client = cls.servers_client
    cls.flavors_client = cls.os_adm.flavors_client
    cls.identity_client = cls._get_identity_admin_client()
    tenant = cls.identity_client.get_tenant_by_name(
        cls.non_adm_client.tenant_name)
    cls.tenant_id = tenant['id']
```
As the naming suggests, tenant_id comes from cls.non_adm_client.tenant_name, an ordinary non-admin tenant; it is in fact the tenant configured in tempest.conf, which the error log also confirms. Check that tenant's quota directly:
```
$ nova quota-show
+-----------------------------+-------+
| Quota                       | Limit |
+-----------------------------+-------+
| instances                   | -1    |
| cores                       | -1    |
| ram                         | -1    |
| ecus                        | -1    |
| local_gb                    | -1    |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+-----------------------------+-------+
```
The tenant's ram quota is indeed -1; you can also compare the two values under pdb.
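A rough sketch of that comparison, breaking at the get_quota_set call (a hypothetical session; the values follow from the quota-show output above):

```
(Pdb) p int(quota_set['ram'])
-1
(Pdb) p int(quota_set['ram']) + 1
0
```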
Manually creating a flavor with ram set to 0 reproduces the error:
```
$ nova flavor-create flavor-test 11111 0 10 8
ERROR: Invalid input received: 'memory_mb' argument must be a positive integer (HTTP 400) (Request-ID: req-bb87fb65-f0d2-4460-8745-6a418fde6b99)
```
So the first cause of the case's failure is that flavor creation fails. There are plenty of ways to fix it:
1: Change every -1 quota of this tenant to a finite value, since real deployments never leave quotas unlimited anyway. I don't much like this option, though: a varied mix of quota values may shake out more anomalies.
2: Change ram+1 to ram+2; but if someone mischievously sets the quota to -2, the case breaks again.
3: The most robust way is to add a check on the field being incremented (the later vcpu and disk cases can use the same approach): if it is negative, just set it to a positive number, as sketched below.
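A minimal sketch of option 3, assuming the guard sits right where the test computes ram (my illustration, not an actual patch). Note it only fixes the flavor-creation step:

```python
# Option 3 sketch: clamp the incremented field so a -1 (unlimited)
# quota still yields a positive ram that create_flavor will accept.
ram = int(quota_set['ram']) + 1
if ram <= 0:
    ram = 1024  # any positive integer lets the flavor be created
```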
The latest upstream fix is more thorough: it simply refuses to continue:
```python
ram = int(quota_set['ram'])
if ram == -1:
    raise self.skipException("ram quota set is -1,"
                             " cannot test overlimit")
```
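In context, the guard presumably replaces the unconditional +1 at the top of the test; a sketch of how that might read:

```python
resp, quota_set = self.quotas_client.get_quota_set(self.tenant_id)
ram = int(quota_set['ram'])
if ram == -1:
    raise self.skipException("ram quota set is -1,"
                             " cannot test overlimit")
ram += 1  # with a finite quota this is guaranteed to exceed it
```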
For now, just apply the second fix and rerun the case. It takes quite a while, over ten minutes, and still fails:
```
2017-01-12 23:04:43,121 Request: GET http://pubbetaapi.beta.server.163.org:8774/v2/9bd69066f3bd4c9e8c56d4ca7e314330/flavors/673033715
2017-01-12 23:04:43,121 Request Headers: {'X-Auth-Token': u'1e18587c9f2f47ef980ab15a91a4f3b3'}
2017-01-12 23:04:43,248 Response Status: 404
2017-01-12 23:04:43,249 Nova request id: req-5eb61140-9a51-424d-970f-074da06dc67c
2017-01-12 23:04:43,249 Response Headers: {'content-length': '78', 'date': 'Thu, 12 Jan 2017 15:04:43 GMT', 'content-type': 'application/json; charset=UTF-8', 'connection': 'close'}
2017-01-12 23:04:43,249 Response Body: {"itemNotFound": {"message": "The resource could not be found.", "code": 404}}
```
A 404 this time. Judging by the GET URL, the flavor-show API cannot find the flavor ID, and the flavor ID looks randomly generated.
The suspicion falls on the code below, which picks a random unused flavor ID:
```python
def _get_unused_flavor_id(self):
    flavor_id = data_utils.rand_int_id(start=10000)
    while True:
        try:
            resp, body = self.flavors_client.get_flavor_details(flavor_id)
        except exceptions.NotFound:
            break
        flavor_id = data_utils.rand_int_id(start=10000)
    return flavor_id
```
This code looks fine: generate a random ID above 10000; if it already exists, generate another and call get_flavor_details again, until one is not found, then break and return that ID, which is then used to create the flavor with the configured vCPUs, RAM and so on. So the 404 in the log is just this probe doing its job, not the real failure. Strange. Step through with pdb:
```
-> self.assertRaises(exceptions.OverLimit,
(Pdb) p exceptions.OverLimit
<class 'tempest.exceptions.OverLimit'>
(Pdb) n
> /Users/lihui/work/cloud/openstack/tempest-ci/tempest/api/compute/admin/test_servers_negative.py(83)test_resize_server_using_overlimit_ram()
-> self.client.resize,
(Pdb) n
> /Users/lihui/work/cloud/openstack/tempest-ci/tempest/api/compute/admin/test_servers_negative.py(84)test_resize_server_using_overlimit_ram()
-> self.servers[0]['id'],
(Pdb) p self.servers[0]['id']
u'702677bd-1e88-4700-a599-858cbdb9bdb0'
(Pdb) n
> /Users/lihui/work/cloud/openstack/tempest-ci/tempest/api/compute/admin/test_servers_negative.py(85)test_resize_server_using_overlimit_ram()
-> flavor_ref['id'])
(Pdb) p flavor_ref['id']
u'180046989'
(Pdb) n
MismatchError: MismatchError()
> /Users/lihui/work/cloud/openstack/tempest-ci/tempest/api/compute/admin/test_servers_negative.py(85)test_resize_server_using_overlimit_ram()
-> flavor_ref['id'])
(Pdb)
```
By this point the resize request has already gone out, and assertRaises ends with MismatchError, meaning the expected OverLimit never happened. Check the VM and, astonishingly, the resize actually succeeded:
```
nova show 702677bd-1e88-4700-a599-858cbdb9bdb0
+----------------------------------------------+------------------------------------------------------------------------+
| Property                                     | Value                                                                  |
+----------------------------------------------+------------------------------------------------------------------------+
| OS-DCF:diskConfig                            | MANUAL                                                                 |
| OS-EXT-AZ:availability_zone                  | hui.hui1                                                               |
| OS-EXT-STS:power_state                       | 1                                                                      |
| OS-EXT-STS:task_state                        | -                                                                      |
| OS-EXT-STS:vm_state                          | active                                                                 |
| OS-SRV-USG:launched_at                       | 2017-01-12T16:19:33.000000                                             |
| OS-SRV-USG:terminated_at                     | -                                                                      |
| accessIPv4                                   |                                                                        |
| accessIPv6                                   |                                                                        |
| availability_zone                            | hui.hui1                                                               |
| config_drive                                 | 1                                                                      |
| created                                      | 2017-01-12T16:15:08Z                                                   |
| flavor                                       | flavor--tempest-274541276 (180046989)                                  |
| hostId                                       | 0185ed56ab0ab75483a2b64d33caa5ad961541d2ea5086f9a4638d97               |
| hypervisor_type                              | qemu                                                                   |
| id                                           | 702677bd-1e88-4700-a599-858cbdb9bdb0                                   |
| image                                        | debian_7_x86_64_lihui_hui.qcow2 (8ce19753-8555-470b-82e8-4489f7625c91) |
| key_name                                     | -                                                                      |
| metadata                                     | {}                                                                     |
| name                                         | server-tempest-1470088717                                              |
| os-extended-volumes:volumes_attached         | []                                                                     |
| os-netease-extended-volumes:volumes_attached | []                                                                     |
| os-server-status                             | down                                                                   |
| os_type                                      | linux                                                                  |
| progress                                     | 0                                                                      |
| security_groups                              | default                                                                |
| status                                       | ACTIVE                                                                 |
| tenant_id                                    | 7dadb6ca964041c18b0a87241e4a03eb                                       |
| updated                                      | 2017-01-12T16:20:53Z                                                   |
| use_ceph                                     | yes                                                                    |
| user_id                                      | 4602fee102604fe1bf8dc9648605a11b                                       |
+----------------------------------------------+------------------------------------------------------------------------+
```
Think it over again: what good was changing ram+1 to ram+2? The case is supposed to fail the resize because the quota is exceeded, but the quota is still -1, i.e. unlimited, so no matter how large the new flavor's ram is, OverLimit can never be raised. Feeling pretty silly, the once-and-for-all fix is to change the quota itself and verify again.
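For example, the nova CLI can give the tenant a finite ram quota (the value is arbitrary and the tenant-id placeholder is mine):

```
$ nova quota-update --ram 51200 <tenant_id>
```

With the quota finite, rerun the case: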
```
✘ lihui@MacBook ~/work/cloud/openstack/tempest-ci/tempest/api/compute/admin master ●✚ nosetests -sv test_servers_negative.py:ServersAdminNegativeTestJSON.test_resize_server_using_overlimit_ram
tempest.api.compute.admin.test_servers_negative.ServersAdminNegativeTestJSON.test_resize_server_using_overlimit_ram ... ok

----------------------------------------------------------------------
Ran 1 test in 50.061s

OK
```
In the end the whole file passes as well; clearly a few things just hadn't been thought through carefully at first:
```
lihui@MacBook ~/work/cloud/openstack/tempest-ci/tempest/api/compute/admin master ●✚ nosetests -sv test_servers_negative.py
tempest.api.compute.admin.test_servers_negative.ServersAdminNegativeTestJSON.create_test_server ... ok
tempest.api.compute.admin.test_servers_negative.ServersAdminNegativeTestJSON.test_get_server_diagnostics_by_non_admin ... ok
tempest.api.compute.admin.test_servers_negative.ServersAdminNegativeTestJSON.test_migrate_non_existent_server ... ok
tempest.api.compute.admin.test_servers_negative.ServersAdminNegativeTestJSON.test_migrate_server_invalid_state ... ok
tempest.api.compute.admin.test_servers_negative.ServersAdminNegativeTestJSON.test_reset_state_server_invalid_state ... ok
tempest.api.compute.admin.test_servers_negative.ServersAdminNegativeTestJSON.test_reset_state_server_invalid_type ... ok
tempest.api.compute.admin.test_servers_negative.ServersAdminNegativeTestJSON.test_reset_state_server_nonexistent_server ... ok
tempest.api.compute.admin.test_servers_negative.ServersAdminNegativeTestJSON.test_resize_server_using_overlimit_ram ... ok
tempest.api.compute.admin.test_servers_negative.ServersAdminNegativeTestJSON.test_resize_server_using_overlimit_vcpus ... ok

----------------------------------------------------------------------
Ran 9 tests in 104.883s

OK (SKIP=1)
```