starter-kit:compute (V) – DefCore

OpenStack is made up of tens of projects, each of them with hundreds of possible configuration options, making every cloud different. This complicates the mission of OpenStack, to ‘create an ubiquitous Open Source cloud computing platform’: moving workloads between different cloud providers (private or public) is a nightmare due to the differences between them.

DefCore sets base requirements by defining 1) capabilities, 2) code and 3) must-pass tests for all OpenStack products. This definition uses community resources and involvement to drive interoperability by creating the minimum standards for products labeled “OpenStack.”

RefStack is a set of tools for OpenStack interoperability testing, and it is what we will use to test our cloud and try to earn the ‘OpenStack Powered Compute’ label.

Although the starter-kit:compute tag and the DefCore project are not directly related, we can combine them and work with both, so that a minimal OpenStack cloud can be deployed that is fully compatible with the products labeled ‘OpenStack’.

As usual, we will create a specific user for this task, and get the required code from upstream to run the tests.

root@aio:~# useradd -m refstack
root@aio:~# passwd refstack
root@aio:~# usermod -a -G sudo refstack
refstack@aio:~$ git clone git://git.openstack.org/stackforge/refstack-client

The refstack-client includes a script that will help us to prepare the environment for running the tempest tests.

refstack@aio:~$ cd refstack-client
refstack@aio:~/refstack-client$ ./setup_env
refstack@aio:~/refstack-client$ source .venv/bin/activate

Once the environment has been prepared, we need to create a tempest configuration file, tempest.conf, with our cloud environment data.

[auth]
tempest_roles = _member_

[identity]
auth_version = v2
admin_domain_name = Default
admin_tenant_name = stack
admin_password = minions
admin_username = gru
uri_v3 = http://aio:5000/v3
uri = http://aio:5000/v2.0

And we are ready to launch the tests. By default, refstack will execute the whole test suite, but since only the keystone identity service is available, we can limit the scope of the tests to just the identity API.

(.venv)refstack@aio:~/refstack-client$ ./refstack-client test -c ~/tempest.conf -vv -- tempest.api.identity
...
Ran 161 tests in 184.264s

OK
2015-08-20 10:36:06,643 refstack_client:272 INFO Tempest test complete.
2015-08-20 10:36:06,643 refstack_client:273 INFO Subunit results located in: .tempest/.testrepository/1
2015-08-20 10:36:06,664 refstack_client:276 INFO Number of passed tests: 161
2015-08-20 10:36:06,665 refstack_client:288 INFO JSON results saved in: .tempest/.testrepository/1.json

We have run a total of 161 tests, and all of them completed successfully. We can also see how these results stack up against the DefCore capabilities: we just need to upload them to the refstack.net page and check.

(.venv)stack@aio:~/refstack-client$ REFSTACK_URL=http://refstack.net/api ./refstack-client upload .tempest/.testrepository/1.json 
Test results will be uploaded to http://refstack.net/api. Ok? (yes/y): y
Test results uploaded!
URL: http://refstack.net/#/results/cf61b903-2589-43eb-9970-7129e17c595b
This cloud passes 1.8% (2/114) of the tests in the 2015.07 required capabilities for the OpenStack Powered Compute program.
Excluding flagged tests, this cloud passes 1.8% (2/109) of the required tests.

Compliance with 2015.07: NO

Of course we are not compliant with the OpenStack Powered Compute program (the target for our deployment), since we have only tested the keystone service. But on the plus side, we have passed all the required identity tests, so we are on the right path:

identity-v2-tokens-create [1/1]
tempest.api.identity.v2.test_tokens.TokensTest.test_create_token
identity-v3-tokens-create [1/1]
tempest.api.identity.v3.test_tokens.TokensV3Test.test_create_token

If we want to run the complete suite, we can do it easily, but at this point it is pointless:

(.venv)stack@aio:~/refstack-client$ ./refstack-client test -c ~/tempest.conf -vv --test-list https://raw.githubusercontent.com/openstack/defcore/master/2015.05/2015.05.required.txt

starter-kit:compute (IV) – Managing Keystone

Now that we have a functional and running keystone, it is time to start populating its database with all the users, projects and roles that we’ll be using day to day. First of all, another user and its own virtual environment will be created to install ‘python-openstackclient’. This ‘new’ CLI is an effort to have a single command-line tool to interact with all the OpenStack services.

root@aio:~# useradd -m stack
stack@aio:~$ virtualenv venv
stack@aio:~$ source venv/bin/activate
(venv)stack@aio:~$ pip install python-openstackclient
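
A quick way to verify that the client has been installed correctly inside the virtual environment is to ask for its version:

(venv)stack@aio:~$ openstack --version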

To bootstrap keystone, there is a special token that can be used to create the initial users. We also need to specify the URL where the keystone service can be found. All this data can be stored in a file, ‘tokenrc’, and sourced when needed.

export OS_TOKEN=ADMIN
export OS_URL=http://aio:35357/v2.0

The value of this initial token is defined in the keystone.conf file under the [DEFAULT] group; it ships commented out, with a default value of ADMIN.

[DEFAULT]
 …
 #admin_token = ADMIN
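
Once the file is in place, we just need to source it in our session before running any bootstrap command (assuming it was saved as ~/tokenrc):

(venv)stack@aio:~$ source ~/tokenrc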

Creating users, projects and roles

A project (formerly known as a tenant) can have several users, and each of these users has a role within that project: ‘admin’, a simple member (the default, indicated by the role ‘_member_’), or any other role we want to create. In this example, we will create a project called ‘stack’, an admin role, and two users, one of them being the admin of the project. What a role can and can’t do is specified in the ‘policy.json’ configuration file.
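
As a quick reference, a rule in ‘policy.json’ simply maps an API action to the roles (or other rules) allowed to perform it; a minimal illustrative fragment (not the complete file) could look like this:

{
    "admin_required": "role:admin",
    "identity:list_users": "rule:admin_required"
}

With these two rules, only users holding the ‘admin’ role would be allowed to list users. Now, let’s create the project, the role and the users: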

(venv)stack@aio:~$ openstack project create stack
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | None |
| enabled | True |
| id | ebae83a27e7a4409aea4311543b7dadb |
| name | stack |
+-------------+----------------------------------+

(venv)stack@aio:~$ openstack role create admin
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id | 3a933d2b3957477b9aabf1e0f32cb388 |
| name | admin |
+-------+----------------------------------+

(venv)stack@aio:~$ openstack user create gru --project stack --password minions
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| email | None |
| enabled | True |
| id | 9be9ad60d4594c288f5b3305e84e3e63 |
| name | gru |
| project_id | ebae83a27e7a4409aea4311543b7dadb |
| username | gru |
+------------+----------------------------------+

(venv)stack@aio:~$ openstack role add --user gru --project stack admin
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id | 3a933d2b3957477b9aabf1e0f32cb388 |
| name | admin |
+-------+----------------------------------+

(venv)stack@aio:~$ openstack user create stuart --project stack --password banana
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| email | None |
| enabled | True |
| id | 85a51778c59f4c66aed800f74cf3cf77 |
| name | stuart |
| project_id | ebae83a27e7a4409aea4311543b7dadb |
| username | stuart |
+------------+----------------------------------+
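
At this point we can double-check that everything is in place by listing the users we have just created; the output should look similar to this:

(venv)stack@aio:~$ openstack user list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| 9be9ad60d4594c288f5b3305e84e3e63 | gru    |
| 85a51778c59f4c66aed800f74cf3cf77 | stuart |
+----------------------------------+--------+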

Although we will not need it now, it’s common to create a special project, usually called ‘service’, with one user per deployed service, like nova or glance, so each of them can authenticate and validate tokens using its own account.

(venv)stack@aio:~$ openstack project create service --description "Service Project"
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| enabled | True |
| id | 7bd37e546f4741c6bfba687f35711ae5 |
| name | service |
+-------------+----------------------------------+

Finally, we need to create the service and its endpoint. These will be published in the Keystone service catalog, and will help services locate each other in the deployed cloud.

(venv)stack@aio:~$ openstack service create --name keystone \
> --description "Keystone Identity Service" \
> identity
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Keystone Identity Service        |
| enabled     | True                             |
| id          | 7796b8d0e5ff4267abf3b7766c670fb9 |
| name        | keystone                         |
| type        | identity                         |
+-------------+----------------------------------+
(venv)stack@aio:~$ openstack endpoint create --region RegionOne \
> --publicurl "http://aio:5000/v2.0" \
> --adminurl "http://aio:35357/v2.0" \
> --internalurl "http://aio:5000/v2.0" \
> keystone

+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| adminurl     | http://:$(admin_port)s/v2.0      |
| id           | 58081185adf94a82994d019043e3ab7e |
| internalurl  | http://:$(public_port)s/v2.0     |
| publicurl    | http://:$(public_port)s/v2.0     |
| region       | RegionOne                        |
| service_id   | 7796b8d0e5ff4267abf3b7766c670fb9 |
| service_name | keystone                         |
| service_type | identity                         |
+--------------+----------------------------------+

V2.0 vs V3 API ports

One of the differences between the v2.0 and v3 APIs is that in v3 there is no longer any distinction between the admin port (35357) and the public port (5000), so you can use either of them to manage the keystone service.

If you prefer to use the v3 API through the openstackclient CLI, you need to specify the API version and the correct URL of the service:

export OS_IDENTITY_API_VERSION=3
export OS_URL=http://aio:35357/v3
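
If everything is set correctly, any read-only command, for example listing our users, should now go through the v3 endpoint and return the same information as before:

(venv)stack@aio:~$ openstack user list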

starter-kit:compute (III) – Keystone

From Keystone’s README.rst: “Keystone provides authentication, authorization and service discovery mechanisms via HTTP primarily for use by projects in the OpenStack family. It is most commonly deployed as an HTTP interface to existing identity systems, such as LDAP.” Basically, it is the service where we will store and manage all our users and projects.

The installation is pretty straightforward. First, we’ll need to install a couple of development libraries that will be required later, when installing the keystone requirements.

root@aio:~# apt-get install libffi-dev libssl-dev

The next step is to create a ‘keystone’ user and the default directory where the configuration files will live, ‘/etc/keystone’, with the correct permissions.

root@aio:~# useradd -m keystone 
root@aio:~# mkdir /etc/keystone 
root@aio:~# chown keystone:keystone /etc/keystone

Once everything is ready, we need to create a mysql database and a user with the appropriate privileges to connect to it.

root@aio:~# mysql -u root -p
mysql> CREATE DATABASE keystone;
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'aio' IDENTIFIED BY 'keystone' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;
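
Before moving on, we can check that the new user can actually reach the database (the password is ‘keystone’, as defined in the GRANT statement above; depending on how ‘aio’ resolves, the host part of the grant may need adjusting):

root@aio:~# mysql -u keystone -pkeystone -h aio keystone -e 'SELECT 1;'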

Now, it’s time to clone the source from upstream, and to create and activate a virtual environment where we will install keystone with all its requirements:

keystone@aio:~$ git clone git://git.openstack.org/openstack/keystone 
keystone@aio:~$ virtualenv venv 
keystone@aio:~$ source venv/bin/activate 
(venv)keystone@aio:~$ cd keystone 
(venv)keystone@aio:~/keystone$ pip install pip --upgrade # The version installed by default is outdated
(venv)keystone@aio:~/keystone$ pip install -r requirements.txt 
(venv)keystone@aio:~/keystone$ pip install mysql-python 
(venv)keystone@aio:~/keystone$ python setup.py install

Keystone provides sample configuration files that we will need to copy to their default location:

(venv)keystone@aio:~/keystone$ cp -fr etc/* /etc/keystone/ 
(venv)keystone@aio:~/keystone$ mv /etc/keystone/keystone.conf.sample /etc/keystone/keystone.conf

The only initial change needed to get a working keystone service in our environment is to define the proper database connection string pointing to our mysql server. Under the [database] group, we should define it as:

connection = mysql://keystone:keystone@aio/keystone

After that, we need to populate the database with the required tables:

(venv)keystone@aio:~/keystone$ keystone-manage db_sync
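
If the sync worked, the previously empty database should now contain all the keystone tables; a quick query will confirm it:

(venv)keystone@aio:~/keystone$ mysql -u keystone -pkeystone -h aio keystone -e 'SHOW TABLES;'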

And now we are ready to start the service:

(venv)keystone@aio:~/keystone$ keystone-all

Of course, starting a service from the command line like this is not the best way to do it, but for our initial deployment it is more than enough. We’ll see how to create proper init scripts in the following posts.
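
With keystone-all running in one terminal, a quick way to confirm that the service is alive is to request the version document from another one; the answer should be a JSON blob similar to this (trimmed here for brevity):

keystone@aio:~$ curl -s http://aio:5000/v2.0
{"version": {"status": "stable", ..., "id": "v2.0", ...}}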

starter-kit:compute (II) – Initial Setup

We’ll use Ubuntu 14.04 LTS (aka Trusty) as the base OS, installed on a dedicated computer (baremetal). The same goals can be achieved using other distributions, or using a virtual machine or a container, but some requirements, like networking and nested KVM, would need to be adapted.

As a first step, we will be using a single machine to run all the services (all-in-one), but we will expand this later in the series to include multiple compute nodes and some HA features, to accomplish our goal of having a real production environment.

Although most mainstream distributions include packaged OpenStack software, we will be installing the services directly from source, using the latest available development version, so by the end of the series we’ll be targeting the Liberty release. The rest of the services needed by OpenStack will be installed using the distribution packaging system.

Necessary software

root@aio:~# apt-get install rabbitmq-server mysql-server git python-dev libmysqlclient-dev

In our case, we’ll call our node ‘aio’, but whatever you call your node, make sure that its hostname resolves properly to the address 127.0.0.1.
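
In practice, that means having a line like the following in ‘/etc/hosts’:

127.0.0.1    localhost aio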

And that’s all! Remember your mysql admin password for future needs, and we are ready to start the deployment of our first service: Keystone.

The importance of caching

Having a high-bandwidth connection is really useful when you need to retrieve different pieces from the Internet, but having a local mirror can really improve your build times.

As an example, building and deploying a full TripleO stack (seed, undercloud and overcloud VMs) without any local mirror or proxy takes (on my server) about 73 minutes, while using a local pypi mirror, an apt mirror and a squid proxy takes only 41 minutes. That is roughly a 40% improvement! (Although a lot of this time is spent booting and configuring the OpenStack VMs and services.)

In a smaller case, just to build the seed vm (no vm ever booted), the times are the following:

1.- No cache/proxy = 12m 31s
2.- Squid proxy = 10m 54s
3.- Pypi mirror = 9m 56s
4.- Pypi mirror + Squid proxy = 8m 31s
5.- Pypi mirror + Squid proxy + Apt mirror = 6m 57s

Obviously, the worse your bandwidth is, the more time you will save!

You can also use the pypi mirror for other tasks, like updating the venv of your project for testing. Let’s take ironic, for example. Keeping in mind that running the test suite itself takes ~25s (the rest of the time is spent recreating the virtualenv, due to the -r flag), the times are:

– Without pypi mirror # tox -r -epy27 = 7m 58s
– With pypi mirror # tox -r -epy27 = 7m 08s
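
For reference, pointing pip at a local mirror is just a matter of a couple of lines in ‘~/.pip/pip.conf’; the URL below is only a placeholder for wherever your own mirror is served:

[global]
index-url = http://pypi.local/simple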

So, hurray for local mirrors and proxies!


http://apt-mirror.github.io/
https://github.com/openstack-infra/pypi-mirror
http://www.squid-cache.org/
https://pypi.python.org/pypi/bandersnatch

Tip: Debugging tests in OpenStack

One easy way to debug tests in OpenStack with pdb is to include the statement:

import pdb; pdb.set_trace()

at the beginning of the failing test, and re-run it with the --nocapture option:

ghe@debian:glance(master)$ ./run_tests.sh -N glance.tests.functional.v1.test_api --nocapture
/home/ghe/github/openstack/glance/glance/tests/functional/v1/test_api.py(1170)test_delete_not_existing()
-> self.cleanup()
(Pdb) 


Happy sysadmin day!

Debian And OpenStack

In February 2012, I wrote:

“The major problem of Debian and OpenStack is Debian itself. For such a fast-moving project as OpenStack, having an “Everything-Ready-and-Included” release every two years is not viable. In the case of OpenStack, Squeeze is too old to handle it the way it is right now (too many things not directly related to OpenStack, like qemu-xen, python modules, etc., would have to be backported to make it functional, and with just volunteer packagers, that makes it a not-so-viable option). On the other hand, the next release is planned for 1Q 2013 (Wheezy being frozen in late June), so the Essex release will be outdated and Folsom will be the stable release by then, with the G release knocking on the door (Essex is supposed to be an LTS release, supported by Canonical/Ubuntu until around April 2017, so it is not that bad). Solutions? Forget about Squeeze, focus on Wheezy + Essex as an official release, and Folsom (or the G release) as a back-ported option. The other option (not viable in the short term) is to change the way Debian makes its releases, and try to have something more “componentized” (like Progeny tried to do some years ago) or “rolling-release”, so trending-topic these days. Neither GNOME, OpenStack, LibreOffice nor the linux kernel (just to name some) has a similar release cycle or the same core necessities to run in a system, so why try to release, prepare and stabilize everything at the same time, no matter what the status of the upstream project is? (Just in the case of OpenStack, and supposing everything is going according to schedule, with a 6-month release cycle vs. 2 years in Debian plus the frozen period, when Wheezy is released we are going to ship an outdated version, Essex (but LTS thanks to Canonical), when Folsom is 4 months old and the new release is coming in less than two months!) I agree this needs a lot more development, but I think it’s something that Debian should think about.”

Re-reading it, the situation is actually not that bad. We were lucky enough that the new Ubuntu LTS was released, and we can use the excuse of Essex being supported until 2017. But the key point remains the same: “Debian is huge and slow, while OpenStack (and so many other projects) are really fast-moving”.