RDO OpenStack Juno ML2 VXLAN 2 Node Deployment On CentOS 7 With Packstack

The next stable OpenStack release, codenamed “Juno”, is slated for October 16, 2014. On October 1st and 2nd Red Hat's RDO community held a test day for the OpenStack Juno milestone 3 release, with great results in finding bugs and making OpenStack better than ever. I wanted to see how things were going, apply a few workarounds to get OpenStack Juno running, and take it with me to the OpenStack Summit to prepare for our OpenStack Distros and Deployment Workshop at the OpenStack-X Cologne Meetup.

The first thing I had to learn was that RDO Juno runs only on CentOS 7 / RHEL 7 or Fedora 21, and there are currently no plans to support it on older EPEL6 systems. So the first step was to deploy and configure two CentOS 7 machines for my desired ML2 VXLAN setup, figure out how to configure the private and public networks, and provide the right settings in the Packstack answer file.

The CentOS boxes each have two interfaces: "enp2s0" is the public interface and "eno1" is the private one. Packstack is bound to the private interface "eno1" on the controller node with the IP 20.0.0.11, and the compute node has the private IP 20.0.0.12. In this setup I'm also using the controller node as a compute node (only for testing; in a real-world scenario you have to separate the roles and distribute the services across different nodes).

The public interfaces "enp2s0" are connected to my xxx.xxx.xxx.xx/xx external public network. Since I'd like to assign public floating IPs and be able to SSH into my instances from the outside world (not a good idea for security reasons; a VPN connection would be better), I had to create an external bridge interface ifcfg-br-ex and plug the public "enp2s0" interface into it as an Open vSwitch port, similar to my previous installations for OpenStack Havana and Icehouse on CentOS 6.5.

But before doing that I decided to install the RDO packages and run Packstack in all-in-one (AIO) mode on a CentOS VM with the following steps:

$ yum -y update
$ yum install https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm
$ vi /etc/selinux/config

and changed SELINUX=enforcing to SELINUX=permissive
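
If you'd rather do this non-interactively, here's a minimal sketch (my own addition, not part of the original steps) that makes the change persistent and applies it to the running system right away:

$ sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
$ setenforce 0    # switch the running kernel to permissive immediately
$ getenforce      # should now print "Permissive"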

Then I installed Packstack:

$ yum install -y openstack-packstack

followed by this command to install the RDO all-in-one deployment:

$ packstack --allinone

After this first run, the first error I got was:

ERROR : Error appeared during Puppet run: 10.0.0.19_provision_glance.pp
Error: Could not prefetch glance_image provider 'glance': Execution of '/usr/bin/glance -T services -I glance -K d6debd0d0ae14a04 -N
....

and the first workaround was to set "CONFIG_PROVISION_DEMO=n" in the Packstack answer file.
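
In case you haven't generated an answer file yet, a quick sketch of how that could look (the file name is just my example):

$ packstack --gen-answer-file=packstack-answers-juno.txt
$ sed -i 's/^CONFIG_PROVISION_DEMO=.*/CONFIG_PROVISION_DEMO=n/' packstack-answers-juno.txt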

There was another warning after the initial packstack run:

* Warning: NetworkManager is active on 20.0.0.11. OpenStack networking currently does not work on systems that have the Network Manager service enabled.

I'm still not sure whether this warning can be ignored, but I ran something similar to this:

$ systemctl stop NetworkManager.service
$ systemctl disable NetworkManager.service
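
Keep in mind that with NetworkManager disabled, the classic network service has to bring the ifcfg files up instead; a minimal sketch of what I'd run here (my assumption, not spelled out in the original steps):

$ systemctl enable network.service
$ systemctl start network.service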

Then I re-ran Packstack with:

$ packstack --answer-file=packstack-answers-xx

The installation was successful and I could log into the Horizon dashboard and discover some new things there.

Note: after the initial run you can find the admin password for Horizon in the keystonerc_admin file:

[root@controller ~]# cat keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=xxxxxxxxxxx
export OS_AUTH_URL=http://20.0.0.11:5000/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '
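
To quickly check that the core services answer, you can source the credentials and query a couple of them; a small sketch using the standard Juno-era clients (not part of the original steps):

$ source keystonerc_admin
$ keystone service-list    # the registered service catalog entries
$ nova service-list        # the nova services should report state "up"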

After some trial-and-error attempts I had to re-configure the "enp2s0" interfaces on both servers as Open vSwitch ports attached to br-ex, as shown below, and had to add the bridge interface br-ex on the compute node by hand:

[root@compute ~]#  ovs-vsctl add-br br-ex

And created the ifcfg-br-ex file by hand:

[root@xxx]# cd /etc/sysconfig/network-scripts/

$ vi ifcfg-br-ex

DEVICE=br-ex
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSIntPort
OVS_BRIDGE=br-ex
USERCTL=no
BOOTPROTO=none
HOTPLUG=no
IPADDR=<public ip>
NETMASK=<public netmask>
GATEWAY=<public gateway>
DNS1=8.8.8.8

And changed the public NIC "enp2s0" as follows on both servers:

DEVICE=enp2s0
ONBOOT=yes
NETBOOT=yes
IPV6INIT=no
BOOTPROTO=none
NAME=enp2s0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
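
After writing both files it's worth restarting the network and checking that enp2s0 really ended up on br-ex; a quick sanity check I'd run (my own addition):

$ systemctl restart network.service
$ ovs-vsctl list-ports br-ex    # should list enp2s0 as a port of the bridge
$ ip addr show br-ex            # the public IP should now live on br-ex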

I rebooted the servers and adapted the VXLAN settings in the answer file as follows:

CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eno1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
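
You can paste these lines into the answer file with vi, or apply them from the shell; a rough sketch (assuming the CONFIG_* lines above are saved in a file called vxlan-settings.txt, which is my own naming):

$ while IFS= read -r line; do
    sed -i "s|^${line%%=*}=.*|${line}|" packstack-rdo-ml2-vxlan-2-node.txt
  done < vxlan-settings.txt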

Then I ran Packstack again:

$ packstack --answer-file=packstack-rdo-ml2-vxlan-2-node.txt

After this final run you should verify that Packstack created the right Open vSwitch ports for br-ex, br-int and br-tun by running "ovs-vsctl show":

[root@controller ~(keystone_osxu)]# ovs-vsctl show
b043aa9a-8f62-41d1-a34c-dad7fc886cbf
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch

....
... the output will be quite long :-)
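
Another quick check I'd add at this point (not part of the original write-up) is to confirm that the Neutron agents on both nodes report as alive:

$ source keystonerc_admin
$ neutron agent-list    # the Open vSwitch, DHCP, L3 and metadata agents should show ":-)" in the "alive" column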

Now you should be able to create your first project (tenant) "osx" and the user "osxu", assign the admin role to the user, create the internal and external networks, create a router, attach it to your internal subnet and set its gateway to the external network with the following commands:

[root@controller ~]# source keystonerc_admin
[root@controller ~(keystone_admin)]# keystone tenant-create --name osx
[root@controller ~(keystone_admin)]# keystone user-create --name osxu --pass secret
[root@controller ~(keystone_admin)]# keystone user-role-add --user osxu --role admin --tenant osx
[root@controller ~(keystone_admin)]# cp keystonerc_admin keystonerc_osx
[root@controller ~(keystone_admin)]# vi keystonerc_osx
export OS_USERNAME=osxu
export OS_TENANT_NAME=osx
export OS_PASSWORD=secret
export OS_AUTH_URL=http://20.0.0.11:5000/v2.0/
export PS1='[\u@\h \W(keystone_osxu)]\$ '
[root@controller ~(keystone_admin)]# source keystonerc_osx
[root@controller ~(keystone_osxu)]# neutron net-create osxnet
[root@controller ~(keystone_osxu)]# neutron subnet-create osxnet 20.0.0.0/24 --name osx_subnet --dns-nameserver 8.8.8.8
[root@controller ~(keystone_osxu)]# neutron net-create ext_net --router:external=True
[root@controller ~(keystone_osxu)]# neutron subnet-create --gateway <gateway> --disable-dhcp --allocation-pool start=xxx.xxx.xxx.xxx,end=xxx.xxx.xxx.xxx ext_net xxx.xxx.xxx.xxx/xx --name ext_subnet
[root@controller ~(keystone_osxu)]# neutron router-create router_osx
[root@controller ~(keystone_osxu)]# neutron router-interface-add router_osx osx_subnet
[root@controller ~(keystone_osxu)]# neutron router-gateway-set router_osx ext_net
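
To double-check the resulting topology before launching instances, something like this should do (a quick sketch, not part of the original commands):

$ neutron net-list                        # osxnet and ext_net should both appear
$ neutron router-port-list router_osx     # one port on the internal subnet plus the gateway port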

We're almost done; you should now be able to log into the Horizon dashboard.

First navigate in the Horizon dashboard to "Project --> Compute --> Access & Security --> Key Pairs", then create and download a key pair, e.g. osx-key.pem. Next, navigate to "Security Groups" and add a rule to allow access to your instances through SSH on port 22.
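
If you prefer the command line over Horizon for this step, a minimal sketch using the Juno-era clients (the key pair name is my choice to match the file above, and the rule goes into the default security group):

$ nova keypair-add osx-key > osx-key.pem && chmod 600 osx-key.pem
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0     # allow SSH from anywhere
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0    # optional: allow ping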

Download the cirros image and add it to glance:

$ wget http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
$ glance image-create --container-format=bare --disk-format=qcow2 --name=cirros --is-public=true < cirros-0.3.3-x86_64-disk.img

Launch a cirros instance through Horizon, create a floating IP, assign it to the instance and SSH into it with:

$ ssh -i osx-key.pem cirros@<floating-ip>
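
The same steps can also be done from the CLI; a rough sketch (flavor, image and key names are the ones used above, the network ID has to be taken from your own environment):

$ nova boot --flavor m1.tiny --image cirros --key-name osx-key --nic net-id=<osxnet-id> cirros-test
$ neutron floatingip-create ext_net                  # note the returned floating IP
$ nova floating-ip-associate cirros-test <floating-ip>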

And enjoy your OpenStack cloud powered by the RDO Juno release :-)

Update October 16th
Juno, the 10th OpenStack release, was out today. I updated the RDO milestone 3 (beta) packages to the latest rc2 and rc3 RDO packages and got some help from the lovely RDO community on RDO's mailing list to upgrade to the latest version.

Here are the steps: first I stopped all OpenStack services on the controller and compute nodes (openstack-service stop) and ran:

$ yum update

And started the services again:

$ openstack-service start

After this step Neutron didn't work anymore, and I had to run Packstack again and do a neutron-db-manage like this:

[root@controller ~(keystone_admin)]# neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head
INFO  [alembic.migration] Context impl MySQLImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
INFO  [alembic.migration] Running upgrade juno -> c00e3802ef7, empty message

I was told that this step is only needed for pre-release updates. Once Juno has settled, you should not need to run migrations after each update, because it's guaranteed that no updates in the stable branches touch the database schema.

But there's another handy shortcut with the openstack-db utility to update the DB schema:

openstack-db --service <service> --update

So alternatively you can run:

[root@controller ~(keystone_osxu)]# openstack-db --service neutron --update
INFO  [alembic.migration] Context impl MySQLImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
INFO  [alembic.migration] Context impl MySQLImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
Complete!

[root@controller ~(keystone_osxu)]# openstack-db --service keystone --update
Complete!

What's coming next
The next parts of this post are going to deal with extending this 2-node deployment to a 4-node deployment, separating networking from the controller, adding a new compute host, and trying out Heat and the current status of the new "Nova Docker Driver" hypervisor implementation.

Follow-Up for this post
I'll post updates to this post via my Twitter account @kaffamanesh.

Useful links:
https://openstack.redhat.com/RDO_test_day_Juno_milestone_3
https://openstack.redhat.com/Using_VXLAN_Tenant_Networks
http://bderzhavets.wordpress.com/2014/07/29/rdo-setup-two-real-node-controllercompute-icehouse-neutron-ml2ovsvxlan-cluster-on-centos-7/
http://www.astroarch.com/2014/06/rhev-upgrade-saga-installing-open-vswitch-on-rhel-7/
https://blogs.oracle.com/ronen/entry/diving_into_openstack_network_architecture
https://wiki.openstack.org/wiki/Neutron/DVR
https://kashyapc.fedorapeople.org/virt/openstack/rdo/IceHouse-Nova-Neutron-ML2-GRE-OVS.txt

 
