CoreOS Cluster on OpenStack

This weekend at the Docker Cologne Meetup I had the honor of presenting a CoreOS cluster running on top of OpenStack, but I failed to provide slides or a step-by-step howto covering the deployment process and the pitfalls that might keep a deployment from succeeding.

So this guide shows how to download the latest CoreOS release, add it to Glance, and boot a 3-node CoreOS cluster on OpenStack, with some pointers to the CoreOS documentation on launching high-availability services using fleet.

Let's go. First, download and unpack the CoreOS 444.0.0 stable release on your OpenStack controller node:

$ wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
$ bunzip2 coreos_production_openstack_image.img.bz2

Source your OpenStack keystonerc file:
$ source keystonerc_admin

Add the CoreOS image to glance:

$ glance image-create --name "CoreOS Stable" --container-format bare --disk-format qcow2 --file coreos_production_openstack_image.img --is-public True

Grab your private net-id and image id for the next step to boot your CoreOS cluster:

$ neutron net-list
$ nova image-list
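If you script the next steps, you can extract both IDs from the CLI table output instead of copying them by hand. The snippet below is a self-contained sketch: the sample table stands in for real `neutron net-list` output, and the UUID and network name `private` are illustrative. In practice you would pipe the real command, e.g. `NET_ID=$(neutron net-list | awk '/ private /{print $2}')`.

```shell
# Hypothetical sample of `neutron net-list` output, embedded so the
# snippet runs standalone; replace with the real command in practice.
SAMPLE='+--------------------------------------+---------+
| id                                   | name    |
+--------------------------------------+---------+
| 3fddd9b7-fa46-4418-ac12-aff7678e5da8 | private |
+--------------------------------------+---------+'

# awk splits on whitespace, so $2 is the UUID column of the matching row.
NET_ID=$(printf '%s\n' "$SAMPLE" | awk '/ private /{print $2}')
echo "$NET_ID"
```

The same pattern works for `nova image-list` to grab the image ID.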

Create your cloud-config YAML file:

$ vi cloud-config.yaml

and insert the following lines into it:

#cloud-config

coreos:
  etcd:
    # generate a new token for each unique cluster from https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/e4e7cab176779242c4d3cb9ea1076629
    # multi-region and multi-cloud deployments need to use $public_ipv4
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
ssh_authorized_keys:
  # include one or more SSH public keys
  - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAsNSWk1

Important note: you must create a new discovery token each time you boot up a new cluster and update your YAML file with the new discovery URL!
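Refreshing the token can be scripted. The sketch below is self-contained: the `cat` step writes a stand-in file so the snippet runs anywhere, and NEW_URL is hard-coded; on a real deployment, skip the `cat`, fetch a fresh token with `NEW_URL=$(curl -s https://discovery.etcd.io/new)`, and run only the `sed` against your existing cloud-config.yaml.

```shell
# Stand-in cloud-config.yaml so the snippet is self-contained;
# skip this step on a real deployment.
cat > cloud-config.yaml <<'EOF'
#cloud-config
coreos:
  etcd:
    discovery: https://discovery.etcd.io/oldtokenoldtokenoldtoken
EOF

# Normally: NEW_URL=$(curl -s https://discovery.etcd.io/new)
NEW_URL="https://discovery.etcd.io/e4e7cab176779242c4d3cb9ea1076629"

# Replace the discovery line in place with the fresh token URL.
sed -i "s|discovery: .*|discovery: ${NEW_URL}|" cloud-config.yaml
grep discovery: cloud-config.yaml
```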

Open TCP ports 4001 and 7001 in your (default) security group via Horizon or from the CLI.
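From the CLI this looks like the following (the 10.0.0.0/24 CIDR is an assumption matching the tenant subnet used in this walkthrough; adjust it to yours, or use 0.0.0.0/0 if you really want the ports open to everyone):

```
$ nova secgroup-add-rule default tcp 4001 4001 10.0.0.0/24
$ nova secgroup-add-rule default tcp 7001 7001 10.0.0.0/24
```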

Boot your 3 node CoreOS Cluster with:

$ nova boot --user-data ./cloud-config.yaml --image ac32bdd1-fc3c-44c9-ba89-02aedcb1b45a --key-name mykey --flavor m1.small --num-instances 3 --security-groups default --nic net-id=3fddd9b7-fa46-4418-ac12-aff7678e5da8 coreos

At this point your cluster is up and running; ssh into your CoreOS machines and verify that the cluster works:

$ ssh -i mykey core@<private-ip>

core@host-10-0-0-18 ~ $ fleetctl list-machines
MACHINE IP METADATA
b2746bb6... 10.0.0.13 -
652149d4... 10.0.0.16 -
b84263b0... 10.0.0.15 -

And set a key/value pair in your distributed etcd key-value store on the first machine:

core@host-10-0-0-13 ~ $ etcdctl set /message hello-coreos-world
hello-coreos-world

and get the message on another machine:

core@host-10-0-0-15 ~ $ etcdctl get /message
hello-coreos-world
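You can also follow a key live from a third machine: `etcdctl watch` blocks until the key is next updated and then prints the new value, which is a quick way to see replication happen as you set values elsewhere:

```
core@host-10-0-0-16 ~ $ etcdctl watch /message
```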

Troubleshooting:

If you assign a public floating IP to your CoreOS instances, you might want to update your environment with:

core@host-10-0-0-13 ~ $ sudo /usr/share/oem/bin/coreos-setup-environment /etc/environment
core@host-10-0-0-13 ~ $ cat /etc/environment
COREOS_PUBLIC_IPV4=<your-floating-ip will be updated here>
COREOS_PRIVATE_IPV4=10.0.0.13

If something goes wrong, verify that cloud-init worked and grab your user metadata:

host-10-0-0-13 ~ # journalctl -u ec2-cloudinit
-- Logs begin at Sat 2014-05-24 13:25:44 UTC, end at Sat 2014-05-24 13:26:39 UTC. --
May 24 13:25:49 host-10-0-0-15 systemd[1]: Starting Cloudinit from EC2-style metadata...
May 24 13:25:49 host-10-0-0-15 coreos-cloudinit[4026]: 2014/05/24 13:25:49 Fetching user-data from datasource of type "metadata-servic
May 24 13:25:49 host-10-0-0-15 coreos-cloudinit[4026]: 2014/05/24 13:25:49 Parsing user-data as cloud-config
May 24 13:25:49 host-10-0-0-15 coreos-cloudinit[4026]: 2014/05/24 13:25:49 Wrote etcd config file to filesystem
May 24 13:25:49 host-10-0-0-15 coreos-cloudinit[4026]: 2014/05/24 13:25:49 Calling unit command 'start etcd.service'
May 24 13:25:49 host-10-0-0-15 coreos-cloudinit[4026]: 2014/05/24 13:25:49 Result of 'start etcd.service': done
May 24 13:25:49 host-10-0-0-15 coreos-cloudinit[4026]: 2014/05/24 13:25:49 Calling unit command 'start fleet.service'
May 24 13:25:49 host-10-0-0-15 coreos-cloudinit[4026]: 2014/05/24 13:25:49 Result of 'start fleet.service': done
May 24 13:25:49 host-10-0-0-15 systemd[1]: Started Cloudinit from EC2-style metadata.
May 24 13:25:49 host-10-0-0-15 systemd[1]: Starting Cloudinit from EC2-style metadata... 

host-10-0-0-13 ~ # curl http://169.254.169.254/latest/user-data

The above command should return the contents of your YAML file!

Now that your cluster is running, hop over to the CoreOS documentation and run distributed containers on your cluster using fleet and the sidekick model:

https://coreos.com/docs/launching-containers/launching/launching-containers-fleet/
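To give a taste of what awaits you there, here is a minimal fleet unit sketch adapted from the pattern in the CoreOS docs; the unit name and the busybox container are illustrative:

```
[Unit]
Description=MyApp
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill busybox1
ExecStartPre=-/usr/bin/docker rm busybox1
ExecStartPre=/usr/bin/docker pull busybox
ExecStart=/usr/bin/docker run --name busybox1 busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
ExecStop=/usr/bin/docker stop busybox1
```

Save it as myapp.service, schedule it onto the cluster with `fleetctl start myapp.service`, and check where it landed with `fleetctl list-units`.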

Further reading:

Docker is the best fit for continuous delivery, by Mohit Arora.

Mohit Arora wrote an excellent series of blog posts on building a continuous delivery pipeline using CoreOS, Packer, Ansible and Go:

http://2mohitarora.blogspot.com/2014/05/manage-docker-containers-using-coreos.html

http://2mohitarora.blogspot.com/2014/05/manage-docker-containers-using-coreos_18.html

http://2mohitarora.blogspot.com/2014/05/manage-docker-containers-using-coreos_22.html

http://2mohitarora.blogspot.com/2014/05/manage-docker-containers-using-coreos_26.html

 
