Andy Smith's Blog

  • My kubernetes setup

    Updated 2017/01/03: Modified setup to use Xen instead of Qemu for master+nodes, upgrade Kubernetes to 1.5.1, use CoreOS beta instead of alpha.

This is a description of my local kubernetes setup. If you want to set up kubernetes yourself, chances are you should follow the proper guide. This is intended to be the reference that I was desperate for when I set out doing this a few months ago.

I wanted to run my own kubernetes deployment to run applications and experiment. I didn't just want to try out kubernetes; I wanted to run it 24/7. From the looks of it, the easiest way to do this is using Google Compute Engine or AWS. The problem with both of these is that to run 24/7 you end up spending quite a lot of money every month just to keep a basic install running.

After considering a bunch of options (including running a Raspberry Pi cluster) I came to the conclusion that my best setup would be to run a single physical server that hosted a bunch of virtual machines.

I picked Xen as my hypervisor, Ubuntu as my "dom0" (more on this later) and CoreOS as my kubernetes host. Here's my setup.


    • Dell T20 Server
    • Intel i5-4590
    • 16 GB RAM
    • 120 GB SSD


Hypervisor: Xen Hypervisor / Ubuntu 16.04. I found myself thoroughly confused by all this talk of "dom0", but the gist is: you install Ubuntu 16.04 on your server, then install Xen (via apt-get), which installs itself as the main OS with your original Ubuntu install demoted to a virtual machine. This virtual machine is called "dom0" and is what you use to manage all your other virtual machines.

    (Another source of confusion - Xen is not XenServer, which is a commercial product you can safely ignore).

Kubernetes OS: CoreOS Beta Channel. Right now Stable does not include the kubelet (which we need), so I'm using Beta. I picked CoreOS as it tries to support kubernetes right out of the box.

    Installing Xen

On a fresh Ubuntu 16.04, install Xen, virtinst and genisoimage (libvirt comes along as a dependency). Set Xen as the default boot entry and restart. virtinst gives us a CLI we will use to launch virtual machines later; genisoimage provides the mkisofs we need further down.

    sudo apt-get install xen-hypervisor-amd64 virtinst genisoimage
    sudo sed -i 's/GRUB_DEFAULT=.*\+/GRUB_DEFAULT="Xen 4.1-amd64"/' /etc/default/grub
    sudo update-grub
    sudo reboot

    What comes back up should be the original Ubuntu install running as a virtual machine on the Xen hypervisor. Because it's the original install we don't know for sure that anything actually changed. We can check with xl:

    root@xen:~# xl list
    Name                                        ID   Mem VCPUs      State   Time(s)
    Domain-0                                     0 19989     4     r-----      75.3

    Looks good!

    Installing Kubernetes

    Kubernetes comes with these nifty scripts that basically set up your whole cluster for you. The problem I found with this is I wanted to manage (and understand) the pieces of software myself. I didn't want a mysterious bash script that promised to take care of it all for me.

Instead I've created my own set of mysterious scripts, slightly less generated and templated, that may be useful to some as examples. This is how to use them.

We're going to use as little as possible of my stuff - the following git repo contains just 4 CoreOS cloud-config files. These define the basic configuration (network setup, applications to run). There's also a piece of config used to generate our SSL certificates for the cluster.

So, grab my config from Github and grab the latest CoreOS Beta:

    sudo su
    mkdir -p /var/lib/libvirt/images/
    cd /var/lib/libvirt/images/
    git clone -b blog_post_v2 coreos
    cd coreos
    wget -O - | bzcat > coreos_production_xen_image.bin

    Now create a disk and config disk for master, node1, node2, node3:

    #To pad out extra space in the image.
    #TODO: Undoubtedly a better way than this
    dd if=/dev/zero of=tempfile bs=1G count=2
    cat tempfile >> coreos_production_xen_image.bin
    cp coreos_production_xen_image.bin master1.bin
    cp coreos_production_xen_image.bin node1.bin
    cp coreos_production_xen_image.bin node2.bin
    cp coreos_production_xen_image.bin node3.bin
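On the TODO above: one better way (assuming GNU coreutils) is a sparse truncate, run in place of the dd and cat steps before copying the image. The extra space reads back as zeros, just like the appended tempfile, but completes instantly and takes no disk space until written:

```shell
# Grow the image by 2 GiB of sparse zeros instead of dd + cat.
# Reads back as zeros; uses no real disk space until the guest
# writes to it.
truncate -s +2G coreos_production_xen_image.bin
```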

    You may need to generate an SSH key if you haven't already:

    ssh-keygen -t rsa -b 4096 -C "$USER@$HOSTNAME"

We then put our SSH key into the cloud-configs for our nodes:

KEY=$(cat ~/.ssh/id_rsa.pub)
    sed "s#SSH_KEY#$KEY#g" < master1/openstack/latest/user_data.tmpl > master1/openstack/latest/user_data
    sed "s#SSH_KEY#$KEY#g" < node1/openstack/latest/user_data.tmpl > node1/openstack/latest/user_data
    sed "s#SSH_KEY#$KEY#g" < node2/openstack/latest/user_data.tmpl > node2/openstack/latest/user_data
    sed "s#SSH_KEY#$KEY#g" < node3/openstack/latest/user_data.tmpl > node3/openstack/latest/user_data
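The four substitutions could also be written as a loop. Here's a standalone sketch of that, run in a scratch directory with dummy templates and a dummy key (standing in for your real public key) so it works anywhere:

```shell
# Loop form of the four sed lines above, demonstrated against
# dummy templates in /tmp/keydemo.
rm -rf /tmp/keydemo && mkdir -p /tmp/keydemo && cd /tmp/keydemo
KEY="ssh-rsa AAAA-example-key user@host"   # stand-in for your real public key
for host in master1 node1 node2 node3; do
  mkdir -p "$host/openstack/latest"
  echo 'ssh_authorized_keys: ["SSH_KEY"]' > "$host/openstack/latest/user_data.tmpl"
  sed "s#SSH_KEY#$KEY#g" \
    < "$host/openstack/latest/user_data.tmpl" \
    > "$host/openstack/latest/user_data"
done
grep -h ssh-rsa /tmp/keydemo/*/openstack/latest/user_data
```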

    We also need to generate our certificates:

    cd certs
    openssl genrsa -out ca-key.pem 2048
    openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"
    openssl genrsa -out apiserver-key.pem 2048
    openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
    openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf
    cd ..
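It's worth sanity-checking that the API server certificate actually chains to the CA before baking it into the cloud-config. Here's a standalone mirror of the steps above in a scratch directory (minus -config openssl.cnf, so this throwaway cert has no SANs); run the final openssl verify against your real certs/ afterwards:

```shell
# Mirror of the cert generation above in /tmp/certdemo, ending with
# a chain check. The real run also passes -config openssl.cnf.
rm -rf /tmp/certdemo && mkdir -p /tmp/certdemo && cd /tmp/certdemo
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver"
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365
# The generated cert should chain back to our CA:
openssl verify -CAfile ca.pem apiserver.pem
```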

And then put the certificates we generated into the master node's cloud-config:

#Total hack, so it's indented correctly when we move it into the .yml
    sed -i 's/^/        /' certs/*.pem
    sed -i $'/CA.PEM/ {r certs/ca.pem\n d}' master1/openstack/latest/user_data
    sed -i $'/APISERVER.PEM/ {r certs/apiserver.pem\n d}' master1/openstack/latest/user_data
    sed -i $'/APISERVER-KEY.PEM/ {r certs/apiserver-key.pem\n d}' master1/openstack/latest/user_data
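The sed incantation above is doing two things: at the marker line it reads in the replacement file (r), then deletes the marker itself (d). The $'…' quoting is there to embed the literal newline sed requires between the two commands. A standalone demo:

```shell
# Demo of the read-and-replace sed trick: the CA.PEM marker line is
# swapped for the contents of cert.pem.
printf 'before\nCA.PEM\nafter\n' > /tmp/target.txt
printf 'cert-line-1\ncert-line-2\n' > /tmp/cert.pem
sed -i $'/CA.PEM/ {r /tmp/cert.pem\n d}' /tmp/target.txt
cat /tmp/target.txt   # -> before, cert-line-1, cert-line-2, after
```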

    Configs done, we can validate to double check:

    curl '' -X PUT --data-binary '@master1/openstack/latest/user_data' | python -mjson.tool
    curl '' -X PUT --data-binary '@node1/openstack/latest/user_data' | python -mjson.tool
    curl '' -X PUT --data-binary '@node2/openstack/latest/user_data' | python -mjson.tool
    curl '' -X PUT --data-binary '@node3/openstack/latest/user_data' | python -mjson.tool

If that passed ("null" back from the server), first create an ISO for each machine to get the config files into our Xen VMs:

    mkisofs -R -V config-2 -o master1-config.iso master1/
    mkisofs -R -V config-2 -o node1-config.iso node1/
    mkisofs -R -V config-2 -o node2-config.iso node2/
    mkisofs -R -V config-2 -o node3-config.iso node3/

    Then create the CoreOS virtual machines using those disks and cloud-configs:

    xl create master1.cfg
    xl create node1.cfg
    xl create node2.cfg
    xl create node3.cfg

    This will start 4 virtual machines running CoreOS and our cloud configs.

    What happens:

    • Download flannel image
    • Kubelet starts and downloads hyperkube
    • Containers started for api server, controller manager, scheduler on master
    • Container for kube-proxy starts on the nodes

If you need to, you can attach to the console and monitor a node booting up:

    xl console master1

You can also SSH onto the master and check journalctl:

    ssh core@ 
    journalctl -f

    So.. did it work? Let's try using kubectl (which we need to install locally first):

    curl -O
    chmod +x kubectl
    mv kubectl /usr/local/bin/kubectl

    Let's see:

root@xen# kubectl -s get nodes
NAME      STATUS    AGE
          Ready     1m
          Ready     1m
          Ready     1m
          Ready     1m

One last thing: if we try to list the pods (running processes) we won't get anything yet. We need to create the "kube-system" namespace, which can be done easily:

    curl -H "Content-Type: application/json" -XPOST -d'{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"kube-system"}}' ""
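The request body above is easier to read kept in a file; the same JSON can then be replayed with curl's @file syntax (or kubectl create -f, if you have kubectl pointed at the API server):

```shell
# The namespace manifest from the curl above, written out to a file.
cat > kube-system-ns.json <<'EOF'
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": { "name": "kube-system" }
}
EOF
```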

Now listing the pods again shows the kubernetes components running:

    NAME                                      READY     STATUS             RESTARTS   AGE
    kube-apiserver-            1/1       Running            0          3m
    kube-controller-manager-   1/1       Running            1          4m
    kube-proxy-                  1/1       Running            1          4m
    kube-proxy-                1/1       Running            0          3m
    kube-proxy-                  1/1       Running            0          3m
    kube-proxy-                  1/1       Running            0          3m
    kube-scheduler-            1/1       Running            0          3m



So what have we actually done? We've turned an Ubuntu server into a Xen hypervisor. On that hypervisor we've created 4 virtual machines, all running CoreOS. Using the CoreOS configs from my git repo, we've set up 1 CoreOS install running the master kubernetes components and 3 others running the node components.

There are many ways to get Kubernetes running on CoreOS. The particular way we have set it up is as follows.

    • flannel service - This handles our networking. It allows a container on one node to speak to a container on another node.
    • etcd service - This is where kubernetes persists state.
    • docker service - Docker is how this kubernetes setup launches images.
    • kubelet service - This is the only kubernetes component installed as a system service. We use the kubelet to join our kubernetes cluster and launch other kubernetes applications.

As well as system services, we've also installed the following as services managed by kubernetes itself. We do this by placing kubernetes manifests in /etc/kubernetes/manifests/; the kubelet service monitors this directory and launches applications based on what it finds.

    • kube-apiserver
    • kube-scheduler
    • kube-controller-manager
    • kube-proxy
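For illustration, a manifest in that directory looks roughly like this. This is a hypothetical sketch, not a file from my repo; the image tag and flags are assumptions:

```yaml
# Hypothetical /etc/kubernetes/manifests/kube-proxy.yaml sketch.
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.5.1_coreos.0   # assumed tag
    command:
    - /hyperkube
    - proxy
    - --master=https://master1   # assumed master address
```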

That's all! We've now got a fully functioning kubernetes cluster. Time to play with it.


  • cloud-config validate without cloudinit

    A quick hack to let you validate your CoreOS cloud-config user_data file without having to install coreos-cloudinit (though you do need internet access):

    curl '' -X PUT --data-binary '@user_data'

This makes use of CoreOS's online validator, but without having to copy/paste, which can be a little fiddly when SSH'd somewhere.


  • Running manuka docker honeypot setup

I've just got dionaea and kippo running in docker images to make a quick-to-set-up honeypot. The project is called manuka.

    Here's how to get manuka running on Ubuntu 14.04:

    #install docker (skip if you have docker 1.3+ already)
[ -e /usr/lib/apt/methods/https ] || {
  sudo apt-get update
  sudo apt-get install apt-transport-https
}
    sudo apt-key adv --keyserver hkp:// --recv-keys \
    sudo sh -c "echo deb docker main > \
    sudo apt-get update
    sudo apt-get -y install lxc-docker
    #install docker-compose
    sudo apt-get install -y python-pip
    sudo pip install docker-compose
    #run manuka
    curl -q >
    chmod +x
    sudo ./

You have just set up dionaea and kippo.

    Let's try out kippo:

    ssh root@localhost
    # > Password: <12345>
    # > root@svr03:~#

    And dionaea:

    sudo nmap  -d -p 445 --script=smb-vuln-ms10-061
    ls var/dionaea/bistreams
    # > total 4.0K
    # > drwxr-xr-x 2 nobody nogroup 4.0K Mar 16 23:21 2015-03-16

    All logs and files will be saved under $PWD/var/.

    Happy to hear any bug reports and feature requests on Github.


  • Docker volume and docker VOLUME

    I've been fiddling with docker lately and it took me a while to come to this realisation. The docker volume command line argument and the docker VOLUME Dockerfile instruction are a bit different.

    The docker volume command line argument:

    docker run -v /var/logs:/var/logs ubuntu echo test

    And the docker VOLUME Dockerfile instruction:

    VOLUME /var/logs

    The Dockerfile VOLUME instruction doesn't support host directories.
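To make the distinction concrete, here's a minimal Dockerfile sketch:

```dockerfile
FROM ubuntu
# Declares /var/logs as an anonymous volume. There is no way to name
# a host directory here; only `docker run -v /host/path:/var/logs`
# can bind a host path, and only at container start.
VOLUME /var/logs
```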

As discussed in this stackoverflow post, it looks like this is intentional: naming host directories inside the image would make it less portable.