Andy Smith's Blog

My kubernetes setup

This is a description of my local kubernetes setup. If you want to set up kubernetes yourself, chances are you should follow the proper guide. This is intended to be the reference I was desperate for when I set out doing this a few months ago.

I wanted to run my own kubernetes deployment to run applications and experiment. I didn't just want to try out kubernetes, I wanted to run it 24/7. From the looks of it, the easiest way to do this is Google Compute Engine or AWS. The problem with both is that running 24/7 means spending quite a lot of money every month just to keep a basic install going.

After considering a bunch of options (including running a Raspberry Pi cluster) I came to the conclusion that my best setup would be a single physical server hosting a bunch of virtual machines.

I picked Xen as my hypervisor, Ubuntu as my "dom0" (more on this later) and CoreOS as my kubernetes host. Here's my setup.

Hardware

  • Dell T20 Server
  • Intel i5-4590
  • 16 GB RAM
  • 120 GB SSD

Software

Hypervisor: Xen Hypervisor / Ubuntu 16.04. I found myself thoroughly confused by all this talk of "dom0", but the gist is: you install Ubuntu 16.04 on your server, then install Xen (via apt-get), which inserts itself underneath as the hypervisor, with your original Ubuntu install carrying on as a virtual machine. That virtual machine is called "dom0" and is what you use to manage all your other virtual machines.

(Another source of confusion - Xen is not XenServer, which is a commercial product you can safely ignore).

Kubernetes OS: CoreOS Alpha Channel. I picked CoreOS because it tries to support kubernetes right out of the box. Right now the Stable channel doesn't include the kubelet (which we need), so I'm using Alpha.

Installing Xen

On a fresh Ubuntu 16.04, install Xen, libvirt and virtinst, set Xen as the default GRUB boot entry and restart. virtinst gives us the CLI we will use to launch virtual machines later.
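
Since the guests later on are created with --hvm, it's worth a quick check that the CPU's virtualization extensions are switched on in the BIOS (any count greater than 0 is fine):

egrep -c '(vmx|svm)' /proc/cpuinfo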

sudo apt-get install xen-hypervisor-amd64 virtinst
sudo sed -i 's/GRUB_DEFAULT=.*/GRUB_DEFAULT="Xen 4.1-amd64"/' /etc/default/grub
sudo update-grub
sudo reboot
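
The GRUB entry name above ("Xen 4.1-amd64") depends on the Xen version apt gives you, so the sed is really just a sketch. If the default doesn't change after a reboot, list the exact menu entry names and set GRUB_DEFAULT by hand:

sudo grep -E "^(menuentry|submenu)" /boot/grub/grub.cfg | cut -d"'" -f2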

What comes back up should be the original Ubuntu install running as a virtual machine on the Xen hypervisor. Because it's the original install we don't know for sure that anything actually changed. We can check with xl:

root@xen:~# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0 19989     4     r-----      75.3

Looks good!

Installing Kubernetes

Kubernetes comes with some nifty scripts that basically set up your whole cluster for you. The problem I found with these is that I wanted to manage (and understand) the pieces of software myself. I didn't want a mysterious bash script that promised to take care of it all for me.

Instead I've created my own set of mysterious scripts, which are slightly less generated and templated, and which may be useful to some as examples. This is how to use them.

We're going to use as little of my stuff as possible - the following git repo contains 4 CoreOS cloud-config files, which define the basic configuration (network setup, applications to run). There's also a piece of config used to generate the SSL certificates for the cluster.

So, grab my config from Github and grab the latest CoreOS Alpha:

sudo su
mkdir -p /var/lib/libvirt/images/
cd /var/lib/libvirt/images/
git clone -b blog_post https://github.com/andrewmichaelsmith/xen-coreos-kube.git coreos
cd coreos
wget https://alpha.release.core-os.net/amd64-usr/current/coreos_production_qemu_image.img.bz2 -O - | bzcat > coreos_production_qemu_image.img

Now create a disk for master1, node1, node2 and node3:

qemu-img create -f qcow2 -b coreos_production_qemu_image.img master1.qcow2
qemu-img create -f qcow2 -b coreos_production_qemu_image.img node1.qcow2
qemu-img create -f qcow2 -b coreos_production_qemu_image.img node2.qcow2
qemu-img create -f qcow2 -b coreos_production_qemu_image.img node3.qcow2
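
These qcow2 files are thin copy-on-write overlays on top of the image we just downloaded, so they start out tiny and only grow as each VM writes to its disk. You can confirm the backing file is wired up with:

qemu-img info master1.qcow2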

You may need to generate an SSH key if you haven't already:

ssh-keygen -t rsa -b 4096 -C "$USER@$HOSTNAME"

We then put our SSH key into the cloud-configs for our nodes:

KEY=$(cat ~/.ssh/id_rsa.pub)
sed "s#SSH_KEY#$KEY#g" < master1/openstack/latest/user_data.tmpl > master1/openstack/latest/user_data
sed "s#SSH_KEY#$KEY#g" < node1/openstack/latest/user_data.tmpl > node1/openstack/latest/user_data
sed "s#SSH_KEY#$KEY#g" < node2/openstack/latest/user_data.tmpl > node2/openstack/latest/user_data
sed "s#SSH_KEY#$KEY#g" < node3/openstack/latest/user_data.tmpl > node3/openstack/latest/user_data

We also need to generate our certificates:

cd certs
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf
cd ..
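
Before embedding them it's worth checking the certificates look sane - the API server cert should be signed by our kube-ca and valid for a year:

openssl x509 -in certs/apiserver.pem -noout -subject -issuer -dates
openssl verify -CAfile certs/ca.pem certs/apiserver.pem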

And then put the certificates we generated into the master node's cloud-config:

#Total hack, so it's indented correctly when we move it into the .yml
sed -i 's/^/        /' certs/*.pem
sed -i $'/CA.PEM/ {r certs/ca.pem\n d}' master1/openstack/latest/user_data
sed -i $'/APISERVER.PEM/ {r certs/apiserver.pem\n d}' master1/openstack/latest/user_data
sed -i $'/APISERVER-KEY.PEM/ {r certs/apiserver-key.pem\n d}' master1/openstack/latest/user_data

Configs done, we can validate to double check:

curl 'https://validate.core-os.net/validate' -X PUT --data-binary '@master1/openstack/latest/user_data' | python -mjson.tool
curl 'https://validate.core-os.net/validate' -X PUT --data-binary '@node1/openstack/latest/user_data' | python -mjson.tool
curl 'https://validate.core-os.net/validate' -X PUT --data-binary '@node2/openstack/latest/user_data' | python -mjson.tool
curl 'https://validate.core-os.net/validate' -X PUT --data-binary '@node3/openstack/latest/user_data' | python -mjson.tool

If that passed ("null" from the server), create the CoreOS virtual machines using those disks and cloud-configs:

virt-install \
  --connect qemu:///system \
  --import \
  --name master1 \
  --ram 2048 \
  --vcpus 2 \
  --os-type=linux \
  --os-variant=virtio26 \
  --disk path=/var/lib/libvirt/images/coreos/master1.qcow2,format=qcow2,bus=virtio \
  --filesystem /var/lib/libvirt/images/coreos/master1/,config-2,type=mount,mode=squash \
  --network bridge=virbr0,mac=52:54:00:00:00:03 \
  --vnc \
  --noautoconsole \
  --hvm

virt-install \
  --connect qemu:///system \
  --import \
  --name node1 \
  --ram 2048 \
  --vcpus 2 \
  --os-type=linux \
  --os-variant=virtio26 \
  --disk path=/var/lib/libvirt/images/coreos/node1.qcow2,format=qcow2,bus=virtio \
  --filesystem /var/lib/libvirt/images/coreos/node1/,config-2,type=mount,mode=squash \
  --network bridge=virbr0,mac=52:54:00:00:00:00 \
  --vnc \
  --noautoconsole \
  --hvm

virt-install \
  --connect qemu:///system \
  --import \
  --name node2 \
  --ram 2048 \
  --vcpus 1 \
  --os-type=linux \
  --os-variant=virtio26 \
  --disk path=/var/lib/libvirt/images/coreos/node2.qcow2,format=qcow2,bus=virtio \
  --filesystem /var/lib/libvirt/images/coreos/node2/,config-2,type=mount,mode=squash \
  --network bridge=virbr0,mac=52:54:00:00:00:01 \
  --vnc \
  --noautoconsole \
  --hvm

virt-install \
  --connect qemu:///system \
  --import \
  --name node3 \
  --ram 2048 \
  --vcpus 1 \
  --os-type=linux \
  --os-variant=virtio26 \
  --disk path=/var/lib/libvirt/images/coreos/node3.qcow2,format=qcow2,bus=virtio \
  --filesystem /var/lib/libvirt/images/coreos/node3/,config-2,type=mount,mode=squash \
  --network bridge=virbr0,mac=52:54:00:00:00:02 \
  --vnc \
  --noautoconsole \
  --hvm

This will start 4 virtual machines running CoreOS with our cloud-configs. Depending on where you run this (internet speed, server power) it can take quite a long time for everything to come up.
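
Once the virt-install commands return, you can check that libvirt sees all four domains running:

virsh list --all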

What happens:

  • The flannel image is downloaded
  • The kubelet starts and downloads hyperkube
  • Containers start for the API server, controller manager and scheduler on the master
  • A kube-proxy container starts on each node

If you need to you can attach to the console and monitor a node booting up:

virsh console master1

You can also ssh on to the master and check journalctl:

ssh core@192.168.122.254
journalctl -f
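
If you only care about the kubelet's logs (assuming the unit is called kubelet.service, the usual naming), you can narrow that down:

journalctl -u kubelet -f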

So.. did it work? Let's try using kubectl (which we need to install locally first):

curl -O https://storage.googleapis.com/kubernetes-release/release/v1.2.3/bin/linux/amd64/kubectl
chmod +x kubectl
mv kubectl /usr/local/bin/kubectl
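
To save typing the -s flag on every command, a purely optional convenience alias:

alias kubectl='kubectl -s http://192.168.122.254:8080'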

Let's see:

root@xen# kubectl -s http://192.168.122.254:8080 get nodes
NAME              STATUS    AGE
192.168.122.2     Ready     1m
192.168.122.254   Ready     1m
192.168.122.3     Ready     1m
192.168.122.4     Ready     1m

One last thing: if we try to list the pods (the units kubernetes actually runs) we won't see anything yet, because the "kube-system" namespace doesn't exist. It's easily created:

curl -H "Content-Type: application/json" -XPOST -d'{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"kube-system"}}' "http://192.168.122.254:8080/api/v1/namespaces"

Now the pods are there.
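
Listing them in the kube-system namespace (same -s flag as before) gives output like this:

kubectl -s http://192.168.122.254:8080 get pods --namespace=kube-system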

NAME                                      READY     STATUS             RESTARTS   AGE
kube-apiserver-192.168.122.254            1/1       Running            0          3m
kube-controller-manager-192.168.122.254   1/1       Running            1          4m
kube-proxy-192.168.122.2                  1/1       Running            1          4m
kube-proxy-192.168.122.254                1/1       Running            0          3m
kube-proxy-192.168.122.3                  1/1       Running            0          3m
kube-proxy-192.168.122.4                  1/1       Running            0          3m
kube-scheduler-192.168.122.254            1/1       Running            0          3m

Woohoo!

Conclusion

So what have we actually done? We've turned an Ubuntu server into a Xen hypervisor. On that hypervisor we've created 4 virtual machines, all running CoreOS. Using the CoreOS config from my git repo, one install runs the master kubernetes components and the other 3 run the node components.

There are many ways to get kubernetes running on CoreOS. The particular way we've set it up is as follows.

  • flannel service - This handles our networking. It allows a container on one node to speak to a container on another node.
  • etcd service - This is where kubernetes persists state.
  • docker service - Docker is what this kubernetes setup uses to run containers.
  • kubelet service - This is the only kubernetes component installed as a system service. We use the kubelet to join our kubernetes cluster and launch other kubernetes applications.

As well as those system services, we've installed the following as services managed by kubernetes itself. We do this by placing kubernetes manifests in /etc/kubernetes/manifests/; the kubelet monitors this directory and launches applications based on what it finds (see the quick check after the list below).

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager
  • kube-proxy
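
A quick way to see those manifests on the master (the exact file names depend on the cloud-config in my repo):

ssh core@192.168.122.254 ls /etc/kubernetes/manifests/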

That's all! We've now got a fully functioning kubernetes cluster. Time to play with it.

cloud-config validate without cloudinit

A quick hack to let you validate your CoreOS cloud-config user_data file without having to install coreos-cloudinit (though you do need internet access):

curl 'https://validate.core-os.net/validate' -X PUT --data-binary '@user_data'

This makes use of CoreOS's online validator, but without the copy/paste, which can be a little fiddly when you're SSH'd somewhere.

Running manuka docker honeypot setup

I've just got dionaea and kippo running in docker images to make a quick-to-set-up honeypot. The project is called manuka.

Here's how to get manuka running on Ubuntu 14.04:

#install docker (skip if you have docker 1.3+ already)
[ -e /usr/lib/apt/methods/https ] || {
  sudo apt-get update
  sudo apt-get install apt-transport-https
}

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys \
    36A1D7869245C8950F966E92D8576A8BA88D21E9

sudo sh -c "echo deb https://get.docker.com/ubuntu docker main > \
    /etc/apt/sources.list.d/docker.list"

sudo apt-get update
sudo apt-get -y install lxc-docker

#install docker-compose
sudo apt-get install -y python-pip
sudo pip install docker-compose

#run manuka
curl -q https://raw.githubusercontent.com/andrewmichaelsmith/manuka/master/run.sh > run.sh
chmod +x run.sh
sudo ./run.sh
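
If everything worked the honeypot containers should now be up (the names depend on how docker-compose labels them):

sudo docker ps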

You have just set up dionaea and kippo.

Let's try out kippo:

ssh root@127.0.0.1
# > Password: <12345>
# > root@svr03:~#

And dionaea:

sudo nmap -d -p 445 127.0.0.1 --script=smb-vuln-ms10-061
ls var/dionaea/bistreams
# > total 4.0K
# > drwxr-xr-x 2 nobody nogroup 4.0K Mar 16 23:21 2015-03-16

All logs and files will be saved under $PWD/var/.

Happy to hear any bug reports and feature requests on Github.

Docker volume and docker VOLUME

I've been fiddling with docker lately and it took me a while to come to this realisation: the docker run -v command line argument and the Dockerfile VOLUME instruction are a bit different.

The -v command line argument to docker run:

docker run -v /var/logs:/var/logs ubuntu echo test

And the VOLUME Dockerfile instruction:

VOLUME /var/logs

The Dockerfile VOLUME instruction doesn't support host directories.
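
Here's a rough illustration of the difference - the paths and image names below are just examples I've made up, not anything from a real project:

# -v at run time can bind-mount a host directory into the container
mkdir -p /tmp/host-logs
docker run --rm -v /tmp/host-logs:/var/logs ubuntu sh -c 'echo hello > /var/logs/test.log'
cat /tmp/host-logs/test.log   # "hello" - the file really landed on the host

# A Dockerfile VOLUME only declares a mount point; docker backs it with an
# anonymous volume under /var/lib/docker unless you override it with -v
printf 'FROM ubuntu\nVOLUME /var/logs\n' > Dockerfile
docker build -t volume-test .
docker run --name volume-test-run volume-test true
docker inspect volume-test-run   # the volumes/mounts section points under /var/lib/docker, not a path you chose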

As discussed in this stackoverflow post, it looks like this is intentional: baking host paths into an image would make it less portable.