
Other articles


  1. Private tor network on kubernetes

    I recently came across someone running a private tor network with docker and immediately decided I'd have to do something similar, but in Kubernetes. I also followed another useful blog post on the subject.

    This seemed like a great opportunity to learn about the inner workings of the tor network and flex my kubernetes muscles. Here are some of the tricky bits I encountered for anyone trying to do something similar.

    Testing mode

    To stand a chance of running our own tor network we must enable TestingTorNetwork. This tweaks a number of settings, such as not totally banning private IPs and reducing delays in voting.
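
    For reference, enabling it is a single line in each node's torrc:

    TestingTorNetwork 1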

    Directory Authorities

    A fundamental part of a tor network is the Directory Authority. When connecting to the network, a client contacts one of these to get a list of relays to connect on to. These are hardcoded into the tor source code*.

    Fortunately there are config options we can use to override these values (DirAuthority). This config needs to have not just the address but the fingerprint of the authority (so we know we can trust it).

    So from initial research it sounded like all we needed to do was:

    • Generate certificates and signatures for 3 directory authorities
    • Create directory authorities (configured with their certificates)
    • Configure 10 relays to talk to directory authorities
    • Create 10 relays

    ConfigMaps and directories

    When trying to get the directory authorities running I had issues poking the certificates in. tor is kind of specific about the structure it expects (an id and keys dir). Because ConfigMaps don't do subdirectories (ref) I ended up using a flat structure in the ConfigMap and using my docker-entrypoint.sh to set up symlinks to achieve the desired structure.
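
    Roughly, the entrypoint does something like this (a sketch only; the /tor-config mount path and the DataDirectory are illustrative rather than exactly what my repo uses):

    #!/bin/sh
    # The ConfigMap is mounted flat at /tor-config; tor wants the authority keys
    # in a keys/ dir under its DataDirectory.
    mkdir -p /var/lib/tor/keys
    for f in authority_certificate authority_signing_key authority_identity_key; do
        ln -sf /tor-config/$f /var/lib/tor/keys/$f
    done
    exec tor -f /tor-config/torrc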

    DirAuthority address

    For the DirAuthority line we're expected to use an IP address (mailing list discussion). From a kubernetes point of view this is a bit annoying. Using a Service we can easily know the hostname upfront but an IP is more tricky. We could set the ClusterIP but that leaves config bound to a particular cluster setup.

    The solution is not so bad - when we generate each DirAuthority line we just make sure we've already created the Services and use their IP addresses. We can use jsonpath to get the IP:

    kubectl get svc da1 -o 'jsonpath={.spec.clusterIP}'
    

    Works, but it makes our setup a bit less elegant - we have to generate config files based upon the state of the kubernetes cluster.
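
    Concretely, generating each line looks something like this (a sketch; the ports are illustrative and $V3IDENT/$FINGERPRINT are placeholders for the v3 identity and relay fingerprints gathered when the keys were generated):

    DA_IP=$(kubectl get svc da1 -o 'jsonpath={.spec.clusterIP}')
    echo "DirAuthority da1 orport=9001 no-v2 v3ident=$V3IDENT $DA_IP:9030 $FINGERPRINT" >> torrc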

    Relay address

    On start, if not provided with one, tor will search for an IP address to use. As we don't know our pod IP up front, this sounds ideal. Unfortunately, tor will not pick a private IP address (ref) unless explicitly given that address.

    This means we have to add another trick: a docker-entrypoint.sh that appends an Address line to our torrc with the pod's IP. Again, not awful, but not pretty.
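
    The relevant bit of the entrypoint is just something like this (the torrc path is illustrative; hostname -i is one way to discover the pod IP, the downward API is another):

    # Discover the pod's IP at start-up and tell tor to use it explicitly,
    # since it won't pick a private address on its own.
    echo "Address $(hostname -i)" >> /etc/tor/torrc
    exec tor -f /etc/tor/torrc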

    Running it

    With all these pieces in place I was able to successfully run a private tor network. I can route internet traffic through it (and see it hopping between servers) and scale the number of relays up and down.
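
    For the curious, exercising it looks roughly like this (the service and deployment names and the SOCKS port are illustrative, not the exact ones from my repo):

    # route a request through the private network via a client pod's SOCKS port
    curl --socks5-hostname tor-client:9050 http://example.com/
    # scale the relays up or down
    kubectl scale deployment relay --replicas=20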

    Conclusion

    These are the main problems I had to overcome to get tor running inside kubernetes. The resulting set of scripts is on github: andrewmichaelsmith/private-tor-network-kube.

    I'm reasonably happy with the final product: it produces a fully operational tor network. There is a certain amount of bash scaffolding which I'm not a huge fan of. It might be interesting to try this project again as an Operator.

    * I'm lying here to keep things simple. There are also fallback mirrors that tor will connect to first. These are also hardcoded into the tor source code.


  2. Developing an iOS app on Linux in 2017

    I've just published an iOS app on the app store, and I developed it (mostly) using Linux (Ubuntu). Here I have documented some of the challenges and discoveries for anyone considering doing the same.

    Before anyone gets too excited, this is a Cordova app. That means it's basically a web app (HTML, CSS, Javascript) served in a web view. There's no Swift or Objective-C here (at least, not written by me). Furthermore, my total solution uses two hosted Mac OS offerings. The day-to-day development still sticks to Linux, but I didn't find a solution that doesn't touch Mac OS at all.

    This post won't go into much detail about the limitations of a Cordova app compared to a "native" app, as these are already documented elsewhere. I will say that you can produce a decent looking, responsive, completely offline application that Apple will accept on their app store using this mechanism.

    Linux and iOS Development

    Apple are not exactly known for making development for their platforms easy on operating systems that aren't Mac OS. If you look into this you will find people on the internet advising that even developing a basic Cordova app would be made a lot easier by buying a Mac.

    But I'm a Linux user, so I'm not necessarily that interested in making my life easy.

    When I started out my main concerns were:

    • Testing my app locally in an emulator.
    • Building a release to test on an iPhone.
    • Running my app on an iPhone.
    • Remotely debugging my app on an iPhone.
    • Building an app store ready release.
    • Uploading my release to the app store.

    It turned out these were all things to be worried about (some solvable, some not), but I'd missed one:

    • Producing screenshots for the app store.

    Development Environment

    Before we address each of these points I'll give you a quick overview of my setup. I went for the classic gulp/bower/npm/etc. combo. I used the AngularJS framework.

    I used a generator to get started. I ultimately regret this: it got me going quickly but left huge gaps in my knowledge. Next time I would use such a project as a reference, but hand-pick the pieces I wanted.

    This generator gave me some .html and .js files I could edit, and some commands to serve them to my web browser from a local web server.
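
    The day-to-day loop was something like this (the exact task names depend on the generator, so treat these as illustrative):

    npm install && bower install   # fetch the build tooling and front-end dependencies
    gulp serve                     # build and serve the app locally with live reload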

    With this and Chrome Device Mode I was able to develop a web page and get an idea of how it might look on an iPhone.

    Whilst that's OK, Chrome is not the web view that Cordova uses on the iPhone, so we don't really have any guarantee that the app will look as we see it on our computer. That brings us to the first concern.

    Testing my app locally in an emulator

    It's quite simple - if you don't run Mac OS you can't run an iPhone emulator. There are browser plugins (and the previously mentioned device mode) that will make a browser sort of look like a phone, but that's your lot.

    Personally I found that for 95% of cases Chrome was similar enough. The other 5% we'll get to later.

    (See "Producing screenshots" if you really want to run an emulator).

    Building a release to test on an iPhone

    Again, this I couldn't achieve purely on Linux. This brings us to my first cheat.

    Adobe Phonegap is a commercial service based upon Cordova. If you create a (free) account with them they will build iPhone binaries for you (for free).

    There's one more hoop before that will work - certificates. The iPhone won't accept a binary which isn't signed by a certificate from Apple. And the only way to get your hands on one of these is to give money to Apple.

    Once you sign up and pay for an Apple Developer account you will get some development certificates. You plug these into phonegap, along with your project's git repo, and a .ipa file is produced.

    Running my app on an iPhone

    Here comes our first pleasant surprise - I can take my phonegap-built .ipa and install it onto my iPhone straight from Linux using ideviceinstaller. It's this simple:

    ideviceinstaller -i app.ipa
    WARNING: could not locate iTunesMetadata.plist in archive!
    WARNING: could not locate Payload/app.app/SC_Info/app.sinf in archive!
    Copying 'app.ipa' to device... DONE.
    Installing 'net.app.example'
    Install: CreatingStagingDirectory (5%)
    Install: ExtractingPackage (15%)
    Install: InspectingPackage (20%)
    Install: TakingInstallLock (20%)
    Install: PreflightingApplication (30%)
    Install: InstallingEmbeddedProfile (30%)
    Install: VerifyingApplication (40%)
    Install: CreatingContainer (50%)
    Install: InstallingApplication (60%)
    Install: PostflightingApplication (70%)
    Install: SandboxingApplication (80%)
    Install: GeneratingApplicationMap (90%)
    

    And that's it - I get my app running on my phone exactly as it will be when I sell it. It pops up on the home screen and I can launch it. Easy.

    Remotely debugging my app on an iPhone

    As anyone who's written code for a browser will know - browser quirks can be the most infuriating issues to code for and around. This is the 5% of problems I mentioned previously.

    Whether it's CSS or Javascript - being able to open the debug console and tweak things is incredibly useful. As you may have already figured out, the cycle of - commit to git, push to git, build binary on a third-party service (phonegap), download binary, install binary to phone, launch binary - is not exactly a quick feedback loop.

    This brings us to our second pleasant discovery. We can use ios_webkit_debug_proxy in conjunction with our running app. This allows us to use Chrome devtools on our computer, attached to the Safari web view running in our app on the phone. This makes debugging all manner of browser-specific problems a lot easier.

    $ ios_webkit_debug_proxy -f chrome-devtools://devtools/bundled/inspector.html
    Listing devices on :9221
    Connected :9222 to Andrew's iPhone (c8fed00eb2e87f1cee8e90ebbe870c190ac3848c)
    

    It's that easy - then through Chrome I can twiddle CSS and run Javascript in my app.

    Building an app store ready release

    This is the same as how we build our .ipa for testing; the only difference is we have to use some different certificates from Apple. The process is otherwise identical - and phonegap will pop out a production-ready .ipa.

    Uploading my release to the app store

    This was a bit of a shock. Naturally on a Mac this process integrates into Xcode and those lucky developers can upload to the app store (iTunes Connect) at the push of a button.

    I had assumed there would be some web interface (as there is to configure all other pieces of the app) to allow for submission of our binary. This is not the case.

    Your two options are:

    • Xcode
    • Application Loader

    Both of these are native Mac OS tools. This brings us to our second cheat. Unfortunately phonegap aren't kind enough to offer this service, but there's another option: MacinCloud. For a fee ($1 an hour) you can access a full-blown Mac OS instance with Application Loader available (accessible via rdesktop).

    Using this service, it's possible to upload the .ipa to the app store for public release.

    (In searching for solutions to this I also found various random-people-on-the-internet who in exchange for some cash and all your Apple login details would submit your app for you from their mac. I did not go down this fairly sketchy route).

    Producing screenshots for the app store

    We're not quite finished yet! Chances are you want to upload some screenshots of your application. iTunes Connect has a thing called Media Manager which will helpfully take screenshots at the highest iPhone resolution and scale them down for you. At the time of writing this is 2208x1242 pixels. That is unfortunately more pixels than I have on my laptop.

    There's no verification of the images you upload (from what I can see), so you could fake these in any way you like, but if you want to produce a bunch of screenshots of your actual app you may end up doing what I did - uploading your code to MacinCloud, running it in Xcode and using the iPhone 7 emulator's screenshot functionality.

    Conclusion

    Whilst there were a few hoops to jump through in this process, the whole ordeal was not that painful. Throughout the project I was prepared to just go and get a Mac, but I was keen to avoid this if I could.

    The main times I found myself truly swearing at my computer were when I was trying to set up plugins - for which (when I got things wrong) the feedback loop was infuriatingly slow.

    All in all I think it was fine to do it this way and I'm glad that to maintain my iOS project I can use my regular development environment. Admittedly, a great deal of the ease comes from the fact that this is a web app - which should be easy to develop on any platform.


  3. My kubernetes setup

    Updated 2017/01/03: Modified setup to use Xen instead of Qemu for master+nodes, upgrade Kubernetes to 1.5.1, use CoreOS beta instead of alpha.

    This is a description of my local kubernetes setup. If you want to set up kubernetes yourself, chances are you should follow the proper guide. This is intended to be the reference that I was desperate for when I set out doing this a few months ago.

    I wanted to run my own kubernetes deployment to run applications and experiment. I didn't just want to try out kubernetes, I wanted to run it 24/7. From the looks of it the easiest way to do this is using Google Compute Engine or AWS. The problem with both of these is that running 24/7 means spending quite a lot of money every month just to keep a basic install running.

    After considering a bunch of options (including running a Raspberry Pi cluster) I came to the conclusion that my best setup would be a single physical server hosting a bunch of virtual machines.

    I picked Xen as my hypervisor, Ubuntu as my "dom0" (more on this later) and CoreOS as my kubernetes host. Here's my set up.

    Hardware

    • Dell T20 Server
    • Intel i5-4590
    • 16 GB RAM
    • 120 GB SSD

    Software

    Hypervisor: Xen Hypervisor / Ubuntu 16.04. I found myself thoroughly confused by all this talk of "dom0", but the gist is: you install Ubuntu 16.04 on your server, then install Xen (via apt-get), which installs itself as the main OS with your original Ubuntu install running as a virtual machine. This virtual machine is called "dom0" and is what you use to manage all your other virtual machines.

    (Another source of confusion - Xen is not XenServer, which is a commercial product you can safely ignore).

    Kubernetes OS: CoreOS Beta Channel. Right now Stable does not include the kubelet (which we need) so I'm using Beta. I picked CoreOS as it tries to support Kubernetes right out of the box.

    Installing Xen

    On a fresh Ubuntu 16.04, install Xen, virtinst and genisoimage. Set Xen as the default boot entry and reboot. virtinst gives us a CLI we will use to launch virtual machines later; genisoimage we need for mkisofs.

    sudo apt-get install xen-hypervisor-amd64 virtinst genisoimage
    sudo sed -i 's/GRUB_DEFAULT=.*\+/GRUB_DEFAULT="Xen 4.1-amd64"/' /etc/default/grub
    sudo update-grub
    sudo reboot
    

    What comes back up should be the original Ubuntu install running as a virtual machine on the Xen hypervisor. Because it's the original install we don't know for sure that anything actually changed. We can check with xl:

    # xl list
    Name                                        ID   Mem VCPUs      State   Time(s)
    Domain-0                                     0 19989     4     r-----      75.3
    

    Looks good!

    Installing Kubernetes

    Kubernetes comes with these nifty scripts that basically set up your whole cluster for you. The problem I found with this is I wanted to manage (and understand) the pieces of software myself. I didn't want a mysterious bash script that promised to take care of it all for me.

    Instead I've created my own set of mysterious scripts, slightly less generated and templated, that may be useful to some as examples. This is how to use them.

    We're going to use as little as possible of my stuff - the following git repo contains 4 CoreOS cloud-config files. These define basic configuration (network setup, applications to run). There's also a piece of config to generate our SSL certificates for the cluster.

    So, grab my config from GitHub and grab the latest CoreOS Beta:

    sudo su
    mkdir -p /var/lib/libvirt/images/
    cd /var/lib/libvirt/images/
    git clone -b blog_post_v2 https://github.com/andrewmichaelsmith/xen-coreos-kube.git coreos
    cd coreos
    wget https://beta.release.core-os.net/amd64-usr/current/coreos_production_xen_image.bin.bz2 -O - | bzcat > coreos_production_xen_image.bin
    

    Now create a root disk for master1, node1, node2 and node3 (the config disks come later):

    #To pad out extra space in the image.
    #TODO: Undoubtedly a better way than this
    dd if=/dev/zero of=tempfile bs=1G count=2
    cat tempfile >> coreos_production_xen_image.bin
    
    cp coreos_production_xen_image.bin master1.bin
    cp coreos_production_xen_image.bin node1.bin
    cp coreos_production_xen_image.bin node2.bin
    cp coreos_production_xen_image.bin node3.bin
    

    You may need to generate an SSH key if you haven't already:

    ssh-keygen -t rsa -b 4096 -C "$USER@$HOSTNAME"
    

    We then put our SSH key into the cloud-configs for our nodes:

    KEY=$(cat ~/.ssh/id_rsa.pub)
    sed "s#SSH_KEY#$KEY#g" < master1/openstack/latest/user_data.tmpl > master1/openstack/latest/user_data
    sed "s#SSH_KEY#$KEY#g" < node1/openstack/latest/user_data.tmpl > node1/openstack/latest/user_data
    sed "s#SSH_KEY#$KEY#g" < node2/openstack/latest/user_data.tmpl > node2/openstack/latest/user_data
    sed "s#SSH_KEY#$KEY#g" < node3/openstack/latest/user_data.tmpl > node3/openstack/latest/user_data
    

    We also need to generate our certificates:

    cd certs
    openssl genrsa -out ca-key.pem 2048
    openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"
    openssl genrsa -out apiserver-key.pem 2048
    openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
    openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf
    cd ..
    

    And then put the certificates we generated into the master's cloud-config:

    #Total hack, so it's indented correctly when we move it into the .yml
    sed -i 's/^/        /' certs/*.pem
    sed -i $'/CA.PEM/ {r certs/ca.pem\n d}' master1/openstack/latest/user_data
    sed -i $'/APISERVER.PEM/ {r certs/apiserver.pem\n d}' master1/openstack/latest/user_data
    sed -i $'/APISERVER-KEY.PEM/ {r certs/apiserver-key.pem\n d}' master1/openstack/latest/user_data
    

    Configs done, we can validate to double check:

    curl 'https://validate.core-os.net/validate' -X PUT --data-binary '@master1/openstack/latest/user_data' | python -mjson.tool
    curl 'https://validate.core-os.net/validate' -X PUT --data-binary '@node1/openstack/latest/user_data' | python -mjson.tool
    curl 'https://validate.core-os.net/validate' -X PUT --data-binary '@node2/openstack/latest/user_data' | python -mjson.tool
    curl 'https://validate.core-os.net/validate' -X PUT --data-binary '@node3/openstack/latest/user_data' | python -mjson.tool
    

    If that passed ("null" back from the server), first create an ISO for each machine to get the config files into our Xen VMs:

    mkisofs -R -V config-2 -o master1-config.iso master1/
    mkisofs -R -V config-2 -o node1-config.iso node1/
    mkisofs -R -V config-2 -o node2-config.iso node2/
    mkisofs -R -V config-2 -o node3-config.iso node3/
    

    Then create the CoreOS virtual machines using those disks and cloud-configs:

    xl create master1.cfg
    xl create node1.cfg
    xl create node2.cfg
    xl create node3.cfg
    

    This will start 4 virtual machines running CoreOS and our cloud configs.
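
    The .cfg files come from the repo, but to give an idea of their shape, a master config looks roughly like this (illustrative values only - the boot method, paths and bridge name may differ from the repo's actual files):

    name   = "master1"
    memory = 2048
    vcpus  = 2
    # boot the CoreOS PV image; the config-2 ISO is attached so the cloud-config is picked up at boot
    bootloader = "pygrub"
    disk = [ 'file:/var/lib/libvirt/images/coreos/master1.bin,xvda,w',
             'file:/var/lib/libvirt/images/coreos/master1-config.iso,xvdb,r' ]
    vif  = [ 'bridge=virbr0' ]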

    What happens:

    • Download flannel image
    • Kubelet starts and downloads hyperkube
    • Containers started for api server, controller manager, scheduler on master
    • Container for kube-proxy starts on the nodes

    If you need to, you can attach to the console and monitor a node booting up:

    xl console master1
    

    You can also ssh onto the master and check journalctl:

    ssh core@192.168.122.254
    journalctl -f
    

    So... did it work? Let's try using kubectl (which we need to install locally first):

    curl -O https://storage.googleapis.com/kubernetes-release/release/v1.5.1/bin/linux/amd64/kubectl
    chmod +x kubectl
    mv kubectl /usr/local/bin/kubectl
    

    Let's see:

    # kubectl -s http://192.168.122.254:8080 get nodes
    NAME              STATUS    AGE
    192.168.122.2     Ready     1m
    192.168.122.254   Ready     1m
    192.168.122.3     Ready     1m
    192.168.122.4     Ready     1m
    

    One last thing: if we try to list the pods (running processes) we won't get anything. We need to create the "kube-system" namespace first, which is easily done:

    curl -H "Content-Type: application/json" -XPOST -d'{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"kube-system"}}' "http://192.168.122.254:8080/api/v1/namespaces"
    

    Now, listing the pods in the kube-system namespace (kubectl -s http://192.168.122.254:8080 get pods --namespace=kube-system) gives:

    NAME                                      READY     STATUS             RESTARTS   AGE
    kube-apiserver-192.168.122.254            1/1       Running            0          3m
    kube-controller-manager-192.168.122.254   1/1       Running            1          4m
    kube-proxy-192.168.122.2                  1/1       Running            1          4m
    kube-proxy-192.168.122.254                1/1       Running            0          3m
    kube-proxy-192.168.122.3                  1/1       Running            0          3m
    kube-proxy-192.168.122.4                  1/1       Running            0          3m
    kube-scheduler-192.168.122.254            1/1       Running            0          3m
    

    Woohoo!

    Conclusion

    So what have we actually done? We've turned an Ubuntu server into a Xen hypervisor. On that hypervisor we've created 4 virtual machines, all running CoreOS. Using the CoreOS config from my git repo, we've set up 1 CoreOS install running the master kubernetes components and 3 others running the node components.

    There are many ways to get Kubernetes running on CoreOS. The particular way we have set it up is as follows.

    • flannel service - This handles our networking. It allows a container on one node to speak to a container on another node.
    • etcd service - This is where kubernetes persists state.
    • docker service - Docker is how this kubernetes setup launches images.
    • kubelet service - This is the only kubernetes component installed as a system service. We use the kubelet to join our kubernetes cluster and launch other kubernetes applications.

    As well as system services, we've also installed the following as services managed by kubernetes. We do this by placing kubernetes config in /etc/kubernetes/manifests/; the kubelet service monitors this directory and launches applications based on what it finds (a sketch of such a manifest follows the list below).

    • kube-apiserver
    • kube-scheduler
    • kube-controller-manager
    • kube-proxy
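
    To illustrate the mechanism, here is a hand-written sketch of such a manifest - not the exact one from my cloud-configs; the image tag and flags are illustrative:

    # Dropping a file like this into /etc/kubernetes/manifests/ makes the kubelet run it as a "static" pod.
    cat <<'EOF' > /etc/kubernetes/manifests/kube-proxy.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-proxy
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-proxy
        image: quay.io/coreos/hyperkube:v1.5.1_coreos.0
        command:
        - /hyperkube
        - proxy
        - --master=http://192.168.122.254:8080
        securityContext:
          privileged: true
    EOF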

    That's all! We've now got a fully functioning kubernetes cluster. Time to play with it.
