Migrating from Google Music to Navidrome on Kubernetes
When Google Music shut down this month, instead of taking the easy way out and migrating to YouTube Music, I decided to have a stab at hosting my music collection myself. I think it went pretty well, so I thought I'd share the details.
Disclaimer: this is certainly not the most cost-effective or simple way to solve this problem. It's suitable for the fairly niche intersection of people who have a personal Kubernetes cluster and are too stubborn to just give up and use Spotify.
Navidrome
Searching for an open source Google Music replacement, it didn't take too long to find Navidrome, a modern web-based streaming server written in Golang.
It met my main fairly basic requirements:
- Browser based music streaming
- Mobile support (through DSub)
- (nice to have) Multiple user support
I discovered that this family of music streaming software tends to support the Subsonic API, which means that mobile and other apps are generally available for them and Just Work, which is pretty handy!
Navidrome on Kubernetes
I have a Kubernetes cluster that I use for tinkering (because that's the sort of person I am!). It's mostly stable so it seemed sensible to put Navidrome there. Is this the simplest way to get this up and running? No. Is it a fun way to learn more about Kubernetes? Yes!
There isn't an existing Helm chart (a type of Kubernetes package) for Navidrome so I made my own (experimental) chart. Here's how to get that set up in your existing Kubernetes cluster.
Create a PVC
The first thing you need to do is create a `PersistentVolumeClaim` where your music and Navidrome data files will go. Here's an example that I used to create a 250 GB volume on Digital Ocean (`storageClassName` will likely vary between cloud providers).

pvc.yaml:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: navidrome
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 250Gi
  storageClassName: do-block-storage
```

```
kubectl create -f pvc.yaml
```
Install Navidrome
Once the PVC is created, we can install Navidrome and connect it up.
```
helm repo add navidrome https://andrewmichaelsmith.github.io/navidrome
helm repo update

# Note we're telling it to use the PVC called "navidrome" we created above
helm install --set persistence.enabled=true --set persistence.existingClaim=navidrome navidrome/navidrome --generate-name
```
If this works, helm should output values for `POD_NAME` and `CONTAINER_PORT` that allow you to run:

```
kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
```
This should give you a running instance of Navidrome on http://localhost:8080, where you can set up an admin user and log in (🎉).
Adding Music
So this is great and everything, but we're still missing something quite important - the music!
I am assuming that you have managed to export your music and have it on disk under `/home/your_user/Music`.

This technique is a little hacky, but for a one-off it does the job, and I was able to copy a large music collection easily in a day. What we're going to do is get the container to rsync the files from our machine.
This assumes you have an internet-accessible SSH server (`your_server`) set up on the computer with the music collection.

Let's jump on a shell in our Navidrome pod:

```
kubectl exec -ti $POD_NAME -- sh
```
Now, some setup. Here we get an SSH login working to your SSH server and then start copying the files:

```
apk add rsync openssh
ssh-keygen
ssh-copy-id your_user@your_server
rsync -avz -e ssh your_user@your_server:/home/your_user/Music/ /navidrome/music/
```
This will sync music from `/home/your_user/Music` on the server `your_server` into where Navidrome looks for music (🎉!).

Note: if you find that some music isn't showing up, you may need to "Rescan server" from the web UI.
Getting an entry point
So this is great - Navidrome is running and we can play our music from it! However, you may have noticed that our access currently depends on the `kubectl port-forward` command above.

That might be enough for you, but it likely won't be that stable, and it also means you can't connect a mobile app to Navidrome.
This is where we need to set up an ingress. There are many other posts describing this setup so I won't get into that detail here.
However, I will show you how I wired up my existing nginx ingress controller.

ingress.yaml:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: navidrome-ingress
spec:
  rules:
    - host: navidrome.mydomain.net
      http:
        paths:
          - backend:
              serviceName: navidrome-1606039281
              servicePort: 4533
```
Here I'm using an Ingress to connect `navidrome.mydomain.net` to `navidrome-1606039281`. You can get the Service name by running `helm list`. With this set up, I'm able to connect an Android app to my Navidrome instance.

Note that the endpoint will be protected by the admin credentials that you set up. You'll likely want to consider setting up TLS on your ingress as well, but I won't go into the details of that here.
Was it worth it?
So I've been using this set up for a few weeks now and it works great, but was it worth it?
Costs
As I've mentioned, this lives on an existing Kubernetes cluster, but if I were to do this from scratch, how much would it be?
Prices are for Digital Ocean.
- 2vCPU Server - $10/month
- Load Balancer - $10/month
- 250 GB block storage - $25/month
So that's $45/month for Navidrome on Kubernetes vs $0 for Google Music. Not cheap!
The key thing to understand here is that I'm already paying for the Server and Load Balancer for other projects. Admittedly, in 2020 it still feels a bit steep to be paying $25/month for 250 GB but I can live with it.
Removing dependency on Google and using Open Source
This is a big win for me - I am on a slow, steady path to de-googlify myself and this is one step on that journey.
It's also great to use a music streaming server that is under active development and that I can contribute to.
Conclusion
I'm pretty happy with my set up and if you have similar niche interests I'd recommend giving it a go!
Preserving Client IP in Kubernetes
When deploying applications in Kubernetes it's not uncommon to want to preserve the client's source IP address.
Given that you likely have a `Service` in front of your `Pod`, it may not come as a surprise that preserving the client address isn't always trivial. This can often result in the `Pod` application seeing local network IPs as the client IP.

Preserved Client IP Support across vendors
IP preservation is something addressed in the Kubernetes documentation on external load balancers but there's a note that this may only be possible on GKE.
I'm currently using Digital Ocean, and it turned out that setting `externalTrafficPolicy` on my `Service` did not do what I wanted: internal network IPs still showed up in my applications.

Digital Ocean are clearly aware of this need and have built a feature into their platform to address it. This is detailed in their documentation on load balancers.
This is done by adding the following annotation to your service:
```yaml
metadata:
  name: proxy-protocol
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
```
The bad news is that whilst this feature will cause the external `Service` to pass on the client IP address, it does so using the PROXY protocol. If you're using nginx or something else that speaks PROXY, then you can stop reading - this should work with a quick setting tweak.

If you have an application that doesn't speak PROXY, then read on.
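To make that concrete: PROXY protocol v1 simply prepends a single human-readable text line to the TCP stream, before any application data. Here's an illustration of what arrives on the wire and where the client IP sits in it (the addresses are made up for the example):

```shell
# A PROXY v1 header line as a load balancer would send it (example values):
header='PROXY TCP4 203.0.113.9 10.245.0.7 56324 9001'

# The third field is the original client address:
client_ip=$(printf '%s\n' "$header" | awk '$1 == "PROXY" { print $3 }')
echo "$client_ip"
```

An application that doesn't expect this line will try to parse it as application data and fail, which is why a translator like mmproxy is needed in front of PROXY-unaware servers.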
Cue mmproxy
After some searching I was pleased to discover that someone else had had this problem and solved it!
The open source project mmproxy tackles exactly this challenge. It acts as a go-between, understanding the PROXY protocol and doing some `iptables` tricks to pass the original client address on to the server.

But can we make it work in a Kubernetes cluster? After some experimenting, I'm pleased to report that I was able to get this working on a normal Digital Ocean Kubernetes cluster. Here's some config that worked for me.
This creates a Service set up to use the PROXY protocol (because of the annotation and `externalTrafficPolicy`):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mmproxy
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
spec:
  type: LoadBalancer
  ports:
    - port: 9001
      protocol: TCP
      targetPort: 9001
  externalTrafficPolicy: Local
  selector:
    app: mmproxy
```
Next we create a deployment to receive the traffic to mmproxy and forward onward within the same pod:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mmproxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mmproxy
  template:
    metadata:
      labels:
        app: mmproxy
    spec:
      initContainers:
        - name: setup
          image: docker.pkg.github.com/andrewmichaelsmith/mmproxy/mmproxy:latest
          command: [ "/bin/bash", "-cx" ]
          # Source: https://github.com/cloudflare/mmproxy
          args:
            - echo 1 > /proc/sys/net/ipv4/conf/eth0/route_localnet;
              iptables -t mangle -I PREROUTING -m mark --mark 123 -m comment --comment mmproxy -j CONNMARK --save-mark;
              ip6tables -t mangle -I PREROUTING -m mark --mark 123 -m comment --comment mmproxy -j CONNMARK --save-mark;
              iptables -t mangle -I OUTPUT -m connmark --mark 123 -m comment --comment mmproxy -j CONNMARK --restore-mark;
              ip6tables -t mangle -I OUTPUT -m connmark --mark 123 -m comment --comment mmproxy -j CONNMARK --restore-mark;
              ip rule add fwmark 123 lookup 100;
              ip -6 rule add fwmark 123 lookup 100;
              ip route add local 0.0.0.0/0 dev lo table 100;
              ip -6 route add local ::/0 dev lo table 100;
              exit 0 ; # XXX hack
          securityContext:
            privileged: True
      containers:
        - name: netcat
          image: docker.pkg.github.com/andrewmichaelsmith/mmproxy/mmproxy:latest
          command: [ "/bin/bash", "-cx" ]
          args:
            - apt-get install -y netcat;
              while true; do nc -l -vv -p 9002; done;
        - name: mmproxy
          image: docker.pkg.github.com/andrewmichaelsmith/mmproxy/mmproxy:latest
          command: [ "/bin/bash", "-cx" ]
          args:
            - echo "0.0.0.0/0" > allowed-networks.txt;
              /mmproxy/mmproxy --allowed-networks allowed-networks.txt -l 0.0.0.0:9001 -4 127.0.0.1:9002 -6 [::1]:9002;
          securityContext:
            privileged: True
```
So here we have:

- init container (`setup`): performs the mmproxy setup (as per https://github.com/cloudflare/mmproxy).
- proxy container (`mmproxy`): runs `mmproxy`, listening on 9001 and forwarding to 9002.
- app container (`netcat`): listens on 9002 and logs connections.
Does it work? First we find the IP of our service:
```
$ kubectl get svc | grep mmproxy
mmproxy   LoadBalancer   10.245.126.223   157.245.27.182   9001:30243/TCP   5m27s
```
Then try and connect to it:
```
$ nc -v 157.245.27.182 9001
Connection to 157.245.27.182 9001 port [tcp/*] succeeded!
HELLO
```
And check the logs:
```
+ nc -l -vv -p 9002
listening on [any] 9002 ...
connect to [127.0.0.1] from 44.125.114.38 59630
HELLO
```
Hurray! The actual client IP is preserved.
Private tor network on kubernetes
I recently came across someone running a private tor network with Docker and immediately decided I'd have to do similar, but on Kubernetes. I also followed another useful blog post on the subject.
This seemed like a great opportunity to learn about the inner workings of the tor network and flex my Kubernetes muscles. Here are some of the tricky bits I encountered, for anyone trying to do something similar.
Testing mode
To stand a chance of running our own tor network we must enable `TestingTorNetwork`. This tweaks a number of settings, such as not totally banning private IPs and reducing delays in voting.

Directory Authorities
A fundamental part of a tor network is the Directory Authority. When connecting to the network, the client will contact one of these to find a list of relays to connect to. These are hardcoded into the tor source code*.
Fortunately there are config options we can use to override these values (`DirAuthority`). This config needs to include not just the address but also the fingerprint of the authority (so we know we can trust it).

So from initial research it sounded like all we needed to do was:
- Generate certificates and signatures for 3 directory authorities
- Create directory authorities (configured with their certificates)
- Configure 10 relays to talk to directory authorities
- Create 10 relays
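Put together, each node's torrc ends up containing something along these lines. This is a sketch only: the nicknames, ports, addresses and fingerprint placeholders are illustrative, not values from the real setup:

```
TestingTorNetwork 1
DirAuthority da1 orport=9001 v3ident=<v3 identity fingerprint> 10.245.0.10:9030 <relay fingerprint>
DirAuthority da2 orport=9001 v3ident=<v3 identity fingerprint> 10.245.0.11:9030 <relay fingerprint>
DirAuthority da3 orport=9001 v3ident=<v3 identity fingerprint> 10.245.0.12:9030 <relay fingerprint>
```

Every relay and client in the private network gets the same `DirAuthority` lines, which is what replaces the authorities baked into the source.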
ConfigMaps and directories
When trying to get the directory authorities running I had issues poking the certificates in. tor is kind of specific about the directory structure it expects (an `id` and a `keys` dir). Because `ConfigMap`s don't do subdirectories (ref), I ended up using a flat structure in the ConfigMap and using my `docker-entrypoint.sh` to set up symlinks to achieve the desired structure.
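The symlink trick can be sketched like this. The paths and the `keys_` naming convention here are illustrative, not necessarily the actual layout used in the project (temp directories stand in for the mounted volumes so the sketch is runnable anywhere):

```shell
# Stand-ins for the flat ConfigMap volume and tor's data directory.
CONFIG_DIR=$(mktemp -d)
TOR_DIR=$(mktemp -d)

# A flat ConfigMap can only provide files like "keys_<name>" at the top level.
echo "dummy certificate" > "$CONFIG_DIR/keys_authority_certificate"

# Recreate the keys/ subdirectory tor expects by symlinking the flat files in.
mkdir -p "$TOR_DIR/keys"
for f in "$CONFIG_DIR"/keys_*; do
  ln -s "$f" "$TOR_DIR/keys/${f##*/keys_}"
done

ls "$TOR_DIR/keys"
```

tor then sees the directory structure it wants, while the ConfigMap stays flat.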
DirAuthority address

For the `DirAuthority` line we're expected to use an IP address (mailing list discussion). From a Kubernetes point of view this is a bit annoying. Using a Service we can easily know the hostname up front, but an IP is more tricky. We could set the `ClusterIP`, but that leaves config bound to a particular cluster setup.

The solution is not so bad: when we generate each `DirAuthority` line we just make sure we've already created the Services, and use their IP addresses. We can use jsonpath to get the IP:

```
kubectl get svc da1 -o 'jsonpath={.spec.clusterIP}'
```
It works, but it makes our setup a bit less elegant - we have to generate config files based upon the state of the Kubernetes cluster.
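That generation step might look something like this. It's a sketch: the service names and FINGERPRINT placeholders are illustrative, and `get_cluster_ip` is stubbed out so the snippet is self-contained - in the real cluster it would be the `kubectl` jsonpath query above:

```shell
# Stub so this sketch runs anywhere; against a real cluster this would be:
#   kubectl get svc "$1" -o 'jsonpath={.spec.clusterIP}'
get_cluster_ip() { echo "10.245.0.1"; }

# Build one DirAuthority line per authority Service.
dir_authorities=""
for da in da1 da2 da3; do
  ip=$(get_cluster_ip "$da")
  # FINGERPRINT stands in for the authority's real v3ident / relay fingerprints.
  dir_authorities="${dir_authorities}DirAuthority $da orport=9001 v3ident=FINGERPRINT $ip:9030 FINGERPRINT
"
done
printf '%s' "$dir_authorities"
```

The output is then baked into the torrc that every node in the private network receives.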
Relay address
On start, if not provided with one, tor will search for an IP address to use. As we don't know our pod IP up front, this sounds ideal. Unfortunately, tor will not pick a private IP address (ref) unless explicitly given that address.
This means we have to add another trick: a `docker-entrypoint.sh` that appends an `Address` line to our `torrc` with the pod's IP. Again, not awful, but not pretty.
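That entrypoint boils down to a few lines. A sketch (the torrc path is faked with a temp file here so it's runnable, and `POD_IP` is assumed to be injected into the container, e.g. via the Downward API's `status.podIP` field):

```shell
# Stand-in for the image's real torrc.
TORRC=$(mktemp)
echo "TestingTorNetwork 1" > "$TORRC"

# Append the pod's IP so tor doesn't reject its private address.
POD_IP="${POD_IP:-10.244.0.7}"
echo "Address $POD_IP" >> "$TORRC"

tail -n 1 "$TORRC"
```

The real entrypoint would then exec tor with that torrc.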
Running it

With all these pieces in place I was able to successfully run a private tor network. I can route internet traffic through it (and see it hopping between servers) and scale the number of relays up and down.
Conclusion
These are the main problems I had to overcome to get tor running inside Kubernetes. The resulting set of scripts is on GitHub: andrewmichaelsmith/private-tor-network-kube.
I'm reasonably happy with my final product; it produces a fully operational tor network. There is a certain amount of `bash` scaffolding which I'm not a huge fan of. It might be interesting to try and do this project again, but as an Operator.

* I'm lying here to keep things simple. There are also Fallback mirrors that tor will connect to first. These are also hardcoded into the tor source code.
Developing an iOS app on Linux in 2017
I've just published an iOS app on the app store. I developed it (mostly) using Linux (Ubuntu). Here I've documented some of the challenges and discoveries for anyone considering doing the same.
Before anyone gets too excited, this is a Cordova app. That means it is basically a web app (HTML, CSS, JavaScript) served in a web view. There's no Swift or Objective-C here (at least, not written by me). Furthermore, my total solution uses two hosted Mac OS offerings. The day-to-day development still sticks to Linux, but I didn't find a solution that doesn't touch Mac.
This post won't go into much detail about the limitations of a Cordova app over a "native" app, as these are already documented elsewhere. I will say that you can produce a decent looking, responsive, completely offline application that Apple will accept on their app store using this mechanism.
Linux and iOS Development
Apple are not exactly known for making development for their platforms easy on operating systems that aren't Mac OS. If you look into this you will find people on the internet advising that even developing a basic Cordova app would be made a lot easier by buying a Mac.
But I'm a Linux user, so I'm not necessarily that interested in making my life easy.
When I started out my main concerns were:
- Testing my app locally in an emulator.
- Building a release to test on an iPhone.
- Running my app on an iPhone.
- Remotely debugging my app on an iPhone.
- Building an app store ready release.
- Uploading my release to the app store.
It turned out these were all things to be worried about (some solvable, some not), and I'd missed one:
- Producing screenshots for the app store.
Development Environment
Before we address each of these points I'll give you a quick overview of my setup. I went for the classic gulp/bower/npm/etc. combo. I used the AngularJS framework.
I used a generator to get started. I ultimately regret this, it got me going quickly but left huge gaps in my knowledge. Next time I would use such a project as a reference, but hand pick the pieces I wanted.
This generator gave me some .html and .js I could edit, and some commands I could run to serve them to my web browser from a local web server.
With this and Chrome Device Mode I was able to develop a web page and look at what it might look like on an iPhone.
Whilst that's OK, Chrome is not the web view that Cordova runs on the iPhone, so we don't really have any guarantees that the app will look as we see it on our computer. That brings us to the first concern.
Testing my app locally in an emulator
It's quite simple - if you don't run Mac OS you can't run an iPhone emulator. There are browser plugins (and the previously mentioned device mode) that will make a browser sort of look like a phone, but that's your lot.
Personally I found that for 95% of cases Chrome was similar enough. The other 5% we'll get to later.
(See "Producing screenshots" if you really want to run an emulator).
Building a release to test on an iPhone
Again, this I couldn't achieve purely on Linux. This brings us to my first cheat.
Adobe Phonegap is a commercial service based upon Cordova. If you create a (free) account with them they will build iPhone binaries for you (for free).
There's one more hoop before that will work - certificates. The iPhone won't accept a binary which isn't signed by a certificate from Apple. And the only way to get your hands on one of these is to give money to Apple.
Once you sign up and pay for an Apple Developer account you will get some development certificates. You plug these in to phonegap, along with your project's git repo, and a .ipa file is produced.
Running my app on an iPhone
Here comes our first pleasant surprise - I can take my phonegap-built .ipa and install it onto my iPhone straight from Linux using ideviceinstaller. It's this simple:

```
ideviceinstaller -i app.ipa
WARNING: could not locate iTunesMetadata.plist in archive!
WARNING: could not locate Payload/app.app/SC_Info/app.sinf in archive!
Copying 'app.ipa' to device... DONE.
Installing 'net.app.example'
Install: CreatingStagingDirectory (5%)
Install: ExtractingPackage (15%)
Install: InspectingPackage (20%)
Install: TakingInstallLock (20%)
Install: PreflightingApplication (30%)
Install: InstallingEmbeddedProfile (30%)
Install: VerifyingApplication (40%)
Install: CreatingContainer (50%)
Install: InstallingApplication (60%)
Install: PostflightingApplication (70%)
Install: SandboxingApplication (80%)
Install: GeneratingApplicationMap (90%)
```
And that's it - I get my app running on my phone exactly as it will be when I sell it. It pops up on the home screen and I can launch, easy.
Remotely debugging my app on an iPhone
As anyone who's written code for a browser will know, browser quirks can be the most infuriating issues to code for and around. This is the 5% of problems I mentioned previously.
Whether it's CSS or JavaScript, being able to open the debug console and tweak things is incredibly useful. As you may have already figured out, the cycle of commit to git, push to git, build binary on a third party service (phonegap), download binary, install binary to phone, launch binary is not exactly a quick feedback loop.
This brings us to our second pleasant discovery. We can use the ios_webkit_debug_proxy in conjunction with our running app. This allows us to use Chrome devtools on our computer, attached to the Safari webview running in our app on the phone. This makes debugging all manner of browser-specific problems a lot easier.
```
$ ios_webkit_debug_proxy -f chrome-devtools://devtools/bundled/inspector.html
Listing devices on :9221
Connected :9222 to Andrew's iPhone (c8fed00eb2e87f1cee8e90ebbe870c190ac3848c)
```
It's that easy - then through Chrome I can twiddle CSS and run Javascript in my app.
Building an app store ready release
This is the same as how we build our .ipa for testing, the only difference is we have to use some different certificates from Apple. The process is otherwise identical - and phonegap will pop out a production ready .ipa.
Uploading my release to the app store
This was a bit of a shock. Naturally on a Mac this process integrates into Xcode and those lucky developers can upload to the app store (iTunes Connect) at the push of a button.
I had assumed there would be some web interface (as there is to configure all other pieces of the app) to allow for submission of our binary. This is not the case.
Your two options are:
- Xcode
- Application Loader
Both of these are native Mac OS tools. This brings us to our second cheat. Unfortunately phonegap aren't kind enough to offer this service for us, but there's another option: MacinCloud. For a fee ($1 an hour) you can access a full blown Mac OS instance with Application Loader available (accessible via `rdesktop`).

Using this service, it's possible to upload the .ipa to the app store for public release.
(In searching for solutions to this I also found various random-people-on-the-internet who in exchange for some cash and all your Apple login details would submit your app for you from their mac. I did not go down this fairly sketchy route).
Producing screenshots for the app store
We're not quite finished yet! Chances are you want to upload some screenshots of your application. iTunes Connect has a thing called Media Manager which will helpfully take screenshots at the highest iPhone resolution and scale them down for you. At the time of writing this is 2208x1242 pixels. That is unfortunately more pixels than I have on my laptop.
There's no verification of the images you upload (from what I can see), so you could fake these in any way you like, but if you want to produce a bunch of screenshots of your actual app you may end up doing what I did: uploading your code to MacinCloud, running it in Xcode and using the iPhone 7 emulator + screenshot functionality.
Conclusion
Whilst there were a few hoops to jump through in this process, the whole ordeal was not that painful. Throughout the project I was prepared to just go and get a mac but I was keen to avoid this if I could.
The main times I found myself truly swearing at my computer were when I was trying to set up plugins - for which (when I got things wrong) the feedback loop was infuriatingly slow.
All in all I think it was fine to do it this way and I'm glad that to maintain my iOS project I can use my regular development environment. Admittedly, a great deal of the ease comes from the fact that this is a web app - which should be easy to develop on any platform.