Failure strategies vs Swift optionals

When writing code in any language, there are a couple of ideals that I always try to follow around error handling:

  1. You should always process all possible error paths and respond accordingly.
  2. If the choice is between crashing and getting into an inconsistent state, then crashing is better.

These two tenets of error handling were drilled into me whilst at Bromium, where I led the team that built the Mac version of their vSentry security product. Every error, no matter how innocuous it may seem, will happen to you when you least expect it, so you should *never* ignore an error path. As a code author you’re always chasing the success case, as that’s the functionality your user wants, so it’s easy to forget that a function might error or that some input to your bit of code might not be what you expect. One of the things I like about Swift (and to a lesser extent also Go) is that error handling is made explicit and is opt out rather than opt in: if you want to ignore an error you can, but you have to decide to do that, rather than just forget. It’s a wonderful feature of the language (modulo the initial confusion caused by using exception syntax for something that isn’t exception handling).

At the same time, there are points in your code where either you know that a failure case will never happen, or if it does it’s because of a programming error rather than something unanticipated in the input or the environment (or at least you expect some code called before yours to have done any input validation). For these cases it’s perfectly fine to just assert a particular state rather than handle it gracefully (but you must still acknowledge it!).
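
For example, in Swift you can acknowledge such a case with the standard assertion functions. A minimal sketch of my own (not from the Bromium codebase); note that assert() is compiled out of release builds whilst precondition() is not, which matters for the next paragraph:

// Some earlier layer is assumed to have validated the input, so a
// negative count here is a programming error, not bad user data.
func repeatGreeting(_ greeting: String, times: Int) -> String {
    // precondition() still fires in release builds; assert() would not.
    precondition(times >= 0, "times must be non-negative")
    return String(repeating: greeting, count: times)
}

print(repeatGreeting("hi ", times: 3))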

It always amazes me that people disable such asserts in production builds though. For most modern event driven code there’s no performance justification for doing this, so the reason given is usually that you don’t want your program to crash on the user. But to me the alternative is far worse: if an assertion would have failed and your asserts have been removed, then your program is now in a state you never designed for. If you’re lucky nothing serious will happen, but at worst you can cause the user confusion and potentially data loss (very early in my career I failed to validate that I’d detected the temporary folder on disk correctly, and thus cleared out the empty string path on a disk - which is the root folder…). Yes, if you crash the user will be disappointed, but you will at least get feedback, hopefully a stack trace, and very quickly a bug fix. Otherwise your code can be misbehaving for years before you realise.

Crashes are obviously bad and to be avoided at all reasonable costs, but if you follow rule one you should never have a crash, except where something you asserted would never go wrong does go wrong. In that case you’ve learned you should have handled that error despite what you assumed. If you don’t like asserts, then write more error handlers, but doing neither is not an option in my book.

I’ve finally made my peace with the two variants of optional value unwrapping in Swift, which initially seemed to me like an odd design decision for a language focused on program safety. In Swift, if you have an optional variable (i.e., a variable that either contains a value or is nil), you can use either a conditional unwrap (the ? operator) or an unconditional unwrap (the ! operator). I was of the opinion that you should always use the conditional version and provide suitable error handlers, and that the unconditional unwrap was akin to never bothering to check return values on functions in other languages. But writing more iOS UI code in anger has made me realise that the unconditional operator is actually like an assert: I’m going to unwrap this optional and I assert it will always hold a value. This is very useful where you have resources loaded from storyboards etc., where you “know” that the value won’t be nil and will point to a UI element, but convention requires the variable be optional.
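
To make that concrete, here’s a minimal sketch of my own showing the two styles side by side:

let values = ["count": "42"]

// Conditional unwrap: the missing-value path is handled explicitly.
if let raw = values["count"], let count = Int(raw) {
    print("count is \(count)")
} else {
    print("no usable count")  // the error path, acknowledged
}

// Unconditional unwrap: effectively an assert that a value exists.
// If "count" were absent or not a number this would crash which, per
// the rules above, beats limping on in a state you never designed for.
let count = Int(values["count"]!)!
print("count is \(count)")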

You still need to be fully cognisant of this decision, and distrustful of any code you see where an unconditional unwrap is used until you’ve convinced yourself it’s actually just an assert call; and in general explicit error handling is always better for the user. But I’m glad I’ve now found a place for what used to seem like a Swift feature that went against all the other things that make the language safer.

Encoding and decoding polymorphic objects in Swift

Having produced large bits of complicated software, I’m a fan of the strictness that languages like Swift and Go enforce on you, but at the same time I do enjoy using the dynamic features of programming languages to let me write less code, and at times these two ideals rub up against each other. This post is mostly a way for me to brain dump some of the bits of code I’ve had to reason about recently as I rationalise these two conflicting ideals: hopefully this will save someone else some effort if they’re trying to do similar things, or people can perhaps point out easier alternatives. A large part of this was inspired by this stackoverflow answer by Hamish Knight.

Prologue: Starting simple by dealing with variable decoding in Swift

As a warm up to the main topic, let’s look at a simpler problem in the same domain: decoding API responses from a service where the JSON structure may change depending on what you asked and whether you got an error back. In a dynamic language like Objective-C or Python this is relatively easy, as you decode the input to an array/dictionary of elemental types before examining that value to work out what it conforms to. But in Swift and Go this approach won’t work, as both want to decode JSON to a specific type of structure (a struct in Go; a struct/class/enum in Swift) with specific properties known ahead of time. This is all good if your API only ever returns a single form of JSON, but I’ve yet to see an API that doesn’t somewhere return quite different structures of JSON at some point (usually the difference between a successful and an unsuccessful result).

So, how do you deal with this? You could define a type in your code that is the superset of all possible responses, decode into that, and then deal with any optional values that aren’t set, but I’d strongly recommend not doing that - we’re using strongly typed languages for a reason, and this is just trying to escape that. What you should do is define a structure for each possible response, and then attempt to decode each one in turn every time you get a response. For example, in Swift we would do:

import Foundation

struct ValidResponse1: Decodable {
    let name: String
    let count: Int    
}

struct ErrorResponse: Decodable {
    let message: String
}

let jsonString = """
{"message": "Things went wrong"}
"""
let jsonData = jsonString.data(using: .utf8)!
let decoder = JSONDecoder()
do {
    let resp = try decoder.decode(ValidResponse1.self, from: jsonData)
    // process resp
} catch {
    do {
        let resp = try decoder.decode(ErrorResponse.self, from: jsonData)
        // process resp
    } catch {
        // try any further response types here, or handle the unknown response
    }
}

Actually, although that’s similar to what I’d write in Go, in Swift we can hide this mess by defining an enum and implementing a custom decoding initialiser on it. This leaves your top level code a lot nicer:

enum APIResponse: Decodable {
    case ValidResponse(ValidResponse1)
    // insert other response structs here
    case Error(ErrorResponse)

    init(from decoder: Decoder) throws {
        let container = try decoder.singleValueContainer()
        do {
            let res = try container.decode(ValidResponse1.self)
            self = .ValidResponse(res)
            return
        } catch {}
        // If you had more API Response types add more do/catch blocks here as above.

        // Let the final decode attempt propagate its error
        let res = try container.decode(ErrorResponse.self)
        self = .Error(res)
    }
}

do {
    let resp = try decoder.decode(APIResponse.self, from: jsonData)
    switch resp {
    case .ValidResponse(let valid):
        print(valid)  // process the valid response
    case .Error(let apiError):
        print(apiError)  // process the error response
    }
} catch {
    // handle unknown response here
}

This is the solution I used to talk to the Docker API in my little Stevedore application. As you can see, hiding all the decoding code in the enum’s Decodable initialiser makes for a nice and simple path in your main code logic. But the other thing to note here is the pattern whereby we coerced our many response types into a single container type and then used that in deserialisation. It’s a pattern that we’ll find another use for in our main topic.

Chapter 1: Building our app model using polymorphism

Nothing scary in this section: I’m just going to set the scene for what is to come. I’ve been playing around with AudioKit, looking at how to build simple effects chains to let me build up more interesting audio effects by composition. The aim here is just to try and understand what the elemental audio effects do to a guitar signal, and how each one impacts the sound.


To implement this I use a normal polymorphism pattern, where I define a base type of effect that all specific effect types will inherit from. This being Swift, rather than use class inheritance I’m using protocols, but you could use a superclass and subclasses to achieve a similar result if you needed to for some reason.

protocol Effect {
    var name: String { get set }
    var active: Bool { get set }

    func doEffect(_ s: SoundSample) -> Void
}

struct ReverbEffect: Effect {
    var name = "Reverb"
    var active = true
    var delay = 3.2
    var feedback = 40

    func doEffect(_ s: SoundSample) -> Void { /* some custom reverb code here */ }
}

struct DistortionEffect: Effect {
    var name = "Distortion"
    var active = true
    var gain = 1.5
    var tone = 6

    func doEffect(_ s: SoundSample) -> Void { /* some custom distortion code here */ }
}

// Add another dozen or so similar effect structs here…

Having built up my effect library I’m interested in building up chains of individual audio effects to make something interesting. A dumb version of this code will look like:

let effectsChain: [Effect] = [DistortionEffect(), ReverbEffect()]

let sample = GetSoundSample()
for effect in effectsChain {
    effect.doEffect(sample)
}
sample.play()

So we get some audio, run it through the sequence of effect processors, and play the sample out. This is why the polymorphic approach is appealing here: we don’t care which effects are in our chain, we just call the same protocol on them and we’re done.

In my actual application those structs have to be classes, as internally each one holds a reference to an AudioKit class object and so needs reference semantics of its own; if your type is going to wrap a stateful object like that, it’s generally better defined as a class than a struct.

Chapter 2: Trying to save our effects chain

Having built up a nice sounding chain, the next thing I want to do is save it so that the next time I load my application I can restore it. The instinctive thing to do is, similar to the example in our prologue, just slap Codable onto the protocol, let that get picked up by the structs, and away we go.

…
protocol Effect: Codable {
…

// Save our chain as JSON for saving and restoring
do {
    let jsonData = try JSONEncoder().encode(effectsChain)
} catch {
    // process encoding errors here
}

But if we try to compile that we get the following:

% swift main.swift           
main.swift:45:36: error: generic parameter 'T' could not be inferred
    let jsonData = try jsonEncoder.encode(effectsChain)

The problem here is that although effectsChain contains a list of structs that all conform to a protocol, as far as the encoder is concerned you’ve passed it an array of structs of different types: they don’t have a common ancestor. You can get exactly the same error if you try the following:

let v: [Any] = ["hello", 42]
let jsonData = try JSONEncoder().encode(v)

If we’d actually used class inheritance rather than a protocol here, encoding would have worked, as the encoder would have had enough information to work with. But that would be a false sense of achievement! If you use classes it will encode okay, but loading things back in will fail, as you just don’t have enough information in the type you’re encoding to say which concrete classes you should decode to - they’ll all end up as the base class (you can see this in this gist). If I run the gist you will get:

We restored the chain: [main.Effect, main.Effect]

This is not what we want at all! To solve both of these problems we’re going to use a similar pattern to the one in the prologue: tidying all our concrete instances under a single type that we can explicitly encode and decode, whilst remaining aware of the differences between our individual effects.

Chapter 3: Creating a collective type for encoding and decoding

Whilst we could solve this by using an enum type to wrap our effect implementations, as we did in the prologue, we’d then lose the ability in the rest of the code to just call doEffect (and all the other methods on the protocol that I’ve glossed over to keep the sample code short). Instead we’ll use an enum to help with the type mapping, but our top level wrapper will be a regular struct, so we keep our polymorphic behaviour in the rest of our code without adding switch statements everywhere.

The first step is to create a Codable enum that enumerates, as strings, all the concrete types that we’ll want to encode/decode. This will also have a computed property that returns the actual concrete type for the value stored in the enum.

enum EffectType : String, Codable {
    case reverb, distortion
    
    var metatype: Effect.Type {
        switch self {
        case .reverb:
            return ReverbEffect.self
        case .distortion:
            return DistortionEffect.self
        }
    }
}

At some point in our code we were always going to need this switch statement, and this is where we’ve hidden it, so that the rest of our code doesn’t need to see it. We can then add the type property to our protocol and have each struct include a suitably initialised value:

protocol Effect: Codable {
    var type: EffectType { get }
    …
}

struct ReverbEffect: Effect {
    let type = EffectType.reverb
    …
}

struct DistortionEffect: Effect {
    let type = EffectType.distortion
    …
}

Now we have everything we need to make a simple wrapper structure that will contain a single effect and then encode it along with the type information so that it can be uniquely decoded back to the correct type:

struct EffectWrapper {
    var effect: Effect
}

extension EffectWrapper: Codable {
    private enum CodingKeys: CodingKey {
        case type, effect
    }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        let type = try container.decode(EffectType.self, forKey: .type)
        self.effect = try type.metatype.init(from: container.superDecoder(forKey: .effect))
    }

    func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: CodingKeys.self)
        try container.encode(effect.type, forKey: .type)
        try effect.encode(to: container.superEncoder(forKey: .effect))
    }
}

Note we use an extension here to add the Codable functionality so that the init(from:) initialiser doesn’t replace the default memberwise initialiser; otherwise we’d have to define the regular init() again ourselves rather than just letting the compiler do it.

With that done, I can now happily encode and decode my objects like so:

// Save our chain as JSON for saving and restoring
do {
    let wrappedChain: [EffectWrapper] = effectsChain.map{EffectWrapper(effect:$0)}
    let jsonData = try JSONEncoder().encode(wrappedChain)
    let jsonString = String(data: jsonData, encoding: .utf8)

    if let json = jsonString {
        print(json)
    }

    // now restore
    let newChain = try JSONDecoder().decode([EffectWrapper].self, from:jsonData)
    print("We restored the chain: %@", newChain)
} catch {
    // handle errors
}

The full code for this example is in this gist, and if you run it you’ll see something like:

% swift main.swift                  

[{"type":"reverb","effect":{"active":true,"delay":3.2000000000000002,"type":"reverb","name":"Reverb","feedback":40}},{"type":"distortion","effect":{"tone":6,"active":true,"type":"distortion","name":"Distorion","gain":1.5}}]

We restored the chain: %@ [main.EffectWrapper(effect: main.ReverbEffect(type: main.EffectType.reverb, name: "Reverb", active: true, delay: 3.2000000000000002, feedback: 40)), main.EffectWrapper(effect: main.DistortionEffect(type: main.EffectType.distortion, name: "Distortion", active: true, gain: 1.5, tone: 6))]

Now we can happily save and restore our effects chain, and all the type strictness is handled away from the main application logic. 

Epilogue: Closing comments

For those of us used to Objective-C’s secure coding, this seems a lot more verbose, but that’s the flip side of having a stricter language with limited introspection capabilities. My main gripe about this approach is that I have to keep the EffectType enum up to date as I add new effects, but because I have to define the type property from the protocol in each new struct I’m unlikely to forget to do that. It is susceptible to copy/paste errors though: say I end up with a new Flanger effect struct that I leave with the type property set to reverb, and the encoded properties aren’t the same - that won’t get detected by the compiler and will instead blow up at run time, which is sad. But this is why you have unit tests, I guess.
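
As a hedge against exactly that mistake, a round-trip test is cheap to write. A sketch of my own (the test class and effect list aren’t from the real project):

import XCTest

final class EffectCodingTests: XCTestCase {
    func testEffectsSurviveARoundTrip() throws {
        let allEffects: [Effect] = [ReverbEffect(), DistortionEffect()]
        for effect in allEffects {
            let data = try JSONEncoder().encode(EffectWrapper(effect: effect))
            let restored = try JSONDecoder().decode(EffectWrapper.self, from: data)
            // A struct that lies about its type property will decode to the
            // wrong concrete type and fail here, at test time not run time.
            XCTAssert(type(of: restored.effect) == type(of: effect))
        }
    }
}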

Be wary of timestamps for Windows Performance Monitor data

On a recent client project I’ve been trying to use Windows’ built in performance monitoring tools to monitor machine health. You can programmatically (or manually, using the system provided Performance Monitor tool) set up a data collector that will sample aspects of system performance (such as CPU idle time, disk throughput and space, and a whole lot more) at a specified interval, which you can then have logged to a file in a selection of formats. Whilst on one hand it’s a pretty nice feature, I’ve spotted some oddities around how it handles timestamps in those files, and I thought I’d write those up here as I failed to find anywhere else warning of these issues. The following is by and large taken from a stack overflow post I made hoping someone might correct me; if you know better please do let me know.

If we look at some sample data from a performance monitor file, it might look like this:

"(PDH-CSV 4.0) (GMT Standard Time)(-60)","\\MACHINE-NAME\% C2 Time"
"10/29/2017 01:59:44.562","88.359340674585567"
"10/29/2017 01:59:59.562","93.754278507632279"

Here I’m just monitoring one metric, and the system inserts a timestamp next to it; the header contains information about the timezone of the machine and its offset from UTC in minutes. However, I noticed whilst reviewing some data captured over the last few months that when the UK moved from GMT to BST there was just a gap in the data for an hour, and upon some fiddling with my machine’s clock I managed to show that going the other way there were duplicate timestamps for an hour. This is not good. If we look at the longer version of the above data:

"(PDH-CSV 4.0) (GMT Standard Time)(-60)","\\MACHINE-NAME\% C2 Time"
"10/29/2017 01:59:44.562","88.359340674585567"
"10/29/2017 01:59:59.562","93.754278507632279"
"10/29/2017 01:00:14.562","89.834673388599242"
"10/29/2017 01:00:29.563","94.014449309223536"

Because the timezone offset is only stored in the column header, there is no way to indicate that the local timezone offset changed during the recording of this file; we just see a second set of data for that hour. This makes trying to reason about data recorded over a daylight savings change very hard.
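
To illustrate why a bare wall-clock timestamp can’t be mapped back to a unique instant, here’s a quick sketch in Go (my illustration; nothing to do with the Performance Monitor tooling):

package main

import (
    "fmt"
    "time"
)

func main() {
    london, err := time.LoadLocation("Europe/London")
    if err != nil {
        panic(err)
    }
    // On 29 Oct 2017 UK clocks went back from 02:00 BST to 01:00 GMT,
    // so the wall-clock time 01:30 happened twice that morning.
    t := time.Date(2017, time.October, 29, 1, 30, 0, 0, london)
    fmt.Println(t)       // the library has to guess which of the two instants you meant
    fmt.Println(t.UTC()) // the unambiguous form the log file never records
}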

I thought perhaps this might be a limitation of the CSV data format, so I tried the binary format. The binary output of Performance Monitor is not documented, but there are PowerShell bindings that let you query the data. So I had a look at the same data in binary format:

# $counters = Import-Counter -Path mysamples.blg

# $counters[10].Timestamp
29 October 2017 01:59:59
# $counters[11].Timestamp
29 October 2017 01:00:14

# $counters[10].Timestamp.IsDaylightSavingTime()
False
# $counters[11].Timestamp.IsDaylightSavingTime()
False

# $counters[10].Timestamp.ToFileTimeUtc()
131537159995620000
# $counters[11].Timestamp.ToFileTimeUtc()
131537124145620000

Again, not only are the timestamps not timezone aware, it’ll happily tell you that time goes backwards at one point. I’ve had a look through the API documentation for setting these collectors up programmatically, and had a play with the UI, and I can’t see a way to correct this.

I had a look at how the Windows tools cope with graphing data around these transitions, and the answer is they don’t. They either show a missing hour in the graph, or they squash the “duplicate” hour into a single sample that is averaged.


Timezone information is hard, and I’m struggling to see why Microsoft didn’t store each sample’s timestamp in UTC and let the reading program deal with any view related timezone offsets. As it is, this makes it quite hard for tools that average data to work reliably over the transitions, particularly if you have to account for things like the machine being intermittently up. You can try to work out the local daylight saving rules for the given timezone, but those are helpfully written down in human readable form that you’ll have to translate yourself.

If I’m missing a trick here that makes this all go away, let me know, either by contacting me directly or better yet on the stack overflow question I posted. But given Microsoft’s own tools for processing the data don’t deal with this scenario I’m not that hopeful. Otherwise hopefully this post will at least save others the head scratching I did trying to work out what was going on in my data.

A simple UI for managing local docker instances

I use Docker a lot for running the various web services I work on (either for myself or under contract). I'm a big fan of how, even when not using containers for deployment, it just simplifies so many things about building web services: I don't need to install a DB locally, I just run it in a container; I can test my code running in parallel using multiple container instances; I can install different potentially conflicting tool chains for different projects; and so forth.

Docker is also sufficiently lightweight that I can often forget I've got a mirror of some client's scaled web infrastructure running on my machine after I clock off, but not so lightweight that I don't notice my battery draining quicker than I'd like when I have forgotten to shut everything down. Unfortunately, on macOS at least, the default Docker UI doesn't indicate the state of your local infrastructure, making this a somewhat frequent occurrence for me.

To solve this, I've written a small status bar item, called Stevedore, that just simply indicates whether I've left any containers running (by having either an empty ship or one laden with containers on my menu bar) and has a drop down menu that then lets me quickly stop or start them. It's not the most impressive bit of engineering in the world, but it fixes a problem I have, so I thought I'd share it.


Stevedore was also an excuse for me to play with Swift in anger for the first time in quite a while. Stevedore is a simple enough app that it was easy to get started, but the Docker API being based on HTTP over Unix domain sockets meant I got to play with Dispatch IO and other bits of concurrency to keep it interesting to implement, and I wrote my own limited HTTP parser to manage the Docker channel: not because I should, but because it was a useful learning experience doing so in Swift. I really do miss playing with things like GCD, so it was good to exercise that bit of my brain again, and to learn just how far Swift has come since I last wrote any in production.
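
To give a flavour of what that looks like, here's a rough sketch of the approach (not Stevedore's actual code; the socket path is Docker's standard one, everything else is illustrative):

import Foundation

// Connect to the Docker daemon's Unix domain socket...
let fd = socket(AF_UNIX, SOCK_STREAM, 0)
var addr = sockaddr_un()
addr.sun_family = sa_family_t(AF_UNIX)
"/var/run/docker.sock".withCString { path in
    withUnsafeMutableBytes(of: &addr.sun_path) { dst in
        dst.copyBytes(from: UnsafeRawBufferPointer(start: path, count: strlen(path) + 1))
    }
}
let status = withUnsafePointer(to: &addr) {
    $0.withMemoryRebound(to: sockaddr.self, capacity: 1) {
        connect(fd, $0, socklen_t(MemoryLayout<sockaddr_un>.size))
    }
}
precondition(status == 0, "could not reach the Docker daemon")

// ...then speak HTTP over it using a DispatchIO channel.
let channel = DispatchIO(type: .stream, fileDescriptor: fd, queue: .main) { _ in close(fd) }
let request = Array("GET /containers/json HTTP/1.0\r\n\r\n".utf8)
request.withUnsafeBytes { bytes in
    channel.write(offset: 0, data: DispatchData(bytes: bytes), queue: .main) { _, _, _ in }
}
channel.read(offset: 0, length: Int.max, queue: .main) { done, data, _ in
    if let data = data, !data.isEmpty {
        // In the real app these bytes feed the hand-rolled HTTP parser.
        print(String(decoding: data, as: UTF8.self))
    }
    if done { exit(0) }
}
dispatchMain()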

Whilst on the Bromium Mac team we wrote lots of gnarly code for macOS, it was almost exclusively in Objective-C, as we couldn't afford to keep playing catchup as Apple made incompatible changes to Swift each year. But now that I'm getting back to my own stuff, I'm keen to give Swift a serious go. Having built big/complicated products, I've learned the hard way that any language support you can get to make mistakes less likely is a good thing. This is why I like using Go for web services where possible, and why features of Swift like its explicit function error returns make me very happy. At some point I'll write up what I think are the good/bad bits of Swift from that point of view, a bit like I did for Go a couple of months back.

Anyway, there's enough functionality in Stevedore I'm already using it daily, so if you think this might be of use to you, then head over to github where I've posted the source, or you can download a binary version here.

Better testing for golang http handlers

I'm writing this up as it doesn't seem to be a common testing pattern for Go projects that I've seen, so might prove useful to someone somewhere as it did for me in a recent project.

One of the things that bugs me about the typical golang http server setup is that it relies on hidden globals. You typically write something like:

package main

import (
    "net/http"
    "fmt"
)

func myHandler(w http.ResponseWriter, r *http.Request) {
     fmt.Fprintf(w, "Hello, world!")
}

func main() {
     http.HandleFunc("/", myHandler)
     http.ListenAndServe(":8080", nil)
}

This is all lovely and simple, but there's some serious hidden work going on here. The bit that's always made me uncomfortable is that I set up all this state without any way to track it, which makes it very hard to test, particularly as the http library in golang doesn't allow for any introspection on the handlers you've set up. This means I need to write integration tests rather than unit tests to have confidence that my URL handlers are set up correctly. The best I've normally seen done, test wise, with this setup is to test each handler function in isolation.

But there is a very easy solution to this; it's just not really presented in the golang docs as something you'd ever do - they explicitly state that no one would normally do this. Clearly their attitude to testing is somewhat different to mine :)

The solution is in that nil parameter in the last line, about which the golang documentation states:

"ListenAndServe starts an HTTP server with a given address and handler. The handler is usually nil, which means to use DefaultServeMux."

That handler is a global variable, http.DefaultServeMux, which is the request multiplexer that takes the incoming requests, looks at the paths, and then works out which handler to call (including the default built in handlers that return 404s etc. if there's no match). This is all documented extremely well in this article by Amit Saha, which I can highly recommend.

But you don't need to use the global: you can just instantiate your own multiplexer object and use that. If you do this, your code stops using side effects to set up the http server and becomes a lot nicer to reason about and test.

package main

import (
    "net/http"
    "fmt"
)

func myHandler(w http.ResponseWriter, r *http.Request) {
     fmt.Fprintf(w, "Hello, world!")
}

func main() {
     mymux := http.NewServeMux()
     mymux.HandleFunc("/", myHandler)
     http.ListenAndServe(":8080", mymux)
}


The above is functionally the same as our first example, but no longer takes advantage of the hidden global state. This in itself may seem not to buy us much, but in reality you'll have lots of handlers to set up, and so your code can be made to look something more like:

func SetupMyHandlers() *http.ServeMux {
    mux := http.NewServeMux()

    // set up dynamic handlers
    mux.HandleFunc("/", MyIndexHandler)
    mux.HandleFunc("/login/", MyLoginHandler)
    // etc.

    // set up static handlers; note these are registered on our mux too,
    // not on the global http.DefaultServeMux
    mux.Handle("/static/", http.StripPrefix("/static/", http.FileServer(http.Dir("/static/"))))
    // etc.

    return mux
}

func main() {
     mymux := SetupMyHandlers()
     http.ListenAndServe(":8080", mymux)
}


At this point you can start using SetupMyHandlers in your unit tests. Without this, the common pattern I'd seen was:

package main

import (
    "net/http"
    "net/http/httptest"
    "testing"
)

func TestLoginHandler(t *testing.T) {

     r, err := http.NewRequest("GET", "/login/", nil)
     if err != nil {
          t.Fatal(err)
     }
     w := httptest.NewRecorder()
     handler := http.HandlerFunc(MyLoginHandler)
     handler.ServeHTTP(w, r)

     resp := w.Result()

     if resp.StatusCode != http.StatusOK {
          t.Errorf("Unexpected status code %d", resp.StatusCode)
     }
}

Here you just wrap your specific handler function directly and call that in your tests, which is very good for testing that the handler function works, but not so good for checking that someone hasn't botched the series of handler registration calls in your server. Instead, you can now change one line and get that additional coverage:

package main

import (
    "net/http"
    "net/http/httptest"
    "testing"
)

func TestLoginHandler(t *testing.T) {

     r, err := http.NewRequest("GET", "/login/", nil)
     if err != nil {
          t.Fatal(err)
     }
     w := httptest.NewRecorder()
     handler := SetupMyHandlers()  // <---- this is the change :)
     handler.ServeHTTP(w, r)

     resp := w.Result()

     if resp.StatusCode != http.StatusOK {
          t.Errorf("Unexpected status code %d", resp.StatusCode)
     }
}

Same test as before, but now I'm checking that the actual multiplexer used by the HTTP server works too, without having to write an integration test for that. Technically, if someone forgets to pass the multiplexer to the server then that will not be picked up by my unit tests, so they're not perfect; but that's a single line mistake that breaks every URL handler, so I'm less concerned about the developer failing to spot that than about someone forgetting one handler in dozens. You also automatically end up testing any new http wrapper functions people insert into the chain. This could be a mixed blessing perhaps, but I'd argue it's better to make sure the wrappers are test friendly than to have less overall coverage.

The other win of this approach is that you can also unit test that your static content is being mapped correctly, which you can't do using the common approach: you can happily test that requests to the static path set up in SetupMyHandlers return something sensible. Again, that may seem more like an integration style test than a unit test, but if it lets me find and fix bugs earlier in the dev cycle, that beats wasting time waiting for CI to pick up my mistake.
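
As a sketch (the asset name here is hypothetical; point it at something your file server actually ships):

func TestStaticContentIsMapped(t *testing.T) {
    r, err := http.NewRequest("GET", "/static/logo.png", nil)
    if err != nil {
        t.Fatal(err)
    }
    w := httptest.NewRecorder()
    handler := SetupMyHandlers()
    handler.ServeHTTP(w, r)

    if w.Result().StatusCode == http.StatusNotFound {
        t.Error("static content is not mapped")
    }
}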

In general, if you have global state, you have a testing problem, so I'm surprised this approach isn't more common. It's hardly any increase in code complexity to do what I suggest, but your test coverage grows a lot as a result.

Fretboard design generator

For those less familiar with my other technical outlet: I build custom electric guitars. Of the process of building an electric guitar, laying out the fretboard slots is one of the more fiddly bits, and one that you have to get spot on if the guitar is to play in tune. Even if you're using CNC machinery as part of your workflow, as I do for some of the initial bulk cutting operations, taking the output of a fret spacing calculator and entering it into your design tool is very tedious.
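
For reference, the maths being automated is the standard equal temperament rule (my summary, not part of the original tool): each fret sits a twelfth root of two closer to the bridge, so the distance from the nut to fret n is scale × (1 − 2^(−n/12)). A handy sanity check: on a 25.5" scale, fret 12 lands at 25.5 × (1 − 2^(−1)) = 12.75", exactly half the scale length.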

When I made my first fretboard, I found an existing design file that had the slots at the right scale length, but now someone has asked me about a baritone neck, which has a longer scale and so has all the frets in different positions, so I was back to square one. Being a software engineer, I decided to automate the generation of design files as a simple webpage, which you can access here.

[Screenshot: the fretboard template generator]

Whilst not the prettiest of UIs, it's (hopefully) simple to use: you enter the details of the neck you want, such as the scale length, the number of frets and so forth; you get a preview of your fretboard, along with the positions in a table for you to confirm it is what you want; and then you can export the design as SVG or DXF. This means you can import it into most design and CAM software for final tweaking and then production. Here you can see one imported into the tool I use for driving the laser cutters at Makespace:

[Screenshot: a generated fretboard design imported into the laser cutter software]

And you can then see a video of it in action here:

I'm a big believer in contributing back to the luthier community, which is based a lot around sharing ideas and techniques, so this tool is open source for others to play with and contribute to. The tool was mostly created using MakerJS, a nice Javascript library from Microsoft aimed at making it easy to programmatically generate designs for the kit you find in maker spaces. The guys at MakerJS were even kind enough to tweak it in response to my posting this tool, to fix some limitations I hit, so many thanks to the MakerJS team!

Managing GOPATH for multiple projects with direnv

I'll stop with the golang tips shortly, but another quick time saver in case you've not seen this before: you can use direnv to manage your GOPATH settings for each of your projects.

direnv is a small utility that will set/unset environment variables as you enter/leave directories. It's dead easy to set up, and is in homebrew if you're on a Mac. This means I can set a GOPATH specifically for each Go project without having to remember to do GOPATH=$PWD each time - direnv just sets it as I change directory into the project, and unsets it when I move away.
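
The setup is just a short file dropped in the project root, which direnv asks you to approve once with "direnv allow" (a sketch; adjust to taste):

# .envrc in the project root
export GOPATH=$(pwd)
export PATH=$GOPATH/bin:$PATH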

This can be useful for other things too, like setting PYTHONPATH or other project specific environment variables.

Hat tip to Day Barr for alerting me to that one.

Handling golang third party dependencies robustly

I wrote recently about my thoughts on golang, concluding that although far from perfect, I quite like the language as it makes solving a certain class of problem much easier than traditional methods.

One of the things I was a bit dismissive of was how it manages packages. Whilst I'm not a fan of its prescriptive nature, its out of the box behaviour is to my mind just not compatible with delivering software repeatedly and reliably for production. However, it's fairly easy to work around this, and as I've not seen anyone use this particular approach, I thought I'd document it for future people searching for a solution.

The problem is this: by default golang has a nice convenience feature whereby third party packages are referred to by their source location. For example, if I want to use GORM (a lightweight ORM for Go), which is hosted on github, I'll include it in my program by writing:

import "github.com/jinzhu/gorm"

And as a build stage I'll need to fetch the package by running the following command:

go get -v github.com/jinzhu/gorm

What this command does is check out the package into your $GOPATH/src directory at $GOPATH/src/github.com/jinzhu/gorm, doing a git clone of whatever their latest master code is.

On one hand this is very nice: how to find and fetch third party dependencies is built in. However, it enforces two things that I don't want when I'm trying to build production software:

  1. I now rely on a third party service being around at the time I build my software
  2. The go get command always fetches the latest version, so I can't control what goes into my build

Both of these are not something I'm willing to accept in my production environment, where I want to know I can successfully build at any time, and I have full control over what goes into each build.

There is a feature of the golang build system you can use to solve this; it's just not that obvious to newcomers, and alone it isn't very useful. So here's my solution, based on the assumption that you're already using git for version control and have $GOPATH pointed at your project's root folder:

  1. Clone the project into your own code store repository. I always do this anyway, as you never know when third party projects will vanish or change significantly.
  2. Create a vendor directory in your project's source tree. The golang build system will look in a vendor directory alongside your code (e.g., $GOPATH/src/vendor) for packages before looking in the $GOPATH/src directory itself.
  3. Add the project as a git submodule at the appropriate point under vendor: for GORM that'd be vendor/github.com/jinzhu/gorm, similar to where "go get" would have put it in the src directory (see the sketch after this list).
  4. Replace your "go get" build step with a "git submodule update" command.
  5. And voila, you're done. Using git submodules means you can control which commit of the third party project you're using, and by pointing at your own mirror, you can ensure that as long as your own infrastructure is up you can still deliver software regardless of external goings-on.
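
Concretely, the git end of that looks something like this (the mirror URL is hypothetical; substitute your own):

# one-off setup, run from the project root
git submodule add git@mygitserver:mirrors/gorm.git vendor/github.com/jinzhu/gorm
git -C vendor/github.com/jinzhu/gorm checkout <known-good-commit>
git commit -am "Pin gorm at a known-good commit"

# and in your build, instead of "go get":
git submodule update --init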

As a friend of mine pointed out, there are tools that try to manage third party code into the vendor location for you, such as vndr, but the fewer tools I need to install to build a product the better - still, if you want to avoid creating the directories yourself then you should give it a look.

Some thoughts on Golang

The Go programming language has been around for about a decade now, but in that time I've not had much call to create new networked services, so I'd never given it a go (I find I can't learn new programming languages in the abstract; I need a project, otherwise the learning doesn't stick). However, I had cause to redo some old code at work that had grown a bit unwieldy in its current Python + web framework du jour form, so this seemed like a chance to try something new.

I was drawn to Go by the promise of picking up some modern programming idioms, particularly around making concurrency manageable. I'm still amazed that technologies like Grand Central Dispatch (GCD), which save programmers from worrying about low level concurrency primitives (which, as weak minded humans, we invariably get wrong), are not more widely adopted - modern machines rely on concurrency to be effective. In the Bromium Mac team we leaned heavily on GCD to avoid common concurrency pitfalls, and even then we created a lot of support libraries to simplify it even further.

Modern web service programming is inherently a problem of concurrency - be it on the front end, where you're managing many requests to your service at once, or on the back end, where you're trying to offload long running and periodic tasks away from the request service path. Unfortunately the dominant language for writing web services, Python, is known to be terrible at handling concurrency, so you end up offloading concurrency to other programs (e.g., nginx on the front end, celery on the back end), which works, but means you can only deal with very coarse grained parallelism.

Go seems to have been designed to solve this problem. It's a modern language, with some C like syntax but free of the baggage of manual memory management and casting (for the most part), and it makes concurrency a first class citizen in its design. Nothing it does is earth shatteringly new - the goroutine concurrency primitive is very old, and the channel mechanism used to communicate between these routines is standard IPC fare - but what it pulls off is putting these things together in a way that is very easy to leverage. It lacks the flexibility of the aforementioned GCD to my mind, but ultimately it is sufficiently expressive that I find it very productive for writing highly concurrent code safely. It actually makes writing web services with such demands fun again, as you end up with a single binary that does everything you need, removing the deployment tedium of the nginx/python/celery pipeline. You can just worry about your ideas, which is really all I want to do.
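
For anyone who hasn't seen them, the canonical shape of those primitives looks something like this (a textbook sketch, not code from my service):

package main

import (
    "fmt"
    "time"
)

// worker reads jobs from one channel and writes results to another; the
// goroutines and the channels between them are the whole concurrency story.
func worker(jobs <-chan int, results chan<- int) {
    for j := range jobs {
        time.Sleep(10 * time.Millisecond) // stand-in for real work
        results <- j * 2
    }
}

func main() {
    jobs := make(chan int, 5)
    results := make(chan int, 5)
    for w := 0; w < 3; w++ {
        go worker(jobs, results)
    }
    for j := 1; j <= 5; j++ {
        jobs <- j
    }
    close(jobs)
    for i := 0; i < 5; i++ {
        fmt.Println(<-results)
    }
}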

Another nice feature is the pseudo object orientation system in Go. Go has two mechanisms that lead you in the same direction as traditional OO programming - structs and interfaces. Structs let you define record types as you might in C, but with composition available to give you a sort of inheritance if you need it, and interfaces just define a list of function signatures. But an interface isn't tied to a struct as it might be in traditional OO; they're defined separately, and any type with the right methods satisfies the interface. This seems weird at first, but is really quite powerful, and makes writing tests very easy (and again, fun), as it means you can "mock", say, the backend object simply by writing an object that obeys an interface, rather than worrying about actual inheritance. Again, it's nothing new; it's just pulled together in a way that is simple and easy to be productive with.

The final nicety I'll mention is an idiom that we forced on ourselves in the Mac team at Bromium - explicit error handling, with errors returned explicitly next to the valid result. This makes writing code that handles errors really natural, which is important: programmers are inherently lazy people, and a common cause of bugs is that the programmer simply didn't think about error handling. Go's library design and error type make this easy.
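
The standard shape, for anyone unfamiliar with it (a minimal sketch):

package main

import (
    "fmt"
    "os"
)

func main() {
    // The error comes back right next to the value, so ignoring it has
    // to be a deliberate act (assigning it to _) rather than an oversight.
    f, err := os.Open("config.yaml")
    if err != nil {
        fmt.Fprintln(os.Stderr, "could not open config:", err)
        os.Exit(1)
    }
    defer f.Close()
    fmt.Println("opened", f.Name())
}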

For all this, Go has its flaws. Out of a necessity to allow values that may be empty, Go has a pointer type, but it makes accessing concrete values and pointers look identical in most cases, so it's easy to confuse the two, which can occasionally lead to unexpected bugs - particularly when looping over things, where you can end up taking a pointer to the loop variable rather than to the value it currently holds (see the sketch below). The testing framework is deliberately minimal, and the lack of class based testing means you can't really use setup and teardown methods, which leads to a lot of boilerplate code in your tests - this is a shame, as otherwise Go makes writing tests really easy. And let's not get started on the package system in Go, which is opaque enough to be a pain to use.
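
That loop pitfall is worth seeing once (a minimal sketch; Go reuses a single loop variable across iterations, at least in the versions current as I write this):

package main

import "fmt"

func main() {
    values := []int{1, 2, 3}
    var ptrs []*int
    for _, v := range values {
        // Bug: v is one reused variable, so every entry in ptrs ends
        // up pointing at the same address.
        ptrs = append(ptrs, &v)
    }
    for _, p := range ptrs {
        fmt.Println(*p) // prints 3, 3, 3 rather than 1, 2, 3
    }
}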

It's also a little behind, say, Python in terms of full stack framework support. The Go community seems against ORMs and Django style stacks, which does mean it's hard to justify its use if you're writing a website of any complexity for humans to use. There is at least a usable minimal DB ORM in the form of GORM, which saves you from writing SQL all the time.

But for all its flaws, I really have taken to Go. I've now written a small but reasonable amount of production quality code in it, and I still find it a joy to use as it's so productive. For writing backend web services it's great. There's not enough mature framework support yet that I'd use it instead of Django/Python for a full user interactive website, but for IoT backends and the like it's really neat (in both senses).

If any of this sounds interesting to you then I can recommend The Go Programming Language book. Not only is it easy to read, it gives you good practical examples that let you see the power of its primitives very quickly. If you think I've got it wrong with any of my criticisms of the language, then do let me know - I'm still a relative newbie to golang and very happy to be corrected so I can be even more productive with it!

Discovering things too late: Quartz Composer edition

Like most techies I have a todo list of things I'd like to hack on as long as my arm, and xmas is one of the few times I get to act on it. However, I don't want to spend all xmas doing my day job in another form, which is what a lot of the list would amount to.

One of the things I wanted to get going again was the screen in our kitchen. We have a nice framed monitor on the wall that we used to use with our CODA screen back in the day to display photos, weather, social media feeds, etc. One of the reasons I was particularly sad when Camvine and CODA went away was not just the effort myself and others had poured into the company, but that it was a genuinely useful product; I've not since found something that would let me manage content on my wall so easily, and our kitchen display has sat unused since.

I often wanted to get something up and running to replace it, but the amount of boilerplate needed to get to the fun part (displaying pictures and feeds with any sense of style) just put me off. But then I happened across a tutorial for Apple's Quartz Composer, which let me do all the fun bits right away without any tedious code, and has a path to making the result into an app when I'm done.

[Screenshot: a 500px photo feed composition open in the Quartz Composer editor]

Quartz Composer is a lot of fun - it's a node based system where you wire up operations to make a simple flow that results in nice things appearing on screen, and it allows many interaction modes. I imagine all my design friends are laughing at me for taking this long to find such a tool - I've always had my head down in the nuts and bolts, which is presumably why I skipped over it originally. Within half an hour I had something up and running displaying my photo feed from 500px, and a day or two later something slightly more polished.

I was looking forward to pouring more time into this project over the coming months, but unfortunately I've also discovered that Quartz Composer has been abandoned by Apple. Whilst you can still use it, it's got some serious issues on El Capitan. Initially I was going to post links to the tutorials I followed here, but I can't really recommend anyone give it a try, which is sad, as it makes it simple both to prototype visual and interactive interfaces very quickly and then to turn them into production apps too.

Still, for my own uses here, it continues to function for now, so I have a working screen in my kitchen again.

Oculus Rift experiences

Last weekend my friend Nick took me and my other half to a strange purple office, and showed me the nice pyramid of playing cards he had on his desk. Then we went to a beautiful old house by the edge of Lake Como, with lovely candelabra and trees in the garden. Finally, we went to an asteroid field, and then docked at a space station. All virtual of course, but also real, as what Nick was demonstrating to us was the Oculus Rift, a virtual reality headset that is making it seem like VR is here, once again.


People of my age are probably quite cynical when it comes to VR. We remember Virtuality from the early 90s, a full VR gaming rig made popular in the UK by the TV game show Cyberzone. It seemed back then that VR was here, but for various reasons it never took off and fizzled out, and then we got all excited about this thing called the Internet and forgot about it. But having seen it back then, it's still something that gets me excited about the possibilities, so having watched the rise of the Oculus Rift skeptically for a while, I finally begged Nick to let me have a go on his Oculus Rift Dev Kit 2 (DK2), and he was gracious enough to provide us with a physical tour of virtual reality.

The Oculus Rift is a nice bit of hardware. You can imagine it as a pair of skiing goggles with an LCD screen strapped to the front; and indeed that's what the early demo units were. The DK2 unit however is well built and feels solid. It's not light, but for me it wasn't too heavy either. Inside it has a mobile phone screen designed for 1920x1080, which is shared between both eyes, giving you 960x1080 per eye. Also in the unit are a bunch of hidden IR emitters, which are tracked by a small sensor you clip to your screen, and it's through this that it can track your head position. You have to provide your own sound, or in our case Nick had a nice Turtle Beach headset.

So, what's it like? 

Firstly it's a little disorientating, so I'm glad Nick started us off with some simple demos. You go into this virtual world that you can see, but know isn't real, and that first experience takes a few minutes to get used to. It really is confusing when you look down and fail to see your hands. You then spend the next few minutes bobbing your head, trying to find anything that exhibits parallax to get the obvious 3D benefits. And here it really does deliver a solid feeling world.

From a technical standpoint there are two main factors that you want for a VR headset to work well, and the most important one is low latency (or low lag, as most people say). Put simply, if you move your head, you want the world to move with you with as little delay as possible. In the stock demos for DK2, the latency is effectively not noticeable. Objects in the demos seem totally solid: you move your head and they move (or rather, don't move) as you'd expect. The demos don't have the highest of graphical fidelity, but they get this basic point across well: the world feels solid. The things you see really do feel like they're anchored into position in the world. One of the criticisms of the first Rift dev kit (DK1) was that even with just 15 ms of delay between sampling your head orientation and delivering the picture to your eyes, the lag was still noticeable. For DK2 they've got that down to an effective 5 ms, by adding an extra rendering stage in which, after they've drawn the picture, they sample where your head is now and apply a little skew to correct for any change. It's a noticeable improvement (to me at least); you can switch between the DK1 and DK2 modes and see a slight wobble in DK1 mode that is gone in DK2.

The second factor is visual fidelity, aka screen resolution. Although 1920x1080 is a lot of pixels for a screen a few feet from your face, in the DK2 the screen is a few inches from your face, and the pixels can be noticeable at times. I suspect they're probably noticeable all the time but your brain fuzzes it out; things more than a few virtual feet away all seem fine, but when you look at things up close you start to notice the pixels, presumably as they limit the visual fidelity of nearby objects. This to me was the main technical let down of the Rift in its current form (which we should remember is still a development kit, not a shipping product).

Thus far in all my DK2 experience I'd been stood still, gawping like an idiot at these virtual worlds. Next I tried walking, moving around the world using keyboard and mouse input like a first person shooter (FPS), and it was here my body decided it was really confused, and I got something akin to motion sickness. Your eyes have this very convincing input telling you that you're in this house by Lake Como and that you're walking up the stairs to look off the balcony, but your inner ear is saying you're sat in a chair in Tooting, not moving. I found this particularly unsettling, such that I ended up blurring my eyes whilst I moved from location to location, then revelling in the wonder of that particular spot, before repeating the process to move about. Thus it would appear that playing Skyrim (essentially a fell running simulator) with the Rift is not going to be my thing any time soon.

But before we decide this is a show stopper - not all immersive experiences require the viewer to be the source of movement. So long as the world moves about you, rather than you in the world, it's not a problem. Thankfully Nick had one of those, having saved the best demo to last: Elite: Dangerous. Elite, in case you're not a child of the 80s, is a space faring simulator, where you pilot your spaceship along trade routes, trying either to make an honest living or to be a pirate, attacking the others and evading the police. This new version is wonderfully detailed, as this screenshot of me playing it on a conventional screen shows:

[Screenshot: Elite: Dangerous on a conventional screen]

Now put that into VR, and you're getting somewhere interesting. Nick started me off in the same position as in that picture above: in the pilot seat of a stationary spaceship floating in an asteroid field, and it was absolutely mesmerising. I could look all around me: before me were the controls, and left, right, and up were slowly tumbling asteroids, and beyond them a wall of stars and the band of the Milky Way. With the headset on to mask out the sounds of an office in London, I was briefly there, inside this vast vista; not watching someone else in that vista on a screen, I was for the first time truly there myself. The cabin of my spaceship was all there too: I could see the thermos flask strapped in to the right of my chair, and I could stand up and see bits of my spaceship out of the canopy. Once sat back down, I fumbled and found the throttle and joystick Nick had, and as I moved them my virtual hands moved, and I piloted my way through the field, gawping some more, and adding healthily to Nick's spaceship insurance premiums as I failed to spot all the asteroids.

Here, sat in deep space, there's no problem with inner ear and eye disagreeing; it all feels very comfortable. I've never flown a spaceship, so my brain quite happily accepts that the whizzing spaceship has no sense of motion (just like in Star Trek then, where they can accelerate beyond the speed of light and no one so much as spills their tea). It really is quite jaw dropping, but I do suspect I'm also getting waves of nostalgia here, as this was the first time I'd played Elite since the early 90s.

The one thing that does hamper Elite is that lack of resolution. The menus in the cockpit of my spaceship are a little hard to read (though wonderfully 3D - I can't tell you how pleased I was to see the iconic Elite scanner in real 3D). But there you can at least do one trick that I did naturally but didn't expect to work - you can just move your head closer. Clearly even in space people still need varifocals.

The main sense of wonder though is that canopy and the view beyond. Being able to look not just forward, but any which way, changes things. The space station we docked in felt truly vast. I could look over my shoulder for things buzzing me. It's a totally new set of experiences in a video game, and it makes the game feel so much bigger. Elite on my LCD monitor at home is still fun, but I know it's not the Real Thing™, that there's a much better experience out there to be had by those fortunate enough to have a stonking PC and an Oculus Rift.

Summing up - is VR here now, 20 years later? I don't quite think the Rift is ready for the general public, but it's a damned good approach. You need more than just Elite to make it compelling, and the resolution is just too low for it to replace polarised glasses for watching 3D films at home. But assuming hype doesn't overtake it, and it continues to improve steadily over the next couple of years, there's definitely something there that should make an impact on the video game market, if nothing else. I hope it's successful, as it really is an amazing experience, of which I'd like more, rather than waiting another 20 years.

Moving to Windows Phone

A couple of months ago I surprised both myself and quite a few of my friends by moving from iOS to Windows Phone, running on a Nokia Lumia 930, as my daily device, and I thought I’d write up some of my thoughts on it here.


Why change? Having consistently used iOS since the iPhone 3G (the oddly named second version of the iPhone), I decided it was time to try something different when I came to replace my iPhone 5. I’d held out until WWDC (Apple's annual developer conference) to see what the next version of iOS held in store, but nothing from a user's perspective seemed that new (at least for my typical usage). Don't get me wrong, iOS is really good, and for the most part it just works, which as a user is fantastic, but as a technologist is a little bit dull. And it’s not just the phone OS that has stagnated, so have the applications I run on it. I’ve no idea if it’s me or the app store (or both), but I’ve found myself using the same ten or so apps for the last year, with nothing new to excite me about using my phone.

As an alternative I decided to go for something totally unknown - Windows Phone 8.1. I’d heard some good things about it, but had next to no experience of it (or indeed of Windows since I stopped working at Intel some eight years ago), so it seemed like a suitable technological adventure. And given that part of my aim was to compare it to my iPhone, I opted to buy the flagship Windows Phone handset at the time, the Lumia 930 (which, despite being the flagship, was actually cheaper than the equivalent iPhone, though still not cheap).

I’ve now lived with the phone for two months, and people keep asking my opinion on it, so here’s some thoughts on it to date: the good, the mixed, and the bad.

The good

Windows Phone generally seems quite slick UI-wise. It suffers a little from animation overdose (as does iOS these days), slowing down navigation a bit, but on the whole I do like the start-of-day experience with the phone. I’ve been using the Live Lock Screen Beta app to have a nicely playful lock screen, and the live tiles on the home screen have actually grown on me quite a bit. Some of the tiles are a bit annoying, and thus I’m forced to minimise them to shut them up (e.g., Cortana wants to show news headlines which I’m not interested in). But having the weather, calendar, and so forth on the home screen is quite nice. It would be nice if some apps could have big tiles without animations, but overall I do like the live tiles, which I didn't think I would.

There’s some lovely bits of joined-up thinking in Windows Phone overall. I’m signed into Facebook on my phone, and it uses people’s profile pictures from Facebook for my address book, saving me from having no pictures for most people. I have Laura’s contact page as a tile on my phone screen, and it displays not just Laura’s profile picture, but what she said on Facebook today too. It’s all seamlessly pulled together, which is nice.

The build quality of the Lumia itself is great, and the screen size (4.7") was an instant hit with me (this was before anyone knew that Apple would go the same way with the iPhone 6). Even after a couple of days, going back to the iPhone 5 to fetch odd bits of data, I realised that I’d struggle to return to a small screen. The other bit of the hardware I like is the wireless charging. At the same time as the phone I got a Wireless Charging Pillow, which is a bit gratuitous, but it’s a lovely convenience not having to fiddle with cables just to recharge it overnight; when I go to bed, so does the Lumia.


The mixed

The obvious thing that puts people off moving to Windows Phone is the low number of apps in its store compared to iOS and Android, a consequence of the platform's overall lack of popularity. However, I did my homework before I jumped, and knew that most of the apps I used daily were on Windows Phone. Social media is well covered, with Instagram, FourSquare/Swarm, Facebook, and Twitter all there. So is Spotify, which is how I listen to most of my music these days. Runkeeper, which I used to track cycling on iOS, was not there, but the competing Runtastic service is, and I could easily migrate my data, so I did.

One of the things I used regularly on my iPhone was a wide range of photography applications. I don’t currently have the bandwidth to spend hours with my DSLR, Aperture, and Photoshop, so on the iPhone I had instead built up an array of apps I used to try to make my Instagram output unique. On Windows Phone there’s certainly less to choose from, but it’s not totally bereft of photo applications. At the moment I’m mostly relying on Photoshop Express, which is a solid basic editing tool, but I do miss apps for more advanced editing and modification. Still, I was able to take, edit, and publish this photo on my phone, so it’s not too bad:

[Instagram photo taken and edited on the phone]

There’s only been one app where I’ve found no equivalent, and that’s an RSS reader that works with my chosen RSS service, Feedbin. There are quite a few that work with the more popular Feedly, so for now I’m using the built-in browser to access Feedbin, which is nowhere near as nice on a mobile device as Reeder is on iOS.

The camera was another big draw in picking the Nokia device, as Nokia cameras have always had a very good reputation, and as I say, I use my mobile as my primary camera these days. Unfortunately, here the iPhone does beat it. Although on paper the Nokia camera may be better, it's just not as usable as the iPhone's camera for everyday shooting. The Nokia camera is slower to focus, and slower to start, so the iPhone is much better for the capture-a-moment-instantly use case. On the flip side, you do tend to get a lot more detail from the Nokia camera, but the iPhone is good enough on that front for the majority of people. The Nokia camera is far from bad, and I’ve taken some pictures I’m really pleased with, but the iPhone camera is just much more usable overall.


The bad

My main gripe with Windows Phone to date is the email client. Out of the box it assumes we’re living in 1998, and thus tries to use the network as little as possible, only caching the last seven days of email and not downloading images. In 2014 this is not what I’d expect as the default on a top-of-the-line smartphone. But even with those options set to something more sensible, the client is just a bit rougher than its iPhone equivalent. Apple have made it very easy to flick through your email at speed: reading this, deleting that, and so on. Getting through my email on Windows Phone is just much slower. Deleting a single email requires a mode change, a select, and a confirm, with my fingers moving up and down the screen; on the iPhone it’s a simple swipe and tap in a single location. My hope is that Maestro, which goes into beta next week, will provide a nicer alternative.

With the app ecosystem it’s a similar complaint: it’s not the lack of apps that’s the problem, but the lack of quality in the apps that are there. Even famous names with lovely iOS apps ship Windows Phone apps that feel like they were left to the intern to knock up over a weekend. I’m hugely grateful that there’s anything from 1Password on Windows Phone at all, but boy does their Windows Phone app _aspire_ to be done by an intern. The only exception to this rule is the first-party apps. For example, the Xbox Smart Glass app on Windows Phone is absolutely wonderful to use, and shows you can write awesome Windows Phone apps that stand proud alongside apps on iOS in terms of features, ease of use, and aesthetics; it just seems other people aren’t willing to put in the effort.

A small thing: there's no built-in timer app, which to me is insane, and it halves the usefulness of Cortana compared to Siri (half the time I used Siri was to set a timer when cooking :).

Finally, there's a few UI bits that just don't sit right with me. Windows Phone handsets have three hardware buttons below the screen - back, home, and search - and the functionality of the back button is context sensitive, and thus at times confusing. Let's use the mail application as an example. In normal use I'll launch the mail app, see my inbox, drill down to a specific email, and press back to return to my inbox. That makes sense to me, and I'm happy. However, if I pick up my phone, see I have 8 mail notifications, and select the first email to read, back takes me to whatever I was doing before I looked at the notification, not, as I'd anticipate, to my inbox so I can see the other emails waiting for me.

Although the back button is simple to describe as a rule (it takes you to the last screen you saw, across all applications), because what it does is context sensitive it works against my muscle memory for navigating through applications. There's no alternative either: once in an email via a notification, I simply can't get to my other email without going out of mail and coming back in. I can see a certain design rationale for this, but in practice it's just annoying, and now if I have mail notifications I tend to go and find the application to read them rather than tapping on the notification.


Summary

Overall, if you just want a phone that works all the time, I'm afraid I’d still recommend an iPhone over a Windows Phone; but I do enjoy using Windows Phone and am in no rush to give it up. As technologists it’s part of our job to understand all the alternatives, and this is a nice reminder of what life is like outside the iOS ecosystem (and I still have my iPad, which I’m in no rush to replace with a Surface :). 

Windows Phone is clearly still evolving as Microsoft try to up their game. Each update I get adds some great new bits and pieces to the underlying OS. The shame will be if Windows Phone never gets the app developers it deserves. I suspect there’s a small but reasonable business out there for the first company that actually builds a suite of apps worthy of the platform underneath them.

Why I moved to HockeyApp

Until recently, if you were testing iOS apps, there was one third party service that was absolutely essential - TestFlight. TestFlight is a web service that helps you manage all the apps you develop and your app testers, and makes it easy for testers to install your apps on their iOS devices (something Apple have made quite tricky otherwise). Over time TestFlight added more and more functionality, making it easy to retrieve crash logs, add checkpoints to see which features were tested and which weren't, and provide reminders to testers when new versions are available. TestFlight was a godsend to both developers and testers alike. And on top of everything, it is totally free - how much better can you get?

However, I recently came to find TestFlight wanting, and switched to an alternative paid-for service, HockeyApp. At a fundamental level HockeyApp does a fairly similar set of things to TestFlight, but is a little less slick, and costs money. A few people have asked why I switched, so here's a quick summary of where HockeyApp wins over TestFlight for me.

Multiple platform support

TestFlight only supports iOS, but of late I've been working on a couple of things for OS X, and HockeyApp supports testing OS X apps in addition to iOS ones (it also claims to support Android development, but that's not something I've had cause to investigate yet). In general, testing on the Mac is easier than on iOS, as Apple have no special technologies in place to limit how you distribute test applications on the Mac, but even here HockeyApp's help makes the process much better than going without.

Sparkle is an all but ubiquitous open source library for rolling out app updates on the Mac, or at least it was before the Mac App Store took over that duty. Sparkle sits in your app and monitors an appcast feed on a server (much like an RSS feed) to check for updates. HockeyApp supports Sparkle: when you upload test builds they'll appear in a private appcast feed, and if you point your app at that feed, all your testers will be notified the moment there's a new update, ensuring everyone is up to date.

Even if you intend to ship through the Mac App Store, which doesn't allow you to use Sparkle, you can use Sparkle during testing and then remove it when you submit the app to Apple - it's just a huge time saver.
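As an illustration of how little is involved: Sparkle reads its feed location from the SUFeedURL key in your app's Info.plist, which you can set by hand in Xcode or script with PlistBuddy. The feed URL shape below is from memory, so treat it as a sketch and take the real URL from your app's page on HockeyApp:

# Point Sparkle at the private appcast feed for your test builds.
# The app ID and URL format here are illustrative, not gospel - copy
# the real feed URL from HockeyApp.
/usr/libexec/PlistBuddy \
    -c "Add :SUFeedURL string https://rink.hockeyapp.net/api/2/apps/YOUR_APP_ID" \
    "MyApp.app/Contents/Info.plist"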

And needless to say, because HockeyApp supports both iOS and Mac, I can use one service rather than two, which is always good.

Uptime

After almost a year of flawless service, TestFlight had some unfortunate downtime recently. Running a web service is difficult, so I appreciate that sometimes things go down, and they have my sympathy there. But I hit a situation where TestFlight went down twice, each time for the better part of a UK working day (they're a US company, so most likely asleep at that point), and during the second outage there was no acknowledgement that I could see from TestFlight that anything had gone wrong. During that time I couldn't get important test builds out to my clients, leading me to apologise to people for something beyond my control.

This is where "free" suddenly becomes less appealing. I don't really feel I can complain when something offered for free goes away. It's not that you get what you pay for; it's more that there's no way in which TestFlight are particularly beholden to users when those users don't pay.

HockeyApp do charge, which makes the relationship much easier to understand. HockeyApp's uptime has thus far been very good, but of course all web services will have times when something goes wrong beyond anyone's control, so I don't expect HockeyApp to be above the occasional outage. But I feel they'll take it more seriously given that I'm paying for the service.

Integration with Xcode

I gather TestFlight does something similar now, but when I switched I still had to upload apps to TestFlight by hand via a browser upload. HockeyApp have a little Mac app that you can integrate with Xcode as part of the archive process, which uploads new builds to HockeyApp directly. Combining this with automatically generating version numbers from git means I now don't need to lift a finger to get new test versions of my apps to testers.
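If you prefer scripting to apps, HockeyApp also offers a plain HTTP API for uploads, so something like the following should work from a build server. The endpoint and field names here are as I remember them, so double-check against HockeyApp's API documentation before relying on this:

# Push a new build to HockeyApp from the command line.
# YOUR_API_TOKEN comes from the HockeyApp website; the build path is
# hypothetical, and the field names should be verified against the docs.
curl https://rink.hockeyapp.net/api/2/apps/upload \
    -H "X-HockeyAppToken: YOUR_API_TOKEN" \
    -F "ipa=@build/MyApp.ipa" \
    -F "notes=Automated upload" \
    -F "notify=1" \
    -F "status=2"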

Nothing is perfect

This all makes it seem like HockeyApp must clearly be the way and anyone using TestFlight is in the wrong, but we must give TestFlight its due here - there are still some things that TestFlight does better than HockeyApp. TestFlight's website is just generally a little more slick, be it for setting up new apps, adding users, and so forth. TestFlight's library for getting crash logs, adding checkpoints from apps, and grabbing new builds is a single library; you need to add two for HockeyApp, and it's just a little bit more involved. None of this adds up to much friction in the long run, but it might be enough to keep some people from switching.

I started trying out HockeyApp just for my Mac OS X projects, but in the end I switched wholeheartedly when TestFlight's downtime unfortunately coincided with a critical point in my app development and I was unable to reach my testers twice in a month. I suspect that's atypical, and you should have a look at both services for yourself, but I think it shows just how important services like TestFlight and HockeyApp now are - they're essential infrastructure for developers, and we're severely impacted when they go away. I'm happy to pay for a service like this in order to try to ensure it's reliable - I just can't work without it.

Update: I got a nice note from the people at HockeyApp letting me know that they'll be moving to a single library for managing in-app updates, crash logging, and the rest, which is great news.

Culture Hack East - June 16th-17th

We're big fans of hack days and working in the culture sector, so we were very pleased when it was announced Culture Hack Day was coming to Cambridge in the form of Culture Hack East. Lots of good things have come out of previous Culture Hack Days (for example, the fabulous Open Plaques project), so we're looking forward to seeing what comes out of this weekend.

But it gets even more fun - we're delighted to announce that we've been asked to give one of the talks at Culture Hack East. We'll be giving a little talk on the lessons learned about relying on the Internet for your mobile app, something we've had a lot of experience of, and how to work around the Internet's fallibilities.

There's still spaces left, so why not sign up and join the fun?

Tickets goes retina

Tickets, our iPad app for accessing your Lighthouse projects, has just gotten the retina makeover with its 1.3 release.

For those of you unfamiliar with Tickets and/or Lighthouse, here's the quick recap. Lighthouse is a website where you can track bugs, set milestones, and so on for your software projects. We use it for all our projects at Digital Flapjack. The only thing missing for us was a nice way to access our bug lists from our iPad, which is why we made Tickets. Tickets is a simple and pretty iPad app that fronts onto Lighthouse and makes it easy to access the latest information about all your projects wherever you are.

This release is basically the new iPad retina display update, meaning Tickets now takes advantage of all those lovely extra pixels on the new iPad if you're fortunate enough to have one. If you have Tickets already, it'll appear as a free update next time you look at the App Store on your iPad, and if not, what are you waiting for, go buy it now - it's by far the most beautiful way to keep track of your development projects on your iPad.

Some inside info on going retina

For your education and entertainment, I thought I'd share some notes on what we went through to make Tickets retina ready. As those of you who have used Tickets will know, Tickets represents everything very graphically, following Apple's lead with skeuomorphic iPad apps.

[App Store screenshot of Tickets]

Thankfully for us, most of this drawing was already vector based, so much of the in-app drawing was retina ready from the day the new iPad shipped. The outline of the tickets, the detailing on the edit popup, and so forth are all vector based, and just worked with the new display, looking lovely from the get go.

Most things in Tickets have a subtle texture to them, and in fact these looked fine when stretched, although we did provide retina textures in this release, just because we're perfectionists like that :) Though in doing so we hit a slight oversight in the standard iOS way of doing things like this.

Textures are just images that are tiled to fill a given area, so it doesn't make sense to provide a texture image that's twice the size of the original. However, without help the iPad doesn't know that you're using a given bitmap as a texture, so we ended up duplicating each texture and giving the copy the magic @2x filename. A bit of a hack, but it's little effort; still, it'd be nice if at some point there was a way to express that a given image can be used for both regular and retina displays, if only to save on app download size.
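For the curious, the hack really is just a copy per texture; a quick loop like this (with hypothetical paths) covers a whole folder of them:

# Duplicate each texture as its own @2x variant so iOS will happily
# use it on retina displays too. The folder name is hypothetical.
cd Resources/Textures
for texture in *.png; do
    # Skip any files that are already @2x variants
    case "${texture}" in *@2x.png) continue ;; esac
    cp "${texture}" "${texture%.png}@2x.png"
done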

However, the main place where it was obvious Tickets hadn't been designed with retina in mind was the buttons and icons in the app. And here we have a confession to make. The easiest way to get nice large icons for the iPad, before the new one arrived, was to take icons designed for the retina iPhone and use them at native resolution on the iPad. We've always been big fans of the Glyphish icon set (as are many other apps, and for good reason), so that's what we did with a bunch of its @2x icons.

This of course presented a problem when we came to do the retina iPad version - there were no handy @4x iPhone icons for us to use as the @2x icons for the iPad. This left us in a quandary: how to scale these icons for the retina iPad? Glyphish does ship with the Adobe Illustrator files for the icons, but we tend to do most of our design work purely in Adobe Photoshop, so we don't keep a license for Illustrator around. This meant we couldn't easily scale the icons ourselves. We could ask a designer to scale them up, but it seemed that if we were going to do that, we should just get a full new icon set done anyway.

Before doing that, we had a look around for alternative icon sets that we could scale, and we came across the very nice iconSweets 2 set. It's not quite as comprehensive as Glyphish, but it has some fabulous icons in there, and best of all for us, they come as Photoshop vector images, so we can scale them as we see fit. It was then a simple task to find a group of icons similar to those we'd used from Glyphish and switch in ones from iconSweets 2.

So, if you wondered why some of the icon styles had changed slightly in the retina release of Tickets, now you know :)

Overall though, updating Tickets to take advantage of the new iPad's retina display was fairly painless, despite it being a very graphical application. It goes to show that although using vector based drawing takes more time initially, updating the design later is a much simpler process.

Automating your build numbers with Xcode and git

A quick post for users of Xcode and git - how to automatically set your version and build numbers in Xcode from the output of git describe.

I've used tagging in git as a way to help me generate version and build numbers automatically pretty much since I started using git. Indeed, before git describe existed I wrote a tool to do much the same thing. The basic idea is that when you start working on a version you tag the repository with the version number (say, 2.6), and then you use git describe (and a little awk) to get a build number based on that last tagged version and the number of commits made since (so if there have been 17 commits since we tagged the start of work on a version, the build number is 2.6.17).
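To make that concrete, here's roughly how it looks at the command line (the commit hash is made up; note that plain git describe only sees annotated tags, hence the -a):

# Tag the start of work on version 2.6 with an annotated tag
git tag -a 2.6 -m "Start work on 2.6"

# ...then, 17 commits later, git describe reports the nearest tag, the
# number of commits since it, and an abbreviated commit hash:
git describe
# => 2.6-17-g1a2b3c4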

The advantage of this is that for any given release, I can go and find the exact code that was used to build it. I've used little scripts based on this to build Debian packages and the like, but until recently I didn't have a way to make it happen automatically in Xcode. I've now fixed that, and thought I'd quickly put it here in case others find it useful.

Open your project up in Xcode (I assume you're using the most recent version from the Mac App Store), and head over to the Build Phases tab. Click on the "Add Build Phase" button at the bottom, and select "Add Run Script". This will add a Run Script entry to the end of the build phases list, but we don't want this to happen last, so drag it up to somewhere before the "Compile Sources" phase.

If you expand the tab, you'll see you can enter a script directly into Xcode, which in itself is a very handy thing to be able to do. You can also select the "Run Script" label on this phase and click it again to edit it. It's a good idea to rename it so that it describes what the script does, such as "Get version from git".

Now enter the following script, which I based on various other scripts I'd spotted on the web for auto-incrementing build numbers, and then subverted to my particular needs:

# Assumes that you tag versions with the version number (e.g., "1.1") and then the build number is
# that plus the number of commits since the tag (e.g., "1.1.17")

echo "Updating version/build number from git..."
plist="${PROJECT_DIR}/${INFOPLIST_FILE}"

# Derive the version (e.g., "1.1") and the build number (e.g., "1.1.17")
# from git describe. If we're sat exactly on a tag there are no commits
# to count, so use 0 for the commit count.
versionnum=`git describe | awk '{split($0,a,"-"); print a[1]}'`
buildnum=`git describe | awk '{split($0,a,"-"); if (a[2] == "") a[2] = "0"; print a[1] "." a[2]}'`
if [[ "${versionnum}" == "" ]]; then
    echo "No version number from git"
    exit 2
fi

if [[ "${buildnum}" == "" ]]; then
    echo "No build number from git"
    exit 2
fi

/usr/libexec/PlistBuddy -c "Set :CFBundleShortVersionString $versionnum" "${plist}"
echo "Updated version number to $versionnum"
/usr/libexec/PlistBuddy -c "Set :CFBundleVersion $buildnum" "${plist}"
echo "Updated build number to $buildnum"

This script uses git describe to get both the version and build numbers (e.g., "2.6" and "2.6.17") and then copies them into the Info.plist for your project, using a little utility called PlistBuddy that's tucked away in /usr/libexec on most Macs.

Once done, Xcode should look a little like this:

[Screenshot: the version script as a run script phase in Xcode]

Assuming you've tagged your git repository at some point, you should find that the next time you hit build, the version and build numbers are automatically updated to reflect the latest state of your git repository.

PlaceWhisper 2.3 released

We're pleased to note that PlaceWhisper 2.3 was released to the App Store recently. It's a small update to our favourite located-content creation tool, but it has a couple of changes in it we thought we should discuss.

Collections

For those of you who subscribe to PlaceWhisper Pro, the biggest change is the ability to create and use Collections. Before this release you had two options when creating Whispers: placing them on their own, or forming a trail of ordered Whispers. But there was a glaring need - how do you group a set of Whispers that have no particular ordering to them? For example, what if you wanted to group Whispers showing your favourite coffee shops? A trail isn't appropriate here (unless you're doing the caffeinated equivalent of a pub crawl...), but it'd be nice to pick them out from the crowd somehow.

The solution to this is Collections, which let you group Whispers together under a single heading. Not only does this make sense logically, but PlaceWhisper will let people filter the map view by Collection. This means that if you're just interested in finding a particular set of things, say those hypothetical coffee shops, you can tell PlaceWhisper to show you only the items in that Collection.

Collections also play a big part in extending PlaceWhisper onto the web. If you log into the PlaceWhisper website, you can enable embedding of your Collections, at which point you'll be able to embed a stylish map in your blog or website showing all the points in that Collection, making PlaceWhisper a great way to share location information not only with mobile users, but on the web too.

This is a major enhancement for PlaceWhisper, and we're looking forward to seeing how you use it.

[Screenshot: a coffee shop Collection on the map]

Subscriptions

In previous releases of PlaceWhisper we offered an in-app subscription to the PlaceWhisper Pro service. With PlaceWhisper Pro you get access to a bunch of new features, such as creating unlimited public content, and now collections and embedded maps on the web. Unfortunately for us, Apple no longer allow services to use the auto-renewing subscription feature of the iTunes store, only magazines and newspapers, which meant we had to stop using this.

As a result, rather than getting PlaceWhisper Pro on a one or twelve month auto-renewing basis, you can now buy three, six, or twelve months of non-renewing access to PlaceWhisper Pro, priced at £2.49, £3.99, and £6.99 respectively. For that you get to create unlimited numbers of Whispers, Trails, and Collections; create public Whispers that any PlaceWhisper user can discover; and create embeddable Collection maps that you can share via the web.

All the rest

In addition to those major changes, PlaceWhisper 2.3 contains a host of user interface updates and tweaks, particularly to the map view, and optimises network usage (not that it used much before, but every byte counts!).

We're pleased with PlaceWhisper 2.3 and how it advances our vision of what location software should look like in the future! Go download it now and let us know what you think!

Putting stylus to screen

A while back, I blogged about how I'd made my own iPad stylus, using an old marker pen and some conductive padding for the nib. I based what I built on Dan Provost's attempt, trying to improve on Dan's original design by using a marker with a metal case rather than a plastic one. Despite the construction being a success, I didn't really use it much. Firstly, the nib just wasn't good enough for regular use - it flexed too much; and secondly, I just didn't really have much of a need for it. I never found a note taking app on the iPad that felt as useful as my trusty Moleskine notebooks.

Recently though, that latter reason changed (more on why in a moment) - I found myself wanting to doodle on my iPad once more. However, the nib issue hadn't changed, so I needed a better solution. Thankfully Dan Provost took his original concept and used it to build a custom stylus via a Kickstarter project. This meant that this time, rather than trying to build something based on his improved design, I just bought what Dan and co over at Studio Neat lovingly call The Cosmonaut.

The Cosmonaut is like a giant rubber crayon for your iPad. The idea is that the iPad's screen isn't designed for a fine-tipped stylus, but for your finger, so when using a stylus you need something that mimics a writing implement with a similar affordance in the real world - such as a marker pen or a crayon. The Cosmonaut does this brilliantly - it fits in the hand well, and feels just like a chunky crayon from when you were a kid.

[Photo: The Cosmonaut]

Despite the chunky tip, it's fairly accurate when sketching. There's some give in the "nib", which I worried would cause the centre point to move around, but that hasn't been the case; I've been happily sketching away with it. The chunkiness has another useful property - it's fairly easy to fish out of my bag-o-bits when I need it, standing out amongst the random pens, video adapters, and so on that fill the bits compartment of my bag.

So, I'm pleased to say that I'm delighted with the Cosmonaut, and would recommend it if you want to use your iPad as a device on which to doodle and sketch effectively.


Of course, if you want to use your iPad for such things, you'll also need software, and this brings me to the reason I went looking for a stylus: the app everyone's been talking about of late, Paper, by Fifty Three.

There have been many apps for the iPad that have attempted to replace paper notebooks over the couple of years the iPad has been with us, some even by people who make the physical objects themselves, but they've always left me cold. "What is there for such an app to get wrong?" you might enquire. Well, they typically fail in two areas: how natural it feels, and how you organise your notes.

Most notebook replacement apps dive headfirst into the world of skeuomorphism - a fancy way of saying they try to look like the real-world objects on screen. But this leads to issues the moment you put stylus to screen, as what comes out looks nothing like what you'd produce putting pen to paper. Rather, it looks like what you'd produce using Microsoft Paint - very simple and very artificial. Technically, if the aim of the software is just to let you capture ideas, this shouldn't matter, but it does. If you thought your doodles were messy in your notebook, they generally look worse when drawn using one of these tools on an iPad. As a result, it was still more pleasing to sketch on paper.

Paper changes this by putting a lot of effort into giving you a set of pens and pencils that look on screen like the real thing does on paper. In fact, at times they look even better than someone with limited drawing talent like me could produce on paper - the watercolour brush really flatters the drawer, and lets you produce beautiful sketches quite quickly, which just wouldn't be possible as quickly on paper. Unlike other apps, which tended to leave me feeling like I should have just used my real notebook, Paper has managed to make me prefer the iPad to paper for sketching for the first time.

Paper isn't perfect, at least not yet. For writing, it's definitely less good. There's a noticeable lag on input at times, which means that if you write quickly with a joined-up flow, it tends to miss corners out, making my already poor scrawl somewhat (ahem, more) unreadable. I've no idea whether this is something that'll improve as they get time to optimise the software, or whether it's a hardware limitation. Either way, it means that for note taking in talks I still reach for my trusty Moleskine. Paper also relies on gestures for turning the page and accessing the tool palette, and these don't quite work all the time, leading you to draw all over the page when you didn't mean to. At which point you have to use the crazy two-finger-dial undo gesture, which doesn't quite work for me. I do appreciate why they've tried to use gestures so much - Paper does benefit from not having buttons all over it - but I think they need to be a bit more obvious and a bit less error prone to convince me fully.

Organisation wise, Paper wins hands down, despite not using the iPad's conventional UI for such things. Instead, the clever peeps over at Fifty Three let you create a series of small notebooks, whose pages you can flick through without having to make them full screen. This makes skim reading notebooks to find the note you're interested in not just easy, but a pleasure to boot. I now feel happy creating a new small notebook for each talk, task, or idea (a bit like how people use Field Notes Brand notebooks in the real world, I guess) rather than having one massive virtual notebook, as finding what I want is no different from finding it in the real world.

Despite my gripes with the UI, I now use Paper and my Cosmonaut all the time, particularly for sketching out ideas for apps - UI mockups, icon designs, user flows, etc. On one hand, it seems amazing that it has taken me two years of iPad ownership to get to this position - you'd have thought this would have been cracked early in the iPad's life. But it just shows how hard it is to get these things right. All the things that make Paper a joy to use - the properly simulated pens and paper, and the virtual notebooks that you can skim through with ease - are hard details to get right.

Two years seems like a long time, but I think we're still only getting to grips with the UI challenges brought to us by the iPad. Paper is an amazing piece of engineering to solve what superficially seems a simple problem. Something to keep in mind when you specify your next project.

Cocoa OAuth 2.0 libraries

OAuth 1.0 was a big step forward for web security, removing the need for users to give their passwords to third parties, but it was a bit of a step backward for anyone writing mobile or desktop clients. Handing over to a website broke the user experience of your app, could easily be spoofed anyway, and led to a bunch of hacks like Twitter's xAuth to try to find a way around it.

Thankfully, with OAuth 2.0, this was by and large fixed, with the new version supporting multiple authentication approaches suitable for web, mobile or desktop (not that all websites implement them all, but that's not the standard's fault).

The one downside as a Cocoa developer is that the officially recommended Cocoa library is, to be blunt, rubbish. It doesn't even compile, and the author has acknowledged that it's completely unusable :)

Thankfully though, there is a very good, nicely designed OAuth 2.0 library that I can heartily recommend, which I discovered from the same comment thread (though only after trying a bunch of other less good libraries, alas) - OAuth2Client by nxtbgthng. It has a lovely interface, takes care of managing different tokens for you, is ARC friendly, has no external dependencies, and carries a BSD license. It also supports both iOS and OS X.

If you want a lesson in good library design, then I recommend comparing and contrasting this with the OAuth 2 library in Google's Toolkit for Mac, which I'm sure is as easy to use as they claim, but just doesn't make it at all obvious that it is. I leave you to work out which is the good example and which is the less good one :)

Tickets 1.2 in the app store

We've released another update to Tickets, our iPad app for managing your projects in the online bug tracking platform Lighthouse, bringing it up to 1.2. It's a small update, but follows our plan of slowly expanding the core feature set over time to help make your iPad the best way to manage your projects.

The first thing we added was the ability to create new milestones when adding or updating tickets, meaning you can not only curate issues on existing milestones, but plan new ones when away from your desk too. The second thing we've done is expand the ticket list page to let you filter by tags, in addition to the existing set of filters.

So head over to the App Store and start managing your projects from wherever you are.