
Mobile continuous delivery with a devops mindset - Velocityconf 2015

talks 2 min read

After retiring from running DevOpsDays, I joined a startup making apps for TV shows. The app has to work for exactly one hour during the live show – Formula One pit stop mentality. My mental model of DevOps has four areas: extend delivery to production, get operations feedback back to the project, embed project knowledge into operations, and embed operational knowledge into the project. Applying this to mobile felt like it should be straightforward. It was not.

The build pipeline alone is a gauntlet. iOS requires a Mac (Travis CI and CircleCI now support OS X builds). Apple's certificate and code-signing requirements feel like Puppet certs, but worse. We build multiple versions per environment (staging, testing, app store) with different API endpoints baked in at compile time, each with visual cues so testers know which version they are running. Device provisioning is another hoop: for ad-hoc builds you need the UDID of every test device, and since there is no official Apple API for this, all the automation tools are essentially screen-scraping.
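The per-environment builds can be sketched roughly like this. Everything here is illustrative, not from the talk: the `ENVIRONMENTS` table, endpoint URLs, and badge names are invented, and in a real pipeline these values would be baked in via xcconfig files or build flags rather than a Python dict.

```python
# Illustrative per-environment build settings: each flavor gets its own
# API endpoint baked in at compile time, plus a visual badge so testers
# can tell at a glance which version they are running.
ENVIRONMENTS = {
    "staging":  {"api": "https://api-staging.example.com", "badge": "STG"},
    "testing":  {"api": "https://api-test.example.com",    "badge": "TEST"},
    "appstore": {"api": "https://api.example.com",         "badge": None},
}

def build_settings(environment: str) -> dict:
    """Return compile-time settings for one build flavor.

    The app-store build gets no badge; every internal build does.
    """
    cfg = ENVIRONMENTS[environment]
    return {
        "API_ENDPOINT": cfg["api"],
        "SHOW_ENV_BADGE": cfg["badge"] is not None,
        "ENV_BADGE_TEXT": cfg["badge"] or "",
    }
```

Keeping the endpoint a compile-time constant (rather than a runtime switch) means a tester can never accidentally point an internal build at production.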

For testing, we use scenario tests driven by accessibility labels, which lets us reuse the same test scripts across iOS and Android. Genymotion provides fast Android simulation with hardware acceleration. For final submission, we sweep across real devices on cloud farms because Samsung sometimes changes class loader behavior that simulators will not catch. Network testing uses Chrome DevTools hooked into mobile apps, plus Charles Proxy for simulating 3G and packet loss.
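The cross-platform reuse works because both platforms expose the same label through an "accessibility id" style locator (accessibilityIdentifier on iOS, contentDescription on Android), so one scenario script drives both apps. A minimal runnable sketch, with a `FakeDriver` standing in for an Appium-style driver and invented label names:

```python
# Sketch: scenario steps written once against accessibility labels.
# FakeDriver is a stand-in so the sketch runs without a real device;
# a real suite would use an Appium driver with the same find_element call.

class FakeDriver:
    """Records taps instead of driving a real app."""
    def __init__(self):
        self.taps = []

    def find_element(self, strategy, value):
        driver = self
        class Element:
            def click(self):
                driver.taps.append(value)
        return Element()

def run_login_scenario(driver) -> None:
    """One scenario definition, reused on iOS and Android, because it
    only refers to accessibility labels, never platform widgets."""
    for label in ("username_field", "password_field", "login_button"):
        driver.find_element("accessibility id", label).click()
```

The test script never mentions UIButton or android.widget.Button; only the shared labels, which is what makes a single suite portable across both apps.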

The operations feedback loop is where it gets really interesting. Fabric gives real-time crash reports with device state, memory, OS version, and stack traces – app store crash stats are two days old, which is useless for a live TV show. We send device-level logs to our centralized logging system, correlating user IDs with crash reports and API traces. We can remotely increase logging levels for specific users via feature flags. We built in-app feedback (shake to report) that captures context – version, device, logs – so we never have to ask users what they did.

The app store review process (seven days average, two days for an “expedited” critical fix) drove us to push as much functionality as possible into remotely updatable HTML views, remote config, remote strings, and even method swizzling for live patching without resubmission.
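The idea of remotely raising log levels for specific users via feature flags can be sketched as follows. The flag store, flag name, and user IDs are all invented for illustration; in production the payload would come from a remote config service, not a local dict.

```python
# Sketch: per-user remote log-level control via a feature flag.
# REMOTE_FLAGS stands in for a payload fetched from a remote config
# service at app launch.
import logging

REMOTE_FLAGS = {"debug_logging_users": {"user-123"}}

def effective_log_level(user_id: str, default: int = logging.WARNING) -> int:
    """Raise logging to DEBUG for users flagged remotely, e.g. to
    diagnose a crash that only one user's device is hitting."""
    if user_id in REMOTE_FLAGS["debug_logging_users"]:
        return logging.DEBUG
    return default
```

The payoff is that support can turn on verbose logging for a single affected user during the live hour, without shipping a new build through review.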

Watch on YouTube – available on the jedi4ever channel

This summary was generated using AI based on the auto-generated transcript.
