How bad app QA derailed the US presidential primary
Quality assurance (QA) testing of a digital product is, unquestionably, not glamorous. I doubt you’ve ever seen the phrase ‘I have a passion for QA testing’ on a developer’s C.V.
Even so, testing is a vital step in the process of developing an app or digital service. Get it right and it’s invisible (and, sadly, uncelebrated). Get it wrong, and you could butcher your company’s reputation or, in the worst case, skew the direction of a US presidential primary election.
Wait, what?
I am of course alluding to the fiasco at the Democratic Iowa caucuses at the start of this month. There, an unspecified ‘coding error’ in a results-reporting app commissioned by the Iowa Democratic Party (for a cool $63,000, no less) caused the caucus to descend into a farcical political free-for-all.
On the night, party officials noticed that the app was reporting only partial data from precinct caucuses back to its centralised data warehouse, making it impossible to reliably calculate the statewide results. Predictably, this caused chaos.
Some Democratic candidates chose to claim victory even before any results were confirmed, while others, irresponsibly, poured fuel on an already inflamed situation by suggesting voters may not be able to trust the integrity of the race. Republicans, meanwhile, piled in to kick their political opponents.
All this could have a material impact on the primary race, politics in America, and maybe even the presidential election.
For starters, thanks to its position at the start of the race, the winner of the Iowa caucus tends to enjoy a burst of positive press and a big bounce in national polls. This year, however, press coverage was dominated by the app fiasco, meaning the winner missed out on the usual bounce while underperforming candidates were spared some of the fallout.
Secondly, this ordeal comes at a time when American trust in politics and the integrity of elections is already shaky, meaning the country could do without avoidable mishaps like this that undermine that trust further. The episode could have a long-lasting impact on Iowa too, adding to a groundswell of support for ditching caucusing in favour of a more conventional ballot vote.
Bad QA testing + zero user training = predictable disaster
So, how did we end up here? How can a couple of lines of code help dictate who becomes the Democratic presidential nominee?
Well, clearly, there was a breakdown in the QA procedure at Shadow Inc – the digital studio contracted to supply the app. After all, if a product is incapable of performing the single task it was created to do, that product has no business being shipped.
Gerard Niemira, CEO of Shadow Inc, admitted as much to Bloomberg Businessweek, suggesting that the error should have been caught before caucus night:
“Yes, it was anticipate-able. Yes, we put in measures to test it. Yes, it still failed. And we own that.”
On top of the data reporting issues, many caucus leaders had problems either downloading or logging into the app, meaning they gave up trying to use it and instead flooded the ‘emergency’ telephone lines to report their results.
These problems point to further issues with Shadow Inc’s QA procedures. For example, the fact that users struggled to log into the system suggests that little user testing took place before the rollout to validate the app’s UX. This excellent Vice article highlights the ‘labyrinthine’ login and validation procedures within the app, painting this as a classic case of a bunch of techies developing a product they understand, but which is unfathomable to end users, many of whom were middle-aged or elderly volunteers.
All this adds up to making the Iowa caucus app something of a gold-plated case study in a failed product rollout.
Interestingly, Steven Vaughan-Nichols of ZDNet suggests that stories like this are not uncommon when it comes to electoral software. Political parties have a habit of demanding that every element of an application be developed from scratch, because they are wary of leaning on open-source codebases. This, he claims, makes accidents of this kind more likely.
I vote to destroy my reputation
In addition to impacting the political outcomes of the Iowa caucus, it shouldn’t be forgotten that this episode has likely caused untold (maybe fatal?) damage to Shadow Inc’s reputation as a development studio.
It has certainly brought an uncomfortable amount of scrutiny to the company’s complicated ownership structure.
Indeed, Nevada, the third state to vote in the Democratic primary race, has already said that it will abandon plans to use Shadow Inc’s app. Instead, caucus leaders will be relying on a system of linked Google Forms and Google Sheets, with data entry carried out via iPads set up and supplied by the party itself.
How digital product QA testing should be done
Here at Browser, we build QA into all our digital projects by sticking to a tried and tested 60/20/20 structure.
This specifies that 20% of a given project’s resource budget should be allocated to QA testing (with 60% designated for development, and the remaining 20% for project management). These budgets are ring-fenced early in the process, meaning the QA budget can’t get nibbled away by overruns in other areas.
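To make the arithmetic concrete, here’s a minimal sketch of that split in TypeScript (the Iowa app’s reported $63,000 price tag is used purely as an illustrative total; the function and figures are ours, not Shadow Inc’s):

```typescript
// A 60/20/20 resource split, with the QA allocation ring-fenced up front.
// The $63,000 total is the Iowa app's reported cost, used here for scale only.
function splitBudget(total: number) {
  return {
    development: total * 0.6,        // 60% for development
    qa: total * 0.2,                 // 20% ring-fenced for QA testing
    projectManagement: total * 0.2,  // 20% for project management
  };
}

console.log(splitBudget(63_000));
// { development: 37800, qa: 12600, projectManagement: 12600 }
```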
As well as making sure resource is protected for quality assurance processes, we also structure our QA testing for best effect. This means making testing an iterative, ongoing process that’s built into and grows with the project’s design sprints (this is often called a ‘unit testing’ approach).
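As an illustration, a unit test in this approach might look something like the sketch below. The result-aggregation function and its data shape are hypothetical (we have no knowledge of Shadow Inc’s actual codebase); the point is that each small unit gets tested the moment it’s written, rather than at the end of the project.

```typescript
import assert from "node:assert";

// Hypothetical unit under test: aggregating per-precinct tallies into
// statewide totals. A unit-testing approach tests this in isolation,
// as soon as it's written, instead of waiting for the finished app.
type Tally = Record<string, number>;

function aggregateTallies(precincts: Tally[]): Tally {
  const totals: Tally = {};
  for (const precinct of precincts) {
    for (const [candidate, votes] of Object.entries(precinct)) {
      totals[candidate] = (totals[candidate] ?? 0) + votes;
    }
  }
  return totals;
}

// The test: tallies from multiple precincts must sum correctly,
// including candidates who only appear in some precincts.
assert.deepStrictEqual(
  aggregateTallies([
    { candidateA: 12, candidateB: 10 },
    { candidateA: 3, candidateB: 9, candidateC: 7 },
  ]),
  { candidateA: 15, candidateB: 19, candidateC: 7 }
);
console.log("aggregateTallies: tests passed");
```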
It is, after all, tempting to put off QA until the end of a project, given the likelihood that it’ll throw up awkward questions and issues, but doing so is a bad idea for two reasons.
Firstly, it turns testing into a gargantuan task that’s awkward to approach and manage. Secondly, if QA only happens at the end of a project, it’s vulnerable to being squeezed between development overruns and a fixed delivery date.
What’s more, following a unit testing approach means we can automate parts of our QA process using tools such as code linters, continuous-inspection services like Scrutinizer (scrutinizer-ci.com), and automated browser testing via Ghost Inspector (ghostinspector.com). This saves time and money, but it isn’t easy to do if you’ve left testing until the end of the project and have a whole app to test rather than a single unit of it.
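As a sketch of what that automation can look like, the script below triggers a hosted browser-test suite from a CI pipeline with a single HTTP call and fails the build if the call doesn’t succeed. It assumes Ghost Inspector’s suite-execute endpoint; the environment variables and the response handling are illustrative, so check the current API documentation before relying on them.

```typescript
// Trigger a hosted browser-test suite from CI (Node 18+, global fetch).
// Assumes Ghost Inspector's suite-execute endpoint; the suite ID, API
// key and response handling here are illustrative placeholders.
const SUITE_ID = process.env.GI_SUITE_ID; // set via CI secrets
const API_KEY = process.env.GI_API_KEY;

async function runBrowserTests(): Promise<void> {
  const url = `https://api.ghostinspector.com/v1/suites/${SUITE_ID}/execute/?apiKey=${API_KEY}`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Suite execution failed: HTTP ${response.status}`);
  }
  console.log("Suite result:", JSON.stringify(await response.json(), null, 2));
}

runBrowserTests().catch((err) => {
  console.error(err);
  process.exit(1); // a non-zero exit fails the CI step
});
```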
Final thoughts
No app or digital product will ever be truly bug-free, but if you stick to good QA testing practices throughout a project rather than seeing the process as a cost to be minimised, you’ll have a better chance than most of keeping your client happy.
Not only that, but you never know what a poorly tested app could cost your business and its reputation. Just look at Shadow Inc: how much work do you think they’ll be picking up in the next few months?