Coracle and the art of testing
Is the world getting better at testing?
In some ways, the answer has to be yes. This week the Bank of England released the results of “stress tests” designed to see how individual banks are likely to perform under crisis conditions.
In software, unit testing is almost mandatory for complex applications. It means taking individual functions (small chunks of code) and passing information through them to make sure they give the expected result.
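As a rough sketch of what this looks like in practice (the function and its name here are invented for illustration, not taken from any real project), a unit test in Python can be as simple as this:

```python
# A hypothetical function under test: "word_count" is an invented
# example of a small chunk of code with a well-defined job.
def word_count(text: str) -> int:
    """Return the number of words in a piece of text."""
    return len(text.split())

# The unit test: pass known inputs through the function and check
# that each one gives the expected result.
def test_word_count():
    assert word_count("testing is an art") == 4
    assert word_count("") == 0
    assert word_count("  spaced   out  ") == 2

test_word_count()
print("all tests passed")
```

If a later change to `word_count` breaks any of these expectations, the test fails immediately, which is exactly the safety net that makes unit testing so routine in software.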
And in content? Are there unit tests for content?
Not exactly. We are seeing the beginnings of a science of priming, which has been applied to visual stimuli such as images and words. Put simply, priming in this context means that the response to one stimulus influences the response to another stimulus. A classic example is the finding that we will recognise the word ‘Nurse’ more quickly if it follows the word ‘Doctor’ than if it follows the word ‘Bread’.
Advertisers have long had a rough idea of the effects of priming: David Ogilvy often used to say that the two most effective words in advertising headlines were “new” and “free”!
The results of priming experiments are very interesting, but it’s a young field and still highly experimental. A well-known study exposed some participants to words such as ‘grey’, ‘old’, ‘lonely’ and ‘worried’. They were then asked to walk down the hall to do a second test. The participants who had seen those words associated with age took considerably longer than the others to walk from one room to the next. This finding caused a stir, but later research was unable to replicate the results. So until we understand a great deal more about priming, it’s unlikely to be widely used in testing content. Instead, we have to fall back on experience and user testing.
For a content creator, experience means a great deal of creation and constantly listening to user responses. After writing many hundreds of thousands of words and creating endless graphics, a content creator will start to get some idea of what works best.
User testing isn’t a short cut to experience, but it can certainly give a good idea of how people respond to a particular piece of content. Sometimes the reaction can be blunt, so there’s no place for precious or egotistical writers who regard what they do as art. It can also be highly unexpected – after honing content for weeks, adjusting graphs by the pixel and toning down a video by half a decibel, a user experiencing it fresh can point out something that later seems blindingly obvious.

What might a user test look like? Well, it certainly doesn’t need to involve hundreds of people: half a dozen ordinary participants are likely to throw up 90% of all possible problems. Above all, they should be like everybody else – ideally, they will be selected because they’re not part of the organisation that creates the content and because they have had no experience of or input into its development. In other words, they should come at it fresh.
They may be asked to complete a survey, carry out an eye-tracking test, or give a running commentary as they use your content. This sort of response can be supplemented by observation and note-taking. Data about user journeys can be gathered from server logs, and where resources allow, a record can be kept of scrolling, mouse activity and the use of interfaces such as buttons.

This might all seem rather elaborate compared to some forms of testing. After all, a basic software unit test can be written and run in five minutes. This may be true! But in the absence of an empirical approach, testing something that elicits such subjective responses will always be tricky.
Here at Coracle, we put a lot of thought into testing and the best way to do it. This has resulted in a toolbox full of useful methods ranging from our Amazon-style “rate this page” widget to surveys and pilot user groups. Watch out in 2015 as we develop these ideas and improve the art of testing!