
The power of user-testing in digital design

Working as a digital designer within the NBU Studio team, my specialism is creating engaging user experience (UX) and user interface (UI) designs across various channels. In the fast-paced tech industry, it’s important to stay on top of digital trends as well as industry-standard toolkits, so when I heard the School of UX was hosting its annual ‘The UX Conference’, I jumped at the opportunity.

This year’s theme was collaboration between all types of designers. The first day was dedicated to talks on this subject, whilst the second offered a choice of digital workshops such as AB testing, content design, and crafting sustainable design language systems.

I signed up for the AB testing workshop to develop in-depth knowledge of user testing. The two instructors, Sim and Nicolas from OpenTable (a global online reservation service), were absolutely fantastic at tackling a subject that packs in a lot of information.

What is AB testing?

AB testing differs from standard user testing: it is a method of comparing, primarily, two versions of a very specific part of a web page or app against each other to determine which one performs better.1

The control is the existing design and the variant is the second option. There are multiple platforms, such as Google Analytics or usertesting.com, that can run AB tests, although the majority of larger companies use in-house AB testing software. The results from this kind of testing are based on metrics, or what the digital world calls quantitative data.
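To make the control/variant split concrete, here is a minimal sketch of how a test might deterministically bucket users into the two versions. This isn’t how Google Analytics or any in-house tool necessarily does it, and the experiment name and user ids below are made up purely for illustration.

```python
import hashlib

def assign_bucket(user_id: str, experiment: str) -> str:
    """Deterministically assign a user to 'control' or 'variant'.

    Hashing the user id together with the experiment name means the
    same user always sees the same version for the life of the test.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Treat the hash as a number and split the range roughly 50/50.
    return "variant" if int(digest, 16) % 2 else "control"

# Example: route a few hypothetical users for a checkout-button experiment.
for user in ["user-101", "user-102", "user-103"]:
    print(user, assign_bucket(user, "checkout-button-colour"))
```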

Standard user testing refers to a technique used in the design process to evaluate a product, feature or prototype with real users.2 Generally, this is done in person with a group of up to 10 people, using a prototype of the design on a mobile, iPad, or desktop. You can test multiple aspects of the design this way. The data that results from this is called qualitative data.

If possible, it’s best to have both quantitative and qualitative feedback to make the best-informed decision on which design has the most positive results.

You may ask yourself, why is user testing important? Well, doing usability testing the right way, at the right time, with the right set of people reduces the risk of building the wrong product, thereby saving time, money and other precious resources.3

“Actual humans will expose problems you’ve failed to identify during your design and development process. Even the best teams can’t predict every possible pitfall.” 4

The process of AB testing

During the workshop we were split into teams, given handouts with examples of AB testing questions alongside ‘mock’ AB results, and asked to determine whether the variant or the control was more successful.

Before starting any AB test, you need to figure out what your test objective is, and from this you derive a ‘hypothesis’. Ask yourself why you are doing the test and what you are hoping to achieve from it. A good place to start is with this formula:

“By changing x, we’ll see y, which will influence z.” Here is the formula in action: “By changing the checkout button from grey to red, we should see an increase in conversion because it will be more visible to users.”

After you’ve established the above, the following are best practices for accurate testing results (a rough sample-size sketch follows this list):

  • Make sure that the test duration is at least two weeks
  • If you are testing more than two variants to the control, extend the testing time by one more week
  • Run the test during a period that isn’t affected by unusual traffic
  • Be mindful that with more traffic, there is a higher chance of random error
  • If you are unsure, you can always re-run the test
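
If you have a sense of your traffic, you can sanity-check the two-week guideline with a rough sample-size estimate. Below is a hedged sketch using a standard two-proportion calculation; the 4% baseline conversion rate and the 1% lift are made-up numbers purely for illustration.

```python
from statistics import NormalDist

def sample_size_per_variant(baseline: float, lift: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Rough minimum sample size per variant for a two-proportion AB test.

    baseline: the control's current conversion rate (e.g. 0.04 = 4%)
    lift:     the smallest absolute improvement you care about detecting
    """
    p1, p2 = baseline, baseline + lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_power = NormalDist().inv_cdf(power)          # statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_alpha + z_power) ** 2 * variance / lift ** 2) + 1

# Hypothetical numbers: 4% baseline conversion, hoping to detect a 1% lift.
n = sample_size_per_variant(0.04, 0.01)
print(f"Roughly {n} users per variant")
# Dividing that figure by your daily traffic per variant gives an estimated
# test duration, which you can check against the two-week minimum above.
```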

Evaluating the results from AB testing

So, how do you know whether the control or the variant has worked out best? This can be quite a complex process depending on how many variants you’ve tested.

Generally, a good indicator is a combination of a higher click-through rate and a higher percentage change for the variant. You’ll also want to look out for a high significance level, which reflects your risk tolerance and confidence level. Once you determine your winner, a significance of, for example, 95% means you can be confident that the results are real and not down to chance.

In the mock results we evaluated at the workshop, the variant showed a positive significance and would be the better choice to go with.
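
To show where a confidence figure like 95% could come from, here is a short sketch of a generic two-proportion z-test. The visitor and conversion counts are made-up numbers in the spirit of the workshop’s mock handouts, not real results, and this is not the tooling the instructors used.

```python
from math import sqrt
from statistics import NormalDist

def ab_significance(control_visitors: int, control_conversions: int,
                    variant_visitors: int, variant_conversions: int):
    """Two-proportion z-test: how confident can we be that the variant
    genuinely beats the control rather than the gap being chance?"""
    p1 = control_conversions / control_visitors
    p2 = variant_conversions / variant_visitors
    # Pooled rate under the assumption that there is no real difference.
    pooled = (control_conversions + variant_conversions) / \
             (control_visitors + variant_visitors)
    se = sqrt(pooled * (1 - pooled) * (1 / control_visitors + 1 / variant_visitors))
    z = (p2 - p1) / se
    confidence = NormalDist().cdf(z)  # one-sided: variant better than control
    return p1, p2, confidence

# Hypothetical mock results in the spirit of the workshop handouts.
p1, p2, conf = ab_significance(10_000, 400, 10_000, 465)
print(f"control {p1:.2%}  variant {p2:.2%}  confidence {conf:.1%}")
# With a confidence above 95%, you would lean towards shipping the variant.
```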

The power of user-testing and AB testing can never be denied, and they should always be included throughout the digital process rather than treated as an afterthought. I hope some of the thinking in this blog article can be applied to daily aspects of your job role, not only across digital design. For more information on the School of UX, click here.

To see an example of AB testing for OpenTable’s restaurant rating layouts, click here.

By: Stephanie Howard

 

1 https://www.optimizely.com/optimization-glossary/ab-testing/

2 https://www.everyinteraction.com/definition/user-testing/

3 https://www.everyinteraction.com/definition/user-testing/

4 https://www.everyinteraction.com/definition/user-testing/
