Benchmarking & Competitive Analysis

Benchmarking is a research method in which you track the performance of your design over time or measure how it performs against its competitors. There are two types of benchmarking:

  • Standalone benchmarking – you set some key performance indicators (KPIs) and measure how your proposed design changes affect them. It is ideal to use when you start redesigning a product. A good way of doing it is benchmarking against business objectives – seeing whether new changes to the user interface help in achieving your business goals.
  • Competitive analysis – comparing how well your product design performs compared to its most important competitors.

Benchmarking studies should be conducted regularly, for example annually or when design changes are being made.

Benchmarking is most commonly applied to websites; however, it can be used for other interfaces as well.

There are two areas of benchmarking – marketing and usability. This lesson will focus on usability benchmarking.

  1. Standard usability benchmarking
    In order to understand how design changes impact usability, you need to measure the usability of the interface before any changes are made to it, and then see how the various changes affect it.
    Typical process of usability benchmarking:
  1. Identify the users to test; ideally they should be members of your target audience and have some prior experience with your product (unless you are interested in the performance of new users).
  2. Recruit users: you could email users from an existing customer list or use a panel agency that finds users who meet your requirements. The number of participants needs to be large in order to obtain statistically significant results – at least 20 – however, you will need more to get a low margin of error. This is a good resource if you would like to determine the number of participants for a specific level of precision or margin of error (though it is quite mathematical): http://www.measuringu.com/blog/qa-urut.php#samplesize . The more users the better; however, more than 75 rarely justifies the resources required.
  3. Define the tasks you would like the participants to perform, and spend time creating good task scenarios (see lecture 5 for advice on creating good scenarios).
  4. Decide on the task metrics. Typical usability metrics are task completion, task-based efficiency and satisfaction (lecture 6 discusses various usability metrics), though you could also measure task-level satisfaction, number of usability problems encountered, number of errors, depending on the goals of your study.
  5. Choose software for unmoderated user testing. The choice depends on your budget; the cheapest option is to use free survey software like SurveyMonkey (www.surveymonkey.com). You could give your participants task descriptions, ask them to carry out the tasks, and then ask them to answer some questions about their experience. However, you will not get a reliable measure of task times, and completion rates are often inflated this way because users tend to be over-confident in their abilities.
    A better option is using an unmoderated testing tool, such as Loop11 (http://www.loop11.com/) – you can create custom tasks for participants and it automatically collects usability metrics.
    If your budget allows it, the best option is using comprehensive testing software such as UserZoom (http://www.userzoom.co.uk) – besides usability metrics, you can see click-paths, heat-maps and video recordings of users interacting with your product, so you can have a combination of quantitative and qualitative data.
  6. Carry out a pilot study – test the set-up with a few users to see if there are any unexpected problems or whether adjustments need to be made.
  7. Carry out the study – keep the study open for users to complete for five days, or whatever period is realistic in your situation.
  8. Analyze results – common usability measures are usually calculated automatically by unmoderated testing software. If you were collecting qualitative data as well, look at heatmaps and replay some user videos to determine what lies behind the numbers.

  9. Once design changes are made or proposed, repeat the process for the new design and compare the results.
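The statistical side of steps 2 and 8 can be sketched in code. Below is a minimal Python example, using hypothetical raw task results, that computes a completion rate together with an adjusted-Wald confidence interval – a common choice for small-sample completion rates (this interval method is an assumption on my part; the lesson itself does not prescribe one):

```python
import math

def completion_rate_ci(successes, n, z=1.96):
    """Adjusted-Wald 95% confidence interval for a task completion rate.
    Adds z^2/2 successes and z^2 trials before computing the usual Wald
    interval, which behaves better at the small sample sizes typical of
    benchmarking studies."""
    n_adj = n + z * z
    p_adj = (successes + z * z / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Hypothetical raw results: 1 = task completed, 0 = failed, one per participant
results = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1]
rate = sum(results) / len(results)
low, high = completion_rate_ci(sum(results), len(results))
print(f"Completion rate: {rate:.0%} (95% CI {low:.0%}-{high:.0%})")
```

Note how wide the interval is at 20 participants – this is exactly why the recruiting step above recommends more participants when you need a low margin of error.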


  2. Benchmarking against business objectives:
    Benchmarking design decisions against a product’s business objectives helps you see whether various proposed improvements make the product more or less successful in achieving its goals, thus meeting its business objectives. This is the best way of justifying design changes. The process of benchmarking a product against its business objectives:
  1. Decide on the key business objective behind the product or service – how the organization makes or saves money with the product. The key objective could be to sell products, get users to sign up for a newsletter, donate, view ads, or make contact to arrange a service. The key business objective should be specific, not generic such as “become a popular website”.
  2. Identify the UX factors that will help to achieve the key objective. Examples of such factors are the ability to search for products more easily, to view a product from all angles, or to compare products in a simple way. You often have to do some research to identify them, for example by sending an online survey to a sample of target audience members. The UX factors are assumptions that need testing.
  3. Propose a way to improve the UX factors – specific design activities that need to be carried out to meet the UX objective. For example, you could run a usability test to find current problems with, say, the product comparison tool, then create design ideas to improve it.
  4. Measure the benchmark state of each UX factor (see steps 1-2 and 4-7 in the previous section about standard usability benchmarking). You need to choose values for assessing current performance, e.g. success rate or efficiency (refer to lesson 6 for usability metrics). Measure your product’s performance to get the current values, which will be used for comparison later. Then set realistic targets for improvement; e.g. if the current success rate of the product comparison tool is 50%, set the target at 75%.
  5. Track changes in each UX factor until the target values are achieved. Start improving the interface and testing changes with users to see whether you are getting closer to your target values, e.g. whether the success rate of your product comparison tool is getting closer to 75%. Use unmoderated user testing.
  6. Test if the business objective is being met. Once the targets are met, you need to check whether your assumption that these factors help to achieve the business goal was correct. The best way of doing this is running an A/B test (a comparison of two versions of a product by showing them to similar visitors at the same time), version A being the original interface and version B the interface with UX improvements. It is important that version B does not include many new features or other major changes; otherwise you will not be able to tell whether any change in the metric is due to the improved UX factors or to something else. If meeting the targets does not help in meeting the business objective, it is likely that the wrong UX factors were identified.
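As a sketch of this final step, a two-proportion z-test is one standard way to check whether version B’s conversion rate differs significantly from version A’s. The visitor and conversion figures below are hypothetical, and the test choice itself is an assumption – the lesson does not name a specific statistical procedure:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does version B's conversion rate differ
    significantly from version A's? Returns the z statistic."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis that A and B convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical A/B figures: conversions out of visitors shown each version
z = two_proportion_z(conv_a=120, n_a=2400, conv_b=165, n_b=2400)
significant = abs(z) > 1.96  # 95% confidence, two-tailed
print(f"z = {z:.2f}, significant at 95%: {significant}")
```

If the result is not significant, either the sample is still too small or – as the step above notes – the wrong UX factors were identified.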


Competitive analysis (competitor benchmarking):

It is the process of comparing the performance of a website’s (or another digital product’s) user interface to the performance of its competitors’ interfaces in order to identify its strengths and weaknesses. The comparison can be holistic – ranking competing products by overall usability metrics – or focused, comparing specific features or elements.

The process of competitive analysis:

  1. Identify your goals – what exactly do you want to achieve by benchmarking? Do you want to compare overall performance, or focus on how, say, your product search compares to similar features on competitors’ websites?
  2. Choose competitors. Typically 2 to 3 are chosen; more than that can be expensive and overwhelming to analyze. Choose your direct competitors who provide the best user experience.
  3. Carry out steps 1-6 of the section about standard usability benchmarking.
  4. Carry out testing. Each participant should complete some tasks on 2-3 products (more is overwhelming). At the end of the session, participants should be asked to comment on the products used and on how they compare: what they liked and what they found confusing (most unmoderated testing tools allow adding such questions).
  5. Analyze results – see how well your product performs compared to its competitors. Unmoderated testing tools will easily provide data on KPIs; however, the main goal should not be to declare a winner but to improve your design. Look at the biggest strengths of the competing designs and at the trends that arise across sites. Base your future design decisions on what you discovered to work well, and avoid the mistakes your competitors make.
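The KPI comparison in the analysis step can be sketched as follows. The per-task records are hypothetical stand-ins for an export from an unmoderated testing tool, and the product names are placeholders:

```python
from collections import defaultdict

# Hypothetical per-task results: (product, task completed?, task time in seconds)
records = [
    ("ours", True, 48), ("ours", False, 95), ("ours", True, 52),
    ("competitor_a", True, 40), ("competitor_a", True, 44), ("competitor_a", False, 110),
    ("competitor_b", True, 61), ("competitor_b", True, 58), ("competitor_b", True, 70),
]

by_product = defaultdict(list)
for product, done, secs in records:
    by_product[product].append((done, secs))

summary = {}
for product, rows in by_product.items():
    completion = sum(done for done, _ in rows) / len(rows)
    # Mean time over successful attempts only, a common efficiency measure
    times = [secs for done, secs in rows if done]
    avg_time = sum(times) / len(times)
    summary[product] = (completion, avg_time)
    print(f"{product:13s} completion {completion:.0%}, "
          f"avg successful-task time {avg_time:.0f}s")
```

In keeping with the advice above, the point of such a table is not to crown a winner but to spot where a competitor’s design clearly outperforms yours and to investigate why.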


Expert reviews can also be used for competitor benchmarking as a cheaper alternative (lesson 8 discusses its limitations). A usability expert evaluates the competing products, looking for relative strengths and weaknesses, trends, patterns and differences. This helps to identify what is missing in your product and where its usability is inferior to competitors’. Reviews can be broad or narrow (focusing on a particular feature, e.g. checkout).
