What is Usability Testing? Types, Benefits, Examples and More

One way that organizations fail in software quality efforts is by losing sight of the bigger picture. Software testing goes a long way, but, if it does not incorporate the user’s perspective, the result might be a costly blind spot. Product owners, designers, programmers and testers all see projects within fixed perspectives; even if they have the end user’s interests in mind, there’s no way for them to know for certain how customers will use the product.

Unless, of course, the organization consults the user. Here is where usability testing delivers value for digital products.

Usability testing is a crucial tool for developing great products. Unlike most forms of software testing, usability testing involves the user of the software, not a tester. Through usability testing, the organization gives representative users tasks to complete and evaluates what they like or dislike and where they struggle with an app, software, system or platform. While there are different usability testing methods, the process typically involves a participant completing a series of tasks, the results of which the organization uses to improve the product.

Every organization should take advantage of usability testing before a big product launch. How and when the company implements usability tests depends on the outcomes and insights they would like to gain.

In this usability testing guide, we’ll walk through the basics of the approach, as well as how Applause offers world-class expertise in this area, to help you make an informed decision about how you would like to proceed.

What is usability testing?

Usability testing, unlike functional testing, focuses on the user experience the software provides. So, what is usability testing? The approach is all about how the application works in the real world for real users performing real tasks.

Participants provide a critical eye on the product that an organization typically cannot achieve in-house, which can yield everything from severe functional defects to minor design recommendations. Depending on the insight, the organization can order a hotfix or simply file away the feedback for consideration in the future.

Participants typically complete a set of tasks, and then report on whether the tasks were easy to complete. From there, usability testing guides researchers to determinations based on the feedback on where improvements can be made. Organizations can perform usability testing anywhere — in person or remotely, at home, in a coffee shop, or in a lab setting.

Ultimately, usability testing guides the organization to three key objectives:

  • identify design issues

  • uncover opportunities to improve

  • learn more about the intended user’s preferences and behavior

From there, the organization can move forward with confidence that the product will meet customer needs and expectations. If key issues arise, it gives the organization an opportunity to identify shortcomings and fix them either prior to production or in a future iteration.

Usability testing incorporates a spectrum of specific focuses. Generally, the following two testing disciplines fall under the umbrella of usability testing:

Formative usability. This software usability testing method occurs at the beginning of the design phase to understand initial user reactions to product design. Rather than review the interface itself, users typically evaluate the design based on prototypes and wireframes. Formative usability testing enables the organization to gather feedback on the user experience and expectations before code progresses further in the software development life cycle (SDLC). When programmers can make changes early, it saves the organization time and money when compared to costly refactors.

Summative usability. During the latter half of the development phase, the organization can conduct summative usability testing with an actual working product. At this point, the organization wants to establish metrics on the experience from time on task to success rates.
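Summative metrics such as success rate and time on task are simple to compute once session results are recorded. The sketch below is purely illustrative — the data shape and field names are assumptions for this example, not a standard format:

```python
# Illustrative sketch: computing common summative usability metrics
# from recorded session results. Field names are assumptions.
sessions = [
    {"task": "register", "completed": True,  "seconds": 94},
    {"task": "register", "completed": False, "seconds": 210},
    {"task": "register", "completed": True,  "seconds": 120},
    {"task": "checkout", "completed": True,  "seconds": 45},
]

def success_rate(results):
    """Share of attempts in which the participant completed the task."""
    return sum(r["completed"] for r in results) / len(results)

def mean_time_on_task(results):
    """Average seconds spent, counting completed attempts only."""
    done = [r["seconds"] for r in results if r["completed"]]
    return sum(done) / len(done)

register = [r for r in sessions if r["task"] == "register"]
print(f"Success rate: {success_rate(register):.0%}")             # 67%
print(f"Mean time on task: {mean_time_on_task(register):.0f}s")  # 107s
```

Tracking these numbers release over release gives the organization a baseline to measure future versions against.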

These software usability testing methods ultimately combine to provide an evaluation, which distills the feedback into a report that reflects the product’s true usability. The organization can conduct usability tests at any stage of the SDLC, from early in the prototype stage to late-stage assessments via alpha and beta testing. Usability testing can also occur in production, such as for legacy systems or new versions of a product. While it is cheaper and easier to manage changes earlier in the SDLC, user feedback is always valuable for the long-term viability of a brand and its products.

Difference between usability testing and other methods

It is easy to confuse software usability testing with similarly named approaches to quality assurance and quality control. Some of these approaches are nearly synonymous.

Let’s explore the similarities and differences between several methods. Keep in mind that these definitions are not standardized throughout the industry. Thus, they can be subjective and vary among usability/UX professionals.

Usability testing vs. user research. Usability testing is a subset of user research. User research incorporates usability testing as well as approaches like interviews, surveys and focus groups to best identify the behavior and needs of a product’s intended customer. Like usability testing, user research varies in terms of when it can occur relative to product development, especially given the use of different development methodologies. However, the earlier an organization can incorporate user research, the better it can document and strategize according to that insight.

Usability testing vs. user testing. The latter term, user testing, validates the viability of a product idea. User testing serves as the first opportunity to establish a potential market for a product. By understanding how users approach a problem currently, and how your product might help them address that problem in a more efficient way, you can develop user personas that guide how you position a product for customers. Usability testing comes later, as it involves gauging a user’s reaction to a product that is already in some stage of development — even if it is just a prototype.

Usability testing vs. user acceptance testing. Both tests yield helpful information about how the product works. User acceptance testing confirms that the product and its features function as intended. For example, user acceptance testing would validate that a user can create an account, receive a confirmation email, log in and begin using features available to a registered user. Usability testing gauges the user’s opinions or expectations of the app, such as that the registration process takes too long or seems unnecessary altogether. The former validates functionality and requirements, while the latter provides user perspective of that functionality.

Usability testing vs. UI/GUI testing. User interface/graphical user interface testing assesses the front end of the product to make sure the look matches the applicable requirements and standards. Usability testing might involve a user assessing the comfort or friendliness of the GUI, which is a different objective, but it goes beyond that. Usability studies also make sure the product functionality is easy for the customer to use, not just that it is visually appealing to them.

Usability testing vs. UX testing. These two terms are likely the most synonymous, and we will refer to them interchangeably in this guide, but there are slight differences between usability testing and UX testing, depending on the individual’s definition. Usability testing is one aspect of UX testing. However, UX testing goes beyond just usability to try to understand how the customer feels about their experience with a product, which might even include gauging their emotions or impression of the brand as a whole. For example, if a restaurant wanted to gather product feedback from real customers for its new mobile app, usability testing might uncover difficulties joining a loyalty program, while UX testing would expound further to explain how the user feels about the colors of the app or the promotions available through the program.


Types of usability testing

There is no singular way to conduct software usability testing. Whichever way gathers the most helpful and actionable insight for the product is the way to go, and that can vary depending on the business and even the individual release. The organization should consider its objectives and desired outcomes when establishing which types of usability testing to use, and how best to approach the task.

Let’s compare and contrast the following types of usability testing, then explore some individual methods:

Moderated vs. unmoderated. Moderated usability testing involves an individual (moderator) facilitating or guiding the participant through the test, either in person or remotely. Moderated usability tests allow the organization to answer questions that occur during testing or probe the participant for more information when needed. Moderated usability testing might also involve a logger who takes notes about the user’s actions, intentions and errors. Unmoderated usability tests allow the participant to complete tasks at their own pace without anyone watching or facilitating.

An organization might opt for moderated usability testing for a number of reasons. For one, the participant might require a moderator to walk them through the usable parts of a product if it is in a prototype stage. In addition, a moderated usability test allows for follow-up questions, interviews and even a genuine rapport between the participant and moderator, which can yield interesting feedback. Make sure the moderator has experience with usability studies, however, as they can also potentially introduce some bias. This type of testing requires more one-on-one time with a participant and resources dedicated to the effort.

Unmoderated usability testing is suitable for either a small group or a large, diverse pool of participants. Unmoderated usability tests are cheaper and faster to complete than moderated tests, as they can be conducted anytime and anywhere. This type of usability testing, however, can run into challenges when defects arise, or when there is ambiguity in either a task or participant’s response.

Remote vs. in-person. Do not confuse these types of usability testing with moderated and unmoderated testing, which can both occur remotely or live. As the name suggests, in-person usability testing occurs with the participant on location, as opposed to following tasks in their chosen location (remote).

In-person usability studies grant UX researchers several benefits, whether moderated or not. The organization can provide a device for the participant to avoid potential technical issues and ensure an optimal experience on the chosen/provided device. This also makes it easier to recruit participants, who might not have access to a device meeting the system requirements. Likewise, in-person usability tests enable the organization to control the testing environment and also test prototypes that might not work on the participant’s personal devices. One other benefit: UX researchers can better gauge non-verbal reactions when usability tests occur in person. This usability testing method, however, comes with overhead, as the organization must provide personnel, a secure location and other resources. Also, in-person tests might make the participant less comfortable, which can skew their responses.

Remote usability testing lessens the cost component for the organization, enabling participants to evaluate a product from a comfortable location. In addition, the participant can use a familiar device, which enables them to experience the product as they would in the real world — valuable insight for a UX researcher. Remote usability testing can suffer from technical difficulties, connection issues and miscommunications, especially when conducted by an inexperienced researcher. A research operations (ResearchOps) team can smooth out some of these potential wrinkles by conducting an initial technical evaluation and prep work to ensure the testing goes according to plan.

Here are some specific usability test methods and techniques that you can use:

  • screen or video recording

  • usability lab

  • guerilla tests

  • contextual inquiry

  • usability testing partners

Screen or video recording. This approach can be helpful to review the participant’s on-screen actions, such as where and how quickly they navigate or click. You might also project the participant’s screen onto another device or monitor for live assessment. Additionally, record the participant’s audio or video responses to questions for later review, as you might catch something on playback that you missed the first time around, such as a non-verbal cue. A variety of software products assist with screen and video recording.

Usability lab. This refers to usability studies conducted in a controlled lab environment that enables the organization to gather all the feedback it needs. In this setting, the organization carefully maintains all devices and variables to keep a stable environment. The usability lab typically includes a number of recording devices, as well as a one-way mirror and a means to communicate with the participant. To keep participants relaxed, usability labs also typically include lighting and temperature controls, plus comfortable furniture or other amenities.

Guerilla testing. As a low-cost, quick alternative to more thorough testing, guerilla testing (also called hallway testing or corridor testing) can reveal some helpful usability insights. Rather than carefully cultivating a pool of participants, guerilla testing occurs out in the wild, such as at coffee shops and shopping centers. With guerilla testing, UX researchers approach random or minimally vetted individuals for short evaluations of a product, often less than a half-hour. Unlike a lab environment, guerilla testing is lightweight, typically requiring just a device for testing, a test facilitator, a note-taker and screen recording software. While this approach can quickly surface insights that inform metrics like conversion rates and customer satisfaction, guerilla testing cannot guarantee that you are sufficiently testing usability for your target audience.

Contextual inquiry. This usability testing method takes place in the participant’s own environment. Often less task-oriented than other types of usability testing, contextual inquiry involves a researcher interviewing the participant and observing them as they work with the product. By interviewing the participant during the test, researchers can gain in-the-moment insights that the participant might otherwise forget or omit in a post-session report. While contextual inquiry might occur in imperfect conditions compared to a lab environment, this method enables the researcher to learn subtle details, such as:

  • what devices and equipment the participant uses

  • where and how their workspace is set up

  • how long it takes them to figure out product functionality

  • what environment or technical issues they run into

Usability testing partners. Functional or usability testing partners can conduct testing on behalf of the organization. While this usability testing method involves some additional cost, a partner typically offers expertise that the organization does not have in house. Additionally, the usability or UX testing partner typically has access to a broad pool of potential participants, which it can cultivate based on the client’s needs. For example, the client might request 20-29 year-olds with Apple mobile devices in APAC for an upcoming product launch — a UX testing partner, such as Applause, can meet this need and provide experienced researchers to elicit the necessary insights from participants. While there is a cost component to this approach, it is equally — or more — costly to pay in-house UX researchers, recruit participants, purchase and maintain equipment, and conduct testing. A usability testing partner provides high-quality usability feedback that can justify this expense and let the organization focus on other objectives.

These are just several usability testing methods. UX researchers might take a variety of explorative or comparative approaches to learn how participants feel about and interact with a product.

Benefits of usability testing

Keeping customers happy while serving them effectively and efficiently is the best way for any company to achieve its primary goal of maximizing profits. However, the organization must buy into the benefits of usability testing. Without the agreement and understanding that these user insights will deliver valuable action items, software usability initiatives can fall flat, as the organization moves on to prioritize other items in the backlog. Invest in UX testing to get value out of the approach.

Organizations can realize a number of benefits of usability testing, some of which we mentioned already, including:

  • direct feedback from your target audience to inform future feature development

  • detection of usability issues before a product launch

  • time and cost savings by addressing concerns earlier in the SDLC

  • insight into user satisfaction with a product before broad release

  • performance testing in a real-world setting

  • effectiveness of personalization and localization features

  • validation of usability requirements

  • unbiased assessment of the product

  • boost of confidence in the organizational approach to product development

Through usability testing, the organization can address areas of concern to earn positive customer sentiment and improve the overall experience.

Stages of a usability test plan

There are many phases of a usability test, all of which help optimize the feedback to produce actionable results. Consider each step carefully as you progress through usability test planning and execution.

1. Choose the product or functionality for assessment

The scope of work helps keep the study and its participants on task. Determine which product, app, site or platform — or part of any of those — the participant will assess, and identify what feedback you hope to receive from usability testing. Perhaps there is a hypothesis you want to confirm, or a workflow you want to assess. These are all factors to keep in mind as you define what you want to measure with usability testing.

2. Define the participant’s tasks

Rather than jotting down a to-do list for participants, think about what general goals or objectives the product’s users will have. Build out your list of assignments, or tasks, from there. Ideally, tasks should be realistic, actionable and specific, yet without leading the participant too heavily toward a particular action. The goal is to observe how a participant chooses to perform the task and what difficulties they face during it.

3. Craft a guide (moderated) or script (unmoderated)

If your usability test plan includes a moderator, write a guide that keeps the study consistent for all participants and avoids any unintended bias. The moderator typically also introduces the product, which includes some background, the participant’s knowledge or impression of any existing products, and what they will be expected to accomplish during testing. A guide should keep the moderated test on track for the sake of consistency, but can also enable some flexibility to investigate interesting areas when they come up. A script is helpful for unmoderated usability testing to make the instructions as clear as possible for the participant — deviation here isn’t ideal.

4. Determine roles and responsibilities

Who on the team makes the most sense to moderate a usability test? Typically, you want an individual who can stay on task while also being personable and engaging with the participant. The more comfortable the participant feels, the more they can engage with the product naturally — and the more the moderator can elicit helpful information. Likewise, the organization might employ a separate note-taker for usability testing — someone who has great attention to detail. However, this role isn’t strictly necessary, as sessions are often recorded and researchers can take their own notes.

5. Recruit participants

Recruitment will vary depending on the usability testing method. In most usability test plans, you will want to screen and vet participants. It can be a difficult task to find adequate representatives for your user base. Create personas that detail the intentions, desires and concerns of your users, then recruit based on those personas. The number of participants might also vary. Organizations typically use incentives, such as gift cards, to help with recruitment.

6. Prepare materials and the environment

Once you have participants, determine how and where they will conduct the test. Materials include criteria such as devices, as well as the product and any access restrictions you want to put in place. Also, implement any software or hardware you will need to record the test, such as video cameras or a screen-sharing app. Determine whether the usability testing benefits more from a controlled, albeit comfortable environment or a natural environment, where participants would typically use the product.

7. Execute usability testing

It is time to test. Keep in mind that any struggles the participant runs into can be useful feedback, so do not rush to help unless there is an issue with the instruction itself. Ask the participant a variety of questions after each task to get their immediate reaction to the product functionality. A moderator can also use a variety of tactics, such as a blink test or expectancy test, to generate instant feedback from the participant.

8. Analyze results against success criteria

When conducted correctly, the usability testing plan reveals a treasure trove of data — equally useful and hard to sort through. Take the time to assess common threads among participants, such as tasks that took a long time to complete, common errors, the severity of some tasks versus others, and areas or functionality that users didn’t enjoy. Patterns of behavior, difficulty or sentiment will reveal a lot about the product.
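Surfacing those common threads can be partly automated once observations are tabulated. The following is a minimal sketch with made-up data and thresholds — the task names, data shape and cutoffs are all illustrative assumptions:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-participant observations gathered during testing:
# (participant, task, seconds taken, errors made). Purely illustrative.
observations = [
    ("p1", "join_loyalty", 310, 4),
    ("p2", "join_loyalty", 280, 3),
    ("p3", "join_loyalty", 265, 5),
    ("p1", "view_menu",     40, 0),
    ("p2", "view_menu",     55, 1),
    ("p3", "view_menu",     35, 0),
]

# Group results by task across all participants.
by_task = defaultdict(list)
for _, task, seconds, errors in observations:
    by_task[task].append((seconds, errors))

# Flag tasks that exceed illustrative thresholds as likely problem areas.
for task, rows in by_task.items():
    avg_time = mean(s for s, _ in rows)
    avg_errors = mean(e for _, e in rows)
    if avg_time > 120 or avg_errors > 2:
        print(f"Review {task}: avg {avg_time:.0f}s, {avg_errors:.1f} errors")
```

A real analysis would weigh severity and sentiment as well, but even a simple aggregation like this makes patterns of difficulty visible at a glance.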

9. Compile a report

With data analysis complete, UX researchers often — though not in all cases — put together a usability testing report that includes a summary of the testing, the methods or tactics used, testing results, and recommendations for improvement with different levels of severity. The goal of the usability testing report is to make it clear how the testing was done, so that it can be repeated if necessary, and to convince stakeholders that the participant findings are significant enough to act upon.

10. Create a strategy based on the findings

Stakeholders determine when actions must be taken based on the usability testing report. The findings from usability testing guide stakeholders to prioritize and fix critical issues that make it impossible or extremely difficult to complete core tasks. If stakeholders determine there are higher-priority initiatives, they might place issues that cause friction or annoyance into the product backlog rather than act upon them right away.

The organization can also approach usability testing in an iterative manner. In this case, it might conduct usability tests early in the SDLC for a few small, simple tasks, and make fixes before launch. This enables the organization to learn granular pieces of information about the product as they build it.

Keep track of the participants the organization uses. It might be helpful to go back to them for more usability tests later, perhaps more complex, time-consuming tests.

Usability testing outcomes

While you should keep usability testing outcomes and objectives in mind from the beginning, most usability insights fall into two primary categories: quantitative and qualitative. Both qualitative and quantitative data are helpful usability testing outcomes that inform how the organization can adapt the product to address user concerns or difficulties. However, researchers often collect qualitative and quantitative data at different times and sometimes require separate usability testing plans.

Qualitative data, or qual data, consists of a participant’s observations or feelings about the product. Qualitative data can help the organization adapt the product or feature design to make it more desirable for the intended user. These insights are the participants’ reflections on the usability of a product, as opposed to a statistical finding.

Qualitative data might involve a measurement or metric, but, more importantly, it answers why a product is easy or not easy to use. Researchers observe where a participant struggles with elements of the UI, or identify points of friction upon questioning the participant. Often, if multiple participants struggle or react poorly to a particular feature or workflow, that is a pattern of qualitative data that can prove useful. To generate more qualitative data from a usability study, the researcher or moderator should allow some flexibility for more questioning or extra time to probe a complex workflow.

Sometimes, qualitative data can include a quantitative measurement, such as when a participant is asked to rate how they feel about a product on a 1-10 scale. But, generally, qualitative data is less about measurement than the human reaction to a product.

Quantitative data, or quant data, is one or more metrics or measurements collected during usability studies that reflect a product’s ease of use. Organizations can benefit from quantitative data in larger usability studies, where more data is collected — and, ideally, in a controlled environment to keep the data consistent. However, smaller studies can also yield helpful quantitative data, especially with unmoderated methodologies, where self-reported feedback is the focus.

Where qualitative data can provide hard-to-interpret or even conflicting findings, quantitative data ideally provides neutral insight in the form of hard numbers. Researchers might measure task completion times or number of errors for a large sampling of participants to determine the most problematic areas. Even a simple customer satisfaction score at the end of the study can provide a helpful metric to track — and improve upon — over time.

While quantitative data does not directly reflect participants’ opinions of product usability, the insights often make a compelling case for stakeholders, especially when compared against baseline performance or satisfaction ratings. For example, if participants’ satisfaction rating drops significantly for a new product version, or task completion times double, that is an indication of a severe problem. The researcher can also compare quantitative data against competitors’ offerings or other products within the company’s portfolio.
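A baseline comparison of that kind can be sketched in a few lines. The figures and thresholds below are invented for illustration — what counts as a "significant" drop is a judgment call each team makes for itself:

```python
# Hypothetical metrics from two usability studies of the same product.
baseline = {"satisfaction": 4.2, "task_seconds": 95}   # prior release
current  = {"satisfaction": 3.1, "task_seconds": 190}  # new release

def regressions(base, new, max_drop=0.15, max_slowdown=1.5):
    """Flag metrics that regressed beyond illustrative thresholds."""
    flags = []
    if new["satisfaction"] < base["satisfaction"] * (1 - max_drop):
        flags.append("satisfaction dropped significantly")
    if new["task_seconds"] > base["task_seconds"] * max_slowdown:
        flags.append("task completion time increased sharply")
    return flags

for flag in regressions(baseline, current):
    print(flag)
```

Here both checks fire: satisfaction fell more than 15% and task time more than doubled, the kind of hard-number evidence that makes a compelling case for stakeholders.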

Ultimately, quant data shows what happened in usability studies, and qual data explains why it happened.

Usability testing costs and challenges

Software development and testing budgets are tight, which can make it difficult to set aside extra money for usability studies. When assessing software usability testing cost, a number of factors affect the price, including:

  • sample size of the participants

  • difficult-to-source participant profiles

  • the type of usability testing

  • breadth and depth of the testing

  • amount of time allocated to the study

Yet, when done well, usability studies can deliver incredibly valuable insights that make up for the expenditures — and then some.

The ROI for usability testing can be staggering, but only when done right. It can be a challenge for organizations to handle UX testing in-house for a variety of reasons, which can result in lower ROI — or an outright waste of time.

Finding the right participants. It is possible to find a participant who is too experienced, or not experienced enough. Likewise, it is a challenge to find participants who truly represent your target audience. Without this targeted user insight, you can test basic usability, but you will miss the attitudes and preferences of your intended audience. It is important to have a screener seek out participants who fit your target customer demographic. This person might be an internal employee, but they should have experience vetting participants for usability tests.

Soliciting good, honest responses. Participants will walk through whatever series of tasks you give them, so make sure you guide them toward outcomes that are useful. It also makes sense to let the user experiment and explore the site, as they can tell you, directly or indirectly, what is most important to them. It is easy for a moderator to introduce bias that colors the user’s perspective of the product, either through too much moderation of the tasks or by asking leading questions. Make the experience as holistic as possible for the user, and trust them to inform you about the product in their own way and time.

Consuming time and resources. While there is a strong case for the value of usability studies, they require time and effort. Organizations often dedicate internal resources to UX testing, which might take workers away from other tasks and slow development or release of the product. If the business cannot — or will not — dedicate internal resources to UX testing, an outside testing provider might be a helpful alternative. Seek out an experienced partner that can conduct usability tests on your schedule, within your scope and on budget.

Budgetary restraints and scheduling pressures might tempt leadership into forgoing usability studies. This is a mistake. All digital products should undergo usability testing to avoid alienating customers.


Usability testing examples

Tailor your usability study to achieve the objectives that matter most to the organization. When the business buys into usability testing findings and makes the appropriate changes, it can achieve significant improvement in the user experience.

Applause helps enterprises across a variety of industries realize their usability testing goals. Whether it is a healthcare organization that prioritizes ease of use for a telehealth app or a streaming media provider that wants to eliminate any friction in the registration process, Applause is here to help name-brand companies deliver excellent products to their customers.

Here are a few usability testing examples from real Applause customers.

Dow Jones. Internal testers were not enough for the publishing company to ensure a high-quality experience for all users. Dow Jones relies on Applause for a mix of functional and usability testing. Through the partnership, Applause has identified more than 7,000 functional and usability issues across 70 countries and more than 750 device-OS combinations.

“It’s more than focusing on the functional aspects of an app but considering whether this site would work for a 50-year-old with a certain educational background,” said Sumeet Mandloi, former director of engineering at Dow Jones.

Read more about Dow Jones’ partnership with Applause.

GreenTube. The online and mobile gaming company struggled to validate product functionality and usability with its small internal staff. This was a problem for the company, which also had to adhere to regional gambling restrictions and a plethora of user devices. Applause provided GreenTube with functional, payment, automation and UX testing. GreenTube gained usability insights from in-market testers that helped improve its platform and challenge competitors.

Read more about GreenTube’s partnership with Applause.

Paysafe. The payment solutions company uses Applause as an extension of its internal QA team to conduct functional, payment and UX testing. Applause testers evaluated applications and features in production to help improve the user experience. Applause collected user insights from 50 countries to identify a variety of usability concerns, such as a cumbersome display issue.

Watch this webinar to learn how Paysafe and other companies turn to Applause for functional and UX testing services.

Applause UX testing services

Every digital product should undergo usability testing to ensure it is a good fit for the intended audience. When customers are dissatisfied with a subpar product, they often will not give it a second chance — that’s revenue lost, potentially forever. Every product is capable of improvement, and usability or UX testing helps uncover those points of friction hidden from view.

Applause global community members shine a light on those problem areas to help deliver a five-star user experience at every touchpoint. Our diverse community helps you gain usability insights from the customer profiles that you define, anywhere around the world.

Applause provides a fully managed UX solution, including UX researchers with deep industry expertise. These researchers lead usability studies from small, iterative feedback to deep product analysis, and create effective scripts and tasks to make the process efficient. Applause analyzes the usability findings to provide actionable insights that help make your product the best that it can be.

Any device/OS combination, any location, any customer profile — Applause is ready to deliver exceptional insights and maximize the value of your usability testing investment.

It’s time to go beyond this usability testing guide. Realize the full value of experienced, global UX researchers and participants. Reach out to Applause to learn more about our usability testing solution, credentials and case studies.

David Carty
Senior Content Manager