“User acceptance testing” can have several meanings. There’s the technical definition – and then there’s the way that the phrase is commonly used. But when teams use the same term to refer to different processes, things can get confusing.

Fortunately, both types of user acceptance testing are pretty simple to understand. They also have aspects in common, including many of the same goals.

What is User Acceptance Testing?

User acceptance testing verifies that the software’s user experience is acceptable for launch. Also known as UAT, it’s part of every website or app QA process in some way, shape, or form. However, there are a few different ways of doing it. The biggest difference? Whether the UAT is done by professional QA testers or real customers.

UAT by QA Testers

When UAT is done by professional QA testers, it’s similar to any other form of manual testing. One way to think of it is as a middle ground between smoke testing and regression testing: it’s more thorough than checking a few obvious sections, as smoke testing does, but it doesn’t try to cover every possible scenario, as regression testing does.

For UAT, testers should make sure that the app or website feels easy to understand and use. Testing should always be more than just looking for bugs, but this is especially true in UAT. The goal is to think about whether the features feel polished and ready to launch.

UAT by Customers

In contrast to the above, many people use the term “User Acceptance Testing” to refer to testing done by customers. In some cases, this could mean customers in the sense of everyday users. It can also refer to the client that hired your team.

For example, with everyday users, this could involve a focus group of people giving feedback on the software. If you’re building software for a client, UAT can also mean having the client review and approve the final version.

Which Stage is User Acceptance Testing Done in?

UAT is done in the last stage of the software development lifecycle.

To some extent, this is true of QA in general. After all, a website or app can’t be tested before any of the functionality or design has been implemented! So it’s natural for any form of testing to come towards the end of software development. But user acceptance testing is often performed even after earlier rounds of QA.

For example, a typical Jira QA workflow might have statuses for: Backlog, Selected For Development, In Progress, Ready for QA, In QA, UAT, and Done. In this scenario, the team’s QA testers would test the work while the ticket is in the “In QA” status, and then it would move into “UAT” to be checked by customers or clients.
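To make the flow concrete, here’s a minimal sketch in Python of that ticket lifecycle. The statuses come straight from the example above, but the transition rules are a simplified assumption for illustration, not Jira’s actual configuration.

```python
# A sketch of the ticket workflow described above: each status maps to the
# statuses a ticket is allowed to move to next. The transition rules are our
# simplified assumption, not Jira's real configuration.
WORKFLOW = {
    "Backlog": ["Selected For Development"],
    "Selected For Development": ["In Progress"],
    "In Progress": ["Ready for QA"],
    "Ready for QA": ["In QA"],
    "In QA": ["In Progress", "UAT"],  # bugs found -> back to dev; passed -> UAT
    "UAT": ["In Progress", "Done"],   # client feedback -> back to dev; approved -> Done
    "Done": [],
}

def move(current_status: str, new_status: str) -> str:
    """Validate a status change against the workflow before applying it."""
    if new_status not in WORKFLOW.get(current_status, []):
        raise ValueError(f"Can't move from {current_status!r} to {new_status!r}")
    return new_status

status = move("In QA", "UAT")  # QA passed, so the ticket is handed off for UAT
```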

User Acceptance Testing Examples

Example of a scenario where the client does UAT:

Let’s say that a digital agency is developing a new website landing page. The site has gone through the design process, and developers have created the initial version. They deploy the update to the test environment, and assign the ticket to QA.

At this step, QA starts testing the page and reporting bugs. Once they’re done, they reassign the ticket(s) to developers to fix the bugs.

When the developers think they’ve fixed the bugs, the ticket goes back to QA. This time, QA doesn’t find any issues during testing. The ticket would then go into “UAT” status and be sent to the client for any final approval or feedback.

Example of a scenario where real users do UAT:

Imagine a start-up that’s about to launch their new app. It’s gone through the software development process, including being tested by QA. The team thinks it’s ready to go. However, they want to make sure it’s acceptable to real end users.

In this case, they might recruit a group of everyday people who aren’t affiliated with the company, for example through a focus group. These real users would then give feedback on whether the app feels user-friendly and bug-free.

Example of a scenario where QA testers do UAT:

This would be almost identical to either of the above scenarios – but without the additional client or customer review. QA would simply do a final pass of testing before launch to make sure the user experience is in good shape.

How Should User Acceptance Testing Be Performed?

There are different opinions about how user acceptance testing should be performed. But the most important part in every situation is that it aligns with the goals of your client or team. For example, if there are certain areas of the app or site that are most important to the team, UAT should focus the most on those.


User Acceptance Testing Best Practices

Building on the point above, the single most important best practice is to make sure that everyone’s on the same page. The best way to achieve this is good communication.

Some additional best practices for user acceptance testing include:

  • Use test cases if UAT is being done by an internal tester rather than a client or user (see the sketch after this list).
  • Report all user experience feedback, not just broken functionality. That doesn’t mean the team will prioritize fixing every issue before launch. But at least this way, everyone is aware of potential problems that might need to be addressed in the next release.
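For the first point, here’s a minimal sketch of what a UAT test case might look like as structured data. The fields and the example case are illustrative assumptions, not a standard format.

```python
# A sketch of a UAT test case as data. Note the ux_notes field: UAT is about
# more than broken functionality, so non-bug feedback gets captured too.
from dataclasses import dataclass, field

@dataclass
class UATTestCase:
    title: str
    steps: list[str]
    expected: str
    ux_notes: list[str] = field(default_factory=list)  # UX feedback, not just bugs

signup_case = UATTestCase(
    title="New visitor signs up from the landing page",
    steps=[
        "Open the landing page",
        "Enter an email address and click Sign up",
    ],
    expected="A confirmation message appears right away",
    ux_notes=["Does the form feel easy to find and understand?"],
)
```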

Is User Acceptance Testing White Box Testing?

In a word, no. White box testing means that the tester can review the actual software code behind the scenes. User acceptance testing isn’t white box testing, because it only involves testers (or customers/users) interacting with what the software shows on the front end. If you were going to categorize it, it would be considered black box testing.
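To illustrate the black box idea, here’s a sketch of a UAT-style check written with Playwright’s Python API (our choice of tool for illustration, not one the process requires). The script only drives the rendered front end and never inspects the application code behind it; the URL and selectors are hypothetical.

```python
# A black box UAT-style check: the script interacts only with what the
# software shows on the front end, exactly as a user would.
# Requires: pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://staging.example.com/landing")  # hypothetical test URL

    # Check the page from a user's point of view: is the headline visible,
    # and does the signup flow respond the way a visitor would expect?
    assert page.locator("h1").is_visible(), "Headline should be visible"
    page.fill("#email", "uat-tester@example.com")  # hypothetical selector
    page.click("text=Sign up")
    page.wait_for_selector("text=Thanks for signing up")

    browser.close()
```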

Should User Acceptance Testing Always be Done?

There are two ways to answer this. When it comes to the type of UAT that involves real customers, the answer is no. It doesn’t always need to be done, as long as Engineering is taking QA’s feedback seriously.

This isn’t to say that customer feedback isn’t extremely important to act on. But it’s not realistic in modern Agile software development to have time built in to have focus groups with real customers in every sprint. Between tight deadlines and limited resources, it’s usually not feasible.

Instead, what teams can do is proactively monitor other channels: App Store and Play Store reviews, tweets about your website, customer service reports, and so on. If you see any user experience feedback that’s consistent or concerning, you can add backlog tickets for it. Below is a rough sketch of what that monitoring could look like.
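This sketch pulls recent reviews from Apple’s public iTunes RSS feed for customer reviews. The app ID is hypothetical, and the feed’s JSON structure can vary, so treat this as a starting point rather than a finished monitor.

```python
# A sketch of App Store review monitoring, assuming Apple's public iTunes
# RSS feed for customer reviews. APP_ID is hypothetical.
# Requires: pip install requests
import requests

APP_ID = "123456789"  # hypothetical App Store app ID
url = f"https://itunes.apple.com/us/rss/customerreviews/id={APP_ID}/sortBy=mostRecent/json"

feed = requests.get(url, timeout=10).json().get("feed", {})
for entry in feed.get("entry", []):
    rating = entry.get("im:rating", {}).get("label", "?")
    title = entry.get("title", {}).get("label", "")
    # Flag low-rated reviews as candidates for backlog tickets.
    if rating in ("1", "2"):
        print(f"[{rating} stars] {title}")
```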

What about when it comes to having QA testers do a final pass before release? Ideally, that should be done every time. Even if the latest changes were very minor, having testers check the user experience of the version that’s going to go live is always important.