UAT is critical to the success and capability of your marketing efforts, yet it is an area that rarely receives attention. So how do you conduct UAT correctly?
Testing is a key element of any marketing initiative, often reiterated through the common mantra ‘Test it, test it again and finally test it’, and there are many examples available on the importance of testing and measuring ongoing marketing activities.
One area looked at less often, but one you will encounter several times throughout your career, is the testing, or user acceptance testing (UAT), of a new marketing solution. Although not part of your daily activities, it is critical to the success and capability of your marketing efforts.
What is user acceptance testing?
So what do we mean by 'testing', and in particular user acceptance testing? In software development this is often touted as the beta testing or end-user testing stage, but for the purpose of testing a marketing solution it can be defined as: “A process designed to ensure the solution/product meets the user needs and objectives”.
This definition identifies two points. Firstly, it is not about testing a system against a functional design or even a list of specific requirements, but about ensuring the solution will enable the business/division to deliver its objectives. It is business and user focused, not system focused.
The second point is process: although there can be ad-hoc elements to a UAT, it should be planned, with the tests and areas covered understood in advance. This ensures coverage, that the expected results are understood and, critically, that tests are repeatable to help with the identification and resolution of issues.
Where to start with UAT?
Knowing what a UAT is does not, by itself, tell you where to start in completing one. The good news is that for a solution to exist, objectives and requirements should have been captured, and these provide the starting point for the testing of your solution.
If we begin by looking at the classic V-Model, we can see how each phase of a project relates to a testing phase. The key piece is that the user acceptance test is created to meet the objectives and requirements, not the functional design or build, so it focuses on the business and user need.
So where to start? With the objectives and requirements definition, which should include the scenarios the marketing solution is designed to meet as well as the detailed specific requirements.
Why are scenarios important?
Scenarios are important because they ensure your solution will support real-life situations, not just individual pieces of functionality. Take a new kitchen you have had designed for your home and lifestyle: it will include drawers, cupboards, an oven, a fridge, worktops, etc., each of which could be checked off on an approval sheet. Do I have 10 drawers, and do they all open and shut? Is the sink usable? This would meet a list of requirements, but it does not check that you can bake a cake, a test which looks at the actual use of the kitchen and, in doing so, tests the individual elements as part of a whole.
So begin with the scenarios you want your marketing solution to deliver, and ensure the testing proves the solution's real-life capability.
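To make this concrete, a scenario-driven UAT case can be written as an end-to-end script rather than a checklist of individual features. The sketch below is purely illustrative: the MarketingSolution stub and its methods are hypothetical stand-ins for whatever your marketing platform actually exposes.

```python
# Illustrative sketch: a UAT scenario ("run a birthday campaign") that
# exercises selection, campaign creation and delivery together, rather
# than ticking each feature off singly. All names are hypothetical.

class MarketingSolution:
    def __init__(self, customers):
        self.customers = customers  # list of (name, birthday_month) tuples

    def select_customers(self, birthday_month):
        # Segment selection: one "drawer" of the kitchen
        return [c for c in self.customers if c[1] == birthday_month]

    def send_campaign(self, segment, offer):
        # A real platform would deliver email/post here; we just count sends
        return {"sent": len(segment), "offer": offer}

def scenario_birthday_campaign(solution):
    """One real-life scenario; passing it proves the pieces work together,
    not just in isolation."""
    segment = solution.select_customers(birthday_month="June")
    result = solution.send_campaign(segment, offer="10% off")
    assert result["sent"] == len(segment), "delivery count mismatch"
    return result

solution = MarketingSolution([("Ann", "June"), ("Bob", "May"), ("Cy", "June")])
print(scenario_birthday_campaign(solution))  # {'sent': 2, 'offer': '10% off'}
```

The point of the structure, rather than the stub itself, is that a failure anywhere in the scenario surfaces as a failure of the business outcome ("the birthday campaign did not go out"), which is exactly the user-focused view UAT is meant to take.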
How do I complete a UAT?
As the term UAT states, you are completing a user acceptance test, so the definition of acceptance needs to be agreed at the outset, ensuring all parties involved in the UAT understand the type of problem encountered in terms of impact or severity. You do not want a cosmetic, non-critical error being given the same priority or weighting as a full or partial system failure. This can be managed through the assignment of severity levels to any issue found.
These can be broken down into several levels, but for most purposes four will suffice:
- Severity Level 1 – The fatal or show-stopper error: a complete failure of the entire system or a major part of it, with no workable solution available to continue the testing.
- Severity Level 2 – A major error: a key set of deliverables is not available, but a workaround is possible to allow the UAT to continue, even if this workaround is not practical in an operational environment.
- Severity Level 3 – A minor error: a specific deliverable is not available, but it represents only a small departure from the agreed business needs.
- Severity Level 4 – A cosmetic error: the nice-to-have features which have no detrimental impact on the solution's functionality but affect its look and feel.
Each of these levels can still be interpreted subjectively, so examples are critical to help clarify how an error should be classified. For further clarity a priority can also be assigned (High, Medium or Low), but this really only comes into use for Severity Levels 3 and 4, as most Level 1 and 2 errors are by definition high priority.
One final level, often overlooked but critical, is a level for changes. During a UAT new requirements are liable to be exposed as the key users gain access to the solution, and these must be captured and assessed so the impact of incorporating additional functionality is understood.
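The classification scheme above can be captured in a simple defect log. The sketch below is one possible shape for such a log, with hypothetical field names; the only rules it encodes are the ones described above (four severity levels, a priority that mainly matters for Levels 3 and 4, Levels 1 and 2 always high priority, and a flag for change requests).

```python
# A minimal sketch of a UAT defect log using the severity/priority scheme
# described above; field names and example values are illustrative.
from dataclasses import dataclass

SEVERITIES = {
    1: "Fatal / show stopper",
    2: "Major (workaround exists)",
    3: "Minor (small departure from business need)",
    4: "Cosmetic (look and feel only)",
}

@dataclass
class Defect:
    ref: str
    severity: int            # 1-4, per the levels above
    priority: str = "High"   # High/Medium/Low; mainly used for Levels 3-4
    is_change: bool = False  # a new requirement raised during UAT

    def __post_init__(self):
        assert self.severity in SEVERITIES, "unknown severity level"
        # Level 1 and 2 errors are, by definition, high priority
        if self.severity in (1, 2):
            self.priority = "High"

log = [
    Defect("UAT-001", severity=2),
    Defect("UAT-002", severity=4, priority="Low"),
    Defect("UAT-003", severity=3, priority="Medium", is_change=True),
]
```

Keeping change requests in the same log, but flagged, gives the visibility the paragraph above asks for without letting them masquerade as defects.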
These levels provide a common and agreed definition of severity, but not the sign-off or acceptance criteria, which are key to ensuring a solution moves from UAT to live and does not exist in a perpetual test state. No solution will be 100% error free (even NASA has fault tolerance levels), so the key is to set the number of acceptable outstanding errors at each severity level. Within the marketing solutions I have delivered, the most common pragmatic approach is as follows:
- Severity Level 1 (fatal errors) – 0% tolerance. Due to the show-stopping nature of this error, both for the UAT and operationally, this has to be zero tolerance, and you should expect to encounter no errors of this type during your UAT.
- Severity Level 2 (major errors) – 0% tolerance. Although a workaround is possible for this level of error, it is often not operationally practical, so I would recommend zero tolerance here too, although you can expect to encounter a few of these during the UAT.
The next two tolerance levels depend on the complexity and depth of the solution, but across the multiple solutions I have been involved in delivering (single customer views, analytics, reporting, campaigning and modeling capabilities), I have found these levels give both parties the flexibility to ensure the success of the UAT.
- Severity Level 3 (minor errors) – 15% tolerance. These were defined as a specific deliverable not being available but representing only a small departure from the agreed business needs, which could be corrected either during the UAT or, crucially, after it by the user/administrator of the solution. This minimal impact, and the ability to correct it later, means a more tolerant view can be taken while still maintaining a high level of success.
- Severity Level 4 (cosmetic/aesthetic errors) – 25% tolerance. As stated, these cover the nice-to-have features which affect the look and feel of the solution. As such a higher tolerance can be accepted, but it should still be set at a level that maintains quality.
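One way to operationalise these sign-off criteria is a simple check against the agreed tolerances. The sketch below assumes one pragmatic reading of the percentages, namely the maximum share of defects raised at each severity level that may still be outstanding at sign-off; the figures are those from the levels above, and the function name is hypothetical.

```python
# Hedged sketch: checking UAT sign-off against the tolerance levels above.
# Assumes tolerance = max fraction of defects raised at a severity level
# that may remain outstanding at sign-off (one possible interpretation).

TOLERANCES = {1: 0.00, 2: 0.00, 3: 0.15, 4: 0.25}

def uat_signed_off(raised, outstanding):
    """raised / outstanding: dicts mapping severity level -> defect count."""
    for level, tolerance in TOLERANCES.items():
        total = raised.get(level, 0)
        still_open = outstanding.get(level, 0)
        if total == 0:
            if still_open > 0:
                return False  # outstanding defects that were never logged
            continue
        if still_open / total > tolerance:
            return False  # too many outstanding defects at this level
    return True

# Example: 20 minor defects raised, 3 still open (15%) -> within tolerance
print(uat_signed_off({2: 5, 3: 20, 4: 8}, {2: 0, 3: 3, 4: 2}))  # True
```

Whatever interpretation you settle on, the value of writing it down like this is that "success" becomes a mechanical check both parties have agreed to, rather than a negotiation at the end of the UAT.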
When completing a UAT, consider how you will measure success:
- Use scenarios to drive the testing – meet business needs, not individual functional requirements.
- Assign severity levels – classify any defects to ensure they are managed appropriately.
- Understand the sign off level – define and agree your UAT tolerance so all parties agree what is meant by success.
If you have any further questions or would like support or guidance in discovering or defining your data-driven marketing solution, please contact Jim through the BlacklerRoberts Ltd “Contact Us” page. Alternatively, please follow @BlacklerRoberts on Twitter for further insights.
About Jim Roberts
Jim Roberts is the founder of the consultancy BlacklerRoberts Ltd and is an experienced marketing professional with over 18 years' experience in the Direct Marketing arena across multiple industry sectors, including Financial, Leisure, Retail, and Charity. His passion is the delivery of value from data, using customer and related information to deliver actionable insight that drives improved customer value and understanding.