
93 methods. Step-by-step guides. No signup required.


© 2026 UXAtlas. 100% free. No signup required.


Survey · Planning & Analysis · Quantitative Research · Beginner

System Usability Scale

Measure perceived usability with a standardized questionnaire that enables benchmarking across products and releases.

The System Usability Scale (SUS) is a standardized 10-question survey that produces a single usability score from 0 to 100 for benchmarking and comparison.

Duration: 20 minutes per respondent.
Materials: SUS questionnaire.
People: 12–20 participants.
Involvement: Direct user involvement.

The System Usability Scale (SUS) is a standardized 10-question survey that produces a single composite score between 0 and 100, representing how usable participants perceive a system to be. Developed by John Brooke in 1986, it has become one of the most widely used usability assessment tools in UX research and product development. UX researchers, product managers, and quality assurance teams administer SUS immediately after usability sessions to capture participants' overall impressions while the experience is fresh.

The scoring formula is well established, and the extensive body of benchmark data (a score above 68 is considered above average) makes it easy to contextualize results against industry norms. Teams rely on SUS to track usability improvements across releases, compare competing design alternatives with a consistent metric, and give stakeholders a clear quantitative answer to the question of how usable a product is. Because it takes only a few minutes to complete and requires no specialized training to administer, SUS integrates seamlessly into both moderated and unmoderated testing workflows, making it accessible to teams of all sizes.

WHEN TO USE
  • After usability testing sessions when you need a quick, standardized measure of perceived usability for benchmarking.
  • When comparing two or more design alternatives and you need a consistent quantitative metric for decision-making.
  • Before and after a redesign project to measure whether changes have improved perceived usability.
  • When reporting usability status to stakeholders who need a single, well-understood number they can track over time.
  • During remote or unmoderated usability testing where a lightweight post-task survey is practical and efficient.
WHEN NOT TO USE
  • When you need to identify specific usability issues, because SUS provides an overall score but not diagnostic detail.
  • For evaluating very early concepts or paper prototypes where participants cannot meaningfully interact with the system.
  • When sample sizes are too small (fewer than 8 participants) to produce reliable and meaningful composite scores.
  • As a replacement for qualitative usability testing, since SUS measures perception but does not explain why users struggle.
HOW TO RUN

Step-by-Step Process

01

Step 1: Understanding System Usability Scale (SUS)

SUS is a simple, 10-item questionnaire that provides a global view of the perceived usability of a product or system. Each question contributes to a total score out of 100, which can be used to measure overall usability.

02

Step 2: Preparing the SUS questionnaire

Create a questionnaire with the 10 standard SUS questions. The five odd-numbered questions are positively worded, while the five even-numbered questions are negatively worded. Each question should be answerable on a 5-point Likert scale, ranging from Strongly Disagree (1) to Strongly Agree (5).
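For reference, the ten standard items can be stored as plain data so they are reused verbatim across studies. A minimal sketch in Python; the item wording below reflects how Brooke's scale is commonly published, but verify it against the original publication before administering:

```python
# The ten standard SUS items, as commonly published from Brooke's
# original scale. Odd-numbered items are positively worded,
# even-numbered items negatively worded. Verify the exact wording
# against the original publication before use.
SUS_ITEMS = [
    "I think that I would like to use this system frequently.",
    "I found the system unnecessarily complex.",
    "I thought the system was easy to use.",
    "I think that I would need the support of a technical person "
    "to be able to use this system.",
    "I found the various functions in this system were well integrated.",
    "I thought there was too much inconsistency in this system.",
    "I would imagine that most people would learn to use this system "
    "very quickly.",
    "I found the system very cumbersome to use.",
    "I felt very confident using the system.",
    "I needed to learn a lot of things before I could get going "
    "with this system.",
]

# Responses use a 5-point Likert scale.
LIKERT_SCALE = {1: "Strongly Disagree", 2: "Disagree",
                3: "Neutral", 4: "Agree", 5: "Strongly Agree"}
```

Keeping the items as data rather than hard-coding them into a form makes it harder to accidentally reword a question, which (as noted under Common Mistakes) would invalidate benchmark comparisons.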

03

Step 3: Participant recruitment

Identify and recruit a representative sample of users who will test the product or system. The number of participants may vary depending on the project size, but it's recommended to have at least 12–20 participants.

04

Step 4: Conducting usability testing

Ask participants to complete a series of tasks using the product or system. Record their interactions and observe how they interact with the interface to identify usability issues or areas for improvement.

05

Step 5: Administering the SUS questionnaire

After completing usability testing, ask participants to fill out the SUS questionnaire to obtain their feedback on the perceived usability of the product or system.

06

Step 6: Scoring the SUS questionnaire

To calculate the SUS score, first subtract 1 from the response values of odd-numbered items (positively worded) and subtract the response values of even-numbered items from 5 (negatively worded). Then sum up the new values and multiply by 2.5. The resulting value is the total SUS score out of 100.
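The scoring rule above translates directly into a few lines of code. A minimal sketch in Python (the function name `sus_score` is illustrative):

```python
def sus_score(responses):
    """Compute the SUS score (0-100) from ten Likert responses (1-5).

    Odd-numbered items (positively worded): response minus 1.
    Even-numbered items (negatively worded): 5 minus response.
    The ten adjusted values are summed and multiplied by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses in the range 1-5")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A respondent who strongly agrees with every positive item (5) and
# strongly disagrees with every negative item (1) scores the maximum:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
# All-neutral responses (3 everywhere) land exactly in the middle:
print(sus_score([3] * 10))  # 50.0
```

The reversal of even-numbered items means a careless "sum the raw responses" calculation gives a wrong answer, which is why the adjustment step matters.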

07

Step 7: Analyzing the results

Examine the total SUS scores, as well as individual response patterns, to identify areas of high and low perceived usability. A higher SUS score indicates better usability, with a score above 68 considered above average.
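Aggregating per-participant scores and checking them against the benchmarks quoted in this guide (68 as the historical average; above 80 read as excellent) might look like the following sketch. The helper name and rating labels are illustrative, not standardized:

```python
from statistics import mean

def summarize_sus(all_scores):
    """Aggregate per-participant SUS scores into a study-level summary.

    Benchmarks follow this guide: 68 is the historical average, and
    scores above 80 are commonly read as excellent.
    """
    avg = mean(all_scores)
    if avg > 80:
        rating = "excellent"
    elif avg > 68:
        rating = "above average"
    else:
        rating = "at or below average"
    return {"mean": round(avg, 1), "n": len(all_scores), "rating": rating}

# Hypothetical per-participant scores from one study:
scores = [72.5, 65.0, 80.0, 70.0, 77.5, 67.5, 75.0, 70.0]
print(summarize_sus(scores))
```

Keeping the raw per-participant scores alongside the aggregate also lets you inspect individual item patterns later, as this step recommends.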

08

Step 8: Reporting and recommendations

Compile the findings from the usability testing and the SUS questionnaire in a comprehensive report. Use these findings to make recommendations for improving the product or system's usability.

09

Step 9: Iterating and retesting

Make the recommended improvements to the product or system and conduct additional rounds of usability testing and SUS questionnaires to measure the impact of these changes on the perceived usability.

EXPECTED OUTCOME

What to Expect

After administering the System Usability Scale, your team will have a single composite usability score for each participant and an aggregated score for the product overall. You can compare this score against the industry average of 68 and use established grade scales to communicate results to stakeholders. Individual question breakdowns will reveal which dimensions of usability are strongest and weakest, such as perceived complexity, need for support, or consistency. When conducted across releases, SUS scores provide a clear trend line showing whether design changes are improving the user experience. The standardized nature of the results makes them credible and easy to communicate, giving your team quantitative evidence to support design decisions and prioritize usability improvements.

PRO TIPS

Expert Advice

Administer SUS immediately after task completion while the experience is fresh in participants' minds.

Use the standard 10 questions exactly as written without modifying wording to preserve the scale's validity.

A SUS score of 68 is the historical average; scores above 80 are considered excellent usability.

Compare scores across releases rather than fixating on absolute numbers to track meaningful improvement.

Combine SUS with qualitative follow-up questions to understand the reasons behind the numeric scores.

Report both the overall score and individual question breakdowns to identify specific usability dimensions.

Ensure participants interact with the product before completing the survey because SUS measures perceived usability.

Calculate confidence intervals when comparing scores between designs to determine statistical significance.
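For the last tip, here is a rough confidence-interval sketch using only the Python standard library. It uses a z-based normal approximation; with the small samples typical of SUS studies, a t-distribution (for example via `scipy.stats.t`) would give slightly wider and more accurate bounds:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def sus_confidence_interval(scores, confidence=0.95):
    """Approximate confidence interval for a mean SUS score.

    Normal (z) approximation; with fewer than roughly 30 participants
    a t-distribution would be more accurate.
    """
    n = len(scores)
    m = mean(scores)
    se = stdev(scores) / sqrt(n)          # standard error of the mean
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return (m - z * se, m + z * se)

# Hypothetical scores for two competing designs:
design_a = [72.5, 65.0, 80.0, 70.0, 77.5, 67.5, 75.0, 70.0]
design_b = [85.0, 90.0, 82.5, 87.5, 80.0, 92.5, 85.0, 88.0]
lo_a, hi_a = sus_confidence_interval(design_a)
lo_b, hi_b = sus_confidence_interval(design_b)
# Non-overlapping intervals suggest (but do not prove) a real difference.
print(f"A: {lo_a:.1f}-{hi_a:.1f}, B: {lo_b:.1f}-{hi_b:.1f}")
```

If the intervals overlap substantially, a higher mean for one design is weak evidence on its own; collect more responses or run a formal test before declaring a winner.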

COMMON MISTAKES

Pitfalls to Avoid

Modifying the standard questions

Changing the wording of SUS questions invalidates the scale's psychometric properties and makes benchmark comparisons meaningless. Always use the original 10 questions exactly as published by John Brooke.

Interpreting as a percentage

The SUS score is not a percentage and should not be treated as one. A score of 68 does not mean 68% usability. Use established grade scales (A through F) or adjective ratings (excellent, good, poor) for proper interpretation.

Administering too late

Waiting hours or days after the usability session to administer SUS allows participants' impressions to fade and introduces recall bias. Always administer the questionnaire immediately after task completion while the experience is vivid.

Ignoring individual items

Reporting only the composite score without analyzing individual question responses hides valuable diagnostic information. Examine which specific items score low to understand whether issues relate to complexity, consistency, learnability, or confidence.

DELIVERABLES

What You'll Produce

System Usability Scale Survey

Standardized 10-item questionnaire measuring perceived system usability.

Participant Demographics

Summary of participant background information for segmented analysis.

Data Collection Plan

Outline of when and how SUS surveys will be administered to users.

Quantitative Results

Aggregated SUS scores with individual item breakdowns per user group.

Usability Benchmark

Comparison of SUS scores against industry averages and prior releases.

Insights & Recommendations

Analysis of results with actionable recommendations for improvements.

Data Visualization

Charts and graphs communicating usability scores to stakeholders.

Longitudinal Analysis

Tracking of SUS scores over time to assess improvement trends.


METHOD DETAILS
Goal: Planning & Analysis
Sub-category: Online surveys
Tags: System Usability Scale · SUS · usability testing · questionnaire · usability score · benchmarking · quantitative research · survey · user satisfaction · usability metrics
Related Topics: Usability Testing · User Experience Metrics · Quantitative UX Research · Benchmarking · Survey Design · Human-Computer Interaction
HISTORY

The System Usability Scale was created by John Brooke in 1986 while he was working at Digital Equipment Corporation (DEC). Brooke developed SUS as a 'quick and dirty' usability scale that could be administered immediately after a user interacted with a system. The scale was originally published in a 1996 book chapter, 'SUS: A Quick and Dirty Usability Scale,' which described its development and scoring methodology. Despite its simplicity, subsequent research, including work by Jeff Sauro, demonstrated that SUS possesses strong psychometric properties, including high reliability and validity. The scale's popularity grew significantly in the 2000s as the UX field expanded and practitioners needed standardized metrics. In 2009, Bangor, Kortum, and Miller published a study establishing adjective ratings and letter grades for SUS scores, making interpretation more accessible. Today, SUS has been used in thousands of studies across industries and is considered one of the most well-validated usability questionnaires available.

SUITABLE FOR
  • Quantitative usability assessment that enables benchmarking against industry standards
  • Tracking usability improvements across multiple product releases over time
  • Comparing the perceived usability of different design alternatives or prototypes
  • Establishing baseline usability metrics before starting a redesign project
  • Supplementing qualitative usability findings with standardized quantitative data
  • Reporting usability status to stakeholders using a well-known industry metric
  • Remote and unmoderated usability testing where quick post-task surveys are needed
  • Validating that design changes have genuinely improved perceived ease of use
RESOURCES
  • System Usability Scale (SUS): The System Usability Scale (SUS) is a reliable tool for measuring usability. It consists of a 10-item questionnaire with five response options, from Strongly agree to Strongly disagree.
  • How To Use The System Usability Scale (SUS) To Evaluate The Usability Of Your Website: The SUS is a 10-question questionnaire that offers a quick, cost-effective, yet accurate way to evaluate the usability of a website.
  • The System Usability Scale (SUS) (Video): The SUS is a well-established 10-question survey administered at the end of a user test; it gives you a measure of the perceived usability of your product and enables you to compare it with others.
  • How to use the System Usability Scale in modern UX: To conduct usability testing without spending a fortune or requiring a large team, turn to the System Usability Scale.
RELATED METHODS
  • Analysis of Cognitive Work
  • Benchmarking
  • Business Model Canvas
RELATED ARTICLES
  • The Mixed-Initiative Interface: Designing Control Handoffs Between Humans and AI
    UX & AI · 23 min read