Capture emotional responses and desirability perceptions by having users select adjectives that describe their product experience.
Product Reaction Cards use curated adjective sets to capture users' emotional responses and perceptions of a product, revealing desirability beyond usability.
Product Reaction Cards present participants with a curated set of adjectives -- words like 'intuitive,' 'cluttered,' 'trustworthy,' or 'overwhelming' -- and ask them to select the ones that best describe their experience with a product. Originally developed as part of Microsoft's Desirability Toolkit, this method captures emotional responses and subjective perceptions that standard usability metrics like task completion rates and error counts cannot reveal. After participants select their words, the follow-up conversation about why they chose each adjective provides rich qualitative data about how users feel about the product's personality, brand alignment, and overall desirability. UX researchers, product designers, and brand strategists use reaction cards during usability sessions, prototype reviews, or concept evaluations to measure whether the product evokes the intended emotional response. The method is particularly valuable for comparing design alternatives, tracking perception shifts across iterations, and identifying gaps between what the team intends the product to communicate and what users actually experience. By aggregating word selections across participants, teams can quickly identify patterns in emotional response and translate them into design direction that goes beyond functional usability to address how users feel about the product.
Create a set of cards with adjectives or short phrases that describe various attributes and reactions one might have to a product or experience. These cards can include positive, neutral, or negative emotions or reactions, such as 'intuitive,' 'frustrating,' 'innovative,' 'boring,' etc. Prepare around 50-100 cards to ensure a diverse range of responses from users.
Choose a sample of users representing your target audience. These users should have engaged with the product or prototype under test. Aim for a diverse group of participants to better understand the range of reactions from various user personas.
Explain the purpose of the exercise and the Product Reaction Cards to the participants. Make sure they understand that they'll be selecting cards that best represent their feelings and reactions to the product.
Allow the participants to fully engage with the product or prototype, ensuring they interact with its key elements and functionality. Give them ample time to develop an understanding of the product experience to accurately select reaction cards.
Ask the participants to choose 3-5 cards from the set that best describe their reactions to the product or experience. Encourage them to pick both positive and negative reactions to highlight strengths and pain points in the product.
Once participants have chosen their cards, encourage them to explain their choices. Ask them why they picked those particular cards, and identify any areas where there was consensus among participants. This will provide valuable context for further analysis.
Document the chosen cards and comments from participants, either through note-taking, audio, or visual recordings. Analyze the aggregated data to identify common themes or patterns across users' reactions.
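The aggregation step can be as simple as a tally of how many participants chose each word. A minimal sketch in Python, using entirely hypothetical participant data for illustration:

```python
from collections import Counter

# Hypothetical card selections from six participants (illustrative data only)
selections = {
    "P1": ["intuitive", "clean", "slow"],
    "P2": ["intuitive", "trustworthy", "cluttered"],
    "P3": ["slow", "frustrating", "intuitive"],
    "P4": ["clean", "intuitive", "boring"],
    "P5": ["trustworthy", "clean", "slow"],
    "P6": ["intuitive", "innovative", "slow"],
}

# Count each word at most once per participant, then tally across everyone
frequency = Counter(
    word for words in selections.values() for word in set(words)
)

# Sort by frequency to surface the strongest patterns first
for word, count in frequency.most_common():
    print(f"{word}: {count}/{len(selections)} participants")
```

Words chosen by most participants point at consensus perceptions worth investigating in the follow-up comments; words chosen once are usually noise.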
Based on the analysis, draw actionable insights about which aspects of the product need improvement. This can help you create a roadmap for a better product experience, or further validate the aspects that are already working well.
Present your findings to the relevant stakeholders or project teams, outlining the key takeaways and suggestions based on the user reactions. Ensure that all insights are clearly communicated and actionable, allowing for informed decision making and improvements to the product.
After conducting a Product Reaction Cards study, your team will have a clear picture of how users emotionally perceive your product, expressed in their own vocabulary rather than abstract ratings. Word frequency analysis reveals which adjectives most participants associate with the experience, highlighting both strengths to preserve and concerns to address. The follow-up explanations provide rich qualitative context that connects emotional reactions to specific design elements, features, or interactions. Teams can compare results across design alternatives to make evidence-based decisions about which direction better aligns with brand intent and user expectations. Over multiple iterations, reaction card data shows whether design changes are moving perceptions in the desired direction. The results translate directly into design guidance about tone, personality, and emotional quality that complements functional usability improvements.
Always follow up card selection with 'Why did you choose this word?' -- the explanation is far more valuable than the card selection itself.
Balance positive and negative words equally in your card set to avoid biasing responses toward favorable or unfavorable reactions.
Ask participants to select both words that apply AND words that definitely do not apply for richer, more nuanced data.
Compare word selections across different user segments to uncover perception gaps between audiences.
Use the same card set consistently across projects and iterations to build benchmark data you can track over time.
Combine with usability testing -- have users select cards immediately after completing tasks for emotional context on task performance.
Track word frequency across participants to identify meaningful patterns; single occurrences from individual users are less significant.
Consider using a shorter 68-word version for quicker sessions, or customize the set while being careful about introducing your own biases.
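The segment-comparison tip above can be sketched with a few lines of Python. This example normalizes counts to proportions so segments of different sizes are comparable; the segment names and word choices are hypothetical:

```python
from collections import Counter

# Hypothetical selections grouped by user segment (illustrative data only)
segments = {
    "new_users": [
        ["intuitive", "clean"],
        ["overwhelming", "cluttered"],
        ["intuitive", "slow"],
    ],
    "power_users": [
        ["efficient", "intuitive"],
        ["efficient", "trustworthy"],
        ["slow", "efficient"],
    ],
}

# For each segment, compute the proportion of participants choosing each word
profiles = {}
for segment, participants in segments.items():
    counts = Counter(word for words in participants for word in set(words))
    n = len(participants)
    profiles[segment] = {word: count / n for word, count in counts.items()}

for segment, profile in profiles.items():
    print(segment, profile)
```

Large gaps between segment profiles (here, 'efficient' chosen by every power user but no new users) flag perception differences worth probing in follow-up interviews.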
The card selection alone provides limited insight. The real value comes from asking 'Why did you choose this word?' Without the explanation, you miss the context that makes the data actionable.
Having more positive than negative words (or vice versa) biases results. Ensure roughly equal positive, negative, and neutral adjectives so participants can express their authentic experience.
Individual word choices are highly variable. With fewer than 8-10 participants, patterns are unreliable. Collect enough data points to identify meaningful frequency patterns in word selections.
Running reaction cards once provides a snapshot but not a trajectory. Use the same card set across iterations to track whether design changes move perceptions in the intended direction.
Words carry different connotations across cultures. 'Sophisticated' may be positive for some users and negative for others. Test your card set with a diverse pilot group before full deployment.
Printed or digital adjective cards for capturing user perceptions.
Document defining participant demographics and recruitment criteria.
Prepared physical or digital space with required equipment.
Structured guide with tasks, instructions, and follow-up questions.
Documentation securing participant consent for data collection.
Guidelines explaining how participants should select and sort cards.
Pre-formatted document for recording card selections and comments.
Recorded sessions capturing card selection moments and explanations.
Report with word frequency analysis, themes, and perception patterns.
Actionable improvements based on desirability findings and insights.