Selection Criteria: How to Pick Your Participants

Summary: Rigorous selection criteria protect study validity. Learn how to define inclusion, exclusion, and diversity criteria to avoid costly misrecruits.

Great research starts with choosing the right people. This article explains how to define clear selection criteria that go beyond demographics, so you recruit participants who can actually help you answer your research questions and avoid costly misrecruits.

Why Selection Criteria Matter

Imagine you are running a study to research a new fitness tool. You’ve written a flawless research plan, your tasks are ready, and you’ve recruited some people to participate. But when the first participant shows up, things start to fall apart: it’s clear that they’re not into fitness and would never engage with your product.

You pay them and send them on their way, but after a few more sessions, a frustrating pattern emerges. Your data is all over the place, and you don't have the clarity your team needs.

This scenario is incredibly common, and it usually happens for one reason: in the rush to get feedback, the researcher skipped or rushed the process of defining their selection criteria. While it is tempting to jump straight into recruiting or creating a screener survey, failing to thoughtfully define exactly who you need in your study — and who you don't — is a recipe for wasted time and invalid data.

The Cost of Recruiting the Wrong Participants

Effective selection is the bedrock of external validity, ensuring results are accurate and applicable in the real world.

A study has external validity if the participants and the study setup are representative of the real-world situation in which the design is used.

Poorly chosen participants result in poor external validity. Without it, the insights you gain may not translate to the actual audience you’re designing for, resulting in misleading findings.

Sloppy screening results in misrecruits, categorized into three types:

- Poor-fit candidates: Individuals lacking the necessary experience (e.g., an accounting student trying to provide the insights of an accountant with 20 years of experience)
- Professional testers: Individuals who participate in as many studies as possible to make money. Because they’ve participated in so many studies, they’re too attuned to researchers’ goals and are not representative of “regular” users.
- Bad actors: Malicious individuals who exploit the system for incentives, often lying about qualifications or using AI to speed through screeners

When considering the total cost of misrecruitment, it's important to recognize there are two distinct scenarios that can negatively impact the research process.

If the researcher identifies that a participant is a misrecruit during the session, they are still ethically obligated to compensate the individual for their time. It isn’t the participant’s fault they were selected for the study, unless they intentionally lied to qualify (which we’ve seen happen many times in our studies over the decades). NN/g’s standard policy is not to confront the individual in that scenario; especially in in-person studies, we’d rather report the person to the recruitment platform later than risk our researchers’ safety.

In these situations, the immediate costs include the incentive paid to the participant, the researcher’s time spent on the session, and any potential project delays incurred while finding and recruiting a suitable replacement. These are tangible, direct losses that can be accounted for.

The more problematic scenario occurs when the researcher does not realize the participant isn’t the right fit and lets them complete the study. In this case, the inaccurate data collected is incorporated into the findings and recommendations.

Depending on the study setup, this data can lead to misleading insights, potentially causing the business to make poor decisions — such as developing the wrong product, prioritizing incorrect features, or misunderstanding user needs. The consequences here are more insidious and far-reaching, as flawed data can undermine the validity of the entire study and have a lasting negative impact on business outcomes.

Both scenarios highlight the importance of thorough screening and selection to ensure participants genuinely match the desired user profile and help maintain the integrity and external validity of the research.

Move Beyond Demographics

When researchers define selection criteria, they often default to demographics (age, gender, income) because the data is readily available within recruitment platforms. However, relying on demographics alone as a proxy for behavior is a common mistake.

For example, if your criteria are "Men, born between 1947 and 1949, who are wealthy, have been married more than once, and own large estates," you could end up recruiting Ozzy Osbourne, George Foreman, Sir Elton John, and King Charles III. These are four wildly different people with different motivations and behaviors.

To get the most accurate insights, your selection criteria should prioritize:

- Behavioral information: What people actually do or have done. Past experiences shape mental models and are the strongest predictor of future behavior. For example, if you are designing an app for international travelers, you want people who have recently traveled internationally, not just people who want to.
- Attitudinal information: What people believe, value, or prefer. This helps you find participants who are genuinely invested in the topic, which leads to more honest and thoughtful feedback.

The Three Types of Selection Criteria

To ensure your study is valid and represents your actual target audience, you need to clearly define three specific types of criteria:

- Inclusion criteria
- Exclusion criteria
- Diversity criteria

Inclusion Criteria = Who You Want

These are the specific attributes that make someone a "good fit" or eligible for your study. These criteria should be specific, relevant, and directly tied to the behaviors you are researching. For example, if you are testing a new bird-watching app, your inclusion criteria should be people who bird-watch as a hobby and own a smartphone.

You should also distinguish between best-fit and good-fit:

- Good fit: Likes nature/hiking and has a smartphone.
- Best fit: Specifically selects “Birding” as a primary hobby and uses a smartphone for outdoor activities.

Exclusion Criteria = Who You Need to Rule Out

A common misconception is that exclusion criteria are just the opposite of inclusion criteria, but that isn't true. Exclusion criteria are attributes that might introduce bias or noise into your study. For example, you might exclude UX professionals, web developers, or industry insiders because they are likely to provide an "expert review" of the interface rather than realistic user data.
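The relationship between inclusion and exclusion criteria can be sketched in code. The following is a minimal, hypothetical example using the article's bird-watching app scenario; the `Candidate` fields, the `EXCLUDED_OCCUPATIONS` list, and the `is_eligible` function are illustrative inventions, not part of any real screening tool. The key point it demonstrates is that exclusion is a separate check, not merely the negation of inclusion: a candidate can pass every inclusion criterion and still be ruled out.

```python
from dataclasses import dataclass, field


@dataclass
class Candidate:
    """Hypothetical screener answers for the bird-watching app example."""
    hobbies: set[str] = field(default_factory=set)
    owns_smartphone: bool = False
    occupation: str = ""


# Hypothetical exclusion list: industry insiders who would tend to give
# an "expert review" of the interface rather than realistic user data.
EXCLUDED_OCCUPATIONS = {"ux professional", "web developer"}


def is_eligible(c: Candidate) -> bool:
    """Apply inclusion criteria, then independently apply exclusion criteria."""
    # Inclusion: bird-watches as a hobby AND owns a smartphone
    included = "birding" in c.hobbies and c.owns_smartphone
    # Exclusion: a separate check, not just the opposite of inclusion
    excluded = c.occupation.lower() in EXCLUDED_OCCUPATIONS
    return included and not excluded
```

A web developer who genuinely bird-watches would satisfy every inclusion criterion, yet `is_eligible` still returns `False` for them, which is exactly the distinction the paragraph above draws.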

Diversity Criteria = Who Provides a Balanced Representation

These are attributes used to ensure your participants represent a realistic population mix (e.g., “a range of tech-savviness or income levels”) and to avoid skewed perspectives. If you are researching an airline app, you wouldn't want to include only first-class passengers in your study; you would want a mix of travel budgets, as well as a mix of domestic and international travelers. A great way to track these goals is by using a recruitment matrix to balance your quotas as candidates apply.

Recruitment Matrix

To ensure a balanced and realistic sample, researchers should build a recruitment matrix that maps primary behavioral or attitudinal segments (as rows) against diversity criteria (usually demographic or location information). This prevents over‑representing one type of user.

Example Matrix: Bird-Watching Audio Identification App (Target: 8 Participants)

| Segment               | Goal | Under 40 | 40+ | Urban | Rural/Suburban |
|-----------------------|------|----------|-----|-------|----------------|
| Interested in birding | 3    |          |     |       |                |
| Hobbyist Birders      | 3    |          |     |       |                |
| Experienced Birders   | 2    |          |     |       |                |
| TOTALS                | 8    | 4        | 4   | 4     | 4              |

In this matrix, a single participant can satisfy multiple criteria (for example, an experienced birder over 40 who lives in a suburban area). The matrix is not a rigid quota system but a balancing tool: once you’ve filled your target for one attribute (such as urban users), you should preferentially recruit candidates who help fill remaining gaps.
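A simple way to see how the matrix works as a balancing tool is to track quotas in code as candidates are accepted. The sketch below is a hypothetical illustration built around the example matrix above; the segment names, goal numbers, and the `try_accept` function are assumptions for demonstration, not a prescribed implementation. Note how one accepted participant credits every diversity cell they satisfy.

```python
from collections import Counter

# Quota targets taken from the example matrix (8 participants total)
SEGMENT_GOALS = {"interested": 3, "hobbyist": 3, "experienced": 2}
DIVERSITY_GOALS = {"under_40": 4, "40_plus": 4, "urban": 4, "rural_suburban": 4}

filled = Counter()  # running tally of every cell in the matrix


def try_accept(segment: str, attributes: list[str]) -> bool:
    """Accept a candidate only while their segment quota has room,
    then credit each diversity cell they satisfy."""
    if filled[segment] >= SEGMENT_GOALS[segment]:
        return False  # this row of the matrix is already full
    filled[segment] += 1
    for attr in attributes:
        filled[attr] += 1  # one participant can satisfy multiple criteria
    return True


def gap(attr: str) -> int:
    """How many more participants with this attribute are still needed;
    useful for preferring candidates who fill remaining gaps."""
    return DIVERSITY_GOALS[attr] - filled[attr]
```

For example, accepting an experienced birder over 40 from a suburban area increments the "Experienced Birders" row and the "40+" and "Rural/Suburban" columns at once; once a quota such as `experienced` reaches its goal, further candidates in that segment are declined, and `gap()` can guide which applicants to prioritize next.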

Key Takeaways for Successful Recruitment

The foundation of effective research lies in selecting the right participants — those who truly represent the diversity and behaviors relevant to your study. Using a balanced recruitment matrix flexibly and focusing on key diversity factors helps researchers create samples that mirror real-world user experiences.

Ultimately, investing care and rigor in your participant-selection process transforms research from a box-checking exercise into a source of genuine insight. Thoughtful planning at this critical stage reduces bias, maximizes the value of your findings, and helps turn user feedback into decisions you can trust.
