Recruiting hard-to-reach populations for government research

The phrase “hard-to-reach” gets used as a single category, but it covers at least four distinct populations whose barriers, motivations, and risks are different: older people, people with low digital literacy, people whose first language isn’t the dominant one, and people in active claimant relationships with the state. They often overlap, and the overlap is where standard recruitment most clearly fails.

This guide covers what makes each group hard to reach, the recruitment tactics that actually work, and the ethical issues that get sharper in a public-sector context.

## Why standard recruitment misses them

Most user recruitment is built around three assumptions: that participants are reachable digitally, that they will respond to a screener within a few days, and that they are willing to sign up to an unfamiliar entity in exchange for a modest incentive. Each of these assumptions filters out a different part of the population you are meant to be serving.

For a commercial product this is a research problem. For a government service it is a legitimacy problem. The service is mandatory for the citizens it excludes, and the people you don’t research are disproportionately the ones who depend on the service most. If your sample looks like the average user of the existing digital channel, you have built a feedback loop that confirms your own design assumptions.

The first move is to stop treating “hard-to-reach” as a single demographic and start treating it as four overlapping populations with different barriers.

## Older people

The barrier is usually not age but the cluster of things age correlates with: less recent exposure to changing interfaces, fewer trusted digital contacts to vouch for unfamiliar links, more cautious threat models around scams (often well-calibrated), and physical factors like vision, hearing, and dexterity that change which channels work.

What works for recruitment:

– Recruit through trusted intermediaries: senior centres, libraries, charity networks (Age UK or your country’s equivalent), GP practices, faith communities. A flyer on a community board outperforms a Facebook ad.
– Offer landline phone screening. A research email from an unknown domain looks like a scam, and participants are right to think so.
– Pay in vouchers or cash, not bank transfers from an unfamiliar entity.
– In-home sessions get richer data than lab sessions for participants whose mobility is limited, but they require a clear safeguarding protocol: two researchers, ID cards, advance written confirmation, and a check-in contact.
– Allow a companion to be present without participating. Many older participants will only consent on this condition, and the companion’s silent presence rarely contaminates the data.

What does not work: online sign-up forms, gift card codes, “we’ll send you a Zoom link”, or anything that requires installing an app to participate.

## People with low digital literacy

This population overlaps with older people but extends well beyond it: working-age adults whose jobs do not involve computers, people who use a smartphone exclusively, people who learned to read on paper and never bridged to screens, and people whose only device is shared in a household. Various national digital inclusion surveys consistently place this group at 10–20% of working-age adults in developed economies — larger than most government delivery teams assume.

Recruitment cannot use any channel that pre-selects against the barrier. Concretely:

– Screen by phone, not by online form.
– Recruit through assisted-digital networks: libraries, Citizens Advice, council customer service centres, foodbanks, job centres. These organisations already have warm relationships with the people you cannot reach directly.
– Pay attention to which device they actually own. “Have you got internet at home?” and “Have you got a smartphone with mobile data?” produce different samples.
– Consider running sessions in the assisted-digital location they already use, with a member of staff present whom they trust.

The research design itself needs to change too, but that is a separate topic. For recruitment alone, the priority is that the channel you use to recruit must not require any of the capabilities you are trying to study.

## People whose first language is not the dominant one

Three barriers stack here: language proficiency, cultural distance from how research is framed in the dominant culture, and — often — immigration status creating reasons to avoid official-looking contact.

The single highest-leverage move is to recruit and moderate in the participant’s first language. This is more expensive than it looks because it isn’t just translation — it requires moderators who can hold a research conversation in that language, not interpreters relaying questions. A simultaneously interpreted session loses about half of what makes qualitative research useful. If budget forces a choice, prefer fewer sessions with native-language moderators over more sessions with interpreters.

Tactics:

– Partner with community organisations: diaspora associations, religious institutions, ESOL providers, refugee support charities. These are the recruitment networks; they also help you frame the study in a way that lands.
– Translate not just the screener but the consent form, the incentive description, and the no-pressure-to-continue language. Translated consent forms get read; English ones put in front of second-language participants get nodded through.
– Be explicit and credible about confidentiality and data sharing with immigration or enforcement authorities. For some populations this is a make-or-break trust threshold. Don’t say “we don’t share data” unless your legal team has confirmed it for this specific study; if you cannot confirm it, say so honestly and let participants decide.
– Pay in cash or universally accepted vouchers. Recruitment that requires a national bank account, a tax number, or formal ID documents excludes the population you are trying to research.

## Claimants and people in precarious relationships with the state

This is the hardest of the four because the difficulty is structural. Someone who is currently applying for asylum, contesting a benefit decision, awaiting a housing allocation, or under a sanction has reasons to be careful about anyone official approaching them for research. The risks they perceive — even when your study has nothing to do with their case — are not paranoid: information given casually has been used against claimants before.

The principles here are stricter than for any other group:

– Recruit through trusted intermediaries with no enforcement role: charities, advice services, advocacy organisations. Never recruit through the case-handling agency itself.
– Separate the research clearly from any operational system. This means a different email domain, a different brand, different rooms, different people. If your research email contains the name of the agency that handles their case, your response rate from this population collapses, and the responses you do get are filtered to the people least at risk.
– Be honest about how participation relates to their case: do not imply it might help, and do not claim a certainty of insulation you cannot guarantee. If they choose to share something about their case, do not record it.
– Pay generously and in a form that does not count towards means-tested benefit calculations. Check the specific rules in your jurisdiction: research incentives are usually disregarded up to a threshold, but the rules change, and getting this wrong can cost a participant more than the incentive is worth.
– Offer to do the session at a location of their choice, with a support worker present if they want one.

For some sub-populations within this group — survivors of domestic abuse approaching a refuge service, people leaving prison, undocumented migrants — the standards are higher still and require partnership with specialist organisations from the design of the study, not just the recruitment.

## Cross-cutting tactics

A few things apply across all four groups.

**Intermediaries are not a shortcut.** Charities and community organisations open doors, but they will rightly protect their relationships with the people they serve. Expect to spend time briefing them on what the research is and is not, to share your discussion guide in advance, to pay a fair contribution to their costs, and to share back what you learned. Treat them as research partners, not as recruitment agencies.

**Pay properly.** The default UX incentive scale is built around a participant pool that can give up the time easily. For people on low or precarious incomes, the same amount is a meaningful sum that should compensate for travel, childcare, lost shifts, and the cognitive load of participating. Pay at the top of your scale, not the bottom, and pay in cash or vouchers the participant can actually use.

**Get the location right.** A government building is a recruitment filter — the people who will not enter it are exactly the ones you need to research. Run sessions in libraries, community centres, charity offices, or the participant’s home. Travel to participants more often than you ask them to travel to you.

**Allow more time.** Sessions with first-language barriers, low digital confidence, or trauma-related caution take longer than the equivalent session with a confident digital user. Don’t run 60-minute slots back-to-back. Build in time for the participant to set the pace.

**Match the modality to the participant, not to your operations.** Remote video research is cheap and scalable; for these populations it filters aggressively for the confident, connected, and unworried. In-person, phone-only, and even paper-based research are all legitimate methods and produce different samples. Budget and plan for the mix the study requires, not the one your remote tooling supports.

## On sampling, briefly

Standard qualitative sample sizes (5–8 per segment) were calibrated against general populations. For hard-to-reach groups they are usually too small for two reasons. The within-group variance is higher — a “low digital literacy” sample of five could include a smartphone-native 35-year-old, an offline-only 70-year-old, and three different positions in between. And the cost of misreading the group is asymmetric — a missed pattern excludes real people from a real service.

Plan for 8–12 per segment where the segment is genuinely a target population for the service, accept that the cost per participant will be two to four times your standard, and report sample composition in detail. A research report that says “we spoke to 8 older participants” without disclosing that all eight were independently mobile, in their own homes, with home broadband, is worse than no research, because it confers an unearned sense of coverage.
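
To make the budgeting arithmetic concrete, here is a minimal sketch. The per-participant baseline figure is an assumed placeholder, not a recommendation (substitute your own, in your own currency); the 8–12 range and the 2–4× multiplier are the ones from the paragraph above.

```python
# Minimal sketch of the sampling budget arithmetic. The baseline cost is an
# assumed placeholder covering incentive, recruitment, and logistics.
STANDARD_COST_PER_PARTICIPANT = 150  # assumption: substitute your own figure

def segment_cost(participants: int, multiplier: float) -> int:
    """Total cost of researching one hard-to-reach segment."""
    return round(participants * multiplier * STANDARD_COST_PER_PARTICIPANT)

# 8-12 participants per segment, at 2-4x the standard cost per participant:
low = segment_cost(8, 2.0)    # 2,400
high = segment_cost(12, 4.0)  # 7,200
print(f"Budget {low:,}-{high:,} per segment, against "
      f"{segment_cost(8, 1.0):,}-{segment_cost(12, 1.0):,} at standard rates.")
```

The value of writing this down is less the numbers than forcing the multiplier into the plan: a budget line that assumes standard cost per participant is a budget line from which the hard-to-reach segments get cut first.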

## A short ethical checklist

Before fielding a study with any of these populations, you should be able to answer yes to all of these:

1. Could a participant withdraw at any point, with their incentive intact, without explanation?
2. Have you confirmed that the incentive does not affect any benefit they receive?
3. Have you confirmed with your legal team what is and isn’t shared with case-handling parts of the same agency?
4. Is the consent form available in the participant’s first language?
5. Is there a route for a participant to raise a concern about the research that does not go through you?
6. Have you briefed the intermediary organisation and offered to share findings back with them?
7. Have you set a clear protocol for what happens if a participant discloses something that suggests they are at risk?

If you cannot answer yes to all seven, the study isn’t ready to field, regardless of timeline pressure.
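
If your team tracks study readiness in research-ops tooling, the same gate can be encoded mechanically. This is a hypothetical sketch, not an official schema: the field names are paraphrases of the seven questions above.

```python
# Hypothetical sketch: the seven-point checklist as a fielding gate.
from dataclasses import dataclass, fields

@dataclass
class EthicsChecklist:
    withdrawal_keeps_incentive: bool   # 1. withdraw any time, incentive intact
    incentive_benefits_checked: bool   # 2. incentive does not affect benefits
    data_sharing_confirmed: bool       # 3. legal confirmed what is shared internally
    consent_in_first_language: bool    # 4. translated consent form available
    independent_complaint_route: bool  # 5. concern route that does not go through you
    intermediary_briefed: bool         # 6. intermediary briefed, findings shared back
    risk_disclosure_protocol: bool     # 7. protocol for at-risk disclosures

def ready_to_field(checklist: EthicsChecklist) -> bool:
    """True only if every item is satisfied; timeline pressure is not an input."""
    return all(getattr(checklist, f.name) for f in fields(checklist))

study = EthicsChecklist(True, True, True, True, True, False, True)
assert not ready_to_field(study)  # one unbriefed intermediary blocks fielding
```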
