The situation (realistic case study)
A Canadian SaaS company (250 employees) was preparing for ISO 27001 certification.
They had completed an internal audit. They had a long checklist. They had “evidence” in a SharePoint folder.
Pre-assessment feedback from an external auditor
- “Your samples don’t prove the control operates consistently.”
- “I can’t trace what you tested.”
- “I don’t know why you selected these samples.”
The company wasn’t failing security. They were failing audit evidence credibility.
That’s what sampling fixes.
What went wrong (and why auditors didn’t trust it)
Three common mistakes
1) They sampled what was easy, not what was risky
They chose one clean ticket, one recent policy, one “best-case” example.
2) They didn’t define a sampling logic
“They were available” and “They looked representative” are justifications, not a method.
3) They didn’t build traceability
No sample IDs. No audit trail. No mapping from control → test → sample → evidence → conclusion.
What “auditor-trusted sampling” looks like (in plain English)
Good sampling answers four questions clearly:
- What population did we sample from?
- How many did we test, and why?
- How did we pick them (method)?
- What did the sample prove (test + result)?
If you can’t answer those questions, you will eventually see weak internal audit results or a nonconformity in certification.
The fix: the vCISO sampling approach they implemented
Canadian Cyber (vCISO support) helped them rebuild their sampling method in one week.
The goal was not more samples. The goal was defensible samples.
Rule 1: Define population first
Write the population clearly before choosing samples.
Example (Access reviews):
Population: all quarterly access reviews completed between Jan 1 and Mar 31
Count: 22 systems reviewed
Rule 2: Sample across time, teams, and risk
Force coverage across time, people, and risk tiers. This prevents cherry-picking.
Rule 3: Link each sample to test + evidence
Every sample gets a sample ID, an evidence link, the test performed, and the result.
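Rule 3 is essentially a data structure: one register row per sample, linking control, test, evidence, and result. Here is a minimal sketch of such a row; the field names and IDs are illustrative assumptions, not from the case study.

```python
from dataclasses import dataclass, field

@dataclass
class SampleRecord:
    """One row in a sample register: traces a sample from control
    to test to evidence to conclusion. Field names are illustrative."""
    sample_id: str                 # e.g. "AC-2025Q1-003" (hypothetical ID scheme)
    control_ref: str               # control being tested
    item_ref: str                  # ticket/record drawn from the population
    test_performed: str            # exact check applied to this sample
    evidence_links: list = field(default_factory=list)
    result: str = "pending"        # pass / fail / exception

# Example row for an access-provisioning sample (all values hypothetical)
record = SampleRecord(
    sample_id="AC-2025Q1-003",
    control_ref="Access provisioning (JML)",
    item_ref="TICKET-1042",
    test_performed="Approval exists and access granted matches request",
    evidence_links=["tickets/TICKET-1042.pdf", "exports/idp-assignments.csv"],
    result="pass",
)
```

A register built from rows like this answers the auditor's "I can't trace what you tested" objection directly: every conclusion points back to a sample ID and its evidence.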
Case study: How they sampled three high-risk controls
Control area 1: Joiner–Mover–Leaver (Access provisioning)
Goal: prove access is granted, changed, and removed consistently.
Step 1: Define population
- All joiner tickets in Q1: 31
- All mover tickets in Q1: 14
- All leaver tickets in Q1: 9
Step 2: Risk-based sample selection
- 3 joiners (one per month)
- 2 movers (different departments)
- 2 leavers (one with elevated privileges)
Why auditors trust this: time coverage, process variants, and higher-risk cases.
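The "one per month" selection above is stratified random sampling: split the population by a coverage attribute (month, department, risk tier), then pick randomly within each stratum. A minimal sketch, assuming each ticket carries a month field and using a fixed seed so the selection is reproducible for the auditor:

```python
import random
from collections import defaultdict

def stratified_pick(tickets, key, per_stratum=1, seed=1):
    """Pick a fixed number of tickets from each stratum (e.g. each month)
    so the sample covers the whole period instead of one easy week."""
    rng = random.Random(seed)  # fixed seed -> selection is reproducible
    strata = defaultdict(list)
    for t in tickets:
        strata[key(t)].append(t)
    sample = []
    for name in sorted(strata):
        take = min(per_stratum, len(strata[name]))
        sample.extend(rng.sample(strata[name], take))
    return sample

# Hypothetical Q1 joiner tickets
joiners = [
    {"id": "J-01", "month": "Jan"}, {"id": "J-02", "month": "Jan"},
    {"id": "J-03", "month": "Feb"}, {"id": "J-04", "month": "Mar"},
]
picked = stratified_pick(joiners, key=lambda t: t["month"])
# one joiner per month -> three samples covering Jan, Feb, and Mar
```

Documenting the seed and the stratification key is what turns "we picked three tickets" into a method an auditor can re-run.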
Step 3: Test steps applied to each sample
- Approval exists
- Role-based access requested
- Access granted matches request
- Access removed within SLA (for leavers)
- Evidence retained (ticket history + logs)
Evidence attached
- Ticket link/export
- Identity admin export (assignment/removal)
- HR termination reference (where allowed)
Result: no major nonconformities. One minor improvement: leaver SLA clarified.
Control area 2: Vulnerability remediation (Patch SLAs)
Goal: prove vulnerabilities are triaged and fixed within defined timelines.
Step 1: Define population
- Critical vulnerabilities identified in Q1: 18
- High vulnerabilities identified in Q1: 62
Step 2: Risk-based sample selection
- 5 critical vulnerabilities (including the oldest open item)
- 5 high vulnerabilities (random + aged mix)
Why auditors trust this: includes worst-case items, not only clean records.
Step 3: Test steps
- Detection date
- Severity classification method
- Assigned owner
- Remediation action evidence
- Closure date vs SLA
- Exception/risk acceptance if overdue
Evidence attached
- Scanner export/snapshot
- Ticket(s) showing remediation
- Change record (if needed)
- Risk acceptance record (if any)
Result: two highs were overdue with no acceptance → corrective action raised (valid finding).
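The "closure date vs SLA" test that produced this finding can be sketched as a simple decision rule. The SLA day counts below are hypothetical placeholders; substitute your policy's timelines.

```python
from datetime import date

# Hypothetical SLAs by severity; replace with your policy's timelines.
SLA_DAYS = {"critical": 14, "high": 30}

def sla_check(severity, detected, closed, risk_accepted=False):
    """Return the test result for one sampled vulnerability:
    'pass' if closed within SLA, 'exception' if overdue but formally
    risk-accepted, 'fail' if overdue with no acceptance on file."""
    days_open = (closed - detected).days
    if days_open <= SLA_DAYS[severity]:
        return "pass"
    return "exception" if risk_accepted else "fail"

sla_check("high", date(2025, 1, 10), date(2025, 2, 1))  # 22 days open: "pass"
sla_check("high", date(2025, 1, 10), date(2025, 3, 1))  # 50 days, no acceptance: "fail"
```

Note that an overdue item with a documented risk acceptance is an exception, not a failure; the finding in this case study arose precisely because the acceptance was missing.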
Control area 3: Backups and restore testing
Goal: prove backups exist and restores are tested.
Step 1: Define population
- All backup jobs for critical systems: 12
- All restore tests performed in Q1: 3
Step 2: Sample selection
- 2 critical systems (customer-facing + internal finance)
- Latest restore test + one earlier test
Step 3: Test steps
- Backups completed successfully (job logs)
- Retention meets policy
- Restore test executed
- Restore achieved within target time
- Integrity validated (not just “it restored”)
Evidence attached
- Backup job logs/screenshots
- Restore test record (date, system, outcome)
- Approval sign-off (IT/security)
Result: restore testing frequency too low → improvement action added to management review.
The sampling framework auditors trust (copy/paste)
Use this template for each control you audit. It makes sampling repeatable and defensible.
Sampling record template
Control / Clause: [e.g., A.5.15 / Clause 9.2]
Population definition: [what + time period]
Population size: [count]
Sampling method: risk-based / random / stratified
Sample size: [how many + why]
Sample list: [IDs, dates, owners]
Test steps: [exact checks performed]
Evidence links: [tickets, logs, exports]
Result per sample: pass/fail/exception
Overall conclusion: effective / partially effective / not effective
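To make the template concrete, here is one filled-in record as a plain data structure. Every value is illustrative, not drawn from the case study.

```python
# A completed sampling record mirroring the template above.
# All identifiers, counts, and results are illustrative.
sampling_record = {
    "control_clause": "Access reviews / Clause 9.2",
    "population_definition": "Quarterly access reviews completed Jan 1 to Mar 31",
    "population_size": 22,
    "sampling_method": "risk-based",
    "sample_size": "3 (one per month, incl. one high-privilege system)",
    "sample_list": ["AR-2025Q1-04", "AR-2025Q1-11", "AR-2025Q1-19"],
    "test_steps": [
        "Review completed on schedule",
        "Reviewer is not the access owner",
        "Revocations actioned with evidence",
    ],
    "evidence_links": ["exports/access-review-04.xlsx"],
    "result_per_sample": {
        "AR-2025Q1-04": "pass",
        "AR-2025Q1-11": "pass",
        "AR-2025Q1-19": "exception",
    },
    "overall_conclusion": "partially effective",
}
```

Whether you keep records like this in a spreadsheet, a ticket system, or a GRC tool matters less than keeping every field populated for every control you test.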
How many samples should you pick?
There’s no single ISO rule. Auditors expect sampling to be defensible, consistent, and risk-based.
Practical internal audit baseline
| Control risk | Typical sample size | Note |
|---|---|---|
| Low-risk | 2–3 samples | Prove the control operates at least twice |
| Medium-risk | 3–5 samples | Include different owners or systems |
| High-risk | 5–10 samples | Include worst-case/aged items and admin scenarios |
Best practice
Increase sample size if you find failures. One failure often means the control is not operating consistently.
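The baseline-plus-escalation rule can be stated as a tiny function. The baselines below take the upper end of each range in the table, and the doubling on failure is an illustrative policy, not an ISO requirement.

```python
# Upper-end baselines from the table above (illustrative).
BASELINE = {"low": 3, "medium": 5, "high": 10}

def sample_size(risk, failures_found=0):
    """Start from the risk-tier baseline and expand the sample when any
    failure appears, since one failure suggests the control may not be
    operating consistently. Doubling is a placeholder escalation policy."""
    n = BASELINE[risk]
    return n * 2 if failures_found > 0 else n

sample_size("medium")                    # baseline: 5
sample_size("medium", failures_found=1)  # escalated: 10
```

Whatever escalation rule you choose, write it down before the audit; an expansion decided in advance is defensible, one improvised mid-audit is not.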
The 5 sampling mistakes that create audit findings
- Only sampling “perfect” records
- Sampling from the wrong population
- No time coverage (all samples from last week)
- No high-risk coverage (no admin, no sensitive systems)
- No traceability (no mapping from sample to evidence and test)
Want an internal audit sampling workbook?
If your team struggles with sampling consistency, we can give you a ready-to-run workbook and a repeatable method.
A vCISO-led sampling setup includes:
- sampling rules by control category
- evidence pack folder structure
- sample register template with auto traceability
- internal auditor question scripts per clause
Outcomes from the case study (what changed)
After implementing the sampling method:
- Internal audit results became reproducible
- Findings were stronger and easier to close
- Management review decisions were clearer
- Certification preparation improved significantly
Most importantly: the audit moved from checklist compliance to risk assurance.
About Canadian Cyber
Canadian Cyber helps Canadian organizations run internal audits that stand up to certification audits.
Our vCISO services include audit program design, evidence readiness, sampling methodology, and management review support so audits drive real improvement, not paperwork.
Follow Canadian Cyber
Practical cybersecurity + compliance guidance for Canadian teams.