Verification converts raw clinical data into credible evidence. Done right, it reduces risk (patient-safety issues, protocol deviations, misclassified endpoints), accelerates timelines (clean data, faster), and withstands regulatory inspections.

Key terms (quick SEO glossary)
- SDV (Source Data Verification): Checking eCRF entries against source records.
- SDR (Source Data Review): Reviewing source to assess quality and protocol compliance (not line-by-line).
- RBQM (Risk-Based Quality Management): Prioritize verification where risks are highest.
- ALCOA+: Attributable, Legible, Contemporaneous, Original, Accurate (+ Complete, Consistent, Enduring, Available).
- EDC: Electronic Data Capture.
- DVP: Data Validation Plan (edit checks + listings + roles).
- KRI: Key Risk Indicator (e.g., SAE under-reporting rate).
The Step-by-Step Process
Step 1 — Define your verification strategy (before first patient in)
- Map the risks (RBQM):
- Identify critical data & processes (CDPs): eligibility, informed consent, IP dosing, primary endpoint, SAEs.
- Score risks by likelihood × impact; set KRIs and thresholds (e.g., SAE rate/site vs study average); see the scoring sketch after this list.
- Choose the mix: SDV vs SDR vs centralized monitoring:
- Full SDV only for critical fields or high-risk sites.
- SDR to evaluate protocol adherence and data story.
- Central statistical monitoring (CSM) to detect outliers, digit preference, and unusual variance.
- Write the Monitoring Plan & DVP:
- Monitoring Plan: visit frequency, remote/on-site strategy, sampling %, triggers to increase SDV.
- DVP: edit checks (automatic & manual), query workflows, roles (DM/CRA/Medical/Stats), data listings.
- Define success metrics (KPIs):
- Open query aging (days), query rate/subject, missing forms %, ePRO compliance %, time to SAE entry, % critical fields verified, time from visit to EDC entry.
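To make the likelihood × impact scoring and KRI thresholding concrete, here is a minimal Python sketch. The risk items, 1–5 scales, and tier cut-offs are illustrative assumptions, not prescribed values.

```python
# Minimal risk-scoring sketch: likelihood x impact, then prioritize verification effort.
# The risk items, 1-5 scales, and tier cut-offs are illustrative assumptions.
risks = [
    {"process": "Primary endpoint capture", "likelihood": 3, "impact": 5},
    {"process": "SAE reporting",            "likelihood": 2, "impact": 5},
    {"process": "Eligibility confirmation", "likelihood": 2, "impact": 4},
    {"process": "Visit-window compliance",  "likelihood": 4, "impact": 2},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Highest scores get the most intensive verification (targeted SDV, extra listings, KRIs).
for r in sorted(risks, key=lambda x: x["score"], reverse=True):
    tier = "high" if r["score"] >= 12 else "medium" if r["score"] >= 6 else "low"
    print(f'{r["process"]:<26} score={r["score"]:>2}  tier={tier}')
```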
Step 2 — Build quality in the data flow (startup)
- CRF design for verification:
- Align fields to endpoints. Use controlled terms & skip logic to prevent bad data.
- Tag critical fields (eligibility, primary endpoint, dosing, AE/SAE) for focused SDV.
- Edit check specification (ECS):
- Range, consistency, cross-form checks, protocol window checks, univariate & multivariate rules (a minimal rule sketch follows this step).
- Document severity (hard vs soft), firing logic, and roles for resolution.
- Vendor & data transfer readiness:
- Data Transfer Agreements (including frequency, format, and QC rules).
- Lab, imaging, eCOA/ePRO, PK/PD, IWRS/RTSM feeds validated and traceable.
- Train sites on ALCOA+ and source standards:
- Source worksheets, eSource policy, contemporaneous entries, and corrections with audit trails.
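Edit-check logic can be prototyped as plain rules before it is configured in the EDC. A minimal sketch, assuming per-visit records; the field names (sbp, dbp, visit_date, target_date) and limits are invented for illustration.

```python
import pandas as pd

# Hypothetical visit records; field names and limits are invented for illustration.
visits = pd.DataFrame({
    "subject":     ["001", "002", "003"],
    "visit_date":  pd.to_datetime(["2024-03-01", "2024-03-02", "2024-04-20"]),
    "target_date": pd.to_datetime(["2024-03-01", "2024-03-01", "2024-03-01"]),
    "sbp":         [128, 310, 122],   # systolic BP, mmHg
    "dbp":         [82,  95,  130],   # diastolic BP, mmHg
})

queries = []

# Range check (hard): value outside a plausible physiological range.
for s in visits.loc[(visits.sbp < 60) | (visits.sbp > 260), "subject"]:
    queries.append((s, "SBP out of range 60-260 mmHg"))

# Consistency check (soft): diastolic should be lower than systolic.
for s in visits.loc[visits.dbp >= visits.sbp, "subject"]:
    queries.append((s, "DBP >= SBP"))

# Protocol window check (soft): visit more than 14 days from the target date.
late = (visits.visit_date - visits.target_date).abs() > pd.Timedelta(days=14)
for s in visits.loc[late, "subject"]:
    queries.append((s, "Visit outside the +/-14 day window"))

for subject, message in queries:
    print(f"Query for subject {subject}: {message}")
```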
Step 3 — Verify during conduct (ongoing)
- Run your edit checks early and often:
- Auto-queries generated daily. Investigators respond; CRAs/DMs review and close.
- Do risk-based SDV/SDR effectively:
- SDV (targeted): eligibility, consent, dosing, primary endpoint, AEs/SAEs, concomitant meds, IP accountability.
- SDR: review narrative consistency, protocol compliance, missing documentation, and training gaps.
- Centralized statistical monitoring (CSM):
- Compare sites for unusual patterns (flat vitals, identical timestamps, outlier lab variance).
- Escalate: Increase SDV at flagged sites, retrain, or conduct an audit.
- Reconciliations (make them routine):
- SAE reconciliation: EDC AE/SAE vs Safety Database (every 2 weeks until stable); a reconciliation sketch follows this step.
- Lab reconciliation: out-of-range & critical alerts acknowledged; subject IDs and visit windows match.
- eCOA/ePRO compliance: diary completion rate, back-filling detection.
- Imaging/ECG/Cardiac core labs: read status, adjudication outcomes, timestamps.
- IP accountability: dispensed/returned counts, temperature excursions, deviations.
- Medical coding QC (MedDRA/WHO-DD):
- Consistent verbatim terms; upgrade to higher specificity where justified.
- Periodic coding review listings for outliers and synonym harmonization.
- Query management discipline:
- Triage by priority (safety > endpoints > others), set SLAs for responses, and avoid ping-ponging.
- Track “query reopens” as a quality signal.
- Documentation & audit trail reviews:
- Regular audit trail spot-checks: late data changes, unusual user behavior, mass edits.
- File monitoring notes, data review minutes, and issue logs.
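As an example of routine reconciliation, the sketch below cross-checks EDC SAEs against a safety-database extract. The dataset shapes and column names are assumptions; real reconciliations follow the matching keys and fields defined in the reconciliation plan.

```python
import pandas as pd

# Hypothetical extracts; the matching keys and columns are invented for illustration.
edc_sae = pd.DataFrame({
    "subject":    ["001", "002", "003"],
    "onset_date": ["2024-02-10", "2024-03-05", "2024-03-20"],
    "term":       ["Pneumonia", "Myocardial infarction", "Sepsis"],
    "outcome":    ["Recovered", "Recovering", "Fatal"],
})
safety_db = pd.DataFrame({
    "subject":    ["001", "002"],
    "onset_date": ["2024-02-10", "2024-03-05"],
    "term":       ["Pneumonia", "Myocardial infarction"],
    "outcome":    ["Recovered", "Recovered"],   # outcome disagrees for subject 002
})

# Outer merge on the matching keys; events found in only one system are mismatches.
recon = edc_sae.merge(
    safety_db, on=["subject", "onset_date", "term"],
    how="outer", suffixes=("_edc", "_safety"), indicator=True,
)
unmatched  = recon[recon["_merge"] != "both"]
discrepant = recon[(recon["_merge"] == "both") &
                   (recon["outcome_edc"] != recon["outcome_safety"])]

print("Events missing from one system:\n", unmatched[["subject", "term", "_merge"]])
print("Field discrepancies:\n",
      discrepant[["subject", "term", "outcome_edc", "outcome_safety"]])
```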
Step 4 — Interim data review & freezes
- Clean-to-zero for critical data:
- All critical queries closed; critical SDV complete for interim cut.
- Soft freeze then QC check:
- Lock site by site or domain by domain; run listings; reconcile external feeds.
- Blind data review (if blinded):
- Confirm analysis populations, key protocol deviations, and endpoint computability.
- Document decisions:
- Deviation categorizations, exclusion rules, and data corrections, all logged.
Step 5 — Database lock (DBL) & close-out
- Pre-lock checklist:
- 100% SDV/SDR per plan for critical fields, all transfers complete, reconciliations cleared, coding finalized, audit trail reviewed, eSign-offs captured (a scripted readiness check follows this step).
- Lock & post-lock controls:
- Fully lock EDC; restrict post-lock changes (CAPA needed).
- Export analysis datasets; confirm traceability from source → eCRF → SDTM/ADaM.
- Inspection readiness pack:
- Monitoring Plan & changes, DVP, issue logs, KRI dashboards, CAPAs, vendor QCs, audit trail review report, training records.
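The pre-lock checklist lends itself to a scripted gate before anyone signs off. A minimal sketch; the metric names and the idea of pulling them from EDC, CTMS, and vendor reports are assumptions.

```python
# Minimal database-lock readiness gate; metric names and values are placeholders
# for figures pulled from EDC, CTMS, and vendor reports.
lock_metrics = {
    "open_critical_queries":      0,
    "critical_sdv_complete_pct":  100.0,
    "pending_external_transfers": 0,
    "sae_recon_mismatches":       0,
    "uncoded_terms":              0,
    "audit_trail_review_signed":  True,
}

rules = [
    ("open_critical_queries",      lambda v: v == 0),
    ("critical_sdv_complete_pct",  lambda v: v >= 100.0),
    ("pending_external_transfers", lambda v: v == 0),
    ("sae_recon_mismatches",       lambda v: v == 0),
    ("uncoded_terms",              lambda v: v == 0),
    ("audit_trail_review_signed",  lambda v: v is True),
]

failures = [name for name, ok in rules if not ok(lock_metrics[name])]
if failures:
    print("NOT ready to lock:", ", ".join(failures))
else:
    print("Pre-lock checks passed; proceed to e-signature workflow.")
```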
Step 6 — Submission & archiving
- Analysis verification:
- Double-program key endpoints (a comparison sketch follows this step); TLFs cross-checked to SDTM/ADaM; define.xml and reviewer guides validated.
- Traceability narrative:
- Clear “data lineage” diagrams for regulators: where each endpoint originated and how it was transformed.
- Archive per policy:
- Immutable and retrievable storage (EDC, eCOA, Core Labs, Safety, Programming).
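Double programming means the key endpoint is derived twice by independent programmers and the outputs are compared record by record. A minimal sketch, assuming both derivations land in DataFrames keyed by USUBJID; the column names follow ADaM conventions but the data are invented.

```python
import pandas as pd

# Hypothetical outputs of two independent derivations of the primary endpoint.
production = pd.DataFrame({"USUBJID": ["001", "002", "003"], "AVAL": [12.1, 9.8, 15.0]})
qc_run     = pd.DataFrame({"USUBJID": ["001", "002", "003"], "AVAL": [12.1, 9.9, 15.0]})

# Align on subject and flag any value-level differences between the two runs.
merged = production.merge(qc_run, on="USUBJID", suffixes=("_prod", "_qc"))
diffs = merged[(merged["AVAL_prod"] - merged["AVAL_qc"]).abs() > 1e-8]

if diffs.empty:
    print("Double programming: outputs match.")
else:
    print("Discrepancies to resolve before sign-off:\n", diffs)
```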
Practical tools & templates (copy-paste)
A) Risk assessment & KRI starter
- Critical data/processes: Consent, Eligibility, Dosing, Primary Endpoint, SAEs
- Risks: Under-reporting of AEs; protocol window violations; data entry delays
- KRIs (examples; a computation sketch follows the list):
- Median days from visit → eCRF entry (>5 days = alert)
- % missing primary endpoint fields (>2% = alert)
- SAE rate/site vs study average (|Z-score| > 2 = review)
- Query aging >14 days (% of open queries)
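These starter KRIs can be computed directly from EDC metadata and safety listings. A minimal sketch with invented site-level numbers; the thresholds mirror the examples above.

```python
import statistics

# Hypothetical site-level KRI inputs; thresholds mirror the starter list above.
sites = {
    "101": {"median_entry_days": 3, "missing_primary_pct": 0.5, "sae_rate": 0.10},
    "102": {"median_entry_days": 8, "missing_primary_pct": 3.1, "sae_rate": 0.02},
    "103": {"median_entry_days": 4, "missing_primary_pct": 1.0, "sae_rate": 0.09},
}

rates = [s["sae_rate"] for s in sites.values()]
mean_rate, sd_rate = statistics.mean(rates), statistics.stdev(rates)

for site, kri in sites.items():
    alerts = []
    if kri["median_entry_days"] > 5:
        alerts.append("entry delay >5 days")
    if kri["missing_primary_pct"] > 2:
        alerts.append("missing primary endpoint >2%")
    z = (kri["sae_rate"] - mean_rate) / sd_rate
    if abs(z) > 2:
        alerts.append(f"SAE rate outlier (z={z:.1f})")
    status = "ALERT: " + "; ".join(alerts) if alerts else "within thresholds"
    print(f"Site {site}: {status}")
```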
B) SDV sampling table (example)
| Data Domain | Fields | SDV % |
|---|---|---|
| Consent & Eligibility | Consent date/time, criteria | 100% |
| Safety | AEs/SAEs, relatedness, severity | 100% |
| Efficacy (primary) | Endpoint variables | 100% |
| Efficacy (secondary) | Key supporting fields | 50% (risk-adjust) |
| ConMeds, Medical Hx | Class-level review | SDR + targeted SDV |
C) Query SLAs
- Site response within 5 business days
- CRA/DM follow-up weekly
- Escalate >14 days or >2 reopens (see the escalation sketch below)
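A minimal sketch of how these SLAs can drive automatic escalation flags, assuming a query-listing export with an opened date and a reopen count (field names are invented).

```python
from datetime import date

# Hypothetical open-query listing; field names are invented for illustration.
today = date(2024, 6, 1)
open_queries = [
    {"id": "Q-101", "opened": date(2024, 5, 27), "reopens": 0},
    {"id": "Q-102", "opened": date(2024, 5, 10), "reopens": 1},
    {"id": "Q-103", "opened": date(2024, 5, 25), "reopens": 3},
]

for q in open_queries:
    age = (today - q["opened"]).days
    # Escalate per the SLA: open longer than 14 days or reopened more than twice.
    if age > 14 or q["reopens"] > 2:
        print(f'{q["id"]}: ESCALATE (age {age} d, reopens {q["reopens"]})')
    else:
        print(f'{q["id"]}: within SLA (age {age} d)')
```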
D) Close-out checklist (abridged)
- Critical fields SDV complete
- All edit checks rerun; zero high-severity open queries
- SAE, lab, imaging, eCOA, IP reconciliations = zero mismatches
- Coding finalized; MedDRA version documented
- Audit trail review signed
- Data review meeting minutes filed
- DB lock approvals recorded
Roles & responsibilities (who does what)
- Sponsor/CSPM: Owns RBQM strategy, approves plans.
- Data Management (DM): DVP, edit checks, reconciliations, listings, DBL.
- CRAs/Monitors: SDV/SDR execution, site coaching, escalation.
- Biostats/Programmers: Central monitoring, KRIs, double programming, analysis traceability.
- Safety/Pharmacovigilance: SAE reconciliation, MedDRA oversight.
- Medical Monitor: Clinical review, causality assessments, endpoint adjudication input.
- QA: Process audits, inspection readiness, CAPA effectiveness.
- Vendors (Labs, Imaging, eCOA, RTSM): Clean transfers, QC, documentation.
Common pitfalls (and quick fixes)
- Over-SDV with little impact: Switch to targeted SDV + SDR + CSM.
- Late data entry: Monitor timeliness KRI; retrain sites; enable remote review.
- Uncontrolled external feeds: Enforce DTAs, test transfers in UAT, and add acceptance QC.
- Query ping-pong: Use clear instructions; consolidate queries; set SLAs and coach.
- Last-minute reconciliations: Schedule reconciliations from the first transfer, not last month.
- Weak documentation: Log every decision; keep monitoring notes and minutes audit-ready.
SEO Add-ons (use in H2/H3s & FAQs)
Keywords to weave naturally: clinical trial data verification, source data verification checklist, SDV vs SDR, risk-based monitoring, ALCOA+ principles, centralized monitoring, data reconciliation, query management, database lock, inspection readiness.
FAQ (schema-ready content)
Q1. What’s the difference between SDV and SDR?
A. SDV checks data points against the source; SDR reviews overall data quality and compliance without line-by-line checking.
Q2. Is 100% SDV required?
A. No. Modern RBQM focuses 100% SDV on critical data and uses SDR + analytics for the rest.
Q3. Which reconciliations are mandatory?
A. Typically: SAE vs Safety DB, central labs, imaging/core labs, eCOA compliance, IP accountability—per your plan and risk profile.
Q4. When should I run central monitoring?
A. From the first patient in with periodic refresh. Use KRIs to trigger targeted actions.
Q5. What proves data integrity?
A. ALCOA+ evidence: audit trails, consistent timestamps, controlled corrections, and documented decisions.