Data Collection

How to maintain a rigorous data collection discipline

Regulators scrutinise data collection procedures closely. Poor collection practice suggests broader quality system failures. The difference between trials that generate convincing evidence and trials that regulators question comes down to whether teams maintained rigorous data collection discipline, documented deviations systematically, and verified data accuracy through ongoing quality monitoring.

Document every deviation, no matter how small

Deviations are departures from approved study procedures, ranging from minor timing variations to major eligibility violations. Regardless of perceived significance, every deviation requires documentation explaining what occurred, why it occurred, and what corrective actions were implemented. Undocumented protocol breaches create regulatory nightmares during submissions: when they emerge, regulators will question what else wasn't documented and whether systematic quality failures exist.

Common deviations include assessment timing outside protocol windows, missed visits, procedure sequence variations, eligibility criterion violations discovered post-enrolment, and consent process deficiencies. Each requires immediate documentation in deviation logs signed by the investigator. Categorise your deviations by severity and impact on data integrity. Major deviations that compromise participant safety or data validity require immediate reporting to ethics committees and regulatory agencies. Minor deviations require documentation but not immediate external reporting. Implement deviation tracking systems to ensure nothing goes undocumented. Many sites use electronic systems flagging potential deviations automatically based on visit timing, missed assessments, or data inconsistencies.
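As a sketch of how automatic deviation flagging based on visit timing might work (the window size, labels, and function name here are illustrative, not taken from any specific electronic system):

```python
from datetime import date
from typing import Optional

# Illustrative protocol window: visits must fall within +/- 3 days of target.
WINDOW_DAYS = 3

def flag_visit_deviation(target: date, actual: Optional[date]) -> Optional[str]:
    """Return a deviation label for one visit, or None if compliant."""
    if actual is None:
        return "missed_visit"          # no visit recorded at all
    if abs((actual - target).days) > WINDOW_DAYS:
        return "out_of_window"         # timing deviation to log and categorise
    return None
```

A real system would route each flagged visit into the investigator-signed deviation log rather than merely reporting it.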

Standardising tools and training across sites

Site-to-site variability in data collection procedures introduces measurement inconsistency that obscures treatment effects and invites regulatory scepticism. Standardisation requires detailed standard operating procedures, comprehensive training, and ongoing competency verification.

Equipment standardisation matters more than founders typically recognise. Using different devices across sites introduces systematic measurement differences. Make sure to specify approved equipment models, calibration requirements, and measurement techniques in your protocol. Provide equipment to sites when feasible, rather than relying on institutional equipment. Your training programmes will need to extend beyond protocol review to hands-on competency demonstration. Investigators and coordinators should perform mock assessments, complete case report forms using practice data, and demonstrate proper device handling before enrolling participants.

Data quality failures rarely reflect intentional misconduct. They emerge from inadequate training, unclear procedures, and site-to-site variability. Standardisation isn't bureaucracy. It's the kind of data quality that regulators trust.

Professor Stuart Pocock, London School of Hygiene and Tropical Medicine

Monitor training effectiveness through ongoing quality reviews: sites that make frequent errors require retraining before any additional enrolment. Animal study standardisation requires equal attention. Handler technique variability affects animal stress and physiological measurements, so standardise handling procedures and techniques across all personnel.

Auditing data against original sources

Source data verification compares case report form entries against original medical records, laboratory reports, and device outputs to identify transcription errors, missing data, or potential fabrication. Include ongoing source data verification rather than waiting until study completion. Many sponsors verify 100% of data for the first subjects enrolled at each site, identifying systematic collection errors early while correction remains feasible. After confirming site proficiency, verification should continue on 10% to 25% of subjects. Common verification findings include transcription errors, calculation mistakes in derived parameters, missing source documentation for reported events, timing discrepancies, and eligibility violations missed during screening.
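One way to operationalise that sampling plan as a sketch (the 100%-then-fraction split is from the text; the function name, parameters, and seeding are illustrative assumptions):

```python
import random

def sdv_sample(subject_ids, first_n_full=3, ongoing_rate=0.15, seed=0):
    """Select subjects for source data verification: 100% of the first
    few enrolled at a site, then a fixed fraction of the remainder."""
    first = list(subject_ids[:first_n_full])      # verified in full
    rest = list(subject_ids[first_n_full:])
    rng = random.Random(seed)                     # reproducible audit sample
    k = max(1, round(len(rest) * ongoing_rate)) if rest else 0
    return first + rng.sample(rest, k)
```

Fixing the seed makes the sample reproducible, so monitors can show an auditor exactly how the verification subset was drawn.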

Electronic data capture systems reduce transcription errors but introduce different quality challenges. Missing data fields, implausible value entries, and internal inconsistencies all require query resolution before you lock your database. Add edit checks to catch impossible values at data entry (rather than cleaning up later). Maintain clear audit trails documenting every data change: who made it, when it occurred, and the justification. Regulators will scrutinise audit trails during inspections, seeking evidence of data manipulation or inadequate oversight.
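A minimal sketch of both ideas together, assuming illustrative fields and plausibility ranges (real EDC systems implement this far more elaborately):

```python
from datetime import datetime, timezone

# Illustrative edit checks: reject impossible values at entry time.
CHECKS = {
    "systolic_bp": lambda v: 50 <= v <= 260,
    "heart_rate": lambda v: 20 <= v <= 250,
}

audit_trail = []  # append-only record of every accepted change

def enter_value(record, field, value, user, reason="initial entry"):
    """Store a value only if it passes its edit check, and log the change."""
    check = CHECKS.get(field)
    if check and not check(value):
        raise ValueError(f"{field}={value} fails edit check")  # raise a query instead
    audit_trail.append({
        "field": field, "old": record.get(field), "new": value,
        "user": user, "reason": reason,
        "when": datetime.now(timezone.utc).isoformat(),
    })
    record[field] = value
```

The key design choice is that the audit trail is append-only: corrections add entries rather than overwriting earlier ones, which is exactly what inspectors look for.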

Logging device performance systematically

Medical device trials require comprehensive device performance monitoring beyond clinical endpoint assessment. Device malfunctions, user errors, unexpected behaviours, and technical specifications all inform regulatory understanding of device characteristics.

Implement systematic device performance logs documenting every use. Record pre-use checks, setup procedures, operational parameters, alarms or alerts, procedural difficulties, and post-use inspections. These logs will capture device behaviour patterns that might not trigger formal adverse event reporting but inform overall performance assessment. Device deficiencies require immediate documentation (even when participant harm doesn't occur). Cumulative deficiency patterns might reveal design flaws, manufacturing variations, or user training needs. Maintain device accountability tracking for every device from receipt through use or destruction. This will enable targeted investigation if manufacturing issues emerge for specific production lots.
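A sketch of the accountability side, assuming hypothetical serial and lot identifiers (the event names are illustrative lifecycle stages, not a regulatory vocabulary):

```python
# One accountability record per serial number, keyed so that a suspect
# production lot can be traced to every affected unit.
devices = {}

def log_device_event(serial, lot, event):
    """Append a lifecycle event (e.g. received, deployed, used, returned,
    destroyed) to a device's accountability record."""
    record = devices.setdefault(serial, {"lot": lot, "history": []})
    record["history"].append(event)

def units_in_lot(lot):
    """All serials from a given lot: the starting point for a targeted
    investigation if a manufacturing issue emerges."""
    return sorted(s for s, r in devices.items() if r["lot"] == lot)
```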

Device performance data often receives less attention than clinical endpoints, yet proves critical for regulatory submissions. Regulators want a comprehensive understanding of how devices behave across varied conditions. Systematic device logs demonstrating consistent performance build confidence.

Dr Carl Heneghan, Professor of Evidence-Based Medicine at the University of Oxford

Reporting and classifying adverse events rigorously

Adverse event collection is perhaps the most scrutinised aspect of trial data. Incomplete, delayed, or inadequately investigated adverse events create regulatory concerns about trial conduct and device safety. Report every event, regardless of its perceived relationship to the device or procedure; investigators cannot know with certainty whether events are device-related. Classify each event by seriousness, severity, and relationship to the device. Serious adverse events (death, life-threatening conditions, hospitalisation, persistent disability, or congenital anomalies) require immediate reporting. Maintain detailed adverse event narratives that provide context coding alone cannot convey. These narratives enable reviewers to understand the circumstances, the investigator's reasoning about causality, and the resolution details.
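The three-way classification can be sketched as follows. The seriousness criteria come from the text above; the field names and category values are illustrative assumptions:

```python
# Serious-outcome criteria as listed in the text; values are illustrative labels.
SERIOUS_OUTCOMES = {
    "death", "life_threatening", "hospitalisation",
    "persistent_disability", "congenital_anomaly",
}

def classify_ae(outcome, severity, relatedness):
    """Return the three classifications expected for every adverse event."""
    return {
        "serious": outcome in SERIOUS_OUTCOMES,  # drives expedited reporting
        "severity": severity,                    # e.g. mild / moderate / severe
        "relatedness": relatedness,              # investigator's causality judgement
    }
```

Note that seriousness and severity are distinct axes: a severe headache is not serious, while an overnight hospitalisation for a mild event is.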

Conducting interim data reviews

Interim data reviews during ongoing trials will catch systematic errors, identify training needs, and verify data quality before problems accumulate. Review enrolment screening logs, verifying sites apply eligibility criteria consistently. Wide site-to-site variation in screening failure rates might indicate criteria interpretation differences requiring clarification.
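A simple sketch of that screening-log check, assuming hypothetical site labels and an arbitrary threshold for "wide" variation:

```python
def screening_failure_rates(screening_log):
    """Per-site screening failure rate from (site, passed) records."""
    totals, fails = {}, {}
    for site, passed in screening_log:
        totals[site] = totals.get(site, 0) + 1
        fails[site] = fails.get(site, 0) + (0 if passed else 1)
    return {site: fails[site] / totals[site] for site in totals}

def outlier_sites(rates, spread=0.25):
    """Sites whose failure rate differs from the overall mean by more than
    `spread`: candidates for a criteria-interpretation review."""
    mean = sum(rates.values()) / len(rates)
    return sorted(site for site, rate in rates.items() if abs(rate - mean) > spread)
```

An outlier site is not necessarily doing anything wrong; it is simply where an interim review should look first for differing interpretations of the eligibility criteria.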

Data collection workshop

Data collection determines whether trials generate evidence that regulators accept and clinicians trust. In this workshop, you'll learn how to document every deviation, regardless of perceived significance; standardise tools, procedures, and training across all sites; audit data against original sources to verify accuracy; log device performance systematically; report and classify all adverse events rigorously; and conduct interim data reviews to catch errors early. These disciplines transform data collection from an administrative necessity into a strategic advantage, generating the credible evidence that regulatory approval and market acceptance demand.

Waypoint checklist

Critical oversights when collecting trial data for a MedTech device can include:

  • Silent protocol breaches: document every deviation, no matter how small
  • Site and handler variability: standardise tools and training across sites
  • Unverified data: audit entries against original sources
  • Device blind spots: log technical performance systematically
  • Unclassified events: report and classify all adverse events rigorously
  • Accumulating errors: conduct interim data reviews to catch them early

This article is for informational purposes only and does not constitute legal, financial, or professional advice. It is not intended to be a substitute for professional counsel, and the information provided should not be relied upon to make decisions. All actions taken based on this content are at your own risk.
