Faculty, Staff and Student Publications

Language

English

Publication Date

5-23-2025

Journal

Implementation Science Communications

DOI

10.1186/s43058-025-00751-8

PMID

40410922

PMCID

PMC12102923

PubMedCentral® Posted Date

5-23-2025

PubMedCentral® Full Text Version

Post-print

Abstract

Background: Self-report measures are essential in implementation science since many phenomena are difficult to assess directly. Nevertheless, cognitively demanding surveys increase the prevalence of careless and inattentive responses. Assessing response quality is critical to improving data validity, yet recommendations for determining response quality vary. To address this, we aimed to 1) apply a multi-method approach to assess the quality of self-report survey data in a study designed to validate a measure of organizational readiness, 2) compare readiness scores among responses categorized as high- and low-quality, and 3) examine individual characteristics associated with low-quality responses.

Methods: We surveyed federally qualified health center staff to assess organizational readiness for implementing evidence-based interventions to increase colorectal cancer screening. The survey was informed by the R = MC² heuristic, which proposes that readiness consists of three components: Motivation (M), Innovation-Specific Capacity (ISC), and General Capacity (GC). We determined response quality (high/low) using two assessment methods: survey completion time and monotonic response patterns (MRPs). T-tests examined associations between readiness scores and response quality, and regression models examined differences in response quality by individual characteristics (e.g., age, role, implementation involvement).

Results: The sample consisted of 474 responses from 57 clinics. The average survey time was 24.3 min, and 42 respondents (8.9%) had MRPs on all readiness components. The number of low-quality responses varied by assessment method (range = 42-98). Survey responses classified as low quality had higher readiness scores (M, ISC, GC, p < 0.01). Age (p = 0.01), race (p < 0.01), and implementation involvement (p = 0.04) were inversely associated with survey completion time, whereas older age (p = 0.01) and more years worked at the clinic (p = 0.03) were associated with higher response quality. Quality improvement staff and clinic management were less likely to provide low-quality responses (p = 0.04).

Conclusions: Our findings suggest that readiness scores can be inflated by low-quality responses, and individual characteristics play a significant role in data quality. There is a need to be aware of who is completing surveys and the context in which surveys are distributed to improve the interpretation of findings and make the measurement of implementation-related constructs more precise.

Keywords

Organizational readiness, Data quality, Survey research methods, Measure development

Published Open-Access

yes

Included in

Public Health Commons
