Review Article
Comparative effectiveness research in the “real” world: lessons learned in a study of treatment-resistant hypertension

https://doi.org/10.1016/j.jash.2012.12.002

Abstract

Comparative effectiveness research (CER) is vital to translate new efficacious diagnostic and therapeutic approaches into effectiveness in usual clinical practice settings. Studying the practice environment in which effectiveness protocols are implemented is necessary to identify the complex challenges that can limit translation of evidence. These issues were addressed in our National Heart, Lung, and Blood Institute-funded R34, “Controlling Blood Pressure in Treatment-Resistant Hypertension (TRH): A Pilot Study.” Qualitative methods were used in this cluster (clinic)-randomized, four-arm pilot study of TRH in eight diverse, community-based practices, including: (i) focus group discussions with practice staff and physicians; (ii) conference calls with physicians; and (iii) discussions with research coordinators. Sources were summarized and analyzed by content analysis. Results include data organized into categories representing facilitators of and barriers to research. Key facilitators included: (i) early success in controlling blood pressure in challenging TRH patients; (ii) improved management of TRH; and (iii) reimbursement for study time and expenses. Barriers included: (i) time-consuming regulatory requirements; (ii) limited training and research experience of some study coordinators; and (iii) reluctance of some physicians to refer to Hypertension Specialists. Qualitative assessment is valuable for identifying facilitators and barriers to CER. This information is important in designing and implementing CER to accelerate translation of clinical efficacy into effectiveness.

Introduction

Comparative effectiveness research (CER) and practical clinical trials (PCT) are vital tools to translate new drugs, devices, and other treatment approaches from proven efficacy to effectiveness.1, 2, 3 Efficacy studies establish that a treatment produces the desired effect in the controlled environment of one or a few experienced, research-intensive sites. Subjects are typically recruited through a variety of approaches, both from within and outside the entity conducting the trial, and tend to be more demographically homogeneous. The research often involves a placebo. Effectiveness studies are conducted in the “real world” of a variety of busy clinical practices, with heterogeneous subjects who are more representative of the general population, and may (PCT) or may not (CER) include a placebo.

If we are to translate efficacy into effectiveness and widespread adoption, we must understand the process by which the evidence produced in controlled research sites may vary from the desired effect in clinical practice. We must also appreciate the “adjustments” necessary to tailor efficacious interventions to specific populations or practice types to improve their rate of adoption. Published work in CER and PCT is relatively new, but a general consensus has formed that such research must be representative of the sites that provide care and of the diverse spectrum of patients who can benefit.3 This requires that we view the research as a collaborative effort that includes practices in the design of the research and in the presentation and publication of results. Specifically, it is important to involve staff from practices representing a range of types, locations, and degrees of resource intensity in the design and implementation of the approach and in the interpretation of results.4, 5 Studying the practice environment in which effectiveness protocols are conducted adds a layer of insight into the complexity of issues and challenges that can limit translation of evidence. This requires understanding the culture, resources, and capacity of primary care practice in a variety of community settings.6 For this reason, we included several qualitative methods to study the process of implementing a recent National Heart, Lung, and Blood Institute-funded CER R34, “Controlling Blood Pressure in Treatment-Resistant Hypertension (TRH): A Pilot Study.”7 The lessons we learned are important to consider in designing future research conducted in community-based practices.

Section snippets

Outpatient QUality Improvement Network (OQUIN)

The OQUIN comprises 210 diverse practice sites in South Carolina, North Carolina, Georgia, Alabama, Tennessee, and Virginia.8 These include individual and small group practices operating at a single site, multi-site physician- and hospital-owned practices, federally qualified health centers, rural health clinics, Veterans Affairs clinics, and free clinics. Each clinic signed a Business Associate Agreement approved by the Institutional Review Board (IRB) at the Medical University of South Carolina

Results

Focus group discussions were held with all practices. Saturation (no new information) was attained by the fifth focus group. Summaries of focus groups and field notes from research monitoring activities were reviewed by the two lead investigators, and a list of facilitators and barriers to implementing the research protocol in practice settings was generated. Items were placed under content categories: Facilitators of research, University/regulatory barriers, Practice-structural barriers, and

Discussion

The key findings are that conducting research in the real-world setting of busy practices, while critical to determining effectiveness, requires additional time, effort, and expense compared with conducting research in academic and other research-intensive settings. Although we encountered challenges that our pre-proposal planning and training sessions did not detect, the lessons learned and changes practices made in clinical policies to improve routine care of patients produced results that

References (20)

  • R. Kessler et al. A proposal to speed translation of healthcare research into practice: dramatic change is needed. Am J Prev Med (2011)
  • D. Cohen et al. Fidelity versus flexibility: translating evidence-based research into practice. Am J Prev Med (2008)
  • R. Glasgow et al. Why don’t we see more translation of health promotion research to practice? Rethinking the efficacy-effectiveness transition. Am J Public Health (2003)
  • S. Tunis et al. Practical clinical trials: increasing the value of clinical research for decision making in clinical and health policy. JAMA (2003)
  • R. Glasgow et al. Practical clinical trials for translating research to practice: design and measurement recommendations. Med Care (2005)
  • A. Hoffman et al. How best to engage patients, doctors, and other stakeholders in designing comparative effectiveness studies. Health Affairs (2010)
  • J. Schmittdiel et al. System-based participatory research in health care: an approach for sustainable translational research and quality improvement. Ann Fam Med (2010)
  • J. Schensul. Community, culture and sustainability in multilevel dynamic systems intervention science. Am J Community Psychol (2009)
  • N. Dreyer et al. Why observational studies should be among the tools used in comparative effectiveness research. Health Affairs (2010)
  • B. Egan et al. Impacting population cardiovascular health through a community-based practice network: update on an ASH-supported collaborative. J Clin Hypertens (2011)

Cited by (12)

  • The GetReal Trial Tool: design, assess and discuss clinical drug trials in light of Real World Evidence generation

    2022, Journal of Clinical Epidemiology
    Citation Excerpt:

    Data generated from these so-called pragmatic trials offer an attractive opportunity to bridge the gap between conventional randomized clinical trial (RCT) derived efficacy and RWE from observational studies [1–4]. However, progress towards the implementation of real world elements into trials is slow [6,7], for several reasons, including uncertainty about acceptability of the evidence, lack of experience with the methodology, and operational challenges associated with such trials [8–19]. In this paper we describe the development, structure and possible applications of a decision support tool for incorporating Real World Elements into clinical trials.

  • Series: Pragmatic trials and real world evidence: Paper 8. Data collection and management

    2017, Journal of Clinical Epidemiology
    Citation Excerpt:

    Because of an increase in workload that is generally associated with this approach, the conditions necessary for using eCRFs may often only be fulfilled at specialized sites that have the expertise and manpower to meet all requirements. When detailed data collection is required on top of the usual workload, general practitioners or smaller hospitals, which often treat the patient population of interest, may not be able to cope with this approach [5,10]. In addition, (extensive) staff training in the use of data-collection systems as well as changes in data handling as such may influence routine clinical practice due to the Hawthorne effect [3].

  • Series: Pragmatic trials and real world evidence: Paper 1. Introduction

    2017, Journal of Clinical Epidemiology
    Citation Excerpt:

    Pragmatic, in our view, is not synonymous with “easy to conduct” or “sloppy.” Depending on several factors, including the stage of drug development and the type of treatment, pragmatic trials can be designed to be point-of-care or large simple trials and thus relatively easy to conduct [34–36] or, due to ethical and legal requirements, can prove to be much more challenging to conduct than traditional highly controlled explanatory trials [36–40]. Opting for rather pragmatic trial characteristics may lead to different and unanticipated operational challenges compared to explanatory trials [6,36–42].

  • Series: Pragmatic trials and real world evidence: Paper 2. Setting, sites, and investigator selection

    2017, Journal of Clinical Epidemiology
    Citation Excerpt:

    In general, primary care physicians are reported to be supportive of conducting clinical research. However, from a practical perspective, many primary care physicians often find it challenging to dedicate time to participate in research [39]. In two recent pragmatic trials, 59% of sites initially expressed their interest in trial participation, but less than 4% of the physicians actually recruited participants [30].


This study was funded by NHLBI R34 HL105880.

Financial or other conflicts of interest: Drs. Laken, Dawson, Engelman, Lovelace, and Way report none.

Dr. Egan received research support in the past year from Medtronic, Novartis, and Takeda, and income as a consultant from Blue Cross Blue Shield South Carolina, Medtronic, and Takeda.
